WorldWideScience

Sample records for curve scale technique

  1. Scaling of counter-current imbibition recovery curves using artificial neural networks

    Science.gov (United States)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    Scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension have an effect on the imbibition process. Studies on the scaling of imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling both experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the 'Bayesian regularization' training algorithm, the network was trained and tested. The training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
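
    A minimal sketch of the idea, not the authors' implementation: a small feed-forward regressor maps six placeholder reservoir parameters to a recovery value. scikit-learn has no 'Bayesian regularization' trainer (that is MATLAB's trainbr), so an L2-penalized MLP stands in; all data below are synthetic.

```python
# Sketch: six placeholder inputs -> recovery factor; the L2 penalty (alpha)
# stands in for Bayesian regularization, which scikit-learn does not offer.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(500, 6))   # stand-ins for porosity, k, viscosities, size, IFT
y = 1.0 - np.exp(-5.0 * X[:, 1])           # synthetic recovery-factor target

scaler = StandardScaler().fit(X)
model = MLPRegressor(hidden_layer_sizes=(20, 20), alpha=1e-3,
                     max_iter=5000, random_state=0)
model.fit(scaler.transform(X), y)
print(model.predict(scaler.transform(X[:3])))
```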

  2. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled according to this curve.

  3. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency, and the image is then scaled according to this curve.

  4. Asymptotic scalings of developing curved pipe flow

    Science.gov (United States)

    Ault, Jesse; Chen, Kevin; Stone, Howard

    2015-11-01

    Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.

  5. Trends in scale and shape of survival curves.

    Science.gov (United States)

    Weon, Byung Mook; Je, Jung Ho

    2012-01-01

    The ageing of the population is an issue in wealthy countries worldwide because of increasing costs for health care and welfare. Survival curves taken from demographic life tables may help shed light on the hypotheses that humans are living longer and that human populations are growing older. We describe a methodology that enables us to obtain separate measurements of scale and shape variances in survival curves. Specifically, 'living longer' is associated with the scale variance of survival curves, whereas 'growing older' is associated with the shape variance. We show how the scale and shape of survival curves have changed over time during recent decades, based on period and cohort female life tables for selected wealthy countries. Our methodology will be useful for performing better tracking of ageing statistics and it is possible that this methodology can help identify the causes of current trends in human ageing.
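
    A hedged illustration of splitting a survival curve into scale and shape: a Weibull form S(t) = exp(-(t/alpha)^beta) is one common two-parameter choice, where alpha ('living longer') is the scale and beta ('growing older', rectangularization) the shape. The paper's exact parameterization may differ; the data are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(t, alpha, beta):
    # alpha = scale ("living longer"), beta = shape ("growing older")
    return np.exp(-(t / alpha) ** beta)

ages = np.arange(0.0, 101.0, 5.0)
rng = np.random.default_rng(1)
s_obs = np.clip(weibull_survival(ages, 85.0, 9.0) + rng.normal(0, 0.005, ages.size), 0, 1)
(alpha, beta), _ = curve_fit(weibull_survival, ages, s_obs, p0=(80.0, 5.0))
print(f"scale alpha = {alpha:.1f} yr, shape beta = {beta:.2f}")
```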

  6. Fermat's Technique of Finding Areas under Curves

    Science.gov (United States)

    Staples, Ed

    2004-01-01

    Perhaps next time teachers head towards the fundamental theorem of calculus in their classroom, they may wish to consider Fermat's technique of finding expressions for areas under curves, beautifully outlined in Boyer's History of Mathematics. Pierre de Fermat (1601-1665) developed some important results in the journey toward the discovery of the…
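
    A numerical check of Fermat's idea, under the standard account of his quadrature of y = x^n: partition [0, a] by a geometric progression a, ar, ar^2, ... and sum the rectangles; as r approaches 1 the sum tends to a^(n+1)/(n+1).

```python
def fermat_area(a, n, r, terms=100_000):
    # rectangles on the geometric partition a, a*r, a*r**2, ...
    total, x = 0.0, float(a)
    for _ in range(terms):
        x_next = x * r
        total += (x - x_next) * x ** n   # width * height at the larger endpoint
        x = x_next
    return total

a, n = 2.0, 3
for r in (0.9, 0.99, 0.999):
    print(f"r = {r}: area ~ {fermat_area(a, n, r):.5f}")
print("exact a**(n+1)/(n+1) =", a ** (n + 1) / (n + 1))
```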

  7. Guidelines for using the Delphi Technique to develop habitat suitability index curves

    Science.gov (United States)

    Crance, Johnie H.

    1987-01-01

    Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves or habitat suitability criteria based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves have as their source data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method used for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.
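
    The aggregation arithmetic of a single Delphi round, sketched for illustration only (Crance's full protocol involves structured feedback and revision between rounds): experts rate suitability at fixed habitat values, and the panel is fed back the median and interquartile range. All numbers are invented.

```python
# One Delphi aggregation step for expert-opinion SI curves (toy data).
import numpy as np

depth_cm = np.array([10, 30, 60, 90, 120])
ratings = np.array([            # rows = experts, cols = depths (hypothetical)
    [0.1, 0.6, 1.0, 0.7, 0.3],
    [0.2, 0.5, 0.9, 0.8, 0.4],
    [0.0, 0.7, 1.0, 0.6, 0.2],
])
median = np.median(ratings, axis=0)
iqr = np.percentile(ratings, 75, axis=0) - np.percentile(ratings, 25, axis=0)
for d, m, q in zip(depth_cm, median, iqr):
    print(f"depth {d:>3} cm: SI median {m:.2f}, IQR {q:.2f}")
```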

  8. Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests

    Science.gov (United States)

    Aquino, T.; Aubeneau, A. F.; Bolster, D.

    2014-12-01

    Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.
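
    A hedged toy model of the mechanism: summing heavy-tailed (Pareto-type) waiting times yields arrival-time densities with a power-law tail, whose exponent can be read off a log-log fit. This is a generic CTRW-style illustration with invented parameters, not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(2)
beta, n_particles, n_jumps = 0.7, 20_000, 200
waits = rng.pareto(beta, size=(n_particles, n_jumps)) + 1.0   # heavy-tailed waiting times
arrival = waits.sum(axis=1)                                   # arrival-time proxy at a fixed plane

edges = np.logspace(np.log10(arrival.min()), np.log10(arrival.max()), 40)
dens, _ = np.histogram(arrival, bins=edges, density=True)
mid = np.sqrt(edges[:-1] * edges[1:])
ok = dens > 0
slope = np.polyfit(np.log(mid[ok][-10:]), np.log(dens[ok][-10:]), 1)[0]
print(f"late-time slope ~ {slope:.2f}; theory for this toy: -(1+beta) = {-(1 + beta):.2f}")
```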

  9. Scale effect on the water retention curve of a volcanic ash

    Science.gov (United States)

    Damiano, Emilia; Comegna, Luca; Greco, Roberto; Guida, Andrea; Olivares, Lucio; Picarelli, Luciano

    2015-04-01

    During the last decades, a number of flowslides and debris flows triggered by intense rainfall affected a wide mountainous area surrounding the "Campania Plain" (southern Italy). The involved slopes are constituted by shallow unsaturated air-fall deposits of pyroclastic nature, whose stability is guaranteed by the contribution of suction to shear strength. To reliably predict the onset of slope failure triggered by critical precipitation, it is essential to understand the infiltration process and the soil suction distribution in such granular deposits. The paper presents the results of a series of investigations performed at different scales to determine the soil water retention curve (SWRC) of a volcanic ash, which is an essential element in the analysis of infiltration processes. The soil, a silty sand, was taken at the Cervinara hillslope, 30 km east of Naples, just beside an area which had been subjected to a catastrophic flowslide. The SWRC was obtained through: - standard tests in a suction-controlled triaxial apparatus (SCTX), in a pressure plate and by the Wind (1968) technique on small natural and reconstituted soil samples (sample dimensions on the order of 1×10⁻⁶ m³); - infiltration tests on small-scale model slopes reconstituted in an instrumented flume (sample dimensions on the order of 5×10⁻³ m³); - suction and water content monitoring at the automatic station installed along the Cervinara hillslope. The experimental points were generally defined by coupling suction measurements from jet-fill tensiometers with water content measurements from TDR probes installed close to each other. The obtained data sets identify three different curves characterized by different shapes in the transition zone: larger volume elements correspond to curves with steeper slopes and lower water content values in the transition zone. This result confirms the great role of the volume element dimensions in the determination of hydraulic characteristics.
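
    A sketch of how such tensiometer/TDR point pairs can be condensed into a retention curve; the abstract does not name a closed form, so van Genuchten (1980) is assumed here and all data points are placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(psi, theta_r, theta_s, alpha, n):
    # water content as a function of suction psi (van Genuchten, 1980)
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * psi) ** n) ** (1.0 - 1.0 / n)

psi = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])        # suction, kPa (invented)
theta = np.array([0.58, 0.55, 0.47, 0.35, 0.22, 0.15])      # volumetric water content
p0 = (0.05, 0.60, 0.1, 1.5)
(theta_r, theta_s, alpha, n), _ = curve_fit(van_genuchten, psi, theta, p0=p0, maxfev=10000)
print(f"theta_r={theta_r:.3f}, theta_s={theta_s:.3f}, alpha={alpha:.3f}, n={n:.2f}")
```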

  10. A location-scale model for non-crossing expectile curves

    NARCIS (Netherlands)

    Schnabel, S.K.; Eilers, P.H.C.

    2013-01-01

    In quantile smoothing, crossing of the estimated curves is a common nuisance, in particular with small data sets and dense sets of quantiles. Similar problems arise in expectile smoothing. We propose a novel method to avoid crossings. It is based on a location-scale model for expectiles and
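
    For background, a minimal sketch of the expectile itself, the asymmetric-least-squares building block behind such models; the authors' location-scale, non-crossing construction is not reproduced here.

```python
import numpy as np

def expectile(y, tau, iters=50):
    # fixed-point iteration: m = sum(w*y)/sum(w), w = tau above m, 1-tau below
    m = y.mean()
    for _ in range(iters):
        w = np.where(y > m, tau, 1.0 - tau)
        m = np.sum(w * y) / np.sum(w)
    return m

y = np.random.default_rng(3).normal(size=1000)
print([round(expectile(y, t), 3) for t in (0.1, 0.5, 0.9)])
```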

  11. Machine Learning Techniques for Stellar Light Curve Classification

    Science.gov (United States)

    Hinners, Trisha A.; Tat, Kevin; Thorp, Rachel

    2018-07-01

    We apply machine learning techniques in an attempt to predict and classify stellar properties from noisy and sparse time-series data. We preprocessed over 94 GB of Kepler light curves from the Mikulski Archive for Space Telescopes (MAST) to classify according to 10 distinct physical properties using both representation learning and feature engineering approaches. Studies using machine learning in the field have been primarily done on simulated data, making our study one of the first to use real light-curve data for machine learning approaches. We tuned our data using previous work with simulated data as a template and achieved mixed results between the two approaches. Representation learning using a long short-term memory recurrent neural network produced no successful predictions, but our work with feature engineering was successful for both classification and regression. In particular, we were able to achieve values for stellar density, stellar radius, and effective temperature with low error (∼2%–4%) and good accuracy (∼75%) for classifying the number of transits for a given star. The results show promise for improvement for both approaches upon using larger data sets with a larger minority class. This work has the potential to provide a foundation for future tools and techniques to aid in the analysis of astrophysical data.
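
    A hedged sketch of the feature-engineering branch only: a few summary statistics per light curve feed a standard classifier. The feature set and labels below are illustrative, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def features(flux):
    # crude per-curve summary statistics (illustrative feature set)
    return [flux.mean(), flux.std(), np.ptp(flux),
            ((flux[1:] - flux[:-1]) ** 2).mean()]

rng = np.random.default_rng(4)
X = np.array([features(rng.normal(1.0, 0.01 * (1 + k % 3), 500)) for k in range(300)])
y = np.arange(300) % 3                      # placeholder class labels
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X[:200], y[:200])
print("holdout accuracy:", clf.score(X[200:], y[200:]))
```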

  12. Improvements in scaling of counter-current imbibition recovery curves using a shape factor including permeability anisotropy

    Science.gov (United States)

    Abbasi, Jassem; Sarafrazi, Shiva; Riazi, Masoud; Ghaedi, Mojtaba

    2018-02-01

    Spontaneous imbibition is the main oil production mechanism in the water invaded zone of a naturally fractured reservoir (NFR). Different scaling equations have been presented in the literature for upscaling of core scale imbibition recovery curves to field scale matrix blocks. Various scale dependent parameters such as gravity effects and boundary influences are required to be considered in the upscaling process. Fluid flow from matrix blocks to the fracture system is highly dependent on the permeability value in the horizontal and vertical directions. The purpose of this study is to include permeability anisotropy in the available scaling equations to improve the prediction of imbibition assisted oil production in NFRs. In this paper, a commercial reservoir simulator was used to obtain imbibition recovery curves for different scenarios. Then, the effect of permeability anisotropy on imbibition recovery curves was investigated, and the weakness of the existing scaling equations for anisotropic rocks was demonstrated. Consequently, an analytical shape factor was introduced that can better scale all the curves related to anisotropic matrix blocks.
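
    A hedged sketch of an anisotropy-aware shape factor: a Kazemi-type sum weighted by directional permeabilities and normalized by their geometric mean. This is a common form in the literature, not necessarily the authors' analytical expression.

```python
def shape_factor_aniso(kx, ky, kz, Lx, Ly, Lz):
    # Kazemi-type matrix-fracture shape factor with directional permeabilities,
    # normalized by the geometric-mean permeability; units of 1/length^2.
    k_mean = (kx * ky * kz) ** (1.0 / 3.0)
    return 4.0 * (kx / Lx**2 + ky / Ly**2 + kz / Lz**2) / k_mean

# example: kv ten times lower than kh, taller-than-wide matrix block
print(shape_factor_aniso(100e-15, 100e-15, 10e-15, 1.0, 1.0, 2.0))
```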

  13. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    Science.gov (United States)

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation technique locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely, edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images.
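
    A minimal sketch of a local S-curve transform, assuming a simple tiled scheme: each tile is remapped with a sigmoid centered on its own mean, widening the local gray-value spread. The window size, gain, and tiling are invented; the paper's local scheme may differ.

```python
import numpy as np

def local_s_curve(img, tile=64, gain=8.0):
    # sigmoid remap per tile, pivoted on the tile mean (toy local S-curve)
    out = img.astype(float) / 255.0
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = out[i:i + tile, j:j + tile]
            mid = block.mean()
            out[i:i + tile, j:j + tile] = 1.0 / (1.0 + np.exp(-gain * (block - mid)))
    return (out * 255).astype(np.uint8)

img = np.random.default_rng(5).uniform(80, 160, (256, 256)).astype(np.uint8)
print(img.std(), local_s_curve(img).std())   # contrast (std) should increase
```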

  14. Unraveling the photovoltaic technology learning curve by incorporation of input price changes and scale effects

    International Nuclear Information System (INIS)

    Yu, C.F.; van Sark, W.G.J.H.M.; Alsema, E.A.

    2011-01-01

    In a large number of energy models, the use of learning curves for estimating technological improvements has become popular. This is based on the assumption that technological development can be monitored by following cost development as a function of market size. However, recent data show that in some stages of photovoltaic (PV) technology production, the market price of PV modules stabilizes even though the cumulative capacity increases. This implies that no technological improvement takes place in these periods: the cost predicted by the learning curve is lower than the observed market price. We propose that this bias results from ignoring input-price changes and scale effects, and that incorporating input prices and scale effects into learning curve theory is an important step in making cost predictions more reliable. In this paper, a methodology is described to incorporate scale and input-price effects as additional variables in the one-factor learning curve, which leads to the definition of a multi-factor learning curve. This multi-factor learning curve is not only derived from economic theories, but is also supported by an empirical study. The results clearly show that input prices and scale effects are to be included, and that, although market prices are stabilizing, learning is still taking place. (author)
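
    A hedged sketch of a multi-factor learning curve: ln(cost) regressed on ln(cumulative capacity), ln(input price), and ln(scale). Variable names and data are invented; only the functional idea follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(6)
Q = np.cumsum(rng.uniform(1, 5, 40))          # cumulative capacity (arbitrary units)
p_in = 20 * rng.uniform(0.8, 1.2, 40)         # input price, e.g. silicon (hypothetical)
scale = np.linspace(10, 400, 40)              # plant scale
cost = 100 * Q**-0.3 * p_in**0.4 * scale**-0.1 * rng.lognormal(0, 0.02, 40)

# ln C = c0 + b*ln Q + c*ln p_in + d*ln scale, fit by least squares
A = np.column_stack([np.ones_like(Q), np.log(Q), np.log(p_in), np.log(scale)])
coef, *_ = np.linalg.lstsq(A, np.log(cost), rcond=None)
print("learning exponent:", coef[1], " input-price elasticity:", coef[2])
```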

  15. Measurement of scintillation decay curves by a single photon counting technique

    International Nuclear Information System (INIS)

    Noguchi, Tsutomu

    1978-01-01

    An improved apparatus for the measurement of spectroscopic scintillation decay curves has been developed by combining a single-photon counting technique with a delayed coincidence method. The time resolution of the apparatus is improved to 1.16 nsec (FWHM), obtained from the resolution function of the system for very weak Cherenkov light flashes. Systematic measurements of scintillation decay curves are made for liquid and crystal scintillators including PPO-toluene, PBD-xylene, PPO-POPOP-toluene, anthracene and stilbene. (auth.)
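
    A hedged sketch of the downstream analysis: fitting an exponential decay to a photon-counting histogram tail, ignoring convolution with the ~1.2 ns instrument response discussed above. All numbers are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(7)
t = np.linspace(0.0, 50.0, 200)                                 # ns
counts = 1000.0 * np.exp(-t / 3.7) + rng.poisson(2.0, t.size)   # decay + flat background

def model(t, a, tau, bg):
    return a * np.exp(-t / tau) + bg        # single exponential plus background

sel = t > 5.0                               # skip the ~1.2 ns IRF region
(a, tau, bg), _ = curve_fit(model, t[sel], counts[sel], p0=(500.0, 3.0, 1.0))
print(f"decay time ~ {tau:.2f} ns (true 3.7 ns)")
```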

  16. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    Science.gov (United States)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device and not the printer. When a user prints images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected images exhibited an improved tonal scale and were visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
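
    A sketch in the spirit of the described workflow, with invented control points: a smooth, monotone per-channel transfer curve is built from a few points and applied through a 256-entry lookup table.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

def apply_curve(channel, ctrl_in, ctrl_out):
    # monotone-preserving spline through control points -> 8-bit LUT
    lut = np.clip(PchipInterpolator(ctrl_in, ctrl_out)(np.arange(256)), 0, 255)
    return lut.astype(np.uint8)[channel]

rng = np.random.default_rng(8)
img = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
red = apply_curve(img[..., 0], [0, 64, 192, 255], [0, 80, 200, 255])  # lift shadows
print(img[..., 0].mean(), red.mean())
```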

  17. Fourier techniques for an analysis of eclipsing binary light curves. Pt. 6b

    International Nuclear Information System (INIS)

    Demircan, O.

    1980-01-01

    This is a continuation of a previous paper which appeared in this journal (Demircan, 1980b) and aims at ascertaining some other relations between the integral transforms of the light curves of eclipsing binary systems. The appropriate use of these relations should facilitate the numerical computations for an analysis of eclipsing binary light curves by different Fourier techniques. (orig.)

  18. Contact mechanics at nanometric scale using nanoindentation technique for brittle and ductile materials.

    Science.gov (United States)

    Roa, J J; Rayon, E; Morales, M; Segarra, M

    2012-06-01

    In recent years, nanoindentation, or the instrumented indentation technique, has become a powerful tool to study mechanical properties at the micro/nanometric scale (commonly hardness, elastic modulus and the stress-strain curve). In this review, the different contact mechanisms (elastic and elasto-plastic) are discussed, the recent patents for each mechanism are summarized in detail, and the basic equations employed to describe the mechanical behaviour of brittle and ductile materials are presented.
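
    For concreteness, a sketch of the standard Oliver-Pharr reduction of an unloading curve (a common method in this field, not tied to the patents reviewed): the stiffness at the top of the unloading branch gives contact depth, area, hardness, and reduced modulus. The unloading law and constants below are synthetic.

```python
import numpy as np

# Synthetic unloading branch P = B*(h - h_f)**m (Oliver-Pharr power law), SI units.
h = np.linspace(480e-9, 500e-9, 30)             # depth, m
B, h_f, m = 3e7, 350e-9, 1.5
P = B * (h - h_f) ** m                          # load, N

S = np.polyfit(h[-10:], P[-10:], 1)[0]          # contact stiffness dP/dh at Pmax, N/m
h_c = h[-1] - 0.75 * P[-1] / S                  # contact depth (epsilon = 0.75, Berkovich)
A = 24.5 * h_c ** 2                             # ideal Berkovich area function, m^2
H = P[-1] / A                                   # hardness, Pa
Er = np.sqrt(np.pi) * S / (2.0 * np.sqrt(A))    # reduced modulus, Pa
print(f"H ~ {H / 1e9:.2f} GPa, Er ~ {Er / 1e9:.1f} GPa")
```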

  19. Evaluation of J-R curve testing of nuclear piping materials using the direct current potential drop technique

    International Nuclear Information System (INIS)

    Hackett, E.M.; Kirk, M.T.; Hays, R.A.

    1986-08-01

    A method is described for developing J-R curves for nuclear piping materials using the DC Potential Drop (DCPD) technique. Experimental calibration curves were developed for both three point bend and compact specimen geometries using ASTM A106 steel, a type 304 stainless steel and a high strength aluminum alloy. These curves were fit with a power law expression over the range of crack extension encountered during J-R curve tests (0.6 a/W to 0.8 a/W). The calibration curves were insensitive to both material and sidegrooving and depended solely on specimen geometry and lead attachment points. Crack initiation in J-R curve tests using DCPD was determined by a deviation from a linear region on a plot of COD vs. DCPD. The validity of this criterion for ASTM A106 steel was determined by a series of multispecimen tests that bracketed the initiation region. A statistical differential slope procedure for determination of the crack initiation point is presented and discussed. J-R curve tests were performed on ASTM A106 steel and type 304 stainless steel using both the elastic compliance and DCPD techniques to assess R-curve comparability. J-R curves determined using the two approaches were found to be in good agreement for ASTM A106 steel. The applicability of the DCPD technique to type 304 stainless steel and high rate loading of ferromagnetic materials is discussed. 15 refs., 33 figs
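
    A hedged sketch of the power-law DCPD calibration described above: a/W as a power of the normalized potential V/V0, fit to invented calibration points and then used to convert potential readings to crack length.

```python
import numpy as np
from scipy.optimize import curve_fit

v_norm = np.array([1.00, 1.10, 1.25, 1.45, 1.70])   # V / V0 (hypothetical calibration)
a_w = np.array([0.60, 0.64, 0.69, 0.74, 0.80])      # measured a/W
(c1, c2), _ = curve_fit(lambda v, c1, c2: c1 * v ** c2, v_norm, a_w)
print(f"a/W = {c1:.3f} * (V/V0)^{c2:.3f}; at V/V0 = 1.3 -> a/W = {c1 * 1.3 ** c2:.3f}")
```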

  20. Characteristics of soil water retention curve at macro-scale

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Scale-adaptable hydrological models have attracted more and more attention in the hydrological modeling research community, and the constitutive relationship at the macro-scale is one of the most important issues, on which little research has been done so far. Taking the constitutive relationship of soil water movement, the soil water retention curve (SWRC), as an example, this study extends the definition of the SWRC at the micro-scale to the macro-scale, and, aided by the Monte Carlo method, we demonstrate that soil properties and the spatial distribution of soil moisture greatly affect the features of the SWRC. Furthermore, we assume that the spatial distribution of soil moisture is the result of self-organization of climate, soil, groundwater and soil water movement under specific boundary conditions, and we also carry out numerical experiments of soil water movement in the vertical direction in order to explore the relationship between the SWRC at the macro-scale and combinations of climate, soil, and groundwater. The results show that SWRCs at the macro-scale and micro-scale present totally different features, e.g., an essential hysteresis phenomenon that is exaggerated with increasing aridity index and rising groundwater table. Soil properties play an important role in the shape of the SWRC, which can even lead to a rectangular shape under drier conditions, and the power-function form of the SWRC widely adopted in hydrological models might need to be revised for most situations at the macro-scale.

  1. Mapping the Extinction Curve in 3D: Structure on Kiloparsec Scales

    Energy Technology Data Exchange (ETDEWEB)

    Schlafly, E. F. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720 (United States); Peek, J. E. G. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Finkbeiner, D. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Green, G. M. [Kavli Institute for Particle Astrophysics and Cosmology, Physics and Astrophysics Building, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2017-03-20

    Near-infrared spectroscopy from APOGEE and wide-field optical photometry from Pan-STARRS1 have recently made precise measurements of the shape of the extinction curve possible for tens of thousands of stars, parameterized by R(V). These measurements revealed structures in R(V) with large angular scales, which are challenging to explain in existing dust paradigms. In this work, we combine three-dimensional maps of dust column density with R(V) measurements to constrain the three-dimensional distribution of R(V) in the Milky Way. We find that the variations in R(V) are correlated on kiloparsec scales. In particular, most of the dust within one kiloparsec in the outer Galaxy, including many local molecular clouds (Orion, Taurus, Perseus, California, and Cepheus), has a significantly lower R(V) than more distant dust in the Milky Way. These results provide new input to models of dust evolution and processing, and complicate the application of locally derived extinction curves to more distant regions of the Milky Way and to other galaxies.

  2. Momentum-subtraction renormalization techniques in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.

    1987-10-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should.

  3. Momentum-subtraction renormalization techniques in curved space-time

    International Nuclear Information System (INIS)

    Foda, O.

    1987-01-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should

  4. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    Science.gov (United States)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium-scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.
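
    The Rayleigh effort model under test, sketched with placeholder parameters: cumulative effort E(t) = K(1 - exp(-a t^2)), so the staffing rate dE/dt = 2Kat exp(-a t^2) peaks at t = 1/sqrt(2a).

```python
import numpy as np

K, a = 100.0, 0.08                    # total effort (person-months) and shape parameter
t = np.linspace(0, 10, 6)             # project time, months
effort_rate = 2 * K * a * t * np.exp(-a * t**2)   # dE/dt, the Rayleigh staffing curve
print("peak staffing at t =", 1 / np.sqrt(2 * a), "months")
print(np.round(effort_rate, 2))
```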

  5. Estimating GHG emission mitigation supply curves of large-scale biomass use on a country level

    International Nuclear Information System (INIS)

    Dornburg, Veronika; Dam, Jinke van; Faaij, Andre

    2007-01-01

    This study evaluates the possible influences of a large-scale introduction of biomass material and energy systems, and of their market volumes, on land, material and energy market prices, and the feedback of these prices to greenhouse gas (GHG) emission mitigation costs. GHG emission mitigation supply curves for large-scale biomass use were compiled using a methodology that combines a bottom-up analysis of biomass applications, biomass cost supply curves and market prices of land, biomaterials and bioenergy carriers. These market prices depend on the scale of biomass use and the market volume of materials and energy carriers, and were estimated using own-price elasticities of demand. The methodology was demonstrated for a case study of Poland in the year 2015, applying different scenarios of economic development and trade in Europe. For the key technologies considered, i.e. medium density fibreboard, polylactic acid, electricity and methanol production, GHG emission mitigation costs increase strongly with the scale of biomass production. Large-scale introduction of biomass use decreases the GHG emission reduction potential available at costs below 50 Euro/Mg CO₂eq by about 13-70%, depending on the scenario. Biomaterial production accounts for only a small part of this GHG emission reduction potential, due to the relatively small material markets and the consequent strong decrease of biomaterial market prices at large scales of production. GHG emission mitigation costs depend strongly on biomass supply curves, the own-price elasticity of land and the market volumes of bioenergy carriers. The analysis shows that these influences should be taken into account when developing biomass implementation strategies.
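
    A minimal sketch of the price feedback: with a constant own-price elasticity eps, relative price follows P/P0 = (Q/Q0)^(1/eps), so expanding supply into a small biomaterial market depresses its price sharply. Values are invented.

```python
def price_after_supply_shift(p0, q0, q_new, eps):
    # inverse demand with constant own-price elasticity eps (negative)
    return p0 * (q_new / q0) ** (1.0 / eps)

# tripling supply into an inelastic market (eps = -0.5) collapses the price
print(price_after_supply_shift(p0=100.0, q0=1.0, q_new=3.0, eps=-0.5))
```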

  6. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real-life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fields.

  7. Effectiveness of four different final irrigation activation techniques on smear layer removal in curved root canals : a scanning electron microscopy study.

    Directory of Open Access Journals (Sweden)

    Puneet Ahuja

    2014-02-01

    The aim of this study was to assess the efficacy of apical negative pressure (ANP), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI) and needle irrigation (NI) as final irrigation activation techniques for smear layer removal in curved root canals. Mesiobuccal root canals of 80 freshly extracted maxillary first molars with curvatures ranging between 25° and 35° were used. A glide path with #08-15 K files was established before cleaning and shaping with Mtwo rotary instruments (VDW, Munich, Germany) up to size 35/0.04 taper. During instrumentation, 1 ml of 2.5% NaOCl was used at each change of file. Samples were divided into 4 equal groups (n=20) according to the final irrigation activation technique: group 1, apical negative pressure (EndoVac); group 2, manual dynamic agitation; group 3, passive ultrasonic irrigation; and group 4, needle irrigation. Root canals were split longitudinally and subjected to scanning electron microscopy. The presence of the smear layer at coronal, middle and apical levels was evaluated by superimposing a 300-μm square grid over the obtained photomicrographs, using a four-score scale at ×1,000 magnification. Amongst all the groups tested, ANP showed the best overall smear layer removal efficacy (p < 0.05). Removal of the smear layer was least effective with the NI technique. The ANP (EndoVac) system can be used as the final irrigation activation technique for effective smear layer removal in curved root canals.

  8. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    Science.gov (United States)

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations commonly are based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar's work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved

  9. Generation of large-scale PV scenarios using aggregated power curves

    DEFF Research Database (Denmark)

    Nuño Martinez, Edgar; Cutululis, Nicolaos Antonio

    2017-01-01

    The contribution of solar photovoltaic (PV) power to generation is becoming more relevant in modern power systems. Therefore, there is a need to model the variability of large-scale PV generation accurately. This paper presents a novel methodology to generate regional PV scenarios based on aggregated power curves rather than traditional physical PV conversion models. Our approach is based on hourly mesoscale reanalysis irradiation data and power measurements, and does not require additional variables such as ambient temperature or wind speed. It was used to simulate the PV generation of the German system between 2012 and 2015, showing high levels of correlation with actual measurements (93.02–97.60%) and small deviations from the expected capacity factors (0.02–1.80%). Therefore, we are confident about the ability of the proposed model to accurately generate realistic large-scale PV scenarios.
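
    A hedged sketch of the core mapping: an empirical aggregated power curve converts regional irradiation directly to normalized PV output by interpolation, with no temperature or wind input. The curve points below are invented.

```python
import numpy as np

irr_grid = np.array([0, 100, 300, 500, 700, 900, 1000.0])     # W/m^2
pw_grid = np.array([0, 0.05, 0.22, 0.42, 0.62, 0.82, 0.90])   # per-unit output (fitted offline)

def aggregated_pv(irradiation):
    # empirical aggregated power curve: irradiation -> normalized regional output
    return np.interp(irradiation, irr_grid, pw_grid)

print(aggregated_pv(np.array([250.0, 650.0])))
```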

  10. Residence time distribution measurements in a pilot-scale poison tank using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Goswami, Sunil; Samantray, J S; Sharma, V K; Maheshwari, N K

    2015-09-01

    Various types of systems are used to control the reactivity and shut down a nuclear reactor during emergency and routine shutdown operations. Injection of boron solution (borated water) into the core of a reactor is one of the commonly used methods during emergency operation. A pilot-scale poison tank was designed and fabricated to simulate injection of boron poison into the core of a reactor along with coolant water. In order to design a full-scale poison tank, it was desired to characterize the flow of liquid from the tank. Residence time distribution (RTD) measurement and analysis was adopted to characterize the flow dynamics. A radiotracer technique was applied to measure the RTD of the aqueous phase in the tank using Bromine-82 as a radiotracer. RTD measurements were carried out with two different modes of operation of the tank and at different flow rates. In Mode-1, the radiotracer was instantaneously injected at the inlet and monitored at the outlet, whereas in Mode-2, the tank was filled with radiotracer and its concentration was measured at the outlet. From the measured RTD curves, mean residence times (MRTs), dead volume and the fraction of liquid pumped in with time were determined. The treated RTD curves were modeled using suitable mathematical models. An axial dispersion model with a high degree of backmixing was found suitable to describe the flow when operated in Mode-1, whereas a tanks-in-series model with backmixing was found suitable to describe the flow of the poison in the tank when operated in Mode-2. The results were utilized to scale up and design a full-scale poison tank for a nuclear reactor.
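
    A sketch of the RTD reduction with synthetic tracer data: the mean residence time is the first moment of the outlet curve, and a tanks-in-series E(t) of the kind mentioned above is evaluated for comparison.

```python
import numpy as np
from math import factorial

t = np.linspace(0, 600, 601)                      # s, uniform 1 s grid
C = np.exp(-(t - 120.0) ** 2 / (2 * 40.0 ** 2))   # synthetic outlet tracer curve
mrt = (t * C).sum() / C.sum()                     # first moment of the RTD = MRT

N = 5                                             # tanks-in-series model order
E = t ** (N - 1) * np.exp(-N * t / mrt) / (factorial(N - 1) * (mrt / N) ** N)
print(f"MRT ~ {mrt:.1f} s; tanks-in-series E(t) integrates to {E.sum():.3f}")
```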

  11. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.
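
    A hedged sketch in the spirit of these learning models, not Vaurio's exact equations: maximum-likelihood fitting of a decaying occurrence rate lambda(T) = lam0 exp(-bT) to event times from an inhomogeneous Poisson process, with invented data.

```python
import numpy as np
from scipy.optimize import minimize

events_T = np.array([0.5, 1.1, 2.0, 2.3, 3.8, 5.0, 7.5, 9.9, 14.0])  # cumulative op-years (invented)
T_end = 20.0

def neg_log_lik(p):
    lam0, b = np.exp(p)                              # log-parameterization enforces positivity
    expected = lam0 * (1 - np.exp(-b * T_end)) / b   # integral of lambda over [0, T_end]
    return expected - np.sum(np.log(lam0) - b * events_T)

res = minimize(neg_log_lik, x0=np.log([1.0, 0.1]), method="Nelder-Mead")
print("lam0, b =", np.exp(res.x))
```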

  12. Learning-curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  13. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  14. Measurement of activated rCBF by the 133Xe inhalation technique: a comparison of total versus partial curve analysis

    International Nuclear Information System (INIS)

    Leli, D.A.; Katholi, C.R.; Hazelrig, J.B.; Falgout, J.C.; Hannay, H.J.; Wilson, E.M.; Wills, E.L.; Halsey, J.H. Jr.

    1985-01-01

    An initial assessment of the differential sensitivity of total versus partial curve analysis in estimating task-related focal changes in cortical blood flow measured by the 133Xe inhalation technique was accomplished by comparing the patterns obtained during the performance of two sensorimotor tasks by normal subjects. The validity of these patterns was evaluated by comparing them to the activation patterns expected from activation studies with the intra-arterial technique and the patterns expected from the neuropsychological research literature. Subjects were 10 young adult nonsmoking healthy male volunteers. They were administered two tasks having identical sensory and cognitive components but different response requirements (oral versus manual). The regional activation patterns produced by the tasks varied with the method of curve analysis. The activation produced by the two tasks was very similar to that predicted from the research literature only for total curve analysis. To the extent that the predictions are correct, these data suggest that the 133Xe inhalation technique is more sensitive to regional flow changes when flow parameters are estimated from the total head curve. The utility of the total head curve analysis will be strengthened if similar sensitivity is demonstrated in future studies assessing normal subjects and patients with neurological and psychiatric disorders

  15. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger

    2013-05-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.
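
    A toy version of the hub-aware idea, with an invented threshold and hash scheme: edges incident to high-degree vertices are spread across partitions, while low-degree adjacency stays owner-local.

```python
from collections import defaultdict

def partition_edges(edges, degree, n_parts, hub_threshold=1000):
    # spread hub edges across partitions; keep low-degree vertices owner-local
    parts = defaultdict(list)
    for u, v in edges:
        if degree[u] >= hub_threshold:
            parts[hash((u, v)) % n_parts].append((u, v))
        else:
            parts[hash(u) % n_parts].append((u, v))
    return parts

edges = [(0, i) for i in range(5000)] + [(1, 2), (2, 3)]
degree = {0: 5000, 1: 1, 2: 2, 3: 1}
print({k: len(v) for k, v in sorted(partition_edges(edges, degree, 4).items())})
```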

  16. Novel hybrid (magnet plus curve grasper) technique during transumbilical cholecystectomy: initial experience of a promising approach.

    Science.gov (United States)

    Millan, Carolina; Bignon, Horacion; Bellia, Gaston; Buela, Enrique; Rabinovich, Fernando; Albertal, Mariano; Martinez Ferro, Marcelo

    2013-10-01

    The use of magnets in transumbilical cholecystectomy (TUC) improves triangulation and achieves an optimal critical view. Nonetheless, the tendency of the magnets to collide hinders the process. In order to simplify the surgical technique, we developed a hybrid model with a single magnet and a curved grasper. All TUCs performed with a hybrid strategy in our pediatric population between September 2009 and July 2012 were retrospectively reviewed. Of 260 surgical procedures in which at least one magnet was used, 87 were TUCs. Of those, 62 were hybrid: 33 in adults and 29 in pediatric patients. The technique combines a magnet and a curved grasper. Through a transumbilical incision, we placed a 12-mm trocar and another flexible 5-mm trocar. The laparoscope with the working channel used the 12-mm trocar. The magnetic grasper was introduced into the abdominal cavity using the working channel to provide cephalic retraction of the gallbladder fundus. Through the flexible trocar, the assistant manipulated the curved grasper to mobilize the infundibulum. The surgeon operated through the working channel of the laparoscope. In this pediatric population, the mean age was 14 years (range, 4-17 years), and mean weight was 50 kg (range, 18-90 kg); 65% were girls. Mean operative time was 62 minutes. All procedures achieved a critical view of safety with no instrument collision. There were no intraoperative or postoperative complications. The hospital stay was 1.4±0.6 days, and the median follow-up was 201 days. A hybrid technique, combining magnets and a curved grasper, simplifies transumbilical surgery. It seems feasible and safe for TUC and potentially reproducible.

  17. Fabricating small-scale, curved, polymeric structures with convex and concave menisci through interfacial free energy equilibrium.

    Science.gov (United States)

    Cheng, Chao-Min; Matsuura, Koji; Wang, I-Jan; Kuroda, Yuka; LeDuc, Philip R; Naruse, Keiji

    2009-11-21

    Polymeric curved structures are widely used in imaging systems including optical fibers and microfluidic channels. Here, we demonstrate that small-scale, poly(dimethylsiloxane) (PDMS)-based, curved structures can be fabricated through controlling interfacial free energy equilibrium. Resultant structures have a smooth, symmetric, curved surface, and may be convex or concave in form based on surface tension balance. Their curvatures are controlled by surface characteristics (i.e., hydrophobicity and hydrophilicity) of the molds and semi-liquid PDMS. In addition, these structures are shown to be biocompatible for cell culture. Our system provides a simple, efficient and economical method for generating integrateable optical components without costly fabrication facilities.

  18. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2013-01-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash

  19. Renormalization and scaling behavior of non-Abelian gauge fields in curved spacetime

    International Nuclear Information System (INIS)

    Leen, T.K.

    1983-01-01

    In this article we discuss the one-loop renormalization and scaling behavior of non-Abelian gauge field theories in a general curved spacetime. A generating functional is constructed which forms the basis for both the perturbation expansion and the Ward identities. Local momentum space representations for the vector and ghost particles are developed and used to extract the divergent parts of Feynman integrals. The one-loop diagrams for the ghost propagator and the vector-ghost vertex are shown to have no divergences not present in Minkowski space. The Ward identities ensure that this is true for the vector propagator as well. It is shown that the above renormalizations render the three- and four-vector vertices finite. Finally, a renormalization group equation valid in curved spacetimes is derived. Its solution is given and the theory is shown to be asymptotically free as in Minkowski space.

  20. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  1. Lightweight and Statistical Techniques for Petascale Debugging

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger, or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which had already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large-scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale, and substantial effort and computation cycles are wasted either in reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduce the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root cause.
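
    A toy version of the stack-trace equivalence-classing idea behind STAT (not its implementation): per-task traces are grouped so a developer can attach a traditional debugger to one exemplar per class. Traces are invented.

```python
from collections import defaultdict

# hypothetical per-task stack traces (task id -> tuple of frames)
traces = {0: ("main", "solve", "mpi_wait"), 1: ("main", "solve", "mpi_wait"),
          2: ("main", "io_flush"), 3: ("main", "solve", "mpi_wait")}

classes = defaultdict(list)
for task, trace in traces.items():
    classes[trace].append(task)          # group tasks with identical traces

for trace, tasks in classes.items():
    print(" -> ".join(trace), ": tasks", tasks)
```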

  2. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Directory of Open Access Journals (Sweden)

    Gaetano Luglio

    2015-06-01

    Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon.

  3. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    International Nuclear Information System (INIS)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie; Huang, De Sheng

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled, planned to undergo thrombolysis, and accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was assessed via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference in ACF (t = 3.573, p = 0.001). Additionally, the difference in ACF between the left and right approaches was statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  4. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie [The First Hospital of China Medical University, Shenyang (China); Huang, De Sheng [College of Basic Medical Science, China Medical University, Shenyang (China)

    2012-07-15

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled, planned to undergo thrombolysis, and accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was assessed via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference in ACF (t = 3.573, p = 0.001). Additionally, the difference in ACF between the left and right approaches was statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  5. Spatial reflection patterns of iridescent wings of male pierid butterflies: curved scales reflect at a wider angle than flat scales.

    Science.gov (United States)

    Pirih, Primož; Wilts, Bodo D; Stavenga, Doekele G

    2011-10-01

    The males of many pierid butterflies have iridescent wings, which presumably function in intraspecific communication. The iridescence is due to nanostructured ridges of the cover scales. We have studied the iridescence in the males of a few members of Coliadinae, Gonepteryx aspasia, G. cleopatra, G. rhamni, and Colias croceus, and in two members of the Colotis group, Hebomoia glaucippe and Colotis regina. Imaging scatterometry demonstrated that the pigmentary colouration is diffuse whereas the structural colouration creates a directional, line-shaped far-field radiation pattern. Angle-dependent reflectance measurements demonstrated that the directional iridescence distinctly varies among closely related species. The species-dependent scale curvature determines the spatial properties of the wing iridescence. Narrow beam illumination of flat scales results in a narrow far-field iridescence pattern, but curved scales produce broadened patterns. The restricted spatial visibility of iridescence presumably plays a role in intraspecific signalling.

  6. Spotted star light curve numerical modeling technique and its application to HII 1883 surface imaging

    Science.gov (United States)

    Kolbin, A. I.; Shimansky, V. V.

    2014-04-01

    We developed a code for imaging the surfaces of spotted stars using a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and the overlapping of spots. Modeling of the light curves incorporates recent results from the theory of stellar atmospheres in order to account for the temperature dependence of the flux intensity and the limb-darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
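
    As an illustration of the flux-summation step described above, the following Python sketch partitions a stellar surface into elementary areas, applies linear limb darkening, and sums the projected contributions for one circular spot. The spot contrast, limb-darkening coefficient, and grid resolution are illustrative assumptions, not the parameters of the cited code.

```python
import numpy as np

def spot_light_curve(phases, spot_lon, spot_lat, spot_radius,
                     flux_ratio=0.3, u_limb=0.6, n_grid=200):
    """Relative flux of a rotating star with one circular spot.

    The visible hemisphere is partitioned into elementary areas; each
    contributes local intensity (linear limb darkening) times its
    projected area.  flux_ratio is the spot/photosphere intensity
    ratio (illustrative value, not fitted to any star).
    """
    # surface grid in colatitude theta and longitude phi
    theta = np.linspace(0.0, np.pi, n_grid)
    phi = np.linspace(-np.pi, np.pi, 2 * n_grid)
    th, ph = np.meshgrid(theta, phi, indexing="ij")
    d_area = np.sin(th) * (theta[1] - theta[0]) * (phi[1] - phi[0])

    fluxes = []
    for rot in phases:                      # rotation phase in radians
        # mu = cosine of angle between surface normal and the observer,
        # who sits in the equatorial plane for simplicity
        mu = np.sin(th) * np.cos(ph - rot)
        visible = mu > 0.0                  # spots pass behind the limb
        limb = 1.0 - u_limb * (1.0 - mu)    # linear limb darkening
        # angular distance from the spot centre decides spot membership
        cos_d = (np.sin(th) * np.cos(spot_lat) * np.cos(ph - spot_lon)
                 + np.cos(th) * np.sin(spot_lat))
        intensity = np.where(np.arccos(np.clip(cos_d, -1, 1)) < spot_radius,
                             flux_ratio, 1.0)
        fluxes.append(np.sum(intensity * limb * mu * d_area * visible))
    fluxes = np.array(fluxes)
    return fluxes / fluxes.max()

lc = spot_light_curve(np.linspace(0, 2 * np.pi, 100),
                      spot_lon=0.0, spot_lat=0.3, spot_radius=0.4)
```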

  7. Experiments with conjugate gradient algorithms for homotopy curve tracking

    Science.gov (United States)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
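
    A minimal sketch of the homotopy-tracking idea, assuming the simple fixed-point homotopy H(x, λ) = λF(x) + (1 − λ)(x − x0). HOMPACK's sparse routines solve the inner linear systems with preconditioned conjugate gradients; a dense solve stands in for that kernel here.

```python
import numpy as np

def track_homotopy(F, J, x0, steps=100, newton_iters=5):
    """Trace the zero curve of H(x, lam) = lam*F(x) + (1-lam)*(x - x0).

    A minimal predictor-corrector tracker: lam is stepped from 0 to 1
    and a few Newton corrections are applied at each step.  The dense
    linear solve below is the step HOMPACK performs with a
    (preconditioned) conjugate gradient method for sparse Jacobians.
    """
    x = x0.copy()
    for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            H = lam * F(x) + (1.0 - lam) * (x - x0)
            dH = lam * J(x) + (1.0 - lam) * np.eye(len(x))
            x -= np.linalg.solve(dH, H)   # the linear-algebra kernel
    return x

# example: a zero of a small nonlinear system
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] - x[1]**2 + 1.0])
J = lambda x: np.array([[2*x[0], 1.0], [1.0, -2*x[1]]])
root = track_homotopy(F, J, x0=np.array([0.5, 0.5]))
```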

  8. Establishment of 60Co dose calibration curve using fluorescent in situ hybridization assay technique: Result of preliminary study

    International Nuclear Information System (INIS)

    Rahimah Abdul Rahim; Noriah Jamal; Noraisyah Mohd Yusof; Juliana Mahamad Napiah; Nelly Bo Nai Lee

    2010-01-01

    This study aims at establishing an in-vitro 60Co dose calibration curve using the fluorescence in situ hybridization (FISH) assay technique for the Malaysian National Biodosimetry Laboratory. Blood samples collected from a healthy female donor were irradiated with several doses of 60Co radiation. Following culturing of the lymphocytes, microscope slides were prepared, denatured and hybridized. The frequencies of translocations were estimated in the metaphases. A calibration curve was then generated using a regression technique; it shows a good fit to a linear-quadratic model. The results of this study might be useful in retrospectively estimating the absorbed dose for an individual exposed to ionizing radiation. This information may serve as a guide for medical treatment in the assessment of possible health consequences. (author)
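
    The linear-quadratic fit mentioned above can be reproduced in a few lines of Python; the dose points and translocation yields below are made-up placeholders, not the laboratory's data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative translocation yields per cell at several 60Co doses
dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])        # Gy
yields = np.array([0.001, 0.01, 0.02, 0.06, 0.19, 0.38, 0.64])

def linear_quadratic(D, c, alpha, beta):
    """Y = c + alpha*D + beta*D**2, the standard cytogenetic model."""
    return c + alpha * D + beta * D**2

(c, alpha, beta), cov = curve_fit(linear_quadratic, dose, yields)

# invert the curve to estimate an unknown absorbed dose from a
# measured translocation frequency (take the positive root)
y_obs = 0.10
D_est = (-alpha + np.sqrt(alpha**2 - 4*beta*(c - y_obs))) / (2*beta)
```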

  9. The learning curve of the three-port two-instrument complete thoracoscopic lobectomy for lung cancer—A feasible technique worthy of popularization

    Directory of Open Access Journals (Sweden)

    Yu-Jen Cheng

    2015-07-01

    Conclusion: Three-port complete thoracoscopic lobectomy with the two-instrument technique is feasible for lung cancer treatment. The learning curve comprised 28 cases. This TPTI technique should be popularized.

  10. Computational aspects of algebraic curves

    CERN Document Server

    Shaska, Tanush

    2005-01-01

    The development of new computational techniques and better computing power has made it possible to attack some classical problems of algebraic geometry. The main goal of this book is to highlight such computational techniques related to algebraic curves. The area of research in algebraic curves is receiving more interest not only from the mathematics community, but also from engineers and computer scientists, because of the importance of algebraic curves in applications including cryptography, coding theory, error-correcting codes, digital imaging, computer vision, and many more.

  11. Reflection curves—new computation and rendering techniques

    Directory of Open Access Journals (Sweden)

    Dan-Eugen Ulmet

    2004-05-01

    Reflection curves on surfaces are important tools for free-form surface interrogation. They are essential for industrial 3D CAD/CAM systems and for rendering purposes. In this note, new approaches regarding the computation and rendering of reflection curves on surfaces are introduced. These approaches are designed to take advantage of the graphics libraries of recent releases of commercial systems such as the OpenInventor toolkit (developed by Silicon Graphics) or Matlab (developed by The MathWorks). A new relation between reflection curves and contour curves is derived; this theoretical result is used for a straightforward Matlab implementation of reflection curves. A new type of reflection curves is also generated using the OpenInventor texture and environment mapping implementations. This allows the computation, rendering, and animation of reflection curves at interactive rates, which makes it particularly useful for industrial applications.

  12. Characterization of KS-material by means of J-R-curves especially using the partial unloading technique

    International Nuclear Information System (INIS)

    Voss, B.; Blauel, J.G.; Schmitt, W.

    1983-01-01

    Essential components of nuclear reactor systems are fabricated from materials of high toughness to exclude brittle failure. With increasing load, a crack tip will blunt, a plastic zone will form, and voids may nucleate and coalesce, thus initiating stable crack extension when the crack driving parameter, e.g. J, exceeds the initiation value J_i. Further stable crack growth will occur with further increasing J prior to complete failure of the structure. The specific material resistance against crack extension is characterized by J resistance curves J_R = J(Δa). ASTM provides a standard to determine the initiation toughness J_Ic from a J_R-curve [1] and a tentative standard for determining the J_R-curve by a single specimen test [2]. To generate a J_R-curve, values for the crack driving parameter J and the corresponding stable crack growth Δa have to be measured. Besides the multiple specimen technique [1], the potential drop method and especially the partial unloading compliance method [2] are used to measure stable crack growth. Some special problems and some results for pressure vessel steels are discussed in this paper. (orig./RW)

  13. Introducer curving technique for the prevention of tilting of transfemoral Günther Tulip inferior vena cava filter.

    Science.gov (United States)

    Xiao, Liang; Huang, De-sheng; Shen, Jing; Tong, Jia-jie

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The adherence of the retrieval hook to the vascular wall was assessed via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10°) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  14. A simple transformation for converting CW-OSL curves to LM-OSL curves

    DEFF Research Database (Denmark)

    Bulur, E.

    2000-01-01

    A simple mathematical transformation is introduced to convert from OSL decay curves obtained in the conventional way to those obtained using a linear modulation technique, based on a linear increase of the stimulation light intensity during the OSL measurement. The validity of the transformation was tested using the IR-stimulated luminescence curves from feldspars, recorded using both the conventional and the linear modulation techniques. The transformation was further applied to green-light-stimulated OSL from K and Na feldspars.
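
    A sketch of the transformation, assuming the standard pseudo-LM-OSL substitution u = √(2tP) with intensity weight u/P; the choice of ramp duration P is a free parameter, taken here to default to the measurement time.

```python
import numpy as np

def cw_to_lm(t, i_cw, P=None):
    """Pseudo-LM-OSL transform of a CW-OSL decay curve.

    Substituting u = sqrt(2*t*P) and weighting the intensity by u/P
    maps a continuous-wave decay onto the peak shape that a linear
    ramp of duration P would give.
    """
    if P is None:
        P = t[-1]
    u = np.sqrt(2.0 * t * P)        # transformed time axis
    i_lm = i_cw * u / P             # transformed intensity
    return u, i_lm

# a first-order CW decay turns into the expected LM-OSL peak
t = np.linspace(1e-3, 100.0, 2000)
i_cw = np.exp(-0.1 * t)
u, i_lm = cw_to_lm(t, i_cw)
```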

  15. Theoretical foundations for environmental Kuznets curve analysis

    Science.gov (United States)

    Lantz, Van

    This thesis provides a dynamic theory for analyzing the paths of aggregate output and pollution in a country over time. An infinite horizon, competitive growth-pollution model is explored in order to determine the role that economic scale, production techniques, and pollution regulations play in explaining the inverted U-shaped relationship between output and some forms of pollution (otherwise known as the Environmental Kuznets Curve, or EKC). Results indicate that the output-pollution relationship may follow a strictly increasing, strictly decreasing (but bounded), inverted U-shaped, or some combination of curves. While the 'scale' effect may cause output and pollution to exhibit a monotonic relationship, 'technique' and 'regulation' effects may ultimately cause a de-linking of these two variables. Pollution-minimizing energy regulation policies are also investigated within this framework. It is found that the EKC may be 'flattened' or even eliminated moving from a poorly-regulated economy to one that minimizes pollution. The model is calibrated to the US economy for output (gross national product, GNP) and two pollutants (sulfur dioxide, SO2, and carbon dioxide, CO2) over the period 1900 to 1990. Results indicate that the model replicates the observations quite well. The predominance of 'scale' effects causes aggregate SO2 and CO2 levels to increase with GNP in the early stages of development. Then, in the case of SO2, 'technique' and 'regulation' effects may be the cause of falling SO2 levels with continued economic growth (establishing the EKC). CO2 continues to increase monotonically as output levels increase over time; this positive relationship may be due to the lack of regulations on this pollutant. If stricter regulation policies were instituted in the two case studies, an improved allocation of resources may result, although GNP may be 2.5% to 20% lower than what has been realized in the US economy (depending on the pollution variable analyzed).

  16. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method and a Dynamic Kriging Method is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For a sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver.
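
    As a small illustration of the convergence study described above, the sketch below trains a radial-basis-function metamodel on increasing numbers of points sampled from a cheap stand-in function and reports the reconstruction error; the test function and sample sizes are assumptions for demonstration, not those of the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def f(x):                   # stand-in for an expensive meso-scale model
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])

x_test = rng.uniform(0, 1, (2000, 2))
for n_train in (25, 50, 100, 200, 400):
    x_train = rng.uniform(0, 1, (n_train, 2))
    surrogate = RBFInterpolator(x_train, f(x_train))   # RBF metamodel
    rmse = np.sqrt(np.mean((surrogate(x_test) - f(x_test)) ** 2))
    print(f"{n_train:4d} training points  RMSE = {rmse:.3e}")
```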

  17. Optimum conditions for the determination of ionization potentials, appearance potentials and fine structure in ionization efficiency curves using the EDD technique

    International Nuclear Information System (INIS)

    Selim, Ezzat T.; El-Kholy, S.B.; Zahran, Nagwa F.

    1978-01-01

    The optimum conditions for determining ionization potentials as well as fine structure in electron impact ionization efficiency curves are studied using the energy distribution difference (EDD) technique. Applying these conditions to Ar⁺, Kr⁺, CO₂⁺ and N⁺ from N₂, very good agreement is obtained when compared with results determined by other techniques, including UV spectroscopy. The merits and limitations of the technique are also discussed.

  18. Segmentation of mitochondria in electron microscopy images using algebraic curves.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.
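
    A minimal sketch of the classification stage, assuming simple regional statistics as patch features; the algebraic-curve shape features of the paper would be computed separately and appended to the feature vector.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    """Cheap regional statistics for one image patch; the paper's
    algebraic-curve shape features would be appended here."""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(),
            np.abs(gx).mean(), np.abs(gy).mean()]

def train_patch_classifier(patches, labels):
    """labels: 1 = mitochondrion present in the patch, 0 = background."""
    X = np.array([patch_features(p) for p in patches])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X, labels)
    return clf
```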

  19. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    Science.gov (United States)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  20. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    International Nuclear Information System (INIS)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-01-01

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  1. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population.
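
    Under the near-normality assumption stated above, percentile curves reduce to smooth mean and SD curves plus normal quantiles. The sketch below uses polynomial smoothing as a stand-in for the paper's radial-smoothing basis; the degree and percentile choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

def percentile_curves(age, y, percentiles=(3, 50, 97), degree=4):
    """Smooth age-specific percentiles under approximate normality:
    fit smooth mean and SD curves over age, then read off
    mu(age) + z_p * sigma(age) for each requested percentile."""
    mean_fit = np.polynomial.Polynomial.fit(age, y, degree)
    resid = y - mean_fit(age)
    # E|X - mu| = sigma * sqrt(2/pi), so rescale absolute residuals
    sd_fit = np.polynomial.Polynomial.fit(
        age, np.abs(resid) * np.sqrt(np.pi / 2), degree)
    grid = np.linspace(age.min(), age.max(), 200)
    return grid, {p: mean_fit(grid) + norm.ppf(p / 100) * sd_fit(grid)
                  for p in percentiles}
```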

  2. Effect of Non Submerged Vanes on Separation Zone at Strongly-curved Channel Bends, a Laboratory Scale Study

    Directory of Open Access Journals (Sweden)

    Ali Akbar Akhtari

    2010-03-01

    Bends along open channels always pose difficulties for water transfer systems. One undesirable effect of bends in such channels, i.e. separation of flow from the inner banks, was studied. For the purposes of this study, the literature on the subject was first reviewed, and a strongly-curved open channel was designed and constructed on the laboratory scale. Several tests were performed to evaluate the accuracy of the lab model, data homogeneity, and systematic errors. The model was then calibrated and the influence of curvature on the flow pattern past the curve was investigated. Also, for the first time, the influence of separation walls on the flow pattern was investigated. Experimental results on three strongly-curved open channels with a curvature radius to channel width ratio of 1.5 and curvature angles of 30°, 60°, and 90° showed that, in all the cases studied, the effect of flow separation could be observed immediately after the curve. In addition, the greatest effect of flow separation was seen at a distance equal to the channel width from the bend end. In the presence of middle walls, the extent of flow separation at the bend was reduced, especially for a curvature of 90°.

  3. Femtosecond laser-assisted cataract surgery with bimanual technique: learning curve for an experienced cataract surgeon.

    Science.gov (United States)

    Cavallini, Gian Maria; Verdina, Tommaso; De Maria, Michele; Fornasari, Elisa; Volpini, Elisa; Campi, Luca

    2017-11-29

    To describe the intraoperative complications and the learning curve of microincision cataract surgery assisted by femtosecond laser (FLACS) with the bimanual technique performed by an experienced surgeon. It is a prospective, observational, comparative case series. A total of 120 eyes which underwent bimanual FLACS by the same experienced surgeon during his first experience were included in the study; we considered the first 60 cases as Group A and the second 60 cases as Group B. In both groups, only nuclear sclerosis of grade 2 or 3 was included; an intraocular lens was implanted through a 1.4-mm incision. Best-corrected visual acuity (BCVA), surgically induced astigmatism (SIA), central corneal thickness and endothelial cell loss (ECL) were evaluated before and at 1 and 3 months after surgery. Intraoperative parameters and intra- and post-operative complications were recorded. In Group A, we had femtosecond laser-related minor complications in 11 cases (18.3%) and post-operative complications in 2 cases (3.3%); in Group B, we recorded 2 cases (3.3%) of femtosecond laser-related minor complications with no post-operative complications. Mean effective phaco time (EPT) was 5.32 ± 3.68 s in Group A and 4.34 ± 2.39 s in Group B, a significant difference (p = 0.046). We recorded a significant mean BCVA improvement at 3 months in both groups (p < 0.05). Finally, we found significant ECL in both groups, with a significant difference between the two groups (p = 0.042). FLACS with the bimanual technique and the low-energy LDV Z8 is associated with a necessary initial learning curve. After the first adjustments in the surgical technique, this technology seems to be safe and effective with rapid visual recovery, and it helps surgeons to standardize the crucial steps of cataract surgery.

  4. Glycation and secondary conformational changes of human serum albumin: study of the FTIR spectroscopic curve-fitting technique

    Directory of Open Access Journals (Sweden)

    Yu-Ting Huang

    2016-05-01

    The aim of this study was to investigate both the glycation kinetics and the secondary conformational changes of human serum albumin (HSA) after reaction with ribose. Browning and fluorescence determinations, as well as Fourier transform infrared (FTIR) microspectroscopy with a curve-fitting technique, were applied. Various concentrations of ribose were incubated over a 12-week period at 37 ± 0.5 °C under dark conditions. The results clearly show that glycation in the HSA-ribose reaction mixtures increased markedly with the amount of ribose used and the incubation time, leading to marked alterations of the protein conformation of HSA as determined by FTIR. In addition, the reaction solutions darkened from light to deep brown, as determined by optical observation. The increase in fluorescence intensity from HSA-ribose mixtures seemed to occur more quickly than browning, suggesting that the fluorescent products were produced earlier in the process than the compounds causing browning. Moreover, the predominant α-helical composition of HSA decreased with an increase in ribose concentration and incubation time, whereas the total β-structure and random coil composition increased, as determined by curve-fitted FTIR microspectroscopy analysis. We also found that the peak intensity ratio at 1044 cm−1/1542 cm−1 markedly decreased prior to 4 weeks of incubation, then almost plateaued, implying that the consumption of ribose in the glycation reaction might have been accelerated over the first 4 weeks of incubation and gradually decreased thereafter. This study provides the first evidence that two unique IR peaks at 1710 cm−1 [carbonyl groups of irreversible products produced by the reaction and deposition of advanced glycation end products (AGEs)] and 1621 cm−1 (aggregated HSA molecules) were clearly observed in the curve-fitted FTIR spectra of HSA-ribose mixtures over the course of the incubation time.
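
    The curve-fitting step of such an FTIR analysis is commonly a decomposition of the amide I band into component peaks. A minimal sketch, assuming Gaussian band shapes and illustrative starting centres taken from standard assignment tables rather than from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *params):
    """Sum of Gaussian components: params = (amp, centre, width) * n."""
    y = np.zeros_like(x)
    for amp, cen, wid in zip(*[iter(params)] * 3):
        y += amp * np.exp(-((x - cen) / wid) ** 2)
    return y

def fit_amide_bands(wavenumber, absorbance, centres):
    """Fit one Gaussian per expected secondary-structure band, e.g.
    centres near 1655 (alpha-helix) and 1630 (beta-sheet) cm^-1."""
    p0 = []
    for c in centres:
        p0 += [absorbance.max(), c, 8.0]   # amp, centre, width guesses
    popt, _ = curve_fit(gaussians, wavenumber, absorbance, p0=p0)
    # area of amp*exp(-((x-c)/w)^2) is amp*w*sqrt(pi)
    areas = [popt[i] * popt[i + 2] * np.sqrt(np.pi)
             for i in range(0, len(popt), 3)]
    return popt, np.array(areas) / sum(areas)   # fractional compositions
```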

  5. Decomposing the trade-environment nexus for Malaysia: what do the technique, scale, composition, and comparative advantage effect indicate?

    Science.gov (United States)

    Ling, Chong Hui; Ahmed, Khalid; Binti Muhamad, Rusnah; Shahbaz, Muhammad

    2015-12-01

    This paper investigates the impact of trade openness on CO2 emissions using time series data over the period 1970QI-2011QIV for Malaysia. We disintegrate the trade effect into scale, technique, composition, and comparative advantage effects to check the environmental consequence of trade at four different transition points. To achieve this purpose, we employed the augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests in order to examine the stationarity properties of the variables. The long-run association among the variables is then examined by applying the autoregressive distributed lag (ARDL) bounds testing approach to cointegration. Our results confirm the presence of cointegration. Further, we find that the scale effect has a positive and the technique effect a negative impact on CO2 emissions after a threshold income level, forming an inverted U-shaped relationship and hence validating the environmental Kuznets curve hypothesis. Energy consumption adds to CO2 emissions. Trade openness and the composition effect improve environmental quality by lowering CO2 emissions. The comparative advantage effect increases CO2 emissions and impairs environmental quality. The results provide an innovative approach to examining the impact of trade openness across four sub-dimensions of trade liberalization. Hence, this study offers a more comprehensive policy tool for trade economists to better design environmentally sustainable trade rules and agreements.

  6. Quantum fields in curved space

    International Nuclear Information System (INIS)

    Birrell, N.D.; Davies, P.C.W.

    1982-01-01

    The book presents a comprehensive review of the subject of gravitational effects in quantum field theory. Quantum field theory in Minkowski space, quantum field theory in curved spacetime, flat spacetime examples, curved spacetime examples, stress-tensor renormalization, applications of renormalization techniques, quantum black holes and interacting fields are all discussed in detail. (U.K.)

  7. A comparison of two different techniques for deriving the quiet day curve from SARINET riometer data

    Directory of Open Access Journals (Sweden)

    J. Moro

    2012-08-01

    In this work, an upgrade of the technique for estimating the Quiet Day Curve (QDC) as proposed by Tanaka et al. (2007) is suggested. To validate our approach, the QDC is estimated from data acquired by the Imaging Riometer for Ionospheric Studies (IRIS) installed at the Southern Space Observatory (SSO/CRS/CCR/INPE – MCT, 29°4´ S, 53°8´ W, 480 m a.s.l., São Martinho da Serra, Brazil). The evaluation was performed by comparing the QDCs derived using our upgraded technique with those obtained using the technique proposed by Tanaka et al. (2007). The results are discussed in terms of seasonal variability and the level of magnetic disturbance. Cosmic noise absorption (CNA) images for the IRIS data from SSO were also built using both techniques, in order to check the implications of the change in QDC determination method for the resulting CNA.

  8. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes.

  9. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.
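
    A minimal sketch of two of the filters compared above, assuming a cyclic (one-beat) time-activity curve sampled at uniform intervals: a Fourier filter that retains the mean plus the first few harmonics, and a wrap-around moving average for three- or five-point time smoothing.

```python
import numpy as np

def fourier_filter(curve, n_harmonics=3):
    """Keep the mean plus the first n harmonics of a cyclic
    time-activity curve (the 2H-3H choice favoured in the study)."""
    spec = np.fft.rfft(curve)
    spec[n_harmonics + 1:] = 0.0           # zero everything above nH
    return np.fft.irfft(spec, n=len(curve))

def moving_average(curve, window=5):
    """Three- or five-point time smoothing (window assumed odd),
    wrapping around the cycle so the end points are also smoothed."""
    kernel = np.ones(window) / window
    padded = np.concatenate([curve[-(window // 2):],
                             curve, curve[:window // 2]])
    return np.convolve(padded, kernel, mode="valid")
```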

  10. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)

  11. Mathematical analysis of the dimensional scaling technique for the Schroedinger equation with power-law potentials

    International Nuclear Information System (INIS)

    Ding Zhonghai; Chen, Goong; Lin, Chang-Shou

    2010-01-01

    The dimensional scaling (D-scaling) technique is an innovative asymptotic expansion approach to studying multiparticle systems in molecular quantum mechanics. It enables the calculation of ground and excited state energies of quantum systems without having to solve the Schroedinger equation. In this paper, we present a mathematical analysis of the D-scaling technique for the Schroedinger equation with power-law potentials. By casting the D-scaling technique in an appropriate variational setting and studying the corresponding minimization problem, the technique is justified rigorously. A new asymptotic dimensional expansion scheme is introduced to compute asymptotic expansions for ground state energies.

  12. The genus curve of the Abell clusters

    Science.gov (United States)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43/−0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with Ωh = 0.36 (+0.46/−0.17).

  13. The genus curve of the Abell clusters

    Science.gov (United States)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43/−0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with Ωh = 0.36 (+0.46/−0.17).

  14. Phonon dispersion curves for CsCN

    International Nuclear Information System (INIS)

    Gaur, N.K.; Singh, Preeti; Rini, E.G.; Galgale, Jyostna; Singh, R.K.

    2004-01-01

    The motivation for the present work came from the recent publication of phonon dispersion curves (PDCs) of CsCN measured with the neutron scattering technique. We have applied the extended three-body force shell model (ETSM), incorporating the effect of coupling between the translational modes and the orientations of the cyanide molecules, to describe the phonon dispersion curves of CsCN between 195 and 295 K. Our results for the PDCs along the symmetry directions are in good agreement with the experimental data measured with the inelastic neutron scattering technique. (author)

  15. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint) of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T) from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min) and for one of the two sub-basins tested (0.16 m2/min). On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.
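
    As a building block of such recession analyses, the sketch below fits an exponential recession limb to obtain the recession constant; the full recession-curve-displacement (Rorabaugh) method additionally compares recessions before and after a recharge event to infer T, which is not reproduced here. The streamflow numbers are illustrative.

```python
import numpy as np

def recession_constant(q):
    """Fit Q(t) = Q0 * exp(-t/tau) to one streamflow recession limb
    by linear regression on log-discharge; returns (tau, Q0)."""
    t = np.arange(len(q), dtype=float)       # days since recession start
    slope, intercept = np.polyfit(t, np.log(q), 1)
    return -1.0 / slope, np.exp(intercept)   # tau in days, Q0 in flow units

tau, q0 = recession_constant(np.array([12.0, 9.8, 8.1, 6.7, 5.5, 4.6]))
```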

  16. A comparison of two centrifuge techniques for constructing vulnerability curves: insight into the 'open-vessel' artifact.

    Science.gov (United States)

    Yin, Pengxian; Meng, Feng; Liu, Qing; An, Rui; Cai, Jing; Du, Guangyuan

    2018-03-30

    A vulnerability curve (VC) describes the extent of xylem cavitation resistance. Centrifuges have been used to generate VCs for decades via static- and flow-centrifuge methods. Recently, the validity of the centrifuge techniques has been questioned. Researchers have hypothesized that the centrifuge techniques might yield unreliable VCs due to the open-vessel artifact. However, other researchers reject this hypothesis. The dispute centres on whether exponential VCs are more reliable when the static-centrifuge method is used than with the flow-centrifuge method. To further test the reliability of the centrifuge technique, two centrifuges were manufactured to simulate the static- and flow-centrifuge methods. VCs of three species with open vessels of known lengths were constructed using the two centrifuges. The results showed that both centrifuge techniques produced invalid VCs for Robinia because the water flow through stems under mild tension in the centrifuges led to an increasing loss of water conductivity. Additionally, the injection of water in the flow-centrifuge exacerbated the loss of water conductivity. However, both centrifuge techniques yielded reliable VCs for Prunus, regardless of the presence of open vessels in the tested samples. We conclude that centrifuge techniques can be used in species with open vessels only when the centrifuge produces a VC that matches the bench-dehydration VC.

  17. String Sigma Models on Curved Supermanifolds

    Directory of Open Access Journals (Sweden)

    Roberto Catenacci

    2018-04-01

    We use the techniques of integral forms to analyze the simplest example of two-dimensional sigma models on a supermanifold. We write the action as an integral of a top integral form over a D = 2 supermanifold, and we show how to interpolate between different superspace actions. Then, we consider curved supermanifolds, and we show that the definitions used for flat supermanifolds can also be used for curved supermanifolds. We prove this by first considering the case of a curved rigid supermanifold and then the case of a generic curved supermanifold described by a single superfield E.

  18. Phonon transport across nano-scale curved thin films

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, Saad B.; Yilbas, Bekir S., E-mail: bsyilbas@kfupm.edu.sa

    2016-12-15

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation of radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on the phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature, in which case the phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature exhibits a non-symmetric distribution along the radial direction, which is more pronounced near the high-temperature edge.

  19. Phonon transport across nano-scale curved thin films

    International Nuclear Information System (INIS)

    Mansoor, Saad B.; Yilbas, Bekir S.

    2016-01-01

    Phonon transport across a curved thin silicon film due to a temperature disturbance at the film edges is examined. The equation of radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on the phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature, in which case the phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature exhibits a non-symmetric distribution along the radial direction, which is more pronounced near the high-temperature edge.

  20. The composition-explicit distillation curve technique: Relating chemical analysis and physical properties of complex fluids.

    Science.gov (United States)

    Bruno, Thomas J; Ott, Lisa S; Lovestead, Tara M; Huber, Marcia L

    2010-04-16

    The analysis of complex fluids such as crude oils, fuels, vegetable oils and mixed waste streams poses significant challenges arising primarily from the multiplicity of components, the different properties of the components (polarity, polarizability, etc.) and matrix properties. We have recently introduced an analytical strategy that simplifies many of these analyses, and provides the added potential of linking compositional information with physical property information. This aspect can be used to facilitate equation of state development for the complex fluids. In addition to chemical characterization, the approach provides the ability to calculate thermodynamic properties for such complex heterogeneous streams. The technique is based on the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. The analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. By far, the most widely used analytical technique we have used with the ADC is gas chromatography. This has enabled us to study finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this special issue of the Journal of Chromatography, specifically dedicated to extraction technologies, we describe the essential features of the advanced distillation curve metrology as an analytical strategy for complex fluids.

  1. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural networks based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from the CRTS survey and show how this opens doors for a quick classification of diverse classes.
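
    A minimal sketch of a convolutional classifier for two-dimensional light-curve representations (for example dmdt-style magnitude-difference vs. time-difference histograms); the layer sizes, image size, and class count are illustrative assumptions, not the architecture of the paper.

```python
import torch
import torch.nn as nn

class LightCurveCNN(nn.Module):
    """Small CNN over 32x32 single-channel light-curve images."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # two 2x poolings reduce 32x32 to 8x8 with 32 channels
        self.classifier = nn.Linear(32 * 8 * 8, n_classes)

    def forward(self, x):              # x: (batch, 1, 32, 32)
        h = self.features(x)
        return self.classifier(h.flatten(1))

model = LightCurveCNN(n_classes=5)
logits = model(torch.randn(4, 1, 32, 32))   # dummy batch
```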

  2. Experimental Assessment on the Hysteretic Behavior of a Full-Scale Traditional Chinese Timber Structure Using a Synchronous Loading Technique

    Directory of Open Access Journals (Sweden)

    XiWang Shi

    2018-01-01

    In traditional Chinese timber structures, few tie beams were used between columns, and the column base was placed directly on a stone base. In order to study the hysteretic behavior of such structures, a full-scale model was established. The model size was determined according to the requirements of an eighth-grade material system specified in the architectural treatise Ying-zao-fa-shi written during the Song Dynasty. In light of the vertical lift and drop of the test model during horizontal reciprocating motion, the horizontal low-cycle reciprocating loading experiments were conducted using a synchronous loading technique. By analyzing the load-displacement hysteresis curves, envelope curves, deformation capacity, energy dissipation, and change in stiffness under different vertical loads, it is found that the timber frame exhibits obvious signs of self-restoring and favorable plastic deformation capacity. As the horizontal displacement increases, the equivalent viscous damping coefficient generally declines first and then increases. At the same time, the stiffness degrades rapidly at first and then decreases slowly. Increasing the vertical load will improve the deformation and energy-dissipation capacity and the stiffness of the timber frame.
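
    The equivalent viscous damping coefficient reported above is conventionally computed from the area of a closed load-displacement loop. A minimal sketch, assuming one digitized cycle ordered around the loop and the standard definition ξ_eq = E_D/(4πE_S):

```python
import numpy as np

def equivalent_viscous_damping(disp, force):
    """Equivalent viscous damping ratio of one closed hysteresis loop:
    xi_eq = E_D / (4*pi*E_S), with E_D the loop area (dissipated
    energy, via the shoelace formula) and E_S the strain energy at
    peak response.  Points must be ordered around the cycle."""
    x, f = np.asarray(disp), np.asarray(force)
    e_d = 0.5 * abs(np.sum(x * np.roll(f, -1) - f * np.roll(x, -1)))
    e_s = 0.5 * x.max() * f.max()      # elastic energy at peak response
    return e_d / (4.0 * np.pi * e_s)
```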

  3. Multilayer Strip Dipole Antenna Using Stacking Technique and Its Application for Curved Surface

    Directory of Open Access Journals (Sweden)

    Charinsak Saetiaw

    2013-01-01

    This paper presents the design of a multilayer strip dipole antenna made by stacking flexible copper-clad laminates, intended for mounting on the curved surfaces of cylindrical objects. The designed antenna reduces the effects of curving because the relative length change differs in each stacked flexible copper-clad laminate layer. Since each layer curves differently, the resonance frequency of the resulting antenna is more stable than that of a conventional antenna when it is curved or attached to cylindrical objects. The multilayer antenna is designed at 920 MHz for UHF RFID applications.

  4. Rational points, rational curves, and entire holomorphic curves on projective varieties

    CERN Document Server

    Gasbarri, Carlo; Roth, Mike; Tschinkel, Yuri

    2015-01-01

    This volume contains papers from the Short Thematic Program on Rational Points, Rational Curves, and Entire Holomorphic Curves and Algebraic Varieties, held from June 3-28, 2013, at the Centre de Recherches Mathématiques, Université de Montréal, Québec, Canada. The program was dedicated to the study of subtle interconnections between geometric and arithmetic properties of higher-dimensional algebraic varieties. The main areas of the program were, among others, proving density of rational points in Zariski or analytic topology on special varieties, understanding global geometric properties of rationally connected varieties, as well as connections between geometry and algebraic dynamics exploring new geometric techniques in Diophantine approximation.

  5. Variability of the Wind Turbine Power Curve

    Directory of Open Access Journals (Sweden)

    Mahesh M. Bandi

    2016-09-01

    Wind turbine power curves are calibrated by turbine manufacturers under requirements stipulated by the International Electrotechnical Commission to provide a functional mapping between the mean wind speed v̄ and the mean turbine power output P̄. Wind plant operators employ these power curves to estimate or forecast wind power generation under given wind conditions. However, it is general knowledge that wide variability exists in these mean calibration values. We first analyse how the standard deviation in wind speed σ_v affects the mean P̄ and the standard deviation σ_P of wind power. We find that the magnitude of wind power fluctuations scales as the square of the mean wind speed. Using data from three planetary locations, we find that the wind speed standard deviation σ_v systematically varies with mean wind speed v̄, and in some instances follows a scaling of the form σ_v = C × v̄^α, C being a constant and α a fractional power. We show that, when applicable, this scaling form provides a minimal parameter description of the power curve in terms of v̄ alone. Wind data from different locations establishes that (in instances when this scaling exists) the exponent α varies with location, owing to the influence of local environmental conditions on wind speed variability. Since manufacturer-calibrated power curves cannot account for variability influenced by local conditions, this variability translates to forecast uncertainty in power generation. We close with a proposal for operators to perform post-installation recalibration of their turbine power curves to account for the influence of local environmental factors on wind speed variability in order to reduce the uncertainty of wind power forecasts. Understanding the relationship between wind speed and its variability is likely to lead to lower costs for the integration of wind power into the electric grid.
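
    The scaling σ_v = C·v̄^α can be estimated by linear regression in log-log space. A minimal sketch with made-up wind statistics:

```python
import numpy as np

def fit_sigma_scaling(v_mean, v_std):
    """Fit sigma_v = C * v_mean**alpha by least squares on logs;
    returns (C, alpha).  Whether the scaling holds at all is
    location-dependent, as the paper stresses."""
    alpha, log_c = np.polyfit(np.log(v_mean), np.log(v_std), 1)
    return np.exp(log_c), alpha

# illustrative 10-minute wind statistics (placeholder numbers)
v_bar = np.array([3.1, 4.8, 6.2, 7.9, 9.5, 11.4])
sigma = np.array([0.55, 0.74, 0.88, 1.05, 1.18, 1.33])
C, alpha = fit_sigma_scaling(v_bar, sigma)
```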

  6. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures was analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
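
    A cost abatement curve of the kind described above is assembled by ranking measures by net cost per unit saved and accumulating their savings. The measures and numbers in this sketch are illustrative placeholders, not study results.

```python
# Build a marginal cost abatement curve: rank measures by annualized
# net cost per unit of energy saved, then accumulate the savings.
measures = [
    # (name, annualized net cost in $/yr, energy saved in kWh/yr)
    ("low-flow fixtures",        -40.0, 350.0),   # negative = net saving
    ("efficient dishwasher",      15.0, 180.0),
    ("solar water heating",      120.0, 900.0),
]

def abatement_curve(measures):
    ranked = sorted(measures, key=lambda m: m[1] / m[2])  # $/kWh
    cumulative, curve = 0.0, []
    for name, cost, saved in ranked:
        cumulative += saved
        curve.append((name, cost / saved, cumulative))
    return curve

for name, unit_cost, cum in abatement_curve(measures):
    print(f"{name:24s} {unit_cost:+.3f} $/kWh  cumulative {cum:6.0f} kWh/yr")
```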

  7. A reduced scale two loop PWR core designed with particle swarm optimization technique

    International Nuclear Information System (INIS)

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing a reduced scale thermal-hydraulic circuit or piece of equipment with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur at real scale and at operating conditions is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations using the Particle Swarm Optimization (PSO) technique to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non-accidental operating conditions. Obtained results show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)
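
    A minimal sketch of the particle swarm optimization loop itself; the actual design objective (matching the dimensionless thermal-hydraulic similarity groups of the full-scale core) is not reproduced here, so a toy quadratic stands in for it.

```python
import numpy as np

def pso(objective, bounds, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best topology)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # positions
    v = np.zeros_like(x)                              # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(objective, 1, x)
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# toy objective standing in for the scaling-design figure of merit
best, fbest = pso(lambda p: np.sum((p - 0.3) ** 2), bounds=[(0, 1)] * 3)
```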

  8. Construction of calibration curve for accountancy tank

    International Nuclear Information System (INIS)

    Kato, Takayuki; Goto, Yoshiki; Nidaira, Kazuo

    2009-01-01

    Tanks are equipped in a reprocessing plant for accounting of nuclear material solutions. Careful measurement of the volume in tanks is very important to implement rigorous accounting of nuclear material. A calibration curve relating the volume and level of solution needs to be constructed, where the level is determined from the differential pressure of dip tubes. Several calibration curves are usually employed, but it is not explicitly decided how many segments should be used, where to place the segments, or what the degree of the polynomial curve should be. These parameters, i.e., the segmentation and the degree of the polynomial curve, are mutually interrelated in determining the performance of the calibration curve. Here we present a construction technique giving optimum calibration curves, and their characteristics. (author)
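
    The segmentation and degree choices discussed above can be prototyped with a piecewise polynomial fit of volume against level; the breakpoints and degree below are illustrative design choices, not the paper's recommended values.

```python
import numpy as np

def fit_calibration(level, volume, breakpoints, degree=2):
    """Fit one polynomial per level segment for a tank calibration
    curve; `breakpoints` split the level range into segments."""
    edges = [level.min()] + list(breakpoints) + [level.max()]
    fits = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (level >= lo) & (level <= hi)
        fits.append(((lo, hi),
                     np.polynomial.Polynomial.fit(level[m], volume[m],
                                                  degree)))
    return fits

def predict_volume(fits, h):
    """Evaluate the segmented calibration curve at level h."""
    for (lo, hi), poly in fits:
        if lo <= h <= hi:
            return poly(h)
    raise ValueError("level outside calibrated range")
```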

  9. Advanced techniques for energy-efficient industrial-scale continuous chromatography

    Energy Technology Data Exchange (ETDEWEB)

    DeCarli, J.P. II (Dow Chemical Co., Midland, MI (USA)); Carta, G. (Virginia Univ., Charlottesville, VA (USA). Dept. of Chemical Engineering); Byers, C.H. (Oak Ridge National Lab., TN (USA))

    1989-11-01

    Continuous annular chromatography (CAC) is a developing technology that allows truly continuous chromatographic separations. Previous work has demonstrated the utility of this technology for the separation of various materials by isocratic elution on a bench scale. Novel applications and improved operation of the process were studied in this work, demonstrating that CAC is a versatile apparatus capable of separations at high throughput. Three specific separation systems were investigated. Pilot-scale separations at high loadings were performed using an industrial sugar mixture as an example of scale-up for isocratic separations. Bench-scale experiments on a low-concentration metal ion mixture were performed to demonstrate stepwise elution, a chromatographic technique which decreases dilution and increases sorbent capacity. Finally, the separation of mixtures of amino acids by ion exchange was investigated to demonstrate the use of displacement development on the CAC. This technique, which perhaps has the most potential, allowed simultaneous separation and concentration of multicomponent mixtures on a continuous basis when applied to the CAC. Mathematical models were developed to describe the CAC performance and optimize the operating conditions. For all the systems investigated, the continuous separation performance of the CAC was found to be very nearly the same as the batchwise performance of conventional chromatography. The technology thus appears to be very promising for industrial applications. 43 figs., 9 tabs.

  10. Hysteroscopic sterilization using a virtual reality simulator: assessment of learning curve.

    Science.gov (United States)

    Janse, Juliënne A; Goedegebuure, Ruben S A; Veersema, Sebastiaan; Broekmans, Frank J M; Schreuder, Henk W R

    2013-01-01

    To assess the learning curve using a virtual reality simulator for hysteroscopic sterilization with the Essure method. Prospective multicenter study (Canadian Task Force classification II-2). University and teaching hospital in the Netherlands. Thirty novices (medical students) and five experts (gynecologists who had performed >150 Essure sterilization procedures). All participants performed nine repetitions of bilateral Essure placement on the simulator. Novices returned after 2 weeks and performed a second series of five repetitions to assess retention of skills. Structured observations of performance using the Global Rating Scale and parameters derived from the simulator provided measurements for analysis. The learning curve is represented by improvement per procedure. Two-way repeated-measures analysis of variance was used to analyze learning curves. Effect size (ES) was calculated to express the practical significance of the results (ES ≥ 0.50 indicates a large learning effect). For all parameters, significant improvements were found in novice performance within nine repetitions, and large learning effects were established for six of eight parameters (p < 0.05). The learning curve established in this study endorses future implementation of the simulator in curricula on hysteroscopic skill acquisition for clinicians who are interested in learning this sterilization technique.

  11. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    Science.gov (United States)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model provided by the Open Geospatial Consortium (OGC), called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods that try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research, we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
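
    The record above uses 3D Hilbert curves; as a compact, hedged illustration of the underlying idea (linearizing 3D coordinates with a space-filling curve so that spatially adjacent objects receive nearby one-dimensional keys), the sketch below substitutes the simpler Morton (Z-order) curve for the Hilbert curve. The building-centroid data are hypothetical:

        import numpy as np

        def morton3d(x, y, z, bits=10):
            """Interleave the bits of quantized x, y, z into a Z-order index."""
            code = 0
            for i in range(bits):
                code |= ((x >> i) & 1) << (3 * i)
                code |= ((y >> i) & 1) << (3 * i + 1)
                code |= ((z >> i) & 1) << (3 * i + 2)
            return code

        def cluster_order(points, bits=10):
            """Sort 3D points by their space-filling-curve index so that
            points close in 3D tend to be close in the 1D ordering."""
            pts = np.asarray(points, dtype=float)
            lo, hi = pts.min(axis=0), pts.max(axis=0)
            # Quantize coordinates to integers in [0, 2**bits - 1]
            q = ((pts - lo) / (hi - lo) * (2**bits - 1)).astype(int)
            keys = [morton3d(x, y, z, bits) for x, y, z in q]
            return np.argsort(keys)

        # Example: order 1,000 random building centroids for storage/retrieval
        order = cluster_order(np.random.rand(1000, 3))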

  12. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
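
    As a minimal sketch of the general idea (not the authors' exact scheme), the following computes coarse-interface transmissibilities on a 1D fine grid by harmonic averaging of the fine-cell permeabilities that act in series across each coarse-block interface; the grid sizes and permeability field are hypothetical:

        import numpy as np

        def coarse_transmissibility(kf, ncoarse):
            """Upscale a 1D fine-grid permeability field to coarse-interface
            transmissibilities. Fine cells between two coarse-block centers
            act in series, so a harmonic average is used at each interface.
            kf: fine-cell permeabilities; ncoarse: number of coarse blocks."""
            kf = np.asarray(kf, dtype=float)
            m = len(kf) // ncoarse               # fine cells per coarse block
            trans = []
            for i in range(ncoarse - 1):
                # Fine cells spanning the two half-blocks around interface i
                span = kf[i * m + m // 2 : (i + 1) * m + m // 2 + 1]
                trans.append(len(span) / np.sum(1.0 / span))  # harmonic mean
            return np.array(trans)

        # Example: a log-normal permeability field upscaled 10:1
        kf = np.exp(np.random.randn(100))
        print(coarse_transmissibility(kf, 10))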

  13. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented.

  14. Regional scales of fire danger rating in the forest: improved technique

    Directory of Open Access Journals (Sweden)

    A. V. Volokitina

    2017-04-01

    Wildland fires are distributed unevenly in time and over area under the influence of weather and other factors. It is unfeasible to air patrol the whole forest area daily during a fire season, or to keep all fire suppression forces constantly alert. Daily work and preparedness of forest fire protection services is regulated by the level of fire danger according to weather conditions (Nesterov's index, PV-1 index), fire hazard class (Melekhov's scale), and regional scales (earlier called local scales). Unfortunately, there is still no unified, comparable technique for making regional scales. As a result, it is difficult to maneuver forest fire protection resources, since the techniques currently used are not approved and not tested for their performance; they give fire danger ratings that are incomparable even for neighboring regions. The paper analyzes the state of the art in Russia and abroad. Ironically, while the factors of fire danger are measured quantitatively, fire danger itself has no quantitative expression. Thus, the selection of an absolute criterion is of high importance for the improvement of daily fire danger rating. On the example of the Chunsky forest ranger station (Krasnoyarsk Krai), an improved technique is suggested for making comparable local scales of forest fire danger rating based on an absolute criterion: the probable density of active fires per million ha. A method and an algorithm are described for automated local scales of fire danger that should facilitate the effective creation of similar scales for any forest ranger station or aviation regional office using a database on forest fires and weather conditions. The information system of distant monitoring of the Federal Forestry Agency of Russia is analyzed for its application in making local scales. To supplement the existing weather station network it is suggested that automatic compact weather stations or, if the latter is not possible, simple

  15. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R²) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
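
    The abstract does not reproduce the 52 regression equations; as a hedged sketch of one standard asymptotic CN form (Hawkins-type, CN(P) = CN_inf + (100 - CN_inf) * exp(-k*P)), the following fits that model to hypothetical rainfall/curve-number pairs:

        import numpy as np
        from scipy.optimize import curve_fit

        def asymptotic_cn(P, cn_inf, k):
            """Standard-behavior asymptotic CN model: the observed CN decays
            from 100 toward a constant cn_inf as rainfall depth P grows."""
            return cn_inf + (100.0 - cn_inf) * np.exp(-k * P)

        # Hypothetical (P, CN) pairs back-calculated from rainfall-runoff events
        P = np.array([10., 20., 40., 60., 90., 130.])   # rainfall depth, mm
        CN = np.array([92., 85., 76., 72., 69., 68.])   # observed curve numbers

        (cn_inf, k), _ = curve_fit(asymptotic_cn, P, CN, p0=(65.0, 0.02))
        print(f"CN_inf = {cn_inf:.1f}, k = {k:.4f} 1/mm")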

  16. An Adaptive Pruning Algorithm for the Discrete L-Curve Criterion

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg; Rodriguez, Giuseppe

    2004-01-01

    SVD or regularizing CG iterations). Our algorithm needs no pre-defined parameters, and in order to capture the global features of the curve in an adaptive fashion, we use a sequence of pruned L-curves that correspond to considering the curves at different scales. We compare our new algorithm...
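
    As a compact illustration of the discrete L-curve criterion that the pruning algorithm above refines (this is the classical maximum-curvature corner rule, not the adaptive pruning algorithm itself), the following locates the corner of a Tikhonov L-curve on a toy ill-posed problem:

        import numpy as np

        def lcurve_corner(rho, eta):
            """Pick the corner of a discrete L-curve as the point of maximum
            curvature of (log rho, log eta), where rho are residual norms and
            eta are solution norms over a sequence of regularization values."""
            x, y = np.log(rho), np.log(eta)
            dx, dy = np.gradient(x), np.gradient(y)
            ddx, ddy = np.gradient(dx), np.gradient(dy)
            kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
            return int(np.argmax(kappa))

        # Toy ill-posed least-squares problem regularized with Tikhonov
        A = np.vander(np.linspace(0, 1, 40), 12, increasing=True)
        b = A @ np.ones(12) + 1e-3 * np.random.randn(40)
        lambdas = np.logspace(-8, 1, 50)
        rho, eta = [], []
        for lam in lambdas:
            x = np.linalg.solve(A.T @ A + lam * np.eye(12), A.T @ b)
            rho.append(np.linalg.norm(A @ x - b))   # residual norm
            eta.append(np.linalg.norm(x))           # solution norm
        print("corner at lambda =", lambdas[lcurve_corner(rho, eta)])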

  17. 51Cr - erythrocyte survival curves

    International Nuclear Information System (INIS)

    Paiva Costa, J. de.

    1982-07-01

    Sixteen patients were studied: fifteen in a hemolytic state and one normal individual as a control. The aim was to obtain better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. The radiochromium (51Cr) method was used as the tracer. A review of the international literature on aspects relevant to this work was first carried out, making it possible to establish comparisons and to clarify phenomena observed in our investigation. Several parameters were considered in this study, covering both the exponential and the linear curves. The analysis of the erythrocyte survival curves in the studied group revealed that the elution factor did not present a quantitatively homogeneous response in all cases; the results of the analysis of these curves were nevertheless established through programs run on an electronic calculator. (Author) [pt

  18. Multiperiodicity in the light curve of Alpha Orionis

    International Nuclear Information System (INIS)

    Karovska, M.

    1987-01-01

    Alpha Ori, a supergiant star classified as M2 Iab, is characterized by pronounced variability encompassing most of its observed parameters. Variability on two different time scales has been observed in the light and velocity curves: a long period variation of about 6 years and, superposed on this, irregular fluctuations having a time scale of several hundred days. This paper reports the results of Fourier analysis of more than 60 years of AAVSO (American Association of Variable Star Observers) data, which suggest a multiperiodicity in the light curve of α Ori

  19. Fluid flow profile in a packed bead column using residence time curves and radiotracer techniques

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Ana Paula F. de; Gonçalves, Eduardo Ramos; Brandão, Luis Eduardo B.; Salgado, Cesar M., E-mail: anacamiqui@gmail.com, E-mail: egoncalves@con.ufrj.br, E-mail: brandao@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Packed columns are extremely important in the chemical industry and are used for purification, separation and treatment of gas or liquid mixtures. The objective of this work is to study the fluid hydrodynamics in order to characterize the aqueous-phase flow patterns in the packed column, using the Residence Time Distribution (RTD) curve methodology to analyze the flow and fit theoretical models that describe the column's operating conditions. The RTD can be obtained by the pulse-stimulus response technique, which is characterized by the instantaneous injection of a radiotracer at the system inlet. In this work, 68Ga was used as the radiotracer. Five shielded and collimated 1 x 1″ NaI(Tl) scintillator detectors were suitably positioned to record the passage of the radiotracer through the conveying line and the packed column, making it possible to analyze the RTD curve in the regions of interest. With the data generated by the NaI(Tl) detectors as the radiotracer passed through the transport line and the column, it was possible to evaluate the flow profile of the aqueous phase and to identify operational failures, such as internal channeling and the existence of a retention zone inside the column. Theoretical models for different flow regimes were used: plug (piston) flow and perfect mixing. (author)
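
    As a hedged sketch of the standard RTD analysis such tracer curves support (not necessarily the authors' exact procedure), the following normalizes a detector response into E(t), computes the mean residence time and variance, and estimates the equivalent tanks-in-series number; the count-rate curve is hypothetical:

        import numpy as np

        def rtd_moments(t, counts):
            """Normalize a tracer response into E(t) and compute the mean
            residence time, variance, and tanks-in-series number N."""
            E = counts / np.trapz(counts, t)          # E(t), integrates to 1
            tm = np.trapz(t * E, t)                   # mean residence time
            var = np.trapz((t - tm) ** 2 * E, t)      # variance of the RTD
            N = tm**2 / var                           # tanks-in-series model
            return tm, var, N

        # Hypothetical NaI(Tl) count-rate curve after a pulse injection
        t = np.linspace(0, 300, 600)                  # s
        counts = (t / 60.0) ** 2 * np.exp(-t / 30.0)  # skewed tracer response
        tm, var, N = rtd_moments(t, counts)
        print(f"mean residence time = {tm:.1f} s, N = {N:.1f} tanks")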

  20. Fluid flow profile in a packed bead column using residence time curves and radiotracer techniques

    International Nuclear Information System (INIS)

    Almeida, Ana Paula F. de; Gonçalves, Eduardo Ramos; Brandão, Luis Eduardo B.; Salgado, Cesar M.

    2017-01-01

    Packed columns are extremely important in the chemical industry and are used for purification, separation and treatment of gas or liquid mixtures. The objective of this work is to study the fluid hydrodynamics in order to characterize the aqueous-phase flow patterns in the packed column, using the Residence Time Distribution (RTD) curve methodology to analyze the flow and fit theoretical models that describe the column's operating conditions. The RTD can be obtained by the pulse-stimulus response technique, which is characterized by the instantaneous injection of a radiotracer at the system inlet. In this work, 68Ga was used as the radiotracer. Five shielded and collimated 1 x 1″ NaI(Tl) scintillator detectors were suitably positioned to record the passage of the radiotracer through the conveying line and the packed column, making it possible to analyze the RTD curve in the regions of interest. With the data generated by the NaI(Tl) detectors as the radiotracer passed through the transport line and the column, it was possible to evaluate the flow profile of the aqueous phase and to identify operational failures, such as internal channeling and the existence of a retention zone inside the column. Theoretical models for different flow regimes were used: plug (piston) flow and perfect mixing. (author)

  1. Curve collection, extension of databases

    International Nuclear Information System (INIS)

    Gillemot, F.

    1992-01-01

    Full text: Databases: generally calculated data only. The original measurements: diagrams. Information loss between them. Expensive research, e.g. irradiation, aging, creep etc. Original curves should be stored for reanalysing. The format of the stored curves: a. Data in ASCII files, only numbers; b. Other information in strings in a second file. Same name, but different extension. Extensions show the type of the test and the type of the file. EXAMPLES: TEN is tensile information, TED is tensile data, CHN is Charpy information, CHD is Charpy data. Storing techniques: digitalised measurements, digitalising old curves stored on paper. Use: making catalogues, reanalysing, comparison with new data. Tools: mathematical software packages like Quattro, Genplot, Excel, MathCAD, QBasic, Pascal, Fortran, MATLAB, Grapher etc. (author)

  2. An interactive editor for curve-skeletons: SkeletonLab

    OpenAIRE

    Barbieri, Simone; Meloni, P.; Usai, F.; Spano, L.D.; Scateni, R.

    2016-01-01

    Curve-skeletons are powerful shape descriptors able to provide higher level information on topology, structure and semantics of a given digital object. Their range of application is wide and encompasses computer animation, shape matching, modelling and remeshing. While a universally accepted definition of curve-skeleton is still lacking, there are currently many algorithms for the curve-skeleton computation (or skeletonization) as well as different techniques for building a mesh around a give...

  3. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable? The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
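
    A minimal sketch of the simulation test described above: generate data from a Ricker model with known parameters, add observation noise, refit, and check whether alpha is recovered. All numbers are hypothetical and the noise model is an assumption:

        import numpy as np
        from scipy.optimize import curve_fit

        def ricker(S, alpha, beta):
            """Ricker stock-recruitment: R = alpha * S * exp(-beta * S)."""
            return alpha * S * np.exp(-beta * S)

        rng = np.random.default_rng(1)
        true_alpha, true_beta = 4.0, 1e-3

        # Simulate noisy catch-per-unit-effort-like data from a known model
        S = rng.uniform(200, 2000, size=30)                  # spawning stock
        R = ricker(S, true_alpha, true_beta) * rng.lognormal(0, 0.5, 30)

        (a_hat, b_hat), _ = curve_fit(ricker, S, R, p0=(2.0, 1e-4))
        print(f"true alpha = {true_alpha}, fitted alpha = {a_hat:.2f}")
        # Repeating this over many replicates shows how (un)reliable the
        # curve-fitting estimate of alpha is under realistic noise.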

  4. Heterotic superstring and curved, scale-invariant superspace

    International Nuclear Information System (INIS)

    Kuusk, P.K.

    1988-01-01

    It is shown that the modified heterotic superstring [R. E. Kallosh, JETP Lett. 43, 456 (1986); Phys. Lett. 176B, 50 (1986)] demands a scale-invariant superspace for its existence. Explicit expressions are given for the connection, the torsion, and the curvature of an extended scale-invariant superspace with 506 bosonic and 16 fermionic coordinates

  5. Establishment of Accurate Calibration Curve for National Verification at a Large Scale Input Accountability Tank in RRP - For Strengthening State System for Meeting Safeguards Obligation

    International Nuclear Information System (INIS)

    Goto, Y.; Kato, T.; Nidaira, K.

    2010-01-01

    Tanks are installed in a reprocessing plant for spent fuel in order to account for solutions of nuclear material. Careful measurement of the volume in the tanks is crucial for accurate accounting of nuclear material. A calibration curve relating the volume and the level of solution needs to be constructed, where the level is determined from the differential pressure of dip tubes in the tanks. More than one calibration curve, depending on the height, is commonly applied for each tank, but there is no explicit rule for how many segments to use, where to place segment boundaries, or what order of polynomial curve to fit. Here we present a rational construction technique that gives optimum calibration curves, together with their characteristics. The tank calibration work has been conducted in the course of a contract with the Japan Safeguards Office (JSGO) on safeguards information treatment. (author)
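
    As a hedged sketch of segmented polynomial calibration (the general practice the record describes; the segment boundaries, polynomial order and data below are hypothetical), one can fit one polynomial per level segment and compare segment residuals:

        import numpy as np

        def fit_calibration(level, volume, breaks, order=2):
            """Fit one polynomial per level segment; return the fitted
            coefficients and the RMS residual of each segment."""
            fits = []
            edges = [level.min(), *breaks, level.max()]
            for lo, hi in zip(edges[:-1], edges[1:]):
                m = (level >= lo) & (level <= hi)
                c = np.polyfit(level[m], volume[m], order)
                rms = np.sqrt(np.mean((np.polyval(c, level[m]) - volume[m]) ** 2))
                fits.append((c, rms))
            return fits

        # Hypothetical tank calibration run: level (cm) vs. metered volume (L)
        level = np.linspace(0, 300, 120)
        volume = 15.0 * level + 0.002 * level**2 + np.random.normal(0, 2, 120)
        for coeffs, rms in fit_calibration(level, volume, breaks=[100, 200]):
            print(coeffs, f"RMS = {rms:.2f} L")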

  6. The link between the baryonic mass distribution and the rotation curve shape

    NARCIS (Netherlands)

    Swaters, R. A.; Sancisi, R.; van der Hulst, J. M.; van Albada, T. S.

    The observed rotation curves of disc galaxies, ranging from late-type dwarf galaxies to early-type spirals, can be fitted remarkably well simply by scaling up the contributions of the stellar and H I discs. This baryonic scaling model can explain the full breadth of observed rotation curves with

  7. Medium scale test study of chemical cleaning technique for secondary side of SG in PWR

    International Nuclear Information System (INIS)

    Zhang Mengqin; Zhang Shufeng; Yu Jinghua; Hou Shufeng

    1997-08-01

    The medium scale test study of a chemical cleaning technique for removing corrosion product (Fe3O4) from the secondary side of steam generators (SG) in PWRs has been completed. The tests were carried out in a medium scale test loop and evaluated the effects of the chemical cleaning parameters (temperature, flow rate, cleaning time, cleaning process) and of the state of corrosion product deposition on magnetite (Fe3O4) solubility, as well as the safety of SG materials during the cleaning process. The inhibitor component of the chemical cleaning agent was improved by the electrochemical linear polarization method, the effect of the inhibitor on the corrosion resistance of materials was examined in the medium scale test loop, and the main components of the chemical cleaning agent were determined, with EDTA as the principal component. An electrochemical method for monitoring the corrosion of materials during the cleaning process was developed in the laboratory. The medium scale test study yielded an optimized chemical cleaning technique for removing corrosion products from the secondary side of PWR steam generators. (9 refs., 4 figs., 11 tabs.)

  8. Satellite altimetry based rating curves throughout the entire Amazon basin

    Science.gov (United States)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazon basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, over both time and space scales, due to factors like the basin's size, access difficulty and so on. One of the major obstacles is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gauge data, run from 1998 to 2010. The stage dataset consists of ~800 altimetry series at ENVISAT and JASON-2 virtual stations, spanning 2002 to 2010. In the present work we show the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters which are consistent throughout the entire Amazon basin. The rating curve parameters were computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, as well as their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. The rating curve determination also includes the error in the discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gauge readings or errors in the satellite rainfall estimates. The present
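
    As a minimal sketch of Bayesian rating-curve estimation of the kind described above (a plain Metropolis sampler with flat priors and a conventional power-law rating curve Q = a*(h - h0)^b; the data, error model and tuning below are hypothetical, not the authors' exact scheme):

        import numpy as np

        def log_post(theta, h, Q, sigma):
            """Log-posterior (flat priors) for rating curve Q = a*(h-h0)**b."""
            a, b, h0 = theta
            if a <= 0 or b <= 0 or h0 >= h.min():
                return -np.inf
            resid = Q - a * (h - h0) ** b
            return -0.5 * np.sum((resid / sigma) ** 2)

        rng = np.random.default_rng(0)
        h = np.linspace(2.0, 8.0, 40)                          # stage, m
        Q = 12.0 * (h - 1.0) ** 1.6 * rng.lognormal(0, 0.05, h.size)
        sigma = 0.1 * Q                                        # discharge error

        theta = np.array([10.0, 1.5, 0.5])                     # initial guess
        step = np.array([0.5, 0.05, 0.05])                     # proposal scale
        samples = []
        lp = log_post(theta, h, Q, sigma)
        for _ in range(20000):                                 # Metropolis loop
            prop = theta + step * rng.standard_normal(3)
            lp_prop = log_post(prop, h, Q, sigma)
            if np.log(rng.random()) < lp_prop - lp:
                theta, lp = prop, lp_prop
            samples.append(theta.copy())
        a, b, h0 = np.median(samples[10000:], axis=0)          # posterior medians
        print(f"a = {a:.1f}, b = {b:.2f}, h0 = {h0:.2f}")

    The retained second half of the chain also yields credibility intervals for the curve, e.g. via percentiles of the sampled parameters.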

  9. Testing MONDian dark matter with galactic rotation curves

    International Nuclear Information System (INIS)

    Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack

    2014-01-01

    MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.

  10. Application of the thermoluminescent (TL) and optically stimulated luminescence (OSL) dosimetry techniques to determinate the isodose curves in a cancer treatment planning simulation using Volumetric Modulated Arc Therapy - VMAT

    International Nuclear Information System (INIS)

    Bravim, Amanda

    2015-01-01

    Volumetric Modulated Arc Therapy (VMAT) is an advanced form of Intensity Modulated Radiation Therapy (IMRT). The advance lies in continuous gantry rotation with modulation of the radiation beam, providing shorter patient treatment times. This research aimed at verifying the isodose curves in a simulation of a vertebra treatment with spinal cord protection, using the thermoluminescent (TL) and optically stimulated luminescence (OSL) dosimetry techniques with LiF:Mg,Ti (TLD-100), CaSO4:Dy and Al2O3:C dosimeters and LiF:Mg,Ti (TLD-100) microdosimeters. The dosimeters were characterized using PMMA plates stacked to 30 x 30 x 30 cm³, with plates of different thicknesses. All irradiations were performed with the Truebeam STx linear accelerator of Hospital Israelita Albert Einstein, using a 6 MV photon beam. After characterization, the dosimeters were irradiated according to the specific planning simulation, using a PMMA phantom developed for VMAT measurements, in order to verify the isodose curves of the treatment simulation with the two dosimetry techniques. All types of dosimeters showed satisfactory results for determining the dose distribution, but given the complexity and proximity of the isodose curves, the LiF:Mg,Ti microdosimeter proved the most appropriate due to its small dimensions. Regarding the techniques, both gave satisfactory results; the TL technique is less complex to use, since most radiotherapy departments already have a TL laboratory, whereas the OSL technique requires more care and greater investment by the hospital. (author)

  11. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    International Nuclear Information System (INIS)

    Zhou, C. D.; Betti, R.

    2008-01-01

    It is shown that the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form dependent on the only two parameters of the compressed fuel assembly that can be measured with existing techniques: the hot spot ion temperature (T_i^h) and the total areal density (ρR_tot), which includes the cold shell contribution. A marginal ignition curve is derived in the (ρR_tot, T_i^h) plane and current implosion experiments are compared with the ignition curve. On this plane, hydrodynamic equivalent curves show how a given implosion would perform with respect to the ignition condition when scaled up in the laser-driver energy. For ⟨T_i^h⟩_n above about 3 keV, the ignition condition takes the form ⟨ρR_tot⟩_n · ⟨T_i^h⟩_n^2.6 > 50 keV^2.6 · g/cm², where ⟨ρR_tot⟩_n and ⟨T_i^h⟩_n are the burn-averaged total areal density and hot spot ion temperature, respectively. Both quantities are calculated without accounting for the alpha-particle energy deposition. Such a criterion can be used to determine how surrogate D2 and subignited DT target implosions perform with respect to the one-dimensional ignition threshold.

  12. Millennial-scale climate variability recorded by gamma logging curve in Chaidam Basin

    International Nuclear Information System (INIS)

    Yuan Linwang; Chen Ye; Liu Zechun

    2000-01-01

    Using the natural gamma-ray logging curve of the Dacan-1 core to reconstruct paleo-climate changes in the Chaidam Basin, the process of environmental change over the past 150,000 years has been revealed. Heinrich events and D-O cycles were identified, and can be matched well with those recorded in the Greenland ice core. This suggests that the GR curve can identify tectonic and climatic events and is a sensitive proxy indicator of environmental and climatic changes

  13. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    The method for identification of the scale of changes in personnel motivation techniques at mechanical-engineering enterprises is based on a structural and logical sequence of relevant stages (identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise business environment; SWOT-analysis of actual motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative for changing motivation techniques; implementation of changes in motivation techniques; control over changes in motivation techniques). It has been substantiated that the improved method provides a systematic and analytical justification for management decision-making in this field and allows choosing the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes past, current and prospective aspects into account. Firstly, the approach considers the past state of the motivational sphere of the mechanical-engineering enterprise; secondly, the method involves identifying the current state of personnel motivation techniques; thirdly, the prospective dimension, manifested in the strategic vision of the enterprise's development as well as in forecasting the development of its business environment, is taken into account. The advantage of the proposed method is that its level of specification may vary depending on the set goals, resource constraints and necessity. Among other things, this method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and the management of relevant processes. This creates preconditions for a

  14. Utilization of curve offsets in additive manufacturing

    Science.gov (United States)

    Haseltalab, Vahid; Yaman, Ulas; Dolen, Melik

    2018-05-01

    Curve offsets are utilized in different fields of engineering and science. Additive manufacturing, which has lately become an explicit requirement in the manufacturing industry, utilizes curve offsets widely. One use of offsetting is scaling, which is required if there is shrinkage after fabrication or if the surface quality of the resulting part is unacceptable, so that some post-processing is indispensable. But the major application of curve offsets in additive manufacturing processes is generating head trajectories. In a point-wise AM process, a correct tool-path in each layer can reduce costs considerably and increase the surface quality of the fabricated parts. In this study, different curve offset generation algorithms are analyzed to show their capabilities and disadvantages through test cases, and improvements to their drawbacks are suggested.
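
    One common way to generate such offsets is polygon buffering; the hedged sketch below uses the shapely library's negative buffer to produce successive inward offset loops (candidate head trajectories) for a simple convex layer contour. The contour and line width are hypothetical, and this is not one of the specific algorithms compared in the paper:

        from shapely.geometry import Polygon

        # A square layer contour, 20 x 20 mm (simple convex shape assumed,
        # so each negative buffer stays a single Polygon)
        contour = Polygon([(0, 0), (20, 0), (20, 20), (0, 20)])

        # Successive inward offsets at the nozzle line width (e.g. 0.4 mm)
        # produce candidate head trajectories for one layer.
        line_width, paths = 0.4, []
        ring = contour
        while not ring.is_empty:
            paths.append(list(ring.exterior.coords))  # store the offset loop
            ring = ring.buffer(-line_width)            # negative buffer = inward
        print(f"{len(paths)} offset loops generated")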

  15. W-curve alignments for HIV-1 genomic comparisons.

    Directory of Open Access Journals (Sweden)

    Douglas J Cork

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison

  16. W-curve alignments for HIV-1 genomic comparisons.

    Science.gov (United States)

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of

  17. Hot-blade cutting of EPS foam for double-curved surfaces—numerical simulation and experiments

    DEFF Research Database (Denmark)

    Petkov, Kiril P.; Hattel, Jesper Henri

    2017-01-01

    In the present paper, experimental and numerical studies of a newly developed process of Hot-Blade Cutting, used for free forming of double-curved surfaces and cost-effective rapid prototyping of expanded polystyrene foam, are carried out. The experimental part of the study falls in two parts... during the cutting process. A novel measurement method for determination of kerf width (i.e., the gap space after material removal), applying a commercially available large-scale optical 3D scanning technique, was developed and used. A one-dimensional thermo-electro-mechanical numerical model for Hot

  18. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    Directory of Open Access Journals (Sweden)

    Wen-long Li

    The blade is one of the most critical parts of an aviation engine, and a small change in blade geometry may significantly affect the dynamic performance of the engine. Rapid advancements in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to restrain the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade was machined and scanned to observe the curvature variation, energy variation and approximation error, demonstrating the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  19. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds.

  20. Calibration curves for biological dosimetry

    International Nuclear Information System (INIS)

    Guerrero C, C.; Brena V, M. (E-mail: cgc@nuclear.inin.mx)

    2004-01-01

    Information generated by investigations in different laboratories around the world, including ININ, establishing that certain classes of chromosomal aberrations increase as a function of dose and radiation type, has resulted in calibration curves that are applied in the technique known as biological dosimetry. This work presents a summary of the work carried out in the laboratory, including the calibration curves for 60Co gamma radiation and 250 kVp X-rays, examples of presumed exposure to ionizing radiation resolved by means of aberration analysis and the corresponding dose estimates obtained through the equations of the respective curves, and finally a comparison between the dose calculations for the people affected by the Ciudad Juarez accident carried out by the Oak Ridge group, USA, and those obtained in this laboratory. (Author)
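
    Calibration curves in biological dosimetry for low-LET radiation are conventionally linear-quadratic, Y = c + alpha*D + beta*D². As a hedged sketch (the coefficients below are hypothetical, not the laboratory's published values), the dose estimate is the positive root of that quadratic:

        import numpy as np

        def dose_from_yield(Y, c, alpha, beta):
            """Invert the linear-quadratic calibration Y = c + alpha*D + beta*D**2
            to estimate the absorbed dose D (Gy) from an observed yield Y
            (aberrations per cell). Takes the positive root."""
            disc = alpha**2 + 4.0 * beta * (Y - c)
            return (-alpha + np.sqrt(disc)) / (2.0 * beta)

        # Hypothetical gamma-ray calibration coefficients
        c, alpha, beta = 0.001, 0.03, 0.06

        # 42 dicentrics scored in 500 cells -> yield 0.084 per cell
        Y = 42 / 500
        print(f"estimated dose = {dose_from_yield(Y, c, alpha, beta):.2f} Gy")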

  1. From Curve Fitting to Machine Learning

    CERN Document Server

    Zielesny, Achim

    2011-01-01

    The analysis of experimental data has been at the heart of science from its beginnings. But it was the advent of digital computers that allowed the execution of highly non-linear and increasingly complex data analysis procedures - methods that were completely unfeasible before. Non-linear curve fitting, clustering and machine learning belong to these modern techniques, which are a further step towards computational intelligence. The goal of this book is to provide an interactive and illustrative guide to these topics. It concentrates on the road from two-dimensional curve fitting to multidimensional clus

  2. Bound states in curved quantum waveguides

    International Nuclear Information System (INIS)

    Exner, P.; Seba, P.

    1987-01-01

    We study a free quantum particle living on a curved planar strip Ω of fixed width d with Dirichlet boundary conditions. It can serve as a model for electrons in thin films on a cylindrical-type substrate, or in a curved quantum wire. Assuming that the boundary of Ω is infinitely smooth and its curvature decays fast enough at infinity, we prove that a bound state with energy below the first transversal mode exists for all sufficiently small d. A lower bound on the critical width is obtained using the Birman-Schwinger technique. (orig.)

  3. A neural network for the Bragg synthetic curves recognition

    International Nuclear Information System (INIS)

    Reynoso V, M.R.; Vega C, J.J.; Fernandez A, J.; Belmont M, E.; Policroniades R, R.; Moreno B, E.

    1996-01-01

    An ionization chamber technique known as Bragg curve spectroscopy was employed. The Bragg peak amplitude is a monotonically increasing function of Z, which permits elements to be identified through its measurement. An improved approach to this measurement is the use of neural networks for the recognition of the Bragg curve. (Author)

  4. Nonlinear Filtering Effects of Reservoirs on Flood Frequency Curves at the Regional Scale

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Wei; Li, Hong-Yi; Leung, Lai-Yung; Yigzaw, Wondmagegn Y.; Zhao, Jianshi; Lu, Hui; Deng, Zhiqun; Demissie, Yonas; Bloschl, Gunter

    2017-10-01

    Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on the FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, mean annual maximum flood (MAF) and coefficient of variations (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the non-linear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the non-linear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.
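
    A minimal sketch of the two quantities the analysis above rests on, the Reservoir Impact Index and the first two moments of the flood frequency curve, computed for hypothetical pre- and post-dam annual maximum series:

        import numpy as np

        def rii(storage_m3, annual_flow_m3):
            """Reservoir Impact Index: total upstream storage capacity
            normalized by the annual streamflow volume (dimensionless)."""
            return storage_m3 / annual_flow_m3

        def ffc_moments(annual_max_floods):
            """First two moments of the flood frequency curve: mean annual
            maximum flood (MAF) and coefficient of variation (CV)."""
            q = np.asarray(annual_max_floods, dtype=float)
            maf = q.mean()
            return maf, q.std(ddof=1) / maf

        # Hypothetical pre- and post-dam annual maximum flood series (m3/s)
        pre = np.array([820., 640., 910., 700., 1050., 760., 880.])
        post = np.array([430., 390., 520., 410., 470., 440., 480.])
        print("RII =", rii(2.0e9, 8.0e9))
        print("pre-dam  (MAF, CV):", ffc_moments(pre))
        print("post-dam (MAF, CV):", ffc_moments(post))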

  5. Multiphoton absorption coefficients in solids: an universal curve

    International Nuclear Information System (INIS)

    Brandi, H.S.; Araujo, C.B. de

    1983-04-01

    A universal curve for the frequency dependence of the multiphoton absorption coefficient is proposed, based on a 'non-perturbative' approach. Specific applications are made to obtain two-, three-, four- and five-photon absorption coefficients in different materials. Proper scaling of the two-photon absorption coefficient and the use of the universal curve yield results for the higher-order absorption coefficients in good agreement with the experimental data. (Author) [pt

  6. Applying CFD in the Analysis of Heavy-Oil Transportation in Curved Pipes Using Core-Flow Technique

    Directory of Open Access Journals (Sweden)

    S Conceição

    2017-06-01

    Multiphase flow of oil, gas and water occurs in the petroleum industry from the reservoir to the processing units. The occurrence of heavy oils in the world is increasing significantly and points to the need for greater investment in reservoir exploitation and, consequently, for the development of new technologies for the production and transport of this oil. It is therefore of interest to improve techniques that increase energy efficiency in transporting this oil. The core-flow technique is one of the most advantageous methods for lifting and transporting heavy oil. It does not alter the oil viscosity, but changes the flow pattern and thus reduces friction during heavy oil transportation. This flow pattern is characterized by a thin water film that forms close to the inner wall of the pipe, acting as a lubricant for the oil flowing in the core of the pipe. In this sense, the objective of this paper is to study the isothermal flow of heavy oil in curved pipelines employing the core-flow technique. A three-dimensional, transient and isothermal mathematical model that considers the mixture and k-ε turbulence models to address the gas-water-heavy oil three-phase flow in the pipe was applied for the analysis. Simulations with different flow patterns of the involved phases (oil-gas-water) were carried out in order to optimize the transport of heavy oils. Results of the pressure and volume fraction distributions of the involved phases are presented and analyzed. It was verified that an oil core lubricated by a thin water layer flowing in the pipe considerably decreases the pressure drop.

  7. IDF-curves for precipitation In Belgium

    International Nuclear Information System (INIS)

    Mohymont, Bernard; Demarde, Gaston R.

    2004-01-01

    The Intensity-Duration-Frequency (IDF) curves for precipitation constitute a relationship between the intensity, the duration and the frequency of rainfall amounts. The intensity of precipitation is expressed in mm/h, the duration or aggregation time is the length of the interval considered, while the frequency stands for the probability of occurrence of the event. IDF curves are a classical and useful tool that is primarily used to dimension hydraulic structures in general, e.g. sewer systems, and consequently to assess the risk of inundation. In this presentation, the IDF relation for precipitation is studied for different locations in Belgium. These locations correspond to two long-term, high-quality precipitation networks of the RMIB: (a) the daily precipitation depths of the climatological network (more than 200 stations, 1951-2001 baseline period); (b) the high-frequency 10-minute precipitation depths of the hydrometeorological network (more than 30 stations, 15 to 33 years baseline period). For the station of Uccle, an uninterrupted time series of more than one hundred years of 10-minute rainfall data is available. The proposed technique for assessing the curves is based on maximum annual values of precipitation. A new analytical formula for the IDF curves was developed such that these curves stay valid for aggregation times ranging from 10 minutes to 30 days (when fitted with appropriate data). Moreover, all parameters of this formula have physical dimensions. Finally, adequate spatial interpolation techniques are used to provide nationwide extreme-value precipitation depths for short- to long-term durations with a given return period. These values are estimated on the grid points of the Belgian ALADIN domain used in the operational weather forecasts at the RMIB. (Author)
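
    The paper's new analytical IDF formula is not reproduced in the abstract; as a generic, hedged illustration only, the following fits a common Montana-type relation i = a / (t + b)^c to hypothetical intensity-duration data:

        import numpy as np
        from scipy.optimize import curve_fit

        def idf(t, a, b, c):
            """Generic IDF relation: intensity (mm/h) vs. duration t (min)."""
            return a / (t + b) ** c

        # Hypothetical 10-year-return-period intensities at several durations
        t = np.array([10., 20., 30., 60., 120., 360., 1440.])    # minutes
        i = np.array([90., 65., 52., 34., 21., 10., 3.5])        # mm/h

        (a, b, c), _ = curve_fit(idf, t, i, p0=(500.0, 10.0, 0.8))
        print(f"i(t) = {a:.0f} / (t + {b:.1f})^{c:.2f}")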

  8. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP¹ equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP¹ with the space LE³ of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E³, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E³, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E³ where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  9. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Science.gov (United States)

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background Despite the proven benefits, laparoscopic colorectal surgery is still underutilized among surgeons. A steep learning curve is one of the causes of its limited adoption. Aim of the study is to determine the feasibility and morbidity rate after laparoscopic colorectal surgery in a single-institution "learning curve" experience, implementing a well-standardized operative technique and recovery protocol. Methods The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast-track recovery programme. Recovery parameters, short-term outcomes, morbidity and mortality have been assessed. Results Type of resections: 20 left side resections, 8 right side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo–Clavien III): 4%. No anastomotic leak, no mortality, no 30-day readmissions. Conclusion Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short-term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  10. Heat techniques in the light of the innovation curve. New technology for gas appliances and solar water heaters; Warmtetechnieken in het licht van de innovatiecurve. Nieuwe technologie gezocht voor gastoestel en zonneboilers

    Energy Technology Data Exchange (ETDEWEB)

    Vollebregt, R.

    2012-09-15

    The development, market introduction and deployment of new techniques often follows an S-shaped curve. The gas-fired boiler transformed from a conventional apparatus into the current high efficiency boiler. Will the high efficiency electricity boiler be the next breakthrough technique? The electrical and gas heat pumps are fully developed techniques, but this does not apply to their use in Dutch single-family dwellings.

  11. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...
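
    As a small illustration of the evaluation view of Reed-Solomon codes mentioned above (a sketch, with an arbitrarily chosen prime field and code parameters), a message polynomial of degree < k is evaluated at n distinct field points:

        # Minimal Reed-Solomon-style evaluation code over the prime field GF(p):
        # a message of k symbols is the coefficient list of a polynomial of
        # degree < k, and the codeword is its evaluation at n distinct points.
        p, n, k = 929, 8, 3                 # field size, code length, dimension

        def encode(msg, points):
            """Evaluate the message polynomial at each point (mod p)."""
            return [sum(m * pow(x, j, p) for j, m in enumerate(msg)) % p
                    for x in points]

        points = list(range(1, n + 1))      # n distinct evaluation points
        codeword = encode([3, 120, 42], points)
        print(codeword)   # any two codewords differ in >= n - k + 1 positions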

  12. Surgical treatment of double thoracic adolescent idiopathic scoliosis with a rigid proximal thoracic curve.

    Science.gov (United States)

    Sudo, Hideki; Abe, Yuichiro; Abumi, Kuniyoshi; Iwasaki, Norimasa; Ito, Manabu

    2016-02-01

    There is limited consensus on the optimal surgical strategy for double thoracic adolescent idiopathic scoliosis (AIS). Recent studies have reported that pedicle screw constructs used to maximize scoliosis correction cause further thoracic spine lordosis. The objective of this study was to apply a new surgical technique for double thoracic AIS with rigid proximal thoracic (PT) curves and assess its clinical outcomes. Twenty-one consecutive patients with Lenke 2 AIS and a rigid PT curve (Cobb angle ≥30° on side-bending radiographs, flexibility ≤30 %) treated with the simultaneous double-rod rotation technique (SDRRT) were included. In this technique, a temporary rod is placed at the concave side of the PT curve. Then, distraction force is applied to correct the PT curve, which reforms a sigmoid double thoracic curve into an approximate single thoracic curve. As a result, the PT curve is typically converted from an apex left to an apex right curve before applying the correction rod for the PT and main thoracic curve. All patients were followed for at least 2 years (average 2.7 years). The average main thoracic and PT Cobb angle correction rate at the final follow-up was 74.7 and 58.0 %, respectively. The average preoperative T5-T12 thoracic kyphosis was 9.3°, which improved significantly to 19.0° (p < 0.05). Thus, Lenke 2 AIS with a rigid PT curve can be effectively corrected using SDRRT.

  13. Detecting corner points from digital curves

    International Nuclear Information System (INIS)

    Sarfraz, M.

    2011-01-01

    Corners in digital images give important clues for shape representation, recognition, and analysis. Since dominant information regarding shape is usually available at the corners, they provide important features for various real life applications in disciplines like computer vision, pattern recognition, and computer graphics. Corners are robust features in the sense that they provide important information regarding objects under translation, rotation and scale change. They are also important from the viewpoint of understanding human perception of objects. They play a crucial role in decomposing or describing digital curves. They are also used in scale space theory, image representation, stereo vision, motion tracking, image matching, building mosaics and font designing systems. If the corner points are identified properly, a shape can be represented in an efficient and compact way with sufficient accuracy. Corner detection schemes, based on their applications, can be broadly divided into two categories: binary (suitable for binary images) and gray level (suitable for gray level images). Corner detection approaches for binary images usually involve segmenting the image into regions and extracting boundaries from those regions that contain them. The techniques for gray level images can be categorized into two classes: (a) template based and (b) gradient based. The template based techniques utilize correlation between a sub-image and a template of a given angle. A corner point is selected by finding the maximum of the correlation output. Gradient based techniques require computing the curvature of an edge that passes through a neighborhood in a gray level image. Many corner detection algorithms have been proposed in the literature, which can be broadly divided into two parts: one detects corner points from grayscale images, and the other relates to boundary-based corner detection. This contribution mainly deals with techniques adopted for the latter approach
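
    As a hedged sketch of a gradient-based detector of the kind surveyed above, using OpenCV's Harris corner response (the image file name and thresholds are hypothetical):

        import cv2
        import numpy as np

        # Load a grayscale image (hypothetical file name)
        gray = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

        # Harris response: high where gradients vary strongly in two directions
        response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

        # Keep responses above a fraction of the strongest one
        corners = np.argwhere(response > 0.01 * response.max())
        print(f"{len(corners)} corner candidates found")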

  14. Sound concentration caused by curved surfaces

    NARCIS (Netherlands)

    Vercammen, M.L.S.

    2012-01-01

    In room acoustics the focusing effect of reflections from concave surfaces is a wellknown problem. Although curved surfaces are found throughout the history of architecture, the occurrence of concave surfaces has tended to increase in modern architecture, due to new techniques in design, materials

  15. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.
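
    The dot-enhancement step lends itself to a compact sketch: a Hessian-eigenvalue blob filter evaluated at several Gaussian scales, keeping the strongest normalized response. This is a generic Li/Frangi-style construction for bright blobs written for illustration; it is not the MAGIC-5 implementation, and the scales and the small epsilon are assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dot_enhancement(volume, sigmas=(1.0, 2.0, 4.0)):
    """Multi-scale Hessian 'dot' filter for bright, blob-like structures.

    At each scale, bright nodule-like voxels have three negative Hessian
    eigenvalues; with |l1| >= |l2| >= |l3|, the measure |l3|**2 / |l1|
    rewards isotropic blobs over vessels and walls.
    """
    out = np.zeros(volume.shape, dtype=float)
    for sigma in sigmas:
        sm = gaussian_filter(volume.astype(float), sigma)
        grads = np.gradient(sm)                      # first derivatives
        hess = np.empty(volume.shape + (3, 3))
        for i in range(3):                           # finite-difference Hessian
            gi = np.gradient(grads[i])               # (approximately symmetric)
            for j in range(3):
                hess[..., i, j] = gi[j]
        lam = np.linalg.eigvalsh(hess)               # eigenvalues, ascending
        order = np.argsort(-np.abs(lam), axis=-1)    # sort by |eigenvalue|
        lam = np.take_along_axis(lam, order, axis=-1)
        l1, l3 = lam[..., 0], lam[..., 2]
        resp = np.where((lam < 0).all(axis=-1),
                        sigma**2 * np.abs(l3)**2 / np.maximum(np.abs(l1), 1e-9),
                        0.0)                         # sigma**2 normalizes scales
        out = np.maximum(out, resp)
    return out

# Tiny demo: a synthetic bright blob should dominate the response map.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
vol = np.exp(-((z - 16)**2 + (y - 16)**2 + (x - 16)**2) / (2 * 3.0**2))
resp = dot_enhancement(vol)
print(np.unravel_index(np.argmax(resp), resp.shape))  # ~ (16, 16, 16)
```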

  16. Spiral blood flows in an idealized 180-degree curved artery model

    Science.gov (United States)

    Bulusu, Kartik V.; Kulkarni, Varun; Plesniak, Michael W.

    2017-11-01

    Understanding of cardiovascular flows has been greatly advanced by the Magnetic Resonance Velocimetry (MRV) technique and its potential for three-dimensional velocity encoding in regions of anatomic interest. The MRV experiments were performed on a 180-degree curved artery model using a Newtonian blood analog fluid at the Richard M. Lucas Center at Stanford University employing a 3 Tesla General Electric (Discovery 750 MRI system) whole body scanner with an eight-channel cardiac coil. Analysis in two regions of the model artery was performed for flow with Womersley number = 4.2. In the entrance region (or straight inlet pipe) the unsteady pressure drop per unit length, in-plane vorticity and wall shear stress for the pulsatile, carotid artery-based flow rate waveform were calculated. Along the 180-degree curved pipe (curvature ratio = 1/7) the near-wall vorticity and the stretching of the particle paths in the vorticity field are visualized. The resultant flow behavior in the idealized curved artery model is associated with parameters such as the Dean number and the Womersley number. Additionally, using length scales corresponding to the axial and secondary flow we attempt to understand the mechanisms leading to the formation of various structures observed during the pulsatile flow cycle. Supported by GW Center for Biomimetics and Bioinspired Engineering (COBRE), MRV measurements in collaboration with Prof. John K. Eaton and Dr. Chris Elkins at Stanford University.
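
    For reference, the two dimensionless groups named above are simple to compute; the sketch below evaluates them from assumed fluid and geometry values (only the Womersley number of 4.2 and the curvature ratio of 1/7 come from the abstract; the rest are illustrative).

```python
import numpy as np

# Dimensionless groups for pulsatile flow in a curved tube. All numerical
# values are assumptions chosen to roughly reproduce Wo ~ 4.2.
rho   = 1.05e3           # fluid density [kg/m^3]
mu    = 3.5e-3           # dynamic viscosity [Pa.s]
a     = 3.1e-3           # pipe radius [m]
Rc    = 7.0 * a          # radius of curvature, so curvature ratio a/Rc = 1/7
omega = 2 * np.pi * 1.0  # angular frequency of the waveform [rad/s]
U     = 0.2              # cycle-averaged axial velocity [m/s]

Wo = a * np.sqrt(omega * rho / mu)   # Womersley number
Re = rho * U * (2 * a) / mu          # Reynolds number (diameter-based)
De = Re * np.sqrt(a / Rc)            # Dean number
print(f"Wo = {Wo:.1f}, Re = {Re:.0f}, De = {De:.0f}")
```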

  17. Supply-cost curves for geographically distributed renewable-energy resources

    International Nuclear Information System (INIS)

    Izquierdo, Salvador; Dopazo, Cesar; Fueyo, Norberto

    2010-01-01

    The supply-cost curves of renewable-energy sources are an essential tool to synthesize and analyze large-scale energy-policy scenarios, both in the short and long terms. Here, we suggest and test a parametrization of such curves that allows their representation for modeling purposes with a minimal set of information. In essence, an economic potential is defined based on the mode of the marginal supply-cost curves; and, using this definition, a normalized log-normal distribution function is used to model these curves. The feasibility of this proposal is assessed with data from a GIS-based analysis of solar, wind and biomass technologies in Spain. The best agreement is achieved for solar energy.
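
    A minimal sketch of the parametrization described above, assuming illustrative log-space parameters and total potential (the paper fits such values from GIS data): the cumulative potential available below a given marginal cost is a scaled log-normal CDF, and the mode of the marginal curve anchors the economic potential.

```python
import numpy as np
from scipy.stats import lognorm

def supply_at_cost(c, q_total, mu, sigma):
    """Cumulative potential available at marginal cost <= c.

    q_total scales a log-normal CDF in cost; mu and sigma are the
    log-space parameters (illustrative, not the paper's fitted values).
    scipy's lognorm uses shape s = sigma and scale = exp(mu).
    """
    return q_total * lognorm.cdf(c, s=sigma, scale=np.exp(mu))

mu, sigma, q_total = np.log(60.0), 0.4, 150.0   # EUR/MWh, TWh/yr (assumed)
mode_cost = np.exp(mu - sigma**2)               # mode of the marginal curve
print(f"economic potential anchored at mode cost {mode_cost:.1f} EUR/MWh")
print(f"potential below 70 EUR/MWh: {supply_at_cost(70, q_total, mu, sigma):.0f} TWh/yr")
```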

  18. 3D CT cerebral angiography technique using a 320-detector machine with a time–density curve and low contrast medium volume: Comparison with fixed time delay technique

    International Nuclear Information System (INIS)

    Das, K.; Biswas, S.; Roughley, S.; Bhojak, M.; Niven, S.

    2014-01-01

    Aim: To describe a cerebral computed tomography angiography (CTA) technique using a 320-detector CT machine and a small contrast medium volume (35 ml, 15 ml for test bolus). Also, to compare the quality of these images with that of the images acquired using a larger contrast medium volume (90 or 120 ml) and a fixed time delay (FTD) of 18 s using a 16-detector CT machine. Materials and methods: Cerebral CTA images were acquired using a 320-detector machine by synchronizing the scanning time with the time of peak enhancement as determined from the time–density curve (TDC) using a test bolus dose. The quality of CTA images acquired using this technique was compared with that obtained using a FTD of 18 s (by 16-detector CT), retrospectively. Average densities in four different intracranial arteries, overall opacification of arteries, and the degree of venous contamination were graded and compared. Results: Thirty-eight patients were scanned using the TDC technique and 40 patients using the FTD technique. The arterial densities achieved by the TDC technique were higher (significant for supraclinoid and basilar arteries, p < 0.05). The proportion of images deemed as having “good” arterial opacification was 95% for TDC and 90% for FTD. The degree of venous contamination was significantly higher in images produced by the FTD technique (p < 0.001). Conclusion: Good diagnostic quality CTA images with significant reduction of venous contamination can be achieved with a low contrast medium dose using a 320-detector machine by coupling the time of data acquisition with the time of peak enhancement.
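
    The core of the TDC approach is reading the time of peak enhancement off the test-bolus curve. Below is a minimal sketch assuming uniformly sampled density measurements in an ROI; the baseline subtraction and parabolic peak refinement are illustrative choices, not the authors' protocol.

```python
import numpy as np

def peak_enhancement_delay(times_s, hu_values, baseline_n=3):
    """Estimate the time of peak enhancement from a test-bolus
    time-density curve (a sketch, not the clinical implementation)."""
    hu = np.asarray(hu_values, dtype=float)
    hu -= hu[:baseline_n].mean()          # subtract pre-contrast baseline
    i = int(np.argmax(hu))
    if 0 < i < len(hu) - 1:               # parabolic refinement of the peak
        denom = hu[i - 1] - 2 * hu[i] + hu[i + 1]
        shift = 0.5 * (hu[i - 1] - hu[i + 1]) / denom if denom else 0.0
        dt = times_s[1] - times_s[0]
        return times_s[i] + shift * dt
    return times_s[i]

t = np.arange(0, 30, 2.0)                              # one sample every 2 s
curve = 40 + 260 * np.exp(-0.5 * ((t - 16) / 4) ** 2)  # synthetic TDC [HU]
print(f"scan delay = {peak_enhancement_delay(t, curve):.1f} s")
```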

  19. Topographic characterization of nanostructures on curved polymer surfaces

    DEFF Research Database (Denmark)

    Feidenhans'l, Nikolaj Agentoft; Petersen, Jan C.; Taboryski, Rafael J.

    2014-01-01

    The availability of portable instrumentation for characterizing surface topography on the micro- and nanometer scale is very limited. In particular, the handling of curved surfaces, both concave and convex, is complicated or not possible on current instrumentation. However, the currently growing use of injection moulding of polymer parts featuring nanostructured surfaces requires an instrument that can characterize these structures to ensure replication confidence between the master structure and the replicated polymer parts. To facilitate the commercialization of injection moulded polymer parts featuring nanostructures, this project concerns the development of a metrologically traceable quality-control method with a portable instrument that can be used in a production environment and topographically characterize nanometer-scale surface structures on both flat and curved surfaces.

  20. Interactions of cosmic rays in the atmosphere: growth curves revisited

    Energy Technology Data Exchange (ETDEWEB)

    Obermeier, A.; Boyle, P.; Müller, D. [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Hörandel, J., E-mail: a.obermeier@astro.ru.nl [Radboud Universiteit Nijmegen, 6525-HP Nijmegen (Netherlands)

    2013-12-01

    Measurements of cosmic-ray abundances on balloons are affected by interactions in the residual atmosphere above the balloon. Corrections for such interactions are particularly important for observations of rare secondary particles such as boron, antiprotons, and positrons. These corrections either can be calculated if the relevant cross sections in the atmosphere are known or may be empirically determined by extrapolation of the 'growth curves', i.e., the individual particle intensities as functions of atmospheric depth. The growth-curve technique is particularly attractive for long-duration balloon flights where the periodic daily altitude variations permit rather precise determinations of the corresponding particle intensity variations. We determine growth curves for nuclei from boron (Z = 5) to iron (Z = 26) using data from the 2006 Arctic balloon flight of the TRACER detector for cosmic-ray nuclei, and we compare the growth curves with predictions from published cross section values. In general, good agreement is observed. We then study the boron/carbon abundance ratio and derive a simple and energy-independent correction term for this ratio. We emphasize that the growth-curve technique can be developed further to provide highly accurate tests of published interaction cross section values.
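
    A toy version of the growth-curve extrapolation: for a primary species attenuated in the residual atmosphere, fitting log intensity against depth and extrapolating to zero depth recovers the top-of-atmosphere rate. This assumes a single-exponential growth curve, which is a simplification of the analysis described above; all numbers are synthetic.

```python
import numpy as np

def extrapolate_growth_curve(depth_g_cm2, counts):
    """Extrapolate a growth curve (intensity vs. atmospheric depth) to the
    top of the atmosphere, assuming N(x) = N0 * exp(-x / L)."""
    slope, intercept = np.polyfit(depth_g_cm2, np.log(counts), 1)
    n0 = np.exp(intercept)        # intensity at zero residual atmosphere
    att_len = -1.0 / slope        # effective attenuation length [g/cm^2]
    return n0, att_len

depth = np.array([4.0, 5.0, 6.5, 8.0])   # depths from daily altitude cycles
rate  = 100 * np.exp(-depth / 14.0)      # synthetic particle rate
n0, L = extrapolate_growth_curve(depth, rate)
print(f"top-of-atmosphere rate {n0:.1f}, attenuation length {L:.1f} g/cm^2")
```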

  1. A brief fatigue inventory of shoulder health developed by quality function deployment technique.

    Science.gov (United States)

    Liu, Shuo-Fang; Lee, Yannlong; Huang, Yiting

    2009-01-01

    The purpose of this study was to develop a diagnostic outcome instrument that has high reliability and low cost. The scale, called the Shoulder Fatigue Scale-30 Items (SFS-30) risk assessment, was used to determine the severity of patient neck and shoulder discomfort. The quality function deployment (QFD) technique was used in designing and developing a simple medical diagnostic scale with a high degree of accuracy. Research data can be used to divide the common causes of neck and shoulder discomfort into 6 core categories: occupation, cumulative, psychologic, diseases, diet, and sleep quality. The SFS-30 was validated by using a group of individuals who had been previously diagnosed with different levels of neck and shoulder symptoms. The SFS-30 assessment determined that 78.57% of the participants experienced a neck and shoulder discomfort level above the SFS-30 risk curve and required immediate medical attention. The QFD technique can improve the accuracy and reliability of an assessment outcome instrument. This is mainly because the QFD technique is effective in prioritizing and assigning weight to the items in the scale. This research successfully developed a reliable risk assessment scale to diagnose neck and shoulder symptoms using the QFD technique. This scale was proven to have high accuracy and to closely represent reality.

  2. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

    This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturer's pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point.
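
    As a sketch of the curve-fitting options named above, the snippet below builds a reference curve from baseline test points with both a quadratic least-squares fit and a cubic spline, then evaluates each at a flow rate observed in a later test. The data and the polynomial order are illustrative assumptions.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Baseline pump test data (illustrative values only).
flow = np.array([100, 200, 300, 400, 500, 600])   # flow rate [gpm]
head = np.array([310, 305, 292, 270, 240, 200])   # differential head [ft]

poly = np.polyfit(flow, head, 2)          # quadratic least-squares fit
spline = CubicSpline(flow, head)          # cubic spline interpolation

q_test = 350.0                            # flow observed in a later IST run
print(f"poly reference head  : {np.polyval(poly, q_test):.1f} ft")
print(f"spline reference head: {spline(q_test):.1f} ft")
```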

  3. Universal rescaling of flow curves for yield-stress fluids close to jamming

    Science.gov (United States)

    Dinkgreve, M.; Paredes, J.; Michels, M. A. J.; Bonn, D.

    2015-07-01

    The experimental flow curves of four different yield-stress fluids with different interparticle interactions are studied near the jamming concentration. By appropriate scaling with the distance to jamming all rheology data can be collapsed onto master curves below and above jamming that meet in the shear-thinning regime and satisfy the Herschel-Bulkley and Cross equations, respectively. In spite of differing interactions in the different systems, master curves characterized by universal scaling exponents are found for the four systems. A two-state microscopic theory of heterogeneous dynamics is presented to rationalize the observed transition from Herschel-Bulkley to Cross behavior and to connect the rheological exponents to microscopic exponents for the divergence of the length and time scales of the heterogeneous dynamics. The experimental data and the microscopic theory are compared with much of the available literature data for yield-stress systems.
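
    The Herschel-Bulkley branch of the master curves has the form tau = tau_y + k * gamma_dot**n; the sketch below fits it to synthetic flow-curve data, which is the routine step that precedes the rescaling analysis described above. Parameter values are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, k, n):
    """Herschel-Bulkley flow curve: tau = tau_y + k * gamma_dot**n."""
    return tau_y + k * gamma_dot**n

# Synthetic flow-curve data for a yield-stress fluid (illustrative only).
gd = np.logspace(-2, 2, 20)                         # shear rate [1/s]
tau = herschel_bulkley(gd, 5.0, 2.0, 0.5)           # stress [Pa]
tau *= 1 + 0.03 * np.random.default_rng(0).standard_normal(gd.size)

popt, _ = curve_fit(herschel_bulkley, gd, tau, p0=(1.0, 1.0, 0.7))
print("tau_y = {:.2f} Pa, k = {:.2f}, n = {:.2f}".format(*popt))
```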

  4. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = Ae(exp Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = Ax(exp B) + C and the general geometric growth equation y = Ak(exp Bt) + C.
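
    A minimal sketch of the idea (not the reported implementation): because y = A*exp(B*t) + C satisfies dy/dt = B*(y - C), a discrete derivative turns the problem into two ordinary linear fits, avoiding iteration entirely.

```python
import numpy as np

def fit_exponential(t, y):
    """Non-iterative fit of y = A*exp(B*t) + C via discrete calculus.

    A linear fit of the discrete derivative against y yields B (the
    intercept equals -B*C); with B known, a second linear fit of y
    against exp(B*t) recovers A and C.
    """
    dydt = np.gradient(y, t)                 # discrete derivative
    B, intercept = np.polyfit(y, dydt, 1)    # dy/dt = B*y - B*C
    basis = np.exp(B * t)
    A, C = np.polyfit(basis, y, 1)           # y = A*exp(B*t) + C
    return A, B, C

t = np.linspace(0, 2, 50)
y = 3.0 * np.exp(-1.5 * t) + 0.5
print("A, B, C =", np.round(fit_exponential(t, y), 3))  # ~ (3, -1.5, 0.5)
```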

  5. Parameter Deduction and Accuracy Analysis of Track Beam Curves in Straddle-type Monorail Systems

    Directory of Open Access Journals (Sweden)

    Xiaobo Zhao

    2015-12-01

    The accuracy of the bottom curve of a PC track beam is strongly related to the production quality of the entire beam. Many factors may affect the parameters of the bottom curve, such as the superelevation of the curve and the deformation of the PC track beam. At present, no effective method has been developed to determine the bottom curve of a PC track beam; therefore, a new technique is presented in this paper to deduce the parameters of such a curve and to control the accuracy of the computation results. First, the domain of the bottom curve of a PC track beam is assumed to be a spindle plane. Then, the corresponding supposed top curve domain is determined based on a geometrical relationship that is the opposite of that identified by the conventional method. Second, several optimal points are selected from the supposed top curve domain according to the dichotomy algorithm; the supposed top curve is thus generated by connecting these points. Finally, one rigorous criterion, based on the fractal dimension, is established to assess the accuracy of the assumed top curve deduced in the previous step. If this supposed curve coincides completely with the known top curve, then the assumed bottom curve corresponding to the assumed top curve is considered to be the real bottom curve. This technique of determining the bottom curve of a PC track beam is thus shown to be efficient and accurate.

  6. Curved canals: Ancestral files revisited

    Directory of Open Access Journals (Sweden)

    Jain Nidhi

    2008-01-01

    The aim of this article is to provide an insight into different techniques of cleaning and shaping curved root canals with hand instruments. Although a plethora of root canal instruments such as ProFile, ProTaper, LightSpeed® etc. dominate the current scenario, inexpensive conventional root canal hand files such as K-files and flexible files can be used to obtain optimum results when handled meticulously. Special emphasis has been put on the modifications in biomechanical canal preparation in a variety of curved canal cases. This article compiles a series of clinical cases of root canals with curvatures in the middle and apical third and with S-shaped curvatures that were successfully completed by employing only conventional root canal hand instruments.

  7. Flood damage curves for consistent global risk assessments

    Science.gov (United States)

    de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek

    2016-04-01

    Assessing potential damage of flood events is an important component in flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves, which are based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to the different methodologies employed for the various damage models in different countries, damage assessments cannot be directly compared with each other, also obstructing supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent of damage as a function of water depth as well as maximum damage values for a variety of assets and land use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey, concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building material, etc. This dataset can be used for consistent supra-national flood damage assessments.
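
    A minimal sketch of how such a dataset is applied: interpolate the damage fraction from a concave depth-damage curve and scale by a country-level maximum damage value. The curve points and prices below are invented for illustration.

```python
import numpy as np

# Illustrative concave depth-damage curve for one land-use class.
depths   = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 6.0])      # water depth [m]
fraction = np.array([0.0, 0.25, 0.40, 0.60, 0.75, 1.0])  # damage fraction

def direct_damage(depth_m, max_damage_eur_m2, area_m2):
    """Direct flood damage = interpolated damage fraction x maximum
    damage value x exposed area (all inputs assumed)."""
    frac = np.interp(depth_m, depths, fraction)
    return frac * max_damage_eur_m2 * area_m2

print(f"damage: {direct_damage(1.5, 600.0, 120.0):,.0f} EUR")
```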

  8. Statistical assessment of the learning curves of health technologies.

    Science.gov (United States)

    Ramsay, C R; Grant, A M; Wallace, S A; Garthwaite, P H; Monk, A F; Russell, I T

    2001-01-01

    (1) To describe systematically studies that directly assessed the learning curve effect of health technologies. (2) Systematically to identify 'novel' statistical techniques applied to learning curve data in other fields, such as psychology and manufacturing. (3) To test these statistical techniques in data sets from studies of varying designs to assess health technologies in which learning curve effects are known to exist. METHODS - STUDY SELECTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): For a study to be included, it had to include a formal analysis of the learning curve of a health technology using a graphical, tabular or statistical technique. METHODS - STUDY SELECTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): For a study to be included, it had to include a formal assessment of a learning curve using a statistical technique that had not been identified in the previous search. METHODS - DATA SOURCES: Six clinical and 16 non-clinical biomedical databases were searched. A limited amount of handsearching and scanning of reference lists was also undertaken. METHODS - DATA EXTRACTION (HEALTH TECHNOLOGY ASSESSMENT LITERATURE REVIEW): A number of study characteristics were abstracted from the papers such as study design, study size, number of operators and the statistical method used. METHODS - DATA EXTRACTION (NON-HEALTH TECHNOLOGY ASSESSMENT LITERATURE SEARCH): The new statistical techniques identified were categorised into four subgroups of increasing complexity: exploratory data analysis; simple series data analysis; complex data structure analysis, generic techniques. METHODS - TESTING OF STATISTICAL METHODS: Some of the statistical methods identified in the systematic searches for single (simple) operator series data and for multiple (complex) operator series data were illustrated and explored using three data sets. The first was a case series of 190 consecutive laparoscopic fundoplication procedures performed by a single surgeon; the second

  9. Construction of molecular potential energy curves by an optimization method

    Science.gov (United States)

    Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.

    1991-01-01

    A technique for determining the potential energy curves for diatomic molecules from measurements of diffused or continuum spectra is presented. It is based on a numerical procedure which minimizes the difference between the calculated spectra and the experimental measurements and can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be simultaneously obtained. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E ³Σu⁻ and B ³Σu⁻ potential curves in analytical form.

  10. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â(Q/Q_GM)^b], where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
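
    The discharge-normalized fit is a one-liner in log space; below is a sketch with synthetic data, where the fitted â is the concentration at Q = Q_GM.

```python
import numpy as np

def fit_rating_curve(q, c):
    """Fit the discharge-normalized power law C = a_hat * (Q/Q_GM)**b.

    Normalizing Q by its geometric mean decouples the offset a_hat from
    the exponent b, following the paper's parametrization."""
    q_gm = np.exp(np.mean(np.log(q)))                 # geometric mean of Q
    b, log_a_hat = np.polyfit(np.log(q / q_gm), np.log(c), 1)
    return np.exp(log_a_hat), b, q_gm

rng = np.random.default_rng(1)
q = 10 ** rng.uniform(0, 3, 200)                      # discharge samples
c = 0.5 * q ** 1.3 * np.exp(0.2 * rng.standard_normal(q.size))
a_hat, b, q_gm = fit_rating_curve(q, c)
print(f"a_hat = {a_hat:.2f}, b = {b:.2f}, Q_GM = {q_gm:.1f}")
```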

  11. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  12. Some fundamental questions about R-curves

    International Nuclear Information System (INIS)

    Kolednik, O.

    1992-01-01

    With the help of two simple thought experiments it is demonstrated that there exist two physically different types of fracture toughness. The crack-growth toughness, which is identical to the Griffith crack growth resistance, R, is a measure of the non-reversible energy which is needed to produce an increment of new crack area. The size of R is reflected by the slopes of the R-curves commonly used, so an increasing J-Δa curve does not mean that the crack-growth resistance increases. The fracture initiation toughness, J_i, is a normalized total energy (related to the ligament area) which must be put into the specimen up to fracture initiation. Only for ideally brittle materials do R and J_i have equal sizes. For small-scale yielding a relationship exists between R and J_i, so a one-parameter description of fracture processes is applicable. For large-scale yielding R and J_i are not strictly related, and both parameters are necessary to describe the fracture process. (orig.) [de]

  13. Industrial scale production of stable isotopes employing the technique of plasma separation

    International Nuclear Information System (INIS)

    Stevenson, N.R.; Bigelow, T.S.; Tarallo, F.J.

    2003-01-01

    Calutrons, centrifuges, diffusion and distillation processes are some of the devices and techniques that have been employed to produce substantial quantities of enriched stable isotopes. Nevertheless, the availability of enriched isotopes in sufficient quantities for industrial applications remains very restricted. Industries such as those involved with medicine, semiconductors, nuclear fuel, propulsion, and national defense have identified the potential need for various enriched isotopes in large quantities. Economically producing most enriched (non-gaseous) isotopes in sufficient quantities has so far eluded commercial producers. The plasma separation process is a commercial technique now available for producing large quantities of a wide range of enriched isotopes. Until recently, this technique has mainly been explored with small-scale ('proof-of-principle') devices that have been built and operated at research institutes. The new Theragenics™ facility at Oak Ridge, TN houses the only existing commercial scale PSP system. This device, which successfully operated in the 1980s, has recently been re-commissioned and is planned to be used to produce a variety of isotopes. Progress, the capabilities of this device, and its potential for impacting the world's supply of stable isotopes in the future are summarized. This technique now holds promise of being able to open the door to allowing new and exciting applications of these isotopes in the future. (author)

  14. Injury risk curves for the WorldSID 50th male dummy.

    Science.gov (United States)

    Petitjean, Audrey; Trosseille, Xavier; Petit, Philippe; Irwin, Annette; Hassan, Joe; Praxl, Norbert

    2009-11-01

    The development of the WorldSID 50th percentile male dummy was initiated in 1997 by the International Organisation for Standardisation (ISO/SC12/TC22/WG5) with the objective of developing a more biofidelic side impact dummy and supporting the adoption of a harmonised dummy into regulations. More than 45 organizations from all around the world have contributed to this effort including governmental agencies, research institutes, car manufacturers and dummy manufacturers. The first production version of the WorldSID 50th male dummy was released in March 2004 and demonstrated an improved biofidelity over existing side impact dummies. Full scale vehicle tests covering a wide range of side impact test procedures were performed worldwide with the WorldSID dummy. However, the vehicle safety performance could not be assessed due to lack of injury risk curves for this dummy. The development of these curves was initiated in 2004 within the framework of ISO/SC12/TC22/WG6 (Injury criteria). In 2008, the ACEA Dummy Task Force (TFD) decided to contribute to this work and offered resources for a project manager to coordinate the effort of a group of volunteer biomechanical experts from international institutions (ISO, EEVC, VRTC/NHTSA, JARI, Transport Canada), car manufacturers (ACEA, Ford, General Motors, Honda, Toyota, Chrysler) and universities (Wayne State University, Ohio State University, Johns Hopkins University, Medical College of Wisconsin) to develop harmonized injury risk curves. An in-depth literature review was conducted. All the available PMHS datasets were identified, and the test configurations and the quality of the results were checked. Criteria were developed for inclusion or exclusion of PMHS tests in the development of the injury risk curves. Data were processed to account for differences in mass and age of the subjects. Finally, injury risk curves were developed using the following statistical techniques, the certainty method, the Mertz/Weber method, the
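
    Among the statistical techniques used for such curves, logistic regression on the log of the stimulus is a common choice; the sketch below fits one by maximum likelihood to synthetic deflection/injury data. It illustrates the general idea only and is not the harmonized procedure adopted by the group.

```python
import numpy as np
from scipy.optimize import minimize

def fit_injury_risk(stimulus, injured):
    """Fit a log-logistic injury risk curve P(injury | x) by maximum
    likelihood; stimulus and outcome data are synthetic."""
    x = np.log(stimulus)

    def nll(theta):                       # negative log-likelihood
        a, b = theta
        p = 1.0 / (1.0 + np.exp(-(a + b * x)))
        p = np.clip(p, 1e-12, 1 - 1e-12)
        return -np.sum(injured * np.log(p) + (1 - injured) * np.log(1 - p))

    res = minimize(nll, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
    return res.x

rng = np.random.default_rng(2)
deflection = rng.uniform(20, 80, 60)          # e.g., rib deflection [mm]
p_true = 1 / (1 + np.exp(-(np.log(deflection) - np.log(45)) * 6))
outcome = (rng.uniform(size=60) < p_true).astype(float)
a, b = fit_injury_risk(deflection, outcome)
x50 = np.exp(-a / b)                          # stimulus at 50% injury risk
print(f"50% injury risk at {x50:.1f} mm")
```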

  15. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction.

  16. On the distribution of Weierstrass points on Gorenstein quintic curves

    Directory of Open Access Journals (Sweden)

    Kamel Alwaleed

    2016-07-01

    This paper is concerned with developing a technique to compute in a very precise way the distribution of Weierstrass points on the members of any 1-parameter family C_a, a ∈ ℂ, of Gorenstein quintic curves with respect to the dualizing sheaf K_{C_a}. The nicest feature of the procedure is that it gives a way to produce examples of the existence of Weierstrass points with prescribed special gap sequences, by looking at plane curves or, more generally, at subcanonical curves embedded in some higher-dimensional projective space.

  17. Spherical nanoindentation of proton irradiated 304 stainless steel: A comparison of small scale mechanical test techniques for measuring irradiation hardening

    Science.gov (United States)

    Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley; Vo, Hi T.; Maloy, Stuart A.; Hosemann, Peter; Mara, Nathan A.

    2017-09-01

    Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current work focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa-30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. The disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.

  18. Applicability of laboratory data to large scale tests under dynamic loading conditions

    International Nuclear Information System (INIS)

    Kussmaul, K.; Klenk, A.

    1993-01-01

    The analysis of dynamic loading and subsequent fracture must be based on reliable data for loading and deformation history. This paper describes an investigation to examine the applicability of parameters which are determined by means of small-scale laboratory tests to large-scale tests. The following steps were carried out: (1) Determination of crack initiation by means of strain gauges applied in the crack tip field of compact tension specimens. (2) Determination of dynamic crack resistance curves of CT specimens using a modified key-curve technique. The key curves are determined by dynamic finite element analyses. (3) Determination of strain-rate-dependent stress-strain relationships for the finite element simulation of small-scale and large-scale tests. (4) Analysis of the loading history for small-scale tests with the aid of experimental data and finite element calculations. (5) Testing of dynamically loaded tensile specimens taken as strips from ferritic steel pipes with thicknesses of 13 mm and 18 mm, respectively. The strips contained slits and surface cracks. (6) Fracture mechanics analyses of the above-mentioned tests and of wide plate tests. The wide plates (960 × 608 × 40 mm³) had been tested in a propellant-driven 12 MN dynamic testing facility. For calculating the fracture mechanics parameters of both tests, a dynamic finite element simulation considering the dynamic material behaviour was employed. The finite element analyses showed good agreement with the simulated tests. This prerequisite made it possible to obtain critical J-integral values. Generally the results of the large-scale tests were conservative. 19 refs., 20 figs., 4 tabs

  19. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  20. SU-G-IeP3-14: Updating Tools for Radiographic Technique Charts

    Energy Technology Data Exchange (ETDEWEB)

    Walz-Flannigan, A; Lucas, J; Buchanan, K; Schueler, B [Mayo Clinic, Rochester, MN (United States)

    2016-06-15

    Purpose: Manual technique selection in radiography is needed for imaging situations where there is difficulty in proper positioning for AEC, for prostheses, for non-bucky imaging, or for guiding image repeats. Basic information about how to provide consistent image signal and contrast at various kV and tissue thicknesses is needed to create manual technique charts, and is relevant for physicists involved in technique chart optimization. Guidance on technique combinations and rules-of-thumb to provide consistent image signal still in use today is based on measurements of the optical density of screen-film combinations on older-generation x-ray systems. Tools such as a kV-scale chart can be useful for knowing how to modify mAs when kV is changed in order to maintain a consistent image receptor signal level. We evaluate these tools on modern equipment for use in optimizing properly size-scaled techniques. Methods: We used a water phantom to measure calibrated signal change for CR and DR (with grid) for various beam energies. Tube current values were calculated that would yield a consistent image signal response. Data were fitted to provide sufficient granularity of detail to compose a technique-scale chart. Tissue thickness was taken as equivalent to 80% of water depth. Results: We created updated technique-scale charts, providing mAs and kV combinations that achieve consistent signal for CR and DR for various tissue-equivalent thicknesses. We show how this information can be used to create properly scaled size-based manual technique charts. Conclusion: Relative scaling of mAs and kV for constant signal (i.e., the shape of the curve) appears substantially similar between film-screen and CR/DR. This supports the notion that image-receptor-related differences are minor factors for relative (not absolute) changes in mAs with varying kV. However, as demonstrated, these detailed technique scales, which are otherwise difficult to find, are useful tools for manual chart optimization.
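
    A small sketch of the kind of kV-mAs scaling such a chart encodes, using the classical kV^5 rule of thumb (a ~15% kV increase roughly halves the required mAs). The exponent is an assumption for illustration; the abstract's point is precisely that such scales should be re-measured on modern CR/DR systems, e.g., with a water phantom.

```python
def equivalent_mas(mas_ref, kv_ref, kv_new, exponent=5.0):
    """mAs giving roughly constant receptor signal when kV changes,
    per the kV^exponent rule of thumb (exponent assumed)."""
    return mas_ref * (kv_ref / kv_new) ** exponent

# One row of a technique-scale chart, anchored at 20 mAs @ 70 kV.
for kv in (60, 70, 81, 90):
    print(f"{kv:3d} kV -> {equivalent_mas(20.0, 70, kv):5.1f} mAs")
```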

  1. Greater Activity in the Frontal Cortex on Left Curves: A Vector-Based fNIRS Study of Left and Right Curve Driving.

    Directory of Open Access Journals (Sweden)

    Noriyuki Oka

    In the brain, the mechanisms of attention to the left and the right are known to be different. It is possible that brain activity when driving also differs with different horizontal road alignments (left or right curves), but little is known about this. We found driver brain activity to be different when driving on left and right curves, in an experiment using a large-scale driving simulator and functional near-infrared spectroscopy (fNIRS). The participants were fifteen healthy adults. We created a course simulating an expressway, comprising straight line driving and gentle left and right curves, and monitored the participants under driving conditions, in which they drove at a constant speed of 100 km/h, and under non-driving conditions, in which they simply watched the screen (visual task). Changes in hemoglobin concentrations were monitored at 48 channels including the prefrontal cortex, the premotor cortex, the primary motor cortex and the parietal cortex. From orthogonal vectors of changes in deoxyhemoglobin and changes in oxyhemoglobin, we calculated changes in cerebral oxygen exchange, reflecting neural activity, and statistically compared the resulting values from the right and left curve sections. Under driving conditions, there were no sites where cerebral oxygen exchange increased significantly more during right curves than during left curves (p > 0.05), but cerebral oxygen exchange increased significantly more during left curves (p < 0.05) in the right premotor cortex, the right frontal eye field and the bilateral prefrontal cortex. Under non-driving conditions, increases were significantly greater during left curves (p < 0.05) only in the right frontal eye field. Left curve driving was thus found to require more brain activity at multiple sites, suggesting that left curve driving may require more visual attention than right curve driving. The right frontal eye field was activated under both driving and non-driving conditions.

  2. Greater Activity in the Frontal Cortex on Left Curves: A Vector-Based fNIRS Study of Left and Right Curve Driving

    Science.gov (United States)

    Oka, Noriyuki; Yoshino, Kayoko; Yamamoto, Kouji; Takahashi, Hideki; Li, Shuguang; Sugimachi, Toshiyuki; Nakano, Kimihiko; Suda, Yoshihiro; Kato, Toshinori

    2015-01-01

    Objectives In the brain, the mechanisms of attention to the left and the right are known to be different. It is possible that brain activity when driving also differs with different horizontal road alignments (left or right curves), but little is known about this. We found driver brain activity to be different when driving on left and right curves, in an experiment using a large-scale driving simulator and functional near-infrared spectroscopy (fNIRS). Research Design and Methods The participants were fifteen healthy adults. We created a course simulating an expressway, comprising straight line driving and gentle left and right curves, and monitored the participants under driving conditions, in which they drove at a constant speed of 100 km/h, and under non-driving conditions, in which they simply watched the screen (visual task). Changes in hemoglobin concentrations were monitored at 48 channels including the prefrontal cortex, the premotor cortex, the primary motor cortex and the parietal cortex. From orthogonal vectors of changes in deoxyhemoglobin and changes in oxyhemoglobin, we calculated changes in cerebral oxygen exchange, reflecting neural activity, and statistically compared the resulting values from the right and left curve sections. Results Under driving conditions, there were no sites where cerebral oxygen exchange increased significantly more during right curves than during left curves (p > 0.05), but cerebral oxygen exchange increased significantly more during left curves (p < 0.05) in the right premotor cortex, the right frontal eye field and the bilateral prefrontal cortex. Under non-driving conditions, increases were significantly greater during left curves (p < 0.05) only in the right frontal eye field. Conclusions Left curve driving was thus found to require more brain activity at multiple sites, suggesting that left curve driving may require more visual attention than right curve driving. The right frontal eye field was activated under both driving and non-driving conditions.

  3. Characterizing time series via complexity-entropy curves

    Science.gov (United States)

    Ribeiro, Haroldo V.; Jauregui, Max; Zunino, Luciano; Lenzi, Ervin K.

    2017-06-01

    The search for patterns in time series is a very common task when dealing with complex systems. This is usually accomplished by employing a complexity measure such as entropies and fractal dimensions. However, such measures usually only capture a single aspect of the system dynamics. Here, we propose a family of complexity measures for time series based on a generalization of the complexity-entropy causality plane. By replacing the Shannon entropy by a monoparametric entropy (Tsallis q entropy) and after considering the proper generalization of the statistical complexity (q complexity), we build up a parametric curve (the q -complexity-entropy curve) that is used for characterizing and classifying time series. Based on simple exact results and numerical simulations of stochastic processes, we show that these curves can distinguish among different long-range, short-range, and oscillating correlated behaviors. Also, we verify that simulated chaotic and stochastic time series can be distinguished based on whether these curves are open or closed. We further test this technique in experimental scenarios related to chaotic laser intensity, stock price, sunspot, and geomagnetic dynamics, confirming its usefulness. Finally, we prove that these curves enhance the automatic classification of time series with long-range correlations and interbeat intervals of healthy subjects and patients with heart disease.
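
    The q → 1 (Shannon) corner of this construction is compact enough to sketch: estimate ordinal-pattern probabilities, compute the normalized permutation entropy H, and pair it with the Jensen-Shannon statistical complexity C. Sweeping a Tsallis q parameter through the same pipeline would trace the q-complexity-entropy curve of the paper; the embedding dimension below is an arbitrary choice.

```python
import numpy as np
from itertools import permutations
from math import factorial

def ordinal_probs(x, d=4):
    """Relative frequencies of ordinal patterns of embedding dimension d."""
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        counts[tuple(np.argsort(x[i:i + d]))] += 1
    p = np.array(list(counts.values()), dtype=float)
    return p / p.sum()

def shannon(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def complexity_entropy(x, d=4):
    """One point (H, C) of the complexity-entropy plane (Shannon case)."""
    p = ordinal_probs(x, d)
    n = factorial(d)
    u = np.full(n, 1.0 / n)                         # uniform reference
    h = shannon(p) / np.log(n)                      # normalized entropy
    js = shannon((p + u) / 2) - (shannon(p) + shannon(u)) / 2
    delta = np.zeros(n); delta[0] = 1.0             # most ordered state
    js_max = shannon((delta + u) / 2) - shannon(u) / 2
    return h, (js / js_max) * h

rng = np.random.default_rng(4)
print("white noise:", np.round(complexity_entropy(rng.standard_normal(5000)), 3))
```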

  4. Thermoluminescence glow curve for UV induced ZrO2:Ti phosphor with variable concentration of dopant and various heating rate

    Directory of Open Access Journals (Sweden)

    Neha Tiwari

    2014-10-01

    The present paper reports the synthesis and characterization of Ti-doped ZrO2 nanophosphors. The effects of variable concentrations of titanium on the thermoluminescence (TL) behaviour are studied. The samples were prepared by a combustion synthesis technique, which is quick and also suitable for large-scale production of nanophosphors. The starting materials used for sample preparation were Zr(NO3)3 and Ti(NO3)3, with urea used as a fuel. The prepared sample was characterized by the X-ray diffraction (XRD) technique; with variable Ti concentration (0.05–0.5 mol%) no phase change was found as the concentration of Ti increased. The sample shows a cubic structure, and the particle size was calculated by Scherrer's formula. The surface morphology of the prepared phosphor was determined by the field emission gun scanning electron microscopy (FEGSEM) technique for the optimized concentration of dopant. Good connectivity between grains and a semi-sphere-like structure were found by FEGSEM. The functional group analysis was carried out by Fourier transform infrared (FTIR) spectroscopy. The prepared phosphor was examined by the thermoluminescence technique. For recording each TL glow curve, 2 mg of phosphor was irradiated by a 254 nm UV source, with the heating rate fixed at 5 °C s−1. The sample shows a well-resolved peak at 167 °C with a shoulder peak at 376 °C; the higher-temperature peak indicates good stability and low fading in the prepared phosphor. The effect of Ti concentration at fixed UV exposure time was also studied, along with plots of intensity versus UV exposure time and dose. The sample shows a linear response with dose, and the broad higher-temperature peak again indicates high stability and low fading in the TL glow curve. The linear dose response, high stability and low fading suggest that the sample may be useful for thermoluminescence dosimetry applications. Trapping parameters are calculated for every recorded glow curve.
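
    Trapping parameters are commonly extracted from glow-curve geometry with Chen's peak-shape method; the sketch below applies those published formulas to illustrative half-maximum temperatures (the paper does not give its T1, Tm, T2 values, so the numbers are assumptions).

```python
K_B = 8.617e-5  # Boltzmann constant [eV/K]

def chen_peak_shape(t1, tm, t2):
    """Trap activation energies from glow-peak geometry (Chen's method).

    t1, tm, t2: temperatures [K] at half-maximum (rising side), at the
    peak, and at half-maximum (falling side). Coefficients are Chen's
    peak-shape constants for general-order kinetics.
    """
    tau, delta, omega = tm - t1, t2 - tm, t2 - t1
    mu_g = delta / omega                      # symmetry (geometry) factor
    e = {}
    for name, width, c, b in (
        ("tau",   tau,   1.51 + 3.0 * (mu_g - 0.42), 1.58 + 4.2 * (mu_g - 0.42)),
        ("delta", delta, 0.976 + 7.3 * (mu_g - 0.42), 0.0),
        ("omega", omega, 2.52 + 10.2 * (mu_g - 0.42), 1.0),
    ):
        e[name] = c * K_B * tm**2 / width - b * 2 * K_B * tm  # [eV]
    return mu_g, e

# Illustrative values around the 167 degC (~440 K) peak.
mu_g, e = chen_peak_shape(t1=405.0, tm=440.0, t2=470.0)
print(f"mu_g = {mu_g:.2f}")
for k, v in e.items():
    print(f"E_{k} = {v:.2f} eV")
```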

  5. Vortex depinning as a nonequilibrium phase transition phenomenon: Scaling of current-voltage curves near the low and the high critical-current states in 2 H -Nb S2 single crystals

    Science.gov (United States)

    Bag, Biplab; Sivananda, Dibya J.; Mandal, Pabitra; Banerjee, S. S.; Sood, A. K.; Grover, A. K.

    2018-04-01

    The vortex depinning phenomenon in single crystals of 2H-NbS2 superconductors is used as a prototype for investigating properties of the nonequilibrium (NEQ) depinning phase transition. The 2H-NbS2 is a unique system as it exhibits two distinct depinning thresholds, viz., a lower critical current Icl and a higher one Ich. While Icl is related to depinning of a conventional, static (pinned) vortex state, the state with Ich is achieved via a negative differential resistance (NDR) transition where the velocity abruptly drops. Using a generalized finite-temperature scaling ansatz, we study the scaling of current (I)-voltage (V) curves measured across Icl and Ich. Our analysis shows that for I > Icl, the moving vortex state exhibits Arrhenius-like thermally activated flow behavior. This feature persists up to a current value where an inflection in the IV curves is encountered. While past measurements have often reported similar inflections, our analysis shows that the inflection is a signature of a NEQ phase transformation from a thermally activated moving vortex phase to a free flowing phase. Beyond this inflection in the IV curves, a large vortex velocity flow regime is encountered in the 2H-NbS2 system, wherein the Bardeen-Stephen flux flow limit is crossed. In this regime the NDR transition is encountered, leading to the high-Ich state. We show that the IV curves above Ich do not obey the generalized finite-temperature scaling ansatz (as obeyed near Icl). Instead, they scale according to Fisher's scaling form [Fisher, Phys. Rev. B 31, 1396 (1985), 10.1103/PhysRevB.31.1396], where we show thermal fluctuations do not affect the vortex flow, unlike that found for depinning near Icl.

  6. Equilibrium spherically curved two-dimensional Lennard-Jones systems

    NARCIS (Netherlands)

    Voogd, J.M.; Sloot, P.M.A.; van Dantzig, R.

    2005-01-01

    To learn about basic aspects of nano-scale spherical molecular shells during their formation, spherically curved two-dimensional N-particle Lennard-Jones systems are simulated, studying curvature evolution paths at zero temperature. For many N values (N < 800) equilibrium configurations are traced.

  7. Can Low-Resolution Airborne Laser Scanning Data Be Used to Model Stream Rating Curves?

    Directory of Open Access Journals (Sweden)

    Steve W. Lyon

    2015-03-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.
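
    A physics-based rating curve of the kind described needs only channel geometry plus a flow-resistance term; a minimal sketch using Manning's equation for a rectangular cross-section is shown below (the width, slope and roughness are assumed stand-ins for ALS-derived values).

```python
import numpy as np

def rating_curve_rect(depths_m, width_m, slope, n_manning=0.035):
    """Stage-discharge curve for a rectangular channel via Manning's
    equation, Q = (1/n) * A * R**(2/3) * sqrt(S)."""
    area = width_m * depths_m                    # flow area A
    r_hyd = area / (width_m + 2 * depths_m)      # hydraulic radius R = A/P
    return (1.0 / n_manning) * area * r_hyd ** (2.0 / 3.0) * np.sqrt(slope)

stage = np.linspace(0.1, 2.0, 5)                 # water depths [m]
print(np.round(rating_curve_rect(stage, width_m=8.0, slope=0.002), 2))
```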

  8. Can low-resolution airborne laser scanning data be used to model stream rating curves?

    Science.gov (United States)

    Lyon, Steve; Nathanson, Marcus; Lam, Norris; Dahlke, Helen; Rutzinger, Martin; Kean, Jason W.; Laudon, Hjalmar

    2015-01-01

    This pilot study explores the potential of using low-resolution (0.2 points/m2) airborne laser scanning (ALS)-derived elevation data to model stream rating curves. Rating curves, which allow the functional translation of stream water depth into discharge, making them integral to water resource monitoring efforts, were modeled using a physics-based approach that captures basic geometric measurements to establish flow resistance due to implicit channel roughness. We tested synthetically thinned high-resolution (more than 2 points/m2) ALS data as a proxy for low-resolution data at a point density equivalent to that obtained within most national-scale ALS strategies. Our results show that the errors incurred due to the effect of low-resolution versus high-resolution ALS data were less than those due to flow measurement and empirical rating curve fitting uncertainties. As such, although there likely are scale and technical limitations to consider, it is theoretically possible to generate rating curves in a river network from ALS data of the resolution anticipated within national-scale ALS schemes (at least for rivers with relatively simple geometries). This is promising, since generating rating curves from ALS scans would greatly enhance our ability to monitor streamflow by simplifying the overall effort required.

  9. Extension and application of a scaling technique for duplication of in-flight aerodynamic heat flux in ground test facilities

    NARCIS (Netherlands)

    Veraar, R.G.

    2009-01-01

    To enable direct experimental duplication of the in-flight heat flux distribution on supersonic and hypersonic vehicles, an aerodynamic heating scaling technique has been developed. The scaling technique is based on the analytical equations for convective heat transfer for laminar and turbulent

  10. Dealing with Non-stationarity in Intensity-Frequency-Duration Curve

    Science.gov (United States)

    Rengaraju, S.; Rajendran, V.; C T, D.

    2017-12-01

    Extremes like floods and droughts are becoming more frequent and more damaging in recent times, a change generally attributed to climate change. One of the main concerns is whether existing infrastructure such as dams, storm water drainage networks, etc., which was designed under the so-called 'stationary' assumption, can withstand the expected severe extremes. The stationarity assumption considers that the statistics of extremes do not change with respect to time. However, recent studies have shown that climate change has altered climate extremes both temporally and spatially. Traditionally, observed non-stationarity in extreme precipitation is incorporated into the extreme value distributions in terms of changing parameters. Nevertheless, this raises the question of which parameter needs to change, i.e. location, scale or shape, since one or more of these parameters may vary at a given location. Hence, this study aims to detect the changing parameters to reduce the complexity involved in the development of non-stationary IDF curves and to provide the uncertainty bound of the estimated return level using a Bayesian Differential Evolutionary Monte Carlo (DE-MC) algorithm. Firstly, the extreme precipitation series is extracted using Peaks Over Threshold. Then, the time-varying parameter(s) is(are) detected for the extracted series using Generalized Additive Models for Location, Scale and Shape (GAMLSS). Then, the IDF curve is constructed using the Generalized Pareto Distribution, incorporating non-stationarity only if the parameter(s) is(are) changing with respect to time; otherwise the IDF curve follows the stationary assumption. Finally, the posterior probability intervals of the estimated return level are computed through the Bayesian DE-MC approach, and the non-stationarity-based IDF curve is compared with the stationarity-based IDF curve. The results of this study emphasize that the time-varying parameters also change spatially and that IDF curves should incorporate non-stationarity.
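
    As a stationary baseline for the workflow above, the sketch below fits a Generalized Pareto Distribution to peaks-over-threshold excesses and computes a return level; in the non-stationary case the GPD scale and/or shape would instead be modeled as functions of time. All data are synthetic.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(3)
rain = rng.gamma(shape=0.8, scale=12.0, size=40 * 365)   # daily rainfall [mm]
u = np.quantile(rain, 0.98)                              # POT threshold
exc = rain[rain > u] - u                                 # threshold excesses

c, _, scale = genpareto.fit(exc, floc=0)                 # shape, loc, scale
lam = exc.size / 40.0                                    # exceedances per year
T = 100.0                                                # return period [yr]
rl = u + scale / c * ((lam * T) ** c - 1)                # return level (c != 0)
print(f"threshold = {u:.1f} mm, 100-yr return level = {rl:.1f} mm")
```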

  11. Curves and surfaces for CAGD a practical guide

    CERN Document Server

    Farin, Gerald

    2002-01-01

    This fifth edition has been fully updated to cover the many advances made in CAGD and curve and surface theory since 1997, when the fourth edition appeared. Material has been restructured into theory and applications chapters. The theory material has been streamlined using the blossoming approach; the applications material includes least squares techniques in addition to the traditional interpolation methods. In all other respects, it is, thankfully, the same. This means you get the informal, friendly style and unique approach that has made Curves and Surfaces for CAGD: A Practical Guide a classic.

  12. A Note on Comparing the Elasticities of Demand Curves.

    Science.gov (United States)

    Nieswiadomy, Michael

    1986-01-01

    Demonstrates a simple and useful way to compare the elasticity of demand at each price (or quantity) for different demand curves. The technique is particularly useful for the intermediate microeconomic course. (Author)

  13. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    International Nuclear Information System (INIS)

    Sanders, N. E.; Soderberg, A. M.; Betancourt, M.

    2015-01-01

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.

  14. UNSUPERVISED TRANSIENT LIGHT CURVE ANALYSIS VIA HIERARCHICAL BAYESIAN INFERENCE

    Energy Technology Data Exchange (ETDEWEB)

    Sanders, N. E.; Soderberg, A. M. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Betancourt, M., E-mail: nsanders@cfa.harvard.edu [Department of Statistics, University of Warwick, Coventry CV4 7AL (United Kingdom)

    2015-02-10

    Historically, light curve studies of supernovae (SNe) and other transient classes have focused on individual objects with copious and high signal-to-noise observations. In the nascent era of wide field transient searches, objects with detailed observations are decreasing as a fraction of the overall known SN population, and this strategy sacrifices the majority of the information contained in the data about the underlying population of transients. A population level modeling approach, simultaneously fitting all available observations of objects in a transient sub-class of interest, fully mines the data to infer the properties of the population and avoids certain systematic biases. We present a novel hierarchical Bayesian statistical model for population level modeling of transient light curves, and discuss its implementation using an efficient Hamiltonian Monte Carlo technique. As a test case, we apply this model to the Type IIP SN sample from the Pan-STARRS1 Medium Deep Survey, consisting of 18,837 photometric observations of 76 SNe, corresponding to a joint posterior distribution with 9176 parameters under our model. Our hierarchical model fits provide improved constraints on light curve parameters relevant to the physical properties of their progenitor stars relative to modeling individual light curves alone. Moreover, we directly evaluate the probability for occurrence rates of unseen light curve characteristics from the model hyperparameters, addressing observational biases in survey methodology. We view this modeling framework as an unsupervised machine learning technique with the ability to maximize scientific returns from data to be collected by future wide field transient searches like LSST.
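
    The population-level idea can be sketched with a toy two-level model on synthetic data; the paper's actual model couples full light-curve shapes and is sampled with Hamiltonian Monte Carlo, which this simple random-walk sampler only stands in for:

```python
import numpy as np

# Toy hierarchy: each of 76 "supernovae" has a latent decline rate
# b_i ~ Normal(mu, tau); we observe b_hat_i with known error s_i.
rng = np.random.default_rng(1)
n_sn = 76
b_true = rng.normal(1.2, 0.3, n_sn)
s = np.full(n_sn, 0.2)
b_hat = rng.normal(b_true, s)

def log_post(theta):
    """Posterior of (mu, log tau) with the b_i marginalized analytically:
    the Normal-Normal convolution gives b_hat_i ~ N(mu, tau^2 + s_i^2)."""
    mu, log_tau = theta
    var = np.exp(2.0 * log_tau) + s ** 2
    return -0.5 * np.sum((b_hat - mu) ** 2 / var + np.log(var))  # flat priors

theta, lp = np.array([0.0, 0.0]), -np.inf
chain = []
for _ in range(20000):                      # random-walk Metropolis
    prop = theta + rng.normal(0.0, 0.05, 2)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
print(np.mean(chain[5000:], axis=0))        # posterior mean of (mu, log tau)
```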

  15. Identification of geometric faces in hand-sketched 3D objects containing curved lines

    Science.gov (United States)

    El-Sayed, Ahmed M.; Wahdan, A. A.; Youssif, Aliaa A. A.

    2017-07-01

    The reconstruction of 3D objects from 2D line drawings is regarded as one of the key topics in the field of computer vision. Ongoing research has mainly focused on the reconstruction of 3D objects that are mapped only from 2D straight lines and that are symmetric in nature. Commonly, this approach only produces basic and simple shapes that are mostly flat or rather polygonized in nature, which is normally attributed to an inability to handle curves. To overcome the above-mentioned limitations, a technique capable of handling non-symmetric drawings that encompass curves is considered. This paper discusses a novel technique that can be used to reconstruct 3D objects containing curved lines. In addition, it highlights an application that has been developed in accordance with the suggested technique that can convert a freehand sketch to a 3D shape using a mobile phone.

  16. LEARNING CURVE IN ENDOSCOPIC TRANSNASAL SELLAR REGION SURGERY

    Directory of Open Access Journals (Sweden)

    Ananth G

    2016-07-01

    BACKGROUND: The endoscopic endonasal approach for sellar region lesions is a novel technique and an effective surgical option. The evidence thus far has been conflicting, with reports both in favour of and against a learning curve. We attempt to determine the learning curve associated with this approach. METHODS: Retrospective and prospective data on patients surgically treated for sellar region lesions between 2013 and 2016 were collected; 32 patients were operated on by the endoscopic endonasal approach at Vydehi Institute of Medical Sciences and Research Centre, Bangalore. Age, sex, presenting symptoms, length of hospital stay, surgical approach, type of dissection, duration of surgery, sellar floor repair, and intraoperative and postoperative complications were noted. All procedures were performed by a single neurosurgeon. RESULTS: Of the 32 patients operated on, 21 had non-functioning pituitary adenomas, 2 had growth hormone-secreting functional adenomas, 1 had an invasive pituitary adenoma, 4 had craniopharyngiomas, 2 had meningiomas, 1 had a Rathke's cleft cyst, and 1 had a clival chordoma. Headache was the mode of presentation in 12 patients, 12 patients had visual deficits, and 6 patients presented with hormonal disturbances, of whom 4 presented with features of panhypopituitarism and 2 with acromegaly. Of the 4 patients with panhypopituitarism, 2 also had diabetes insipidus (DI); two patients presented with CSF rhinorrhoea. There was a 100% improvement in the patients who presented with visual symptoms. Gross total resection was achieved in all 4 cases of craniopharyngioma and in 13 cases of pituitary adenoma. Postoperative CSF leak was seen in 4 patients, who underwent re-exploration and sellar floor repair; 9 patients had transient postoperative DI, and the incidence of DI decreased towards the end of the study. There was a 25% decrease in operating time towards the end of the study.

  17. Application of the Particle Swarm Optimization (PSO) technique to the thermal-hydraulics project of a PWR reactor core in reduced scale

    International Nuclear Information System (INIS)

    Lima Junior, Carlos Alberto de Souza

    2008-09-01

    The design of reduced scale models has been employed by engineers in several industries, such as the offshore, space, oil extraction, and nuclear industries. Reduced scale models are used in experiments because they are economically attractive compared with the full-scale prototype: in many cases they are cheaper, and most of the time they are also easier to build, providing a way to guide the real scale design and allowing indirect investigation and analysis of the real scale system (prototype). A reduced scale model (or experiment) must be able to represent all the physical phenomena that occur, and will occur, in the real scale system under operational conditions; in this case the reduced scale model is called similar. There are several methods to design a reduced scale model, of which two are basic: the empirical method, based on the expert's skill in determining which physical measures are relevant to the desired model; and the differential equation method, which is based on a mathematical description of the prototype (real scale system) to be modeled. Applying a mathematical technique to the differential equations that describe the prototype highlights the relevant physical measures, so the reduced scale model design problem may be treated as an optimization problem. Many optimization techniques, such as the Genetic Algorithm (GA), have been developed to solve this class of problems and have been applied to the reduced scale model design problem as well. In this work, the Particle Swarm Optimization (PSO) technique is investigated as an alternative optimization tool for such problems. In this investigation, a computational approach based on the particle swarm optimization technique (PSO) is used to design a reduced scale two-loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power operation under forced cooling circulation and non-accidental operating conditions. A performance comparison with the genetic algorithm approach is also presented.
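
    A minimal global-best PSO sketch follows (a generic numpy implementation; the two-parameter objective is hypothetical and merely stands in for the thesis's thermal-hydraulic similarity criteria):

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iters=200, w=0.7, c1=1.5, c2=1.5):
    """Plain global-best particle swarm optimization."""
    rng = np.random.default_rng(0)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)]
    for _ in range(n_iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Hypothetical objective: penalize mismatch of two similarity ratios
# between the scaled model and the prototype (illustrative only).
obj = lambda p: (p[0] * p[1] - 1.0) ** 2 + (p[0] / p[1] - 2.0) ** 2
best, val = pso_minimize(obj, np.array([[0.1, 5.0], [0.1, 5.0]]))
```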

  18. Long-Term Tracking of Free-Swimming Paramecium caudatum in Viscous Media Using a Curved Sample Chamber

    Directory of Open Access Journals (Sweden)

    Mohiuddin Khan Shourav

    2017-12-01

    It is technically difficult to acquire large-field images under the complexity and cost restrictions of a diagnostic and instant field research purpose. The goal of the introduced large-field imaging system is to achieve a tolerable resolution for detecting microscale particles or objects in the entire image field without the field-curvature effect, while maintaining a cost-effective procedure and simple design. To use a single commercial lens for imaging a large field, the design attempts to fabricate a curved microfluidic chamber. This imaging technique improves the field curvature and distortion at an acceptable level of particle detection. This study examines Paramecium caudatum microswimmers to track their motion dynamics in different viscous media with imaging techniques. In addition, the study found that the average speed for P. caudatum was 60 µm/s, with a standard deviation of ±12 µm/s from microscopic imaging of the original medium of the sample, which leads to a variation of 20% from the average measurement. In contrast, from large-field imaging, the average speeds of P. caudatum were 63 µm/s and 68 µm/s in the flat and curved chambers, respectively, with the same medium viscosity. Furthermore, the standard deviations that were observed were ±7 µm/s and ±4 µm/s and the variations from the average speed were calculated as 11% and 5.8% for the flat and curved chambers, respectively. The proposed methodology can be applied to measure the locomotion of the microswimmer at small scales with high precision.

  19. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique is described, and the proposed method is compared with piecewise-linear approximation and power-series expansion.

  20. On a framework for generating PoD curves assisted by numerical simulations

    Energy Technology Data Exchange (ETDEWEB)

    Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu, E-mail: prajagopal@iitm.ac.in [Indian Institute of Technology Madras, Department of Mechanical Engineering, Chennai, T.N. (India); Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar [Indira Gandhi Centre for Atomic Research, Metallurgy and Materials Group, Kalpakkam, T.N. (India)

    2015-03-31

    The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry where inspection qualification is very important. The conventional experimental means of generating PoD curves though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process including codes, standards, distribution of defect parameters and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be more complex or sparse to justify this. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
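
    A common building block in this setting is the log-probit "hit/miss" PoD model, PoD(a) = Phi((ln a - mu)/sigma); here is a minimal maximum-likelihood sketch with hypothetical inspection data (the paper's Bayesian, simulation-assisted procedure goes well beyond this):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical hit/miss data: defect sizes a (mm) and detection outcomes.
a = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 4.0, 5.0])
hit = np.array([0, 0, 1, 0, 1, 1, 1, 1, 1, 1])

def neg_log_lik(theta):
    """Negative log-likelihood of PoD(a) = Phi((ln a - mu)/sigma)."""
    mu, log_sigma = theta
    p = np.clip(norm.cdf((np.log(a) - mu) / np.exp(log_sigma)), 1e-9, 1 - 1e-9)
    return -np.sum(hit * np.log(p) + (1 - hit) * np.log(1 - p))

res = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])
a90 = np.exp(mu_hat + sigma_hat * norm.ppf(0.90))   # size with 90% PoD
```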

  1. Transformation-invariant and nonparametric monotone smooth estimation of ROC curves.

    Science.gov (United States)

    Du, Pang; Tang, Liansheng

    2009-01-30

    When a new diagnostic test is developed, it is of interest to evaluate its accuracy in distinguishing diseased subjects from non-diseased subjects. The accuracy of the test is often evaluated by receiver operating characteristic (ROC) curves. Smooth ROC estimates are often preferable for continuous test results when the underlying ROC curves are in fact continuous. Nonparametric and parametric methods have been proposed by various authors to obtain smooth ROC curve estimates. However, there are certain drawbacks with the existing methods. Parametric methods need specific model assumptions. Nonparametric methods do not always satisfy the inherent properties of the ROC curves, such as monotonicity and transformation invariance. In this paper we propose a monotone spline approach to obtain smooth monotone ROC curves. Our method ensures important inherent properties of the underlying ROC curves, which include monotonicity, transformation invariance, and boundary constraints. We compare the finite sample performance of the newly proposed ROC method with other ROC smoothing methods in large-scale simulation studies. We illustrate our method through a real life example. Copyright (c) 2008 John Wiley & Sons, Ltd.
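
    As a rough stand-in for the paper's monotone spline estimator (explicitly not the authors' method), a kernel-smoothed ROC followed by isotonic regression preserves the monotonicity property the authors emphasize:

```python
import numpy as np
from scipy.stats import norm
from sklearn.isotonic import IsotonicRegression

def smooth_monotone_roc(neg, pos, fpr=np.linspace(0.0, 1.0, 101), h=0.3):
    """ROC(t) = S_pos(S_neg^{-1}(t)); Gaussian-kernel smoothing of the
    survival function, then isotonic regression to enforce monotonicity."""
    thresholds = np.quantile(neg, 1.0 - fpr)            # S_neg^{-1}(fpr)
    tpr = norm.sf((thresholds[:, None] - pos[None, :]) / h).mean(axis=1)
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
    return fpr, iso.fit_transform(fpr, tpr)

rng = np.random.default_rng(2)
fpr, roc = smooth_monotone_roc(rng.normal(0, 1, 200), rng.normal(1, 1, 200))
```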

  2. Comparative analysis of the apparent saturation hysteresis approach and the domain theory of hysteresis in respect of prediction of scanning curves and air entrapment

    Science.gov (United States)

    Beriozkin, A.; Mualem, Y.

    2018-05-01

    This study theoretically analyzes the concept of apparent saturation hysteresis, combined with the Scott et al. (1983) scaling approach, as suggested by Parker and Lenhard (1987), to account for the effect of air entrapment and release on soil water hysteresis. We found that the theory of Parker and Lenhard (1987) comprises some mutually canceling mathematical operations, and when cleared of the superfluous intermediate calculations, their model reduces to the original Scott et al. (1983) scaling method, supplemented with the requirement of closure of scanning loops. Our analysis reveals that their technique of accounting for entrapped air actually has no effect on the final prediction of the effective saturation (or water content) scanning curves. Our consideration indicates that the use of the Land (1968) formula for assessing the amount of entrapped air is at odds with the apparent saturation concept as introduced by Parker and Lenhard (1987). In this paper, a proper routine is suggested for predicting hysteretic scanning curves of any order, given the two measured main curves, in the complete hysteretic domain, and some verification tests are carried out against measured results. Accordingly, explicit closed-form formulae for direct prediction (with no need of intermediate calculation) of scanning curves up to the third order are derived to sustain our analysis.

  3. Dual kinetic curves in reversible electrochemical systems.

    Directory of Open Access Journals (Sweden)

    Michael J Hankins

    We introduce dual kinetic chronoamperometry, in which reciprocal relations are established between the kinetic curves of electrochemical reactions that start from symmetrical initial conditions. We have performed numerical and experimental studies in which the kinetic curves of the electron-transfer processes are analyzed for a reversible first order reaction. Experimental tests were done with the ferrocyanide/ferricyanide system, in which the concentrations of each component could be measured separately using the platinum disk/gold ring electrode. It is shown that the proper ratio of the transient kinetic curves obtained from cathodic and anodic mass transfer limited regions gives thermodynamic time invariances related to the reaction quotient of the bulk concentrations. Therefore, thermodynamic time invariances can be observed at any time using the dual kinetic curves for reversible reactions. The technique provides a unique possibility to extract the non-steady state trajectory starting from one initial condition based only on the equilibrium constant and the trajectory which starts from the symmetrical initial condition. The results could impact battery technology by predicting the concentrations and currents of the underlying non-steady state processes in a wide domain from thermodynamic principles and limited kinetic information.

  4. Measurements of liquid phase residence time distributions in a pilot-scale continuous leaching reactor using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Sharma, V.K.; Shenoy, K.T.; Sreenivas, T.

    2015-01-01

    An alkaline-based continuous leaching process is commonly used for the extraction of uranium from uranium ore. The reactor in which the leaching process is carried out is called a continuous leaching reactor (CLR) and is expected to behave as a continuously stirred tank reactor (CSTR) for the liquid phase. A pilot-scale CLR used in a Technology Demonstration Pilot Plant (TDPP) was designed, installed and operated, and thus needed to be tested for its hydrodynamic behavior. A radiotracer investigation was carried out in the CLR to measure the residence time distribution (RTD) of the liquid phase, with the specific objectives of characterizing the flow behavior of the reactor and validating its design. Bromine-82 as ammonium bromide was used as the radiotracer, and about 40–60 MBq of activity was used in each run. The measured RTD curves were treated, mean residence times were determined, and the data were simulated using a tanks-in-series model. The results of the simulation indicated no flow abnormality, and the reactor behaved as an ideal CSTR over the range of operating conditions used in the investigation. - Highlights: • Radiotracer technique was applied for evaluation of the design of a pilot-scale continuous leaching reactor. • Mean residence time and dead volume were estimated. Dead volume was found to range from 4% to 15% at different operating conditions. • A tanks-in-series model was used to simulate the measured RTD data and was found suitable to describe the flow in the reactor. • No flow abnormality was found and the reactor behaved as a well-mixed system. The design of the reactor was validated.
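
    The tanks-in-series interpretation used above has a closed form that is easy to fit to a measured RTD; here is a sketch with synthetic data (all parameter values hypothetical):

```python
import numpy as np
from scipy.special import gamma
from scipy.optimize import curve_fit

def tanks_in_series(t, tau, N):
    """Tanks-in-series RTD: E(t) = (N/tau)(N t/tau)^(N-1) exp(-N t/tau)/Gamma(N);
    N = 1 recovers the ideal CSTR, E(t) = exp(-t/tau)/tau."""
    theta = t / tau
    return (N / tau) * (N * theta) ** (N - 1.0) * np.exp(-N * theta) / gamma(N)

t = np.linspace(1.0, 600.0, 120)                       # s
rng = np.random.default_rng(3)
e_obs = tanks_in_series(t, 180.0, 1.2) * (1.0 + 0.05 * rng.normal(size=t.size))
(tau_hat, n_hat), _ = curve_fit(tanks_in_series, t, e_obs, p0=[150.0, 1.0])
# n_hat close to 1 indicates near-ideal CSTR behaviour, as reported above.
```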

  5. Complementary techniques for solid oxide cell characterisation on micro- and nano-scale

    International Nuclear Information System (INIS)

    Wiedenmann, D.; Hauch, A.; Grobety, B.; Mogensen, M.; Vogt, U.

    2009-01-01

    High temperature steam electrolysis by solid oxide electrolysis cells (SOEC) is a route with great potential to transform clean and renewable energy from non-fossil sources into synthetic fuels such as hydrogen, methane or dimethyl ether, which have been identified as promising alternative energy carriers. Also, as SOEC can operate in the reverse mode as solid oxide fuel cells (SOFC), during high peak hours hydrogen can be used in a very efficient way to reconvert chemically stored energy into electrical energy. As solid oxide cells (SOC) work at high temperatures (700-900 °C), material degradation and evaporation can occur, e.g. from the cell sealing material, leading to poisoning effects and aging mechanisms which decrease the cell efficiency and long-term durability. In order to investigate such cell degradation processes, thorough examination of SOC often requires chemical and structural characterisation at the microscopic and nanoscopic level. The combination of different microscopy techniques, such as conventional scanning electron microscopy (SEM), electron-probe microanalysis (EPMA) and the focused ion-beam (FIB) preparation technique for transmission electron microscopy (TEM), allows post mortem analysis of tested cells on multiple scales. These complementary techniques can be used to characterize structural and chemical changes over a large and representative sample area (micro-scale) on the one hand, and for selected sample details at the nano-scale on the other hand. This article presents a methodical approach for the structural and chemical characterisation of changes in aged cathode-supported electrolysis cells produced at Risø DTU, Denmark. Also, results from the characterisation of impurities at the electrolyte/hydrogen interface caused by evaporation from sealing material are discussed. (author)

  6. The environmental Kuznets curve hypothesis for water pollution. Do regions matter?

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chien-Chiang; Chiu, Yi-Bin [Department of Finance, National Sun Yat-Sen University Kaohsiung (China); Sun, Chia-Hung [Department of Economics, National Chung Cheng University (China)

    2010-01-15

    This study revisits the environmental Kuznets curve (EKC) hypothesis for water pollution by using a recent dynamic technique, the generalized method of moments (GMM) approach, for a broad sample of 97 countries during the period 1980-2001. On a global scale, as we cannot obtain an EKC relationship between real income and biological oxygen demand (BOD) emissions, this paper further classifies these countries into four regional groups - Africa, Asia and Oceania, America, and Europe - to explore whether the different regions have different EKC relationships. The empirical results show evidence of the existence of inverted U-shaped EKC relationships in America and Europe, but not in Africa or Asia and Oceania. Thus, the regional difference of the EKC for water pollution is supported. Furthermore, the estimated turning points are approximately US$13,956 and US$38,221 for America and Europe, respectively. (author)

  7. The environmental Kuznets curve hypothesis for water pollution: Do regions matter?

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.-C., E-mail: leecc@seed.net.t [Department of Finance, National Sun Yat-Sen University Kaohsiung, Taiwan (China); Chiu, Y.-B. [Department of Finance, National Sun Yat-Sen University Kaohsiung, Taiwan (China); Sun, C.-H. [Department of Economics, National Chung Cheng University, Taiwan (China)

    2010-01-15

    This study revisits the environmental Kuznets curve (EKC) hypothesis for water pollution by using a recent dynamic technique, the generalized method of moments (GMM) approach, for a broad sample of 97 countries during the period 1980-2001. On a global scale, as we cannot obtain an EKC relationship between real income and biological oxygen demand (BOD) emissions, this paper further classifies these countries into four regional groups - Africa, Asia and Oceania, America, and Europe - to explore whether the different regions have different EKC relationships. The empirical results show evidence of the existence of inverted U-shaped EKC relationships in America and Europe, but not in Africa or Asia and Oceania. Thus, the regional difference of the EKC for water pollution is supported. Furthermore, the estimated turning points are approximately US$13,956 and US$38,221 for America and Europe, respectively.

  8. The environmental Kuznets curve hypothesis for water pollution: Do regions matter?

    International Nuclear Information System (INIS)

    Lee, C.-C.; Chiu, Y.-B.; Sun, C.-H.

    2010-01-01

    This study revisits the environmental Kuznets curve (EKC) hypothesis for water pollution by using a recent dynamic technique, the generalized method of moments (GMM) approach, for a broad sample of 97 countries during the period 1980-2001. On a global scale, as we cannot obtain an EKC relationship between real income and biological oxygen demand (BOD) emissions, this paper further classifies these countries into four regional groups - Africa, Asia and Oceania, America, and Europe - to explore whether the different regions have different EKC relationships. The empirical results show evidence of the existence of inverted U-shaped EKC relationships in America and Europe, but not in Africa or Asia and Oceania. Thus, the regional difference of the EKC for water pollution is supported. Furthermore, the estimated turning points are approximately US$13,956 and US$38,221 for America and Europe, respectively.
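
    The turning points quoted in these records follow from the usual quadratic EKC specification; assuming that form (the abstracts do not spell it out), the arithmetic is:

```latex
% E: water pollution (BOD), y: real income, Z: controls; an inverted U
% requires beta_1 > 0 and beta_2 < 0, with the peak at y*.
\begin{align}
  E_{it} &= \beta_0 + \beta_1 y_{it} + \beta_2 y_{it}^{2}
            + \gamma' Z_{it} + \varepsilon_{it}, \\
  y^{*}  &= -\frac{\beta_1}{2\beta_2}.
\end{align}
```

    If income enters in logarithms, the turning point in income levels is instead exp(-beta_1/(2 beta_2)).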

  9. The link between the baryonic mass distribution and the rotation curve shape

    Science.gov (United States)

    Swaters, R. A.; Sancisi, R.; van der Hulst, J. M.; van Albada, T. S.

    2012-09-01

    The observed rotation curves of disc galaxies, ranging from late-type dwarf galaxies to early-type spirals, can be fitted remarkably well simply by scaling up the contributions of the stellar and H I discs. This 'baryonic scaling model' can explain the full breadth of observed rotation curves with only two free parameters. For a small fraction of galaxies, in particular early-type spiral galaxies, H I scaling appears to fail in the outer parts, possibly due to observational effects or ionization of H I. The overall success of the baryonic scaling model suggests that the well-known global coupling between the baryonic mass of a galaxy and its rotation velocity (known as the baryonic Tully-Fisher relation) applies at a more local level as well, and it seems to imply a link between the baryonic mass distribution and the distribution of total mass (including dark matter).

  10. Rapid Determination of Appropriate Source Models for Tsunami Early Warning using a Depth Dependent Rigidity Curve: Method and Numerical Tests

    Science.gov (United States)

    Tanioka, Y.; Miranda, G. J. A.; Gusman, A. R.

    2017-12-01

    Recently, tsunami early warning techniques have been improved using tsunami waveforms observed at ocean bottom pressure gauges such as the NOAA DART system or the DONET and S-NET systems in Japan. However, for tsunami early warning of near-field tsunamis, it is essential to determine appropriate source models using seismological analysis before large tsunamis hit the coast, especially for tsunami earthquakes which generate significantly large tsunamis. In this paper, we develop a technique to determine appropriate source models from which appropriate tsunami inundation along the coast can be numerically computed. The technique is tested for four large earthquakes: the 1992 Nicaragua tsunami earthquake (Mw 7.7), the 2001 El Salvador earthquake (Mw 7.7), the 2004 El Astillero earthquake (Mw 7.0), and the 2012 El Salvador-Nicaragua earthquake (Mw 7.3), which occurred off Central America. In this study, fault parameters were estimated from the W-phase inversion; the fault length and width were then determined from scaling relationships. At first, the slip amount was calculated from the seismic moment with a constant rigidity of 3.5 x 10**10 N/m2. The tsunami numerical simulation was carried out and compared with the observed tsunami. For the 1992 Nicaragua tsunami earthquake, the computed tsunami was much smaller than the observed one. For the 2004 El Astillero earthquake, the computed tsunami was overestimated. In order to solve this problem, we constructed a depth-dependent rigidity curve, similar to that suggested by Bilek and Lay (1999). The curve, with a central depth estimated by the W-phase inversion, was used to calculate the slip amount of the fault model. Using those new slip amounts, the tsunami numerical simulations were carried out again. The observed tsunami heights, run-up heights, and inundation areas for the 1992 Nicaragua tsunami earthquake were then well explained by the computed ones. The tsunamis from the other three earthquakes were also reasonably well explained.
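
    The rigidity adjustment at the heart of this method is a one-line computation: for a fixed seismic moment, average slip scales inversely with rigidity, so a shallow (low-rigidity) event must have produced much more slip, and hence a larger tsunami. A sketch with illustrative numbers (not the paper's actual values):

```python
def average_slip(m0, length_m, width_m, mu):
    """Average fault slip D from the moment definition M0 = mu * A * D."""
    return m0 / (mu * length_m * width_m)

m0 = 3.4e20                                    # N m, roughly Mw 7.6
print(average_slip(m0, 100e3, 40e3, 3.5e10))   # constant rigidity: ~2.4 m
print(average_slip(m0, 100e3, 40e3, 1.0e10))   # shallow, low rigidity: ~8.5 m
```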

  11. Bandwidth increasing mechanism by introducing a curve fixture to the cantilever generator

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Weiqun, E-mail: weiqunliu@home.swjtu.edu.cn; Liu, Congzhi; Ren, Bingyu; Zhu, Qiao; Hu, Guangdi [School of Mechanical Engineering, Southwest Jiaotong University, 610031 Chengdu (China); Yang, Weiqing [School of Materials Science and Engineering, Southwest Jiaotong University, 610031 Chengdu (China)

    2016-07-25

    A nonlinear wideband generator architecture by clamping the cantilever beam generator with a curve fixture is proposed. Devices with different nonlinear stiffness can be obtained by properly choosing the fixture curve according to the design requirements. Three available generator types are presented and discussed for polynomial curves. Experimental investigations show that the proposed mechanism effectively extends the operation bandwidth with good power performance. Especially, the simplicity and easy feasibility allow the mechanism to be widely applied for vibration generators in different scales and environments.

  12. Depth dose curves from 90Sr+90Y clinical applicators using the thermoluminescent technique

    International Nuclear Information System (INIS)

    Antonio, Patricia L.; Caldas, Linda V.E.; Oliveira, Mercia L.

    2009-01-01

    The 90 Sr+ 90 Y beta-ray sources widely used in brachytherapy applications were developed in the 1950's. Many of these sources, called clinical applicators, are still routinely used in several Brazilian radiotherapy clinics for the treatment of superficial lesions in the skin and eyes, although they are not commercialized anymore. These applicators have to be periodically calibrated, according to international recommendations, because these sources have to be very well specified in order to reach the traceability of calibration standards. In the case of beta-ray sources, the recommended quantity is the absorbed dose rate in water at a reference distance from the source. Moreover, there are other important quantities, as the depth dose curves and the source uniformity for beta-ray plaque sources. In this work, depth dose curves were obtained and studied of five dermatological applicators, using thin thermoluminescent dosimeters of CaSO 4 :Dy and phantoms of PMMA with different thicknesses (between 1.0 mm and 5.0 mm) positioned between each applicator and the TL pellets. The depth dose curves obtained presented the expected attenuation response in PMMA, and the results were compared with data obtained for a 90 Sr+ 90 Y standard source reported by the IAEA, and they were considered satisfactory. (author)

  13. The liquid–liquid coexistence curves of {benzonitrile + n-pentadecane} and {benzonitrile + n-heptadecane} in the critical region

    International Nuclear Information System (INIS)

    Chen, Zhiyun; Bai, Yongliang; Yin, Tianxiang; An, Xueqin; Shen, Weiguo

    2012-01-01

    Highlights: ► Coexistence curves of (benzonitrile + n-pentadecane) and (benzonitrile + n-heptadecane) were measured. ► The values of the critical exponent β are consistent with that predicted by the 3D-Ising model. ► The coexistence curves are well described by the critical crossover model. ► The asymmetry of the diameters of the coexistence curves were discussed by the complete scaling theory. - Abstract: Liquid + liquid coexistence curves for the binary solutions of {benzonitrile + n-pentadecane} and {benzonitrile + n-heptadecane} have been measured in the critical region. The critical exponent β and the critical amplitudes have been deduced and the former is consistent with the theoretic prediction. It was found that the coexistence curves may be well described by the crossover model proposed by Gutkowski et al. The asymmetries of the diameters of the coexistence curves were also discussed in the frame of the complete scaling theory.

  14. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
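
    The scaling step can be written in a few lines: the zero-absorption curve is multiplied by a Beer-Lambert factor whose exponent weights each layer's absorption by the fraction of the mean photon path spent in that layer. In this sketch the fractions are fixed constants for simplicity (in the paper they come from the path-integral average classical path), and all optical values are hypothetical:

```python
import numpy as np

def scale_reflectance(r0, t, mu_a, frac, v=0.214):
    """R(t) = R0(t) * exp(-v * t * sum_i frac_i * mu_a_i).

    v ~ c/n: photon speed in tissue (~0.214 mm/ps for n = 1.4);
    mu_a in 1/mm per layer; frac_i: fraction of path in layer i."""
    return r0 * np.exp(-v * t * np.dot(frac, mu_a))

t = np.linspace(10.0, 2000.0, 200)        # ps
r0 = t ** -1.5 * np.exp(-t / 700.0)       # stand-in zero-absorption curve
r = scale_reflectance(r0, t, mu_a=np.array([0.002, 0.010]),
                      frac=np.array([0.6, 0.4]))
```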

  15. Effects of Pathologic Stage on the Learning Curve for Radical Prostatectomy: Evidence That Recurrence in Organ-Confined Cancer Is Largely Related to Inadequate Surgical Technique

    Science.gov (United States)

    Vickers, Andrew J.; Bianco, Fernando J.; Gonen, Mithat; Cronin, Angel M.; Eastham, James A.; Schrag, Deborah; Klein, Eric A.; Reuther, Alwyn M.; Kattan, Michael W.; Pontes, J. Edson; Scardino, Peter T.

    2008-01-01

    Objectives We previously demonstrated that there is a learning curve for open radical prostatectomy. We sought to determine whether the effects of the learning curve are modified by pathologic stage. Methods The study included 7765 eligible prostate cancer patients treated with open radical prostatectomy by one of 72 surgeons. Surgeon experience was coded as the total number of radical prostatectomies conducted by the surgeon prior to a patient’s surgery. Multivariable regression models of survival time were used to evaluate the association between surgeon experience and biochemical recurrence, with adjustment for PSA, stage, and grade. Analyses were conducted separately for patients with organ-confined and locally advanced disease. Results Five-year recurrence-free probability for patients with organ-confined disease approached 100% for the most experienced surgeons. Conversely, the learning curve for patients with locally advanced disease reached a plateau at approximately 70%, suggesting that about a third of these patients cannot be cured by surgery alone. Conclusions Excellent rates of cancer control for patients with organ-confined disease treated by the most experienced surgeons suggest that the primary reason such patients recur is inadequate surgical technique. PMID:18207316

  16. TL glow curve analysis of UV, beta and gamma induced limestone collected from Amarnath holy cave

    Directory of Open Access Journals (Sweden)

    Vikas Dubey

    2015-01-01

    The paper reports thermoluminescence glow curve analysis of UV (ultraviolet), β (beta) and γ (gamma) induced limestone collected from the Amarnath holy cave. The collected natural sample was characterized by the X-ray diffraction (XRD) technique, and the crystallite size was calculated by Scherrer's formula. Surface morphology and particle size were determined by transmission electron microscopy (TEM). The effect of annealing temperature on the collected limestone was examined by TL glow curve study. The limestone was irradiated by a UV source (254 nm) and the TL glow curve was recorded for different UV exposure times. For beta irradiation a Sr-90 source was used, and the glow curve shows an intense peak at 256 °C with a shoulder peak in the higher temperature range. For gamma irradiation a Co-60 source was used and the TL glow curve was recorded for different gamma doses. Kinetic parameters were calculated for the different glow curves by the computerized glow curve deconvolution (CGCD) technique. The chemical composition of the natural limestone was analyzed by energy dispersive X-ray spectroscopy (EDXS).
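
    CGCD fits a glow curve as a sum of single-peak expressions; here is a minimal single-peak sketch using the widely used first-order form of Kitis et al. (1998), with hypothetical parameters chosen to place the peak near the 256 °C (about 529 K) feature mentioned above:

```python
import numpy as np
from scipy.optimize import curve_fit

K_B = 8.617e-5   # Boltzmann constant, eV/K

def first_order_peak(T, Im, E, Tm):
    """First-order TL glow peak, Kitis et al. (1998) parametrization:
    Im: peak intensity, E: activation energy (eV), Tm: peak temperature (K)."""
    d, dm = 2.0 * K_B * T / E, 2.0 * K_B * Tm / E
    x = E * (T - Tm) / (K_B * T * Tm)
    return Im * np.exp(1.0 + x - (T / Tm) ** 2 * np.exp(x) * (1.0 - d) - dm)

T = np.linspace(350.0, 650.0, 300)                    # K
y = first_order_peak(T, 1.0, 1.0, 529.0)              # synthetic single peak
params, _ = curve_fit(first_order_peak, T, y, p0=[0.8, 0.9, 520.0])
```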

  17. Novel information theory techniques for phonon spectroscopy

    International Nuclear Information System (INIS)

    Hague, J P

    2007-01-01

    The maximum entropy method (MEM) and spectral reverse Monte Carlo (SRMC) techniques are applied to the determination of the phonon density of states (PDOS) from heat-capacity data. The approach presented here takes advantage of the standard integral transform relating the PDOS to the specific heat at constant volume. MEM and SRMC are highly successful numerical approaches for inverting integral transforms. The formalism and algorithms necessary to carry out the inversion of specific heat curves are introduced, and where possible, I have concentrated on algorithms and experimental details for practical usage. Simulated data are used to demonstrate the accuracy of the approach. The main strength of the techniques presented here is that the resulting spectra are always physical: the computed PDOS is always positive, and properly applied information theory techniques show only statistically significant detail. The treatment set out here provides a simple, cost-effective and reliable method to determine the phonon properties of new materials. In particular, the new technique is expected to be very useful for establishing where interesting phonon modes and properties can be found, before spending time at large scale facilities.
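
    The "standard integral transform" referred to above is, in the harmonic approximation, the following relation between the normalized PDOS g(ω) and the constant-volume specific heat, which MEM and SRMC invert numerically:

```latex
\begin{equation}
  C_V(T) = 3 N k_B \int_0^{\infty} g(\omega)
  \left(\frac{\hbar\omega}{k_B T}\right)^{2}
  \frac{e^{\hbar\omega/k_B T}}{\left(e^{\hbar\omega/k_B T}-1\right)^{2}}\,
  d\omega,
  \qquad \int_0^{\infty} g(\omega)\, d\omega = 1 .
\end{equation}
```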

  18. Protaper--hybrid technique.

    Science.gov (United States)

    Simon, Stephane; Lumley, Philip; Tomson, Phillip; Pertot, Wilhelm-Joseph; Machtou, Pierre

    2008-03-01

    Crown-down preparation has been the best known and most widely described technique since the introduction of nickel-titanium (NiTi) rotary instruments in endodontics. This technique gives good results but has limitations, such as not addressing the initial anatomy of oval or dumb-bell shaped canals. The specific design of the Protaper instruments allows them to be used with a different technique, specifically with a brushing motion in the body of the canal. The recent introduction of hand Protaper files has expanded the range of application of this system, especially in curved canals. The 'hybrid technique', using rotary and hand files, and the advantages of combining both instruments, are clearly described in this article. Used with this technique, the Protaper is a very safe system, and more controllable for both inexperienced and experienced practitioners alike than other systems. The article also explains the precautions needed with rotary files, and how to use them to preserve the anatomy of the canal and obtain a tapered shaping, even in severely curved canals.

  19. Deriving Snow-Cover Depletion Curves for Different Spatial Scales from Remote Sensing and Snow Telemetry Data

    Science.gov (United States)

    Fassnacht, Steven R.; Sexstone, Graham A.; Kashipazha, Amir H.; Lopez-Moreno, Juan Ignacio; Jasinski, Michael F.; Kampf, Stephanie K.; Von Thaden, Benjamin C.

    2015-01-01

    During the melting of a snowpack, snow water equivalent (SWE) can be correlated to snow-covered area (SCA) once snow-free areas appear, which is when SCA begins to decrease below 100%. This amount of SWE is called the threshold SWE. Daily SWE data from snow telemetry stations were related to SCA derived from Moderate Resolution Imaging Spectroradiometer (MODIS) images to produce snow-cover depletion curves. The snow depletion curves were created for an 80,000 sq km domain across southern Wyoming and northern Colorado encompassing 54 snow telemetry stations. Eight yearly snow depletion curves were compared, and it is shown that the slope of each is a function of the amount of snow received. Snow-cover depletion curves were also derived for all the individual stations, for which the threshold SWE could be estimated from peak SWE and the topography around each station. A station's peak SWE was much more important than the main topographic variables, which included location, elevation, slope, and modelled clear-sky solar radiation. The threshold SWE mostly illustrated inter-annual consistency.

  20. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population.

    Science.gov (United States)

    Tomitaka, Shinichiro; Kawasaki, Yohei; Ide, Kazuki; Akutagawa, Maiko; Yamada, Hiroshi; Furukawa, Toshiaki A; Ono, Yutaka

    2016-01-01

    Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.

  1. Boundary curves of individual items in the distribution of total depressive symptom scores approximate an exponential pattern in a general population

    Directory of Open Access Journals (Sweden)

    Shinichiro Tomitaka

    2016-10-01

    Background: Previously, we proposed a model for ordinal scale scoring in which individual thresholds for each item constitute a distribution by each item. This led us to hypothesize that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores follow a common mathematical model, which is expressed as the product of the frequency of the total depressive symptom scores and the probability of the cumulative distribution function of each item threshold. To verify this hypothesis, we investigated the boundary curves of the distribution of total depressive symptom scores in a general population. Methods: Data collected from 21,040 subjects who had completed the Center for Epidemiologic Studies Depression Scale (CES-D) questionnaire as part of a national Japanese survey were analyzed. The CES-D consists of 20 items (16 negative items and four positive items). The boundary curves of adjacent item scores in the distribution of total depressive symptom scores for the 16 negative items were analyzed using log-normal scales and curve fitting. Results: The boundary curves of adjacent item scores for a given symptom approximated a common linear pattern on a log-normal scale. Curve fitting showed that an exponential fit had a markedly higher coefficient of determination than either linear or quadratic fits. With negative affect items, the gap between the total score curve and the boundary curve continuously increased with increasing total depressive symptom scores on a log-normal scale, whereas the boundary curves of positive affect items, which are not considered manifest variables of the latent trait, did not exhibit such increases in this gap. Discussion: The results of the present study support the hypothesis that the boundary curves of each depressive symptom score in the distribution of total depressive symptom scores commonly follow the predicted mathematical model, which was verified to approximate an exponential mathematical pattern.
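
    The exponential fit reported in both records amounts to a straight-line fit on a log scale; here is a minimal sketch with synthetic counts (the CES-D data themselves are not reproduced here):

```python
import numpy as np

# Synthetic boundary-curve frequencies decaying exponentially with the
# total score, mimicking the pattern described in the abstract.
total = np.arange(5, 41)
rng = np.random.default_rng(4)
freq = 2200.0 * np.exp(-0.16 * total) * rng.lognormal(0.0, 0.05, total.size)

slope, intercept = np.polyfit(total, np.log(freq), deg=1)   # exponential fit
r2 = np.corrcoef(total, np.log(freq))[0, 1] ** 2            # goodness of fit
```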

  2. Modeling of a Robust Confidence Band for the Power Curve of a Wind Turbine.

    Science.gov (United States)

    Hernandez, Wilmar; Méndez, Alfredo; Maldonado-Correa, Jorge L; Balleteros, Francisco

    2016-12-07

    Having an accurate model of the power curve of a wind turbine allows us to better monitor its operation and plan storage capacity. Since wind speed and direction are of a highly stochastic nature, the forecasting of the power generated by the wind turbine is of the same nature as well. In this paper, a method for obtaining a robust confidence band containing the power curve of a wind turbine under test conditions is presented. Here, the confidence band is bounded by two curves which are estimated using parametric statistical inference techniques. The observations used for carrying out the statistical analysis are obtained with the binning method, and in each bin the outliers are eliminated by a censorship process based on robust statistical techniques. Then, the observations that are not outliers are divided into observation sets. Finally, both the power curve of the wind turbine and the two curves that define the robust confidence band are estimated using each of the previously mentioned observation sets.
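
    A compact sketch of the binning step with a simple robust censorship rule (a median/MAD cut standing in for the paper's robust statistical censorship; bin width and cut-off are hypothetical):

```python
import numpy as np

def binned_power_curve(wind, power, bin_width=0.5, k=3.0):
    """Mean power per wind-speed bin after censoring outliers that lie
    more than k robust standard deviations (1.4826 * MAD) from the median."""
    edges = np.arange(0.0, wind.max() + bin_width, bin_width)
    centers, means = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        p = power[(wind >= lo) & (wind < hi)]
        if p.size < 5:
            continue
        med = np.median(p)
        mad = np.median(np.abs(p - med)) + 1e-12
        kept = p[np.abs(p - med) <= k * 1.4826 * mad]   # censor outliers
        centers.append(0.5 * (lo + hi))
        means.append(kept.mean())
    return np.array(centers), np.array(means)

rng = np.random.default_rng(6)
wind = rng.uniform(3.0, 20.0, 5000)
power = 2000.0 * np.clip((wind / 12.0) ** 3, 0.0, 1.0) + rng.normal(0, 40, 5000)
v, p = binned_power_curve(wind, power)
```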

  3. Small-scale dislocation plasticity in strontium titanate

    Energy Technology Data Exchange (ETDEWEB)

    Stukowski, Alexander; Javaid, Farhan; Durst, Karsten; Albe, Karsten [Technische Universitaet Darmstadt (Germany)

    2016-07-01

    Strontium titanate (STO) is an optically transparent perovskite oxide ceramic material. In contrast to other ceramics, single crystal STO plastically deforms under ambient conditions, without showing a phase transition or early fracture. This remarkable ductility makes it a prime candidate for various technological applications. However, while the mechanical behavior of bulk STO has been studied extensively using uniaxial compression testing techniques, little is known about the local, small-scale behavior and the details of dislocation-based nanoplasticity in this perovskite material. In this contribution we compare results obtained from new nanoindentation experiments with corresponding large-scale molecular dynamics simulations. The evolution of the plastic zone and the dislocation structures that form underneath the indenter is investigated using etch-pit methods in experiments and a novel three-dimensional defect identification technique in atomistic computer models. The latter allows tracing the evolution of the complete dislocation line network as a function of indentation depth, quantifying the activity of different slip systems, and correlating this information with the recorded load-displacement curves and hardness data.

  4. Preionization Techniques in a kJ-Scale Dense Plasma Focus

    Science.gov (United States)

    Povilus, Alexander; Shaw, Brian; Chapman, Steve; Podpaly, Yuri; Cooper, Christopher; Falabella, Steve; Prasad, Rahul; Schmidt, Andrea

    2016-10-01

    A dense plasma focus (DPF) is a type of z-pinch device that uses a high current, coaxial plasma gun with an implosion phase to generate dense plasmas. These devices can accelerate a beam of ions to MeV-scale energies through strong electric fields generated by instabilities during the implosion of the plasma sheath. The formation of these instabilities, however, depends strongly on the history of the plasma sheath in the device, including the evolution of the gas breakdown. In an effort to reduce variability in the performance of the device, we attempt to control the initial gas breakdown by seeding the system with free charges before the main power pulse arrives. We report on the effectiveness of two techniques developed for a kJ-scale DPF at LLNL: a miniature primer spark gap and pulsed 255 nm LED illumination. Prepared by LLNL under Contract DE-AC52-07NA27344.

  5. Using Curved Crystals to Study Terrace-Width Distributions.

    Science.gov (United States)

    Einstein, Theodore L.

    Recent experiments on curved crystals of noble and late transition metals (Ortega and Juurlink groups) have renewed interest in terrace width distributions (TWD) for vicinal surfaces. Thus, it is timely to discuss refinements of TWD analysis that are absent from the standard reviews. Rather than by Gaussians, TWDs are better described by the generalized Wigner surmise, with a power-law rise and a Gaussian decay, thereby including effects evident for weak step repulsion: skewness and peak shifts down from the mean spacing. Curved crystals allow analysis of several mean spacings with the same substrate, so that one can check the scaling with the mean width. This is important since such scaling confirms well-established theory. Failure to scale also can provide significant insights. Complicating factors can include step touching (local double-height steps), oscillatory step interactions mediated by metallic (but not topological) surface states, short-range corrections to the inverse-square step repulsion, and accounting for the offset between adjacent layers of almost all surfaces. We discuss how to deal with these issues. For in-plane misoriented steps there are formulas to describe the stiffness but not yet the strength of the elastic interstep repulsion. Supported in part by NSF-CHE 13-05892.
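
    For reference, the generalized Wigner surmise discussed above, written for the scaled terrace width s = ℓ/⟨ℓ⟩; the constants follow from normalization and unit mean, and ϱ is set by the strength of the inverse-square step repulsion:

```latex
\begin{equation}
  P(s) = a_{\varrho}\, s^{\varrho}\, e^{-b_{\varrho} s^{2}},
  \qquad
  b_{\varrho} = \left[\frac{\Gamma\!\left(\frac{\varrho+2}{2}\right)}
                           {\Gamma\!\left(\frac{\varrho+1}{2}\right)}\right]^{2},
  \qquad
  a_{\varrho} = \frac{2\, b_{\varrho}^{(\varrho+1)/2}}
                     {\Gamma\!\left(\frac{\varrho+1}{2}\right)} .
\end{equation}
```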

  6. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
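
    As a sketch of the idea (symbols here are generic, not BOLCHEM's actual variable names), the forcing enters each transported species' tendency as a Newtonian-relaxation term that pulls the coarse field toward the high-resolution one within the forcing area:

```latex
% c: coarse-model concentration, c_hr: high-resolution field interpolated
% to the coarse grid, F(c): the model's own dynamics/chemistry tendency,
% tau: relaxation (nudging) time scale, applied only inside the forcing area.
\begin{equation}
  \frac{\partial c}{\partial t} = F(c) + \frac{c_{\mathrm{hr}} - c}{\tau}
\end{equation}
```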

  7. Detection of flaws below curved surfaces

    International Nuclear Information System (INIS)

    Elsley, R.K.; Addison, R.C.; Graham, L.J.

    1983-01-01

    A measurement model has been developed to describe ultrasonic measurements made with circular piston transducers in parts with flat or cylindrically curved surfaces. The model includes noise terms to describe electrical noise, scatterer noise and echo noise as well as effects of attenuation, diffraction and Fresnel loss. An experimental procedure for calibrating the noise terms of the model was developed. Experimental measurements were made on a set of known flaws located beneath a cylindrically curved surface. The model was verified by using it to correct the experimental measurements to obtain the absolute scattering amplitude of the flaws. For longitudinal wave propagation within the part, the derived scattering amplitudes were consistent with predictions at internal angles of less than 30°. At larger angles, focusing and aberrations caused a lack of agreement; the model needs further refinement in this case. For shear waves, it was found that the frequency for optimum flaw detection in the presence of material noise is lower than that for longitudinal waves; lower frequency measurements are currently in progress. The measurement model was then used to make preliminary predictions of the best experimental measurement technique for the detection of cracks located under cylindrically curved surfaces.

  8. New measurement technique of ductility curve for ductility-dip cracking susceptibility in Alloy 690 welds

    Energy Technology Data Exchange (ETDEWEB)

    Kadoi, Kota, E-mail: kadoi@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527 (Japan); Uegaki, Takanori; Shinozaki, Kenji; Yamamoto, Motomichi [Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527 (Japan)

    2016-08-30

    The coupling of a hot tensile test with a novel in situ observation technique using a high-speed camera was investigated as a high-accuracy quantitative evaluation method for ductility-dip cracking (DDC) susceptibility. Several types of Alloy 690 filler wire were tested in this study owing to the alloy's susceptibility to DDC. The developed test method was used to directly measure the critical strain for DDC and high temperature ductility curves with a gauge length of 0.5 mm. Minimum critical strains of 1.3%, 4.0%, and 3.9% were obtained for ERNiCrFe-7, ERNiCrFe-13, and ERNiCrFe-15, respectively. The DDC susceptibilities of ERNiCrFe-13 and ERNiCrFe-15 were nearly the same and much lower than that of ERNiCrFe-7. This was likely caused by the tortuosity of the grain boundaries arising from the niobium content of around 2.5% in the former samples. Moreover, ERNiCrFe-13 and ERNiCrFe-15 showed higher minimum critical strains even though these specimens contain higher contents of sulfur and phosphorus than ERNiCrFe-7. Thus, niobium content appears to be more effective than sulfur and phosphorus levels in determining DDC susceptibility in this alloy system.

  9. New measurement technique of ductility curve for ductility-dip cracking susceptibility in Alloy 690 welds

    International Nuclear Information System (INIS)

    Kadoi, Kota; Uegaki, Takanori; Shinozaki, Kenji; Yamamoto, Motomichi

    2016-01-01

    The coupling of a hot tensile test with a novel in situ observation technique using a high-speed camera was investigated as a high-accuracy quantitative evaluation method for ductility-dip cracking (DDC) susceptibility. Several types of Alloy 690 filler wire, an alloy susceptible to DDC, were tested in this study. The developed test method was used to directly measure the critical strain for DDC and high-temperature ductility curves with a gauge length of 0.5 mm. Minimum critical strains of 1.3%, 4.0%, and 3.9% were obtained for ERNiCrFe-7, ERNiCrFe-13, and ERNiCrFe-15, respectively. The DDC susceptibilities of ERNiCrFe-13 and ERNiCrFe-15 were nearly the same and quite low compared with that of ERNiCrFe-7, most likely because of the tortuosity of the grain boundaries arising from the niobium content of around 2.5% in the former samples. Moreover, ERNiCrFe-13 and ERNiCrFe-15 exhibited higher minimum critical strains even though they contain more sulfur and phosphorus than ERNiCrFe-7. Niobium additions thus appear to be more effective in improving DDC resistance than sulfur and phosphorus levels in this alloy system.

  10. Pre-nebular Light Curves of SNe I

    Energy Technology Data Exchange (ETDEWEB)

    Arnett, W. David [Steward Observatory, University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Fryer, Christopher [Los Alamos National Laboratory, Los Alamos, NM (United States); Matheson, Thomas [National Optical Astronomy Observatory, Tucson, AZ (United States)

    2017-09-01

    We compare analytic predictions of supernova light curves with recent high-quality data from SN2011fe (Ia), KSN2011b (Ia), and the Palomar Transient Factory and the La Silla-QUEST variability survey (LSQ) (Ia). Because of the steady, fast cadence of observations, KSN2011b provides unique new information on SNe Ia: the smoothness of the light curve, which is consistent with significant large-scale mixing during the explosion, possibly due to 3D effects (e.g., Rayleigh–Taylor instabilities), and provides support for a slowly varying leakage (mean opacity). For a more complex light curve (SN2008D, SN Ib), we separate the luminosity due to multiple causes and indicate the possibility of a radioactive plume. The early rise in luminosity is shown to be affected by the opacity (leakage rate) for thermal and non-thermal radiation. A general derivation of Arnett’s rule again shows that it depends upon all processes heating the plasma, not just radioactive ones, so that SNe Ia will differ from SNe Ibc if the latter have multiple heating processes.

  11. Characterization of time series via Rényi complexity-entropy curves

    Science.gov (United States)

    Jauregui, M.; Zunino, L.; Lenzi, E. K.; Mendes, R. S.; Ribeiro, H. V.

    2018-05-01

    One of the most useful tools for distinguishing between chaotic and stochastic time series is the so-called complexity-entropy causality plane. This diagram involves two complexity measures: the Shannon entropy and the statistical complexity. Recently, this idea has been generalized by considering the Tsallis monoparametric generalization of the Shannon entropy, yielding complexity-entropy curves. These curves have proven to enhance the discrimination among different time series related to stochastic and chaotic processes of numerical and experimental nature. Here we further explore these complexity-entropy curves in the context of the Rényi entropy, which is another monoparametric generalization of the Shannon entropy. By combining the Rényi entropy with the proper generalization of the statistical complexity, we associate a parametric curve (the Rényi complexity-entropy curve) with a given time series. We explore this approach in a series of numerical and experimental applications, demonstrating the usefulness of this new technique for time series analysis. We show that the Rényi complexity-entropy curves enable the differentiation among time series of chaotic, stochastic, and periodic nature. In particular, time series of stochastic nature are associated with curves displaying positive curvature in a neighborhood of their initial points, whereas curves related to chaotic phenomena have a negative curvature; finally, periodic time series are represented by vertical straight lines.
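
    A rough sketch of the machinery behind such curves is given below: a Bandt-Pompe ordinal-pattern distribution is estimated from a series, and an (entropy, complexity) point is traced out as the Rényi order α varies. The disequilibrium used here is a simplified Jensen-style divergence from the uniform distribution, not the paper's exact generalized statistical complexity, so the numbers are illustrative only.

    ```python
    import numpy as np
    from itertools import permutations
    from math import factorial, log

    def ordinal_distribution(x, d=4, tau=1):
        """Bandt-Pompe ordinal-pattern probabilities (embedding dimension d)."""
        counts = {p: 0 for p in permutations(range(d))}
        for i in range(len(x) - (d - 1) * tau):
            counts[tuple(np.argsort(x[i:i + d * tau:tau]))] += 1
        p = np.array(list(counts.values()), dtype=float)
        return p / p.sum()

    def renyi_entropy(p, alpha):
        p = p[p > 0]
        if np.isclose(alpha, 1.0):                       # Shannon limit
            return float(-np.sum(p * np.log(p)))
        return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

    def complexity_entropy_curve(x, d=4, alphas=np.linspace(0.2, 3.0, 57)):
        """Trace (normalized Renyi entropy, complexity) as alpha varies."""
        p = ordinal_distribution(x, d)
        n = factorial(d)
        u = np.full(n, 1.0 / n)                          # uniform reference state
        pts = []
        for a in alphas:
            h = renyi_entropy(p, a) / log(n)             # normalized entropy
            # simplified Jensen-style disequilibrium against the uniform state
            m = 0.5 * (p + u)
            dis = renyi_entropy(m, a) - 0.5 * (renyi_entropy(p, a) + log(n))
            pts.append((h, dis * h))
        return np.array(pts)

    # chaotic (fully developed logistic map) vs stochastic (uniform noise) series
    x = np.empty(10000); x[0] = 0.4
    for i in range(1, x.size):
        x[i] = 4.0 * x[i - 1] * (1.0 - x[i - 1])
    noise = np.random.default_rng(0).uniform(size=10000)
    print(complexity_entropy_curve(x)[:2])
    print(complexity_entropy_curve(noise)[:2])
    ```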

  12. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice, without placing a significant burden on limited departmental resources. One of the most fundamental aspects of any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned as it is considered a 'high dose' technique, and hence close monitoring of patient dose is particularly important. (authors)

  13. Estimating daily flow duration curves from monthly streamflow data

    CSIR Research Space (South Africa)

    Smakhtin, VU

    2000-01-01

    The paper describes two techniques by which to establish 1-day (1d) flow duration curves at an ungauged site where only a simulated or calculated monthly flow time series is available. Both methods employ the straightforward relationships between...

  14. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient was down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries were used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, down-scaling using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximated the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
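
    A minimal sketch of the two simpler down-scaling strategies is given below (the paper's 'anisotropic volumetric averaging', which mixes properties direction-dependently, is omitted); the labels, scaling factor and dielectric values are placeholders, not the paper's data.

    ```python
    import numpy as np

    # Placeholder dielectric properties (relative permittivity, conductivity in S/m);
    # illustrative values only, not tissue data from the paper.
    PROPS = {0: (1.0, 0.00),   # air
             1: (80.0, 0.50),  # muscle-like
             2: (15.0, 0.05)}  # fat-like

    def downscale(labels, f, mode="volumetric"):
        """Down-scale a cubic labelled geometry by an integer factor f.

        mode="winner"     -> most frequent tissue label per coarse voxel
        mode="volumetric" -> volume-average the dielectric properties
        """
        n = labels.shape[0] // f
        eps = np.empty((n, n, n))
        sig = np.empty((n, n, n))
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    block = labels[i*f:(i+1)*f, j*f:(j+1)*f, k*f:(k+1)*f].ravel()
                    if mode == "winner":
                        eps[i, j, k], sig[i, j, k] = PROPS[int(np.bincount(block).argmax())]
                    else:
                        props = np.array([PROPS[int(l)] for l in block])
                        eps[i, j, k], sig[i, j, k] = props.mean(axis=0)
        return eps, sig

    # a 2 mm labelled geometry (30^3 voxels) down-scaled to 1 cm (factor 5)
    fine = np.random.default_rng(1).integers(0, 3, size=(30, 30, 30))
    eps_w, _ = downscale(fine, 5, mode="winner")
    eps_v, _ = downscale(fine, 5, mode="volumetric")
    print(eps_w.shape, float(eps_w[0, 0, 0]), float(eps_v[0, 0, 0]))
    ```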

  15. Spin structures on algebraic curves and their applications in string theories

    International Nuclear Information System (INIS)

    Ferrari, F.

    1990-01-01

    The free fields on a Riemann surface carrying spin structures live on an unramified r-covering of the surface itself. When the surface is represented as an algebraic curve related to the vanishing of the Weierstrass polynomial, its r-coverings are algebraic curves as well. We construct explicitly the Weierstrass polynomial associated to the r-coverings of an algebraic curve. Using standard techniques of algebraic geometry it is then possible to solve the inverse Jacobi problem for the odd spin structures. As an application we derive the partition functions of bosonic string theories in many examples, including two general curves of genus three and four. The partition functions are explicitly expressed in terms of branch points apart from a factor which is essentially a theta constant. 53 refs., 4 figs. (Author)

  16. Eliminating line of sight in elliptic guides using gravitational curving

    International Nuclear Information System (INIS)

    Kleno, Kaspar H.; Willendrup, Peter K.; Knudsen, Erik; Lefmann, Kim

    2011-01-01

    Eliminating fast neutrons (λ < 0.5 Å) by removing the direct line of sight between the source and the target sample is a well-established technique. This can be done with little loss of transmission for a straight neutron guide by horizontal curving. With an elliptic guide shape, however, curving the guide would result in a breakdown of the geometrical focusing mechanism inherent to the elliptical shape, resulting in unwanted reflections and loss of transmission. We present a new and as yet untried idea: curving a guide in such a way as to follow the ballistic curve of a neutron in the gravitational field, while still retaining the elliptic shape seen from the accelerated reference frame of the neutron. Analytical calculations and ray-tracing simulations show that this method is useful for cold neutrons at guide lengths in excess of 100 m. We present some of the latest results for guide optimization relevant for instrument design at the ESS, in particular an off-backscattering spectrometer which utilizes gravitational curving for 6.66 Å neutrons over a guide length of 300 m.

  17. Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots

    Science.gov (United States)

    2010-04-01

    Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots, by Elizabeth S. Redden and Rodger A. Pettitt. [Only table fragments of the abstract survive in this record: paired ratings for driving the robot while performing the low crawl, negotiating a hill, climbing stairs, and walking, together with soldier comments on the head-mounted display (HMD).]

  18. Experimental analysis of waveform effects on satellite and ligament behavior via in situ measurement of the drop-on-demand drop formation curve and the instantaneous jetting speed curve

    International Nuclear Information System (INIS)

    Kwon, Kye-Si

    2010-01-01

    In situ techniques to measure the drop-on-demand (DOD) drop formation curve and the instantaneous jetting speed curve are developed so that the ligament and satellite behavior of inkjet droplets can be analyzed effectively. It is known that droplet jetting behavior varies with ink properties and the driving waveform voltage. In this study, to reduce possible droplet placement errors due to satellite drops or long ligaments during printing, waveform effects on drop formation are investigated based on the measured DOD drop formation curve and the instantaneous jetting speed curve. Experimental results show that a dwell time greater than the so-called efficient dwell time was effective in reducing placement errors due to satellite drops during the printing process.

  19. Compact TXRF system using doubly curved crystal optics

    International Nuclear Information System (INIS)

    Chen, Z.W.

    2000-01-01

    Doubly curved crystal optics can provide a large collection solid angle from a small x-ray source but were difficult to fabricate in the past. Recent innovations in doubly curved crystal optic technology provide an accurate bending figure for thin crystals and produce high-performance doubly curved crystal optics. A high-quality doubly curved crystal can significantly increase the intensity of the primary beam for total reflection x-ray fluorescence applications based on a low-power x-ray source. In this report, toroidal Si(220) crystals are used to focus Cu Kα and Mo Kα x-rays from low-power compact x-ray tubes with a maximum power setting of 50 kV and 1 mA. With a slit aperture to control the convergent angle, a fan Cu Kα1 beam with 15° × 0.2° convergent angles is obtained for TXRF excitation. Similarly, a fan Mo Kα1 beam with 6° × 0.1° convergent angles is used for high-energy excitation. Si-wafer-based TXRF samples will be prepared and measured using this technique, and the experimental data will be reported. (author)

  20. Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia.

    Science.gov (United States)

    Yano, Fumiaki; Omura, Nobuo; Tsuboi, Kazuto; Hoshino, Masato; Yamamoto, Seryung; Akimoto, Shunsuke; Masuda, Takahiro; Kashiwagi, Hideyuki; Yanaga, Katsuhiko

    2017-01-01

    Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD. Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated after excluding 50 cases with reduced port surgery and one case with simultaneous laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of surgical experiences necessary to become proficient with LHD, which was defined as the completion of the learning curve. We defined the completion of the learning curve when the following 3 conditions were satisfied: 1) the operation time was less than 165 minutes; 2) there was no blood loss; 3) there was no intraoperative complication. In order to establish the appropriate number of surgical experiences required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646). The learning curve thus appears to be completed after performing 16 cases.

  1. Learning curve for laparoscopic Heller myotomy and Dor fundoplication for achalasia.

    Directory of Open Access Journals (Sweden)

    Fumiaki Yano

    Although laparoscopic Heller myotomy and Dor fundoplication (LHD) is widely performed to address achalasia, little is known about the learning curve for this technique. We assessed the learning curve for performing LHD. Of the 514 cases with LHD performed between August 1994 and March 2016, the surgical outcomes of 463 cases were evaluated after excluding 50 cases with reduced port surgery and one case with simultaneous laparoscopic distal partial gastrectomy. A receiver operating characteristic (ROC) curve analysis was used to identify the cut-off value for the number of surgical experiences necessary to become proficient with LHD, which was defined as the completion of the learning curve. We defined the completion of the learning curve when the following 3 conditions were satisfied: 1) the operation time was less than 165 minutes; 2) there was no blood loss; 3) there was no intraoperative complication. In order to establish the appropriate number of surgical experiences required to complete the learning curve, the cut-off value was evaluated by using a ROC curve (AUC 0.717, p < 0.001). Finally, we identified the cut-off value as 16 surgical cases (sensitivity 0.706, specificity 0.646). The learning curve thus appears to be completed after performing 16 cases.
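
    The cut-off analysis described above can be reproduced in outline by scanning candidate case-number thresholds and maximizing Youden's J (sensitivity + specificity - 1). The sketch below uses synthetic stand-in data, not the study's records.

    ```python
    import numpy as np

    def roc_cutoff(case_numbers, completed):
        """Find the experience cut-off that best separates 'learning-curve
        completed' cases (1) from incomplete ones (0), via Youden's J."""
        best = None
        for t in np.unique(case_numbers):
            pred = case_numbers >= t           # predict 'proficient' at >= t cases
            tpr = np.mean(pred[completed == 1])
            fpr = np.mean(pred[completed == 0])
            j = tpr - fpr                      # Youden's J = sens + spec - 1
            if best is None or j > best[0]:
                best = (j, int(t), tpr, 1.0 - fpr)
        return best  # (J, cut-off, sensitivity, specificity)

    # hypothetical data: case index and whether all three criteria were met
    rng = np.random.default_rng(0)
    cases = np.arange(1, 464)
    prob = 1.0 / (1.0 + np.exp(-(cases - 16) / 8.0))  # proficiency likely after ~16 cases
    done = (rng.uniform(size=cases.size) < prob).astype(int)
    print(roc_cutoff(cases, done))
    ```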

  2. Evaluation of syngas production unit cost of bio-gasification facility using regression analysis techniques

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Yangyang; Parajuli, Prem B.

    2011-08-10

    Evaluation of the economic feasibility of a bio-gasification facility needs an understanding of its unit cost under different production capacities. The objective of this study was to evaluate the unit cost of syngas production at capacities from 60 through 1,800 Nm³/h using an economic model with three regression analysis techniques (simple regression, reciprocal regression, and log-log regression). The preliminary result of this study showed that the reciprocal regression technique gave the best-fit curve between unit cost and production capacity, with a sum of squared errors (SES) below 0.001 and a coefficient of determination (R²) of 0.996. The regression analysis determined a minimum unit cost of syngas production for micro-scale bio-gasification facilities of $0.052/Nm³, at a capacity of 2,880 Nm³/h. The results of this study suggest that, to reduce cost, facilities should run at a high production capacity. In addition, the contribution of this technique could be a new categorical criterion for evaluating micro-scale bio-gasification facilities from the perspective of economic analysis.
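
    The three regression forms compared in the study can be fitted in a few lines; the capacity/cost points below are made up for illustration, not the paper's data.

    ```python
    import numpy as np

    # made-up (capacity Nm^3/h, unit cost $/Nm^3) observations for illustration
    cap  = np.array([60., 120., 300., 600., 900., 1200., 1800.])
    cost = np.array([0.50, 0.28, 0.14, 0.09, 0.075, 0.068, 0.060])

    def r2(y, yhat):
        return 1.0 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    fits = {
        # simple linear regression:  cost = a + b*cap
        "simple":     np.polyval(np.polyfit(cap, cost, 1), cap),
        # reciprocal regression:     cost = a + b/cap   (linear in 1/cap)
        "reciprocal": np.polyval(np.polyfit(1.0 / cap, cost, 1), 1.0 / cap),
        # log-log regression:        log(cost) = a + b*log(cap)
        "log-log":    np.exp(np.polyval(np.polyfit(np.log(cap), np.log(cost), 1), np.log(cap))),
    }
    for name, yhat in fits.items():
        print(f"{name:10s} R^2 = {r2(cost, yhat):.4f}")
    ```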

  3. Modeling of Triangular Lattice Space Structures with Curved Battens

    Science.gov (United States)

    Chen, Tzikang; Wang, John T.

    2005-01-01

    Techniques for simulating an assembly process of lattice structures with curved battens were developed. The shape of the curved battens, the tension in the diagonals, and the compression in the battens were predicted for the assembled model. To be able to perform the assembly simulation, a cable-pulley element was implemented, and geometrically nonlinear finite element analyses were performed. Three types of finite element models were created from assembled lattice structures for studying the effects of design and modeling variations on the load carrying capability. Discrepancies in the predictions from these models were discussed. The effects of diagonal constraint failure were also studied.

  4. Fitness analysis method for magnesium in drinking water with atomic absorption using quadratic curve calibration

    Directory of Open Access Journals (Sweden)

    Esteban Pérez-López

    2014-11-01

    Because of the importance of quantitative chemical analysis in research, quality control, commercial analytical services and other areas of interest, and because some instrumental methods are limited to quantification with a linear calibration curve, sometimes owing to the short linear dynamic range of the analyte and sometimes owing to the technique itself, there is a need to investigate the suitability of quadratic curves for analytical quantification and to show that they are a valid calculation model for instrumental chemical analysis. Atomic absorption spectroscopy was taken as the analysis method, in particular the determination of magnesium in a drinking-water sample from the Tacares sector of northern Grecia, using a quadratic calibration curve that matches the nonlinear response; the results were compared with those obtained for the same analysis with a linear calibration curve. The results show that the methodology is valid for the determination in question, since the concentrations are very similar and, according to the hypothesis tests used, can be considered equal.
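
    The workflow implied above can be sketched directly: fit A = c2*C^2 + c1*C + c0 to the standards, then invert the polynomial (choosing the physically meaningful root) to read back an unknown concentration. The standards below are invented for illustration.

    ```python
    import numpy as np

    # hypothetical AAS calibration standards: Mg concentration (mg/L) vs absorbance
    conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])
    absb = np.array([0.002, 0.105, 0.198, 0.360, 0.492, 0.598])  # curvature at high conc

    # quadratic calibration: A = c2*C^2 + c1*C + c0
    c2, c1, c0 = np.polyfit(conc, absb, 2)

    def concentration(a_sample):
        """Invert the quadratic and keep the root inside the calibrated range."""
        roots = np.roots([c2, c1, c0 - a_sample])
        real = roots[np.isreal(roots)].real
        return float(real[(real >= 0) & (real <= conc.max() * 1.1)][0])

    print(round(concentration(0.300), 3))  # mg/L for an unknown at A = 0.300
    ```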

  5. Systematic study of the effects of scaling techniques in numerical simulations with application to enhanced geothermal systems

    Science.gov (United States)

    Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Numerical modeling is a well-established tool in rock mechanics studies investigating a wide range of problems. Especially for estimating the seismic risk of geothermal energy plants, a realistic rock-mechanical model is needed. To simulate a time-evolving system, two different approaches must be distinguished: implicit methods for solving linear equations are unconditionally stable, while explicit methods are limited by the time step. However, explicit methods are often preferred because of their limited memory demand, their scalability in parallel computing, and the simple implementation of complex boundary conditions. In numerical modeling of explicit elastoplastic dynamics the time step is limited by the rock density. Mass scaling techniques, which increase the rock density artificially by several orders of magnitude, can be used to overcome this limit and significantly reduce computation time. In the context of geothermal energy this is of great interest because, in a coupled hydro-mechanical model, the time step of the mechanical part is significantly smaller than that of the fluid flow. Mass scaling can also be combined with time scaling, which increases the rate of physical processes, assuming that the processes are rate-independent. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically done by trial and error. Scaling techniques are also often used in commercial software packages, hidden from the untrained user. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this work, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models
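
    The mechanism the authors exploit can be seen from the stability limit of an explicit scheme: the stable time step scales as grid spacing over elastic wave speed, and the wave speed falls as the square root of the (artificially scaled) density. A one-dimensional sketch with placeholder rock parameters:

    ```python
    import numpy as np

    def critical_dt(dx, E, rho, mass_scale=1.0):
        """CFL-type stable step for a 1-D explicit elastic scheme (a simplified
        stand-in for the full elastoplastic stability limit)."""
        c = np.sqrt(E / (rho * mass_scale))   # wave speed with artificially scaled density
        return dx / c

    dx, E, rho = 1e-3, 50e9, 2700.0           # 1 mm grid, granite-like stiffness/density
    for s in [1, 1e2, 1e4, 1e6]:              # mass-scaling factor
        print(f"scale {s:>9.0e}: dt = {critical_dt(dx, E, rho, s):.3e} s")
    ```

    The square-root dependence means that scaling the density by four orders of magnitude buys a factor of 100 in time step; the price is spurious added inertia, which is what makes systematic checks such as those in this study necessary.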

  6. Development of theoretical oxygen saturation calibration curve based on optical density ratio and optical simulation approach

    Science.gov (United States)

    Jumadi, Nur Anida; Beng, Gan Kok; Ali, Mohd Alauddin Mohd; Zahedi, Edmond; Morsin, Marlia

    2017-09-01

    The implementation of a surface-based Monte Carlo simulation technique for estimating the oxygen saturation (SaO2) calibration curve is demonstrated in this paper. Generally, the calibration curve is estimated either empirically, using animals as experimental subjects, or derived from mathematical equations. However, determining the calibration curve using animals is time-consuming and requires expertise to conduct the experiment. Alternatively, optical simulation techniques have been used widely in the biomedical optics field owing to their capability to reproduce real tissue behavior. The mathematical relationship between the optical density (OD) and the optical density ratio (ODR) associated with SaO2 during systole and diastole is used as the basis for obtaining the theoretical calibration curve. Optical properties corresponding to systolic and diastolic behavior were applied to the tissue model to mimic the optical properties of the tissues. Based on the absorbed ray flux at the detectors, the OD and ODR were successfully calculated. The simulated optical density ratios at every 20% interval of SaO2 are presented, with a maximum error of 2.17% when compared with a previous numerical simulation technique (MC model). The findings reveal the potential of the proposed method to be used for extended calibration curve studies using other wavelength pairs.
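
    Under a pure Beer-Lambert view, with scattering ignored (scattering is precisely what the Monte Carlo tissue model adds), the theoretical ODR reduces to a ratio of blood absorbances at the two wavelengths. The sketch below uses approximate literature extinction coefficients for a 660/940 nm pair as placeholders, not values from the paper.

    ```python
    import numpy as np

    # Approximate molar extinction coefficients (cm^-1 / M, Prahl compilation);
    # treated here as illustrative inputs, not values from the paper.
    EXT = {660: {"Hb": 3226.6, "HbO2": 319.6},
           940: {"Hb": 693.4,  "HbO2": 1214.0}}

    def odr(sao2):
        """Theoretical optical-density ratio OD(660)/OD(940) under Beer-Lambert."""
        def od(lam):
            e = EXT[lam]
            return e["HbO2"] * sao2 + e["Hb"] * (1.0 - sao2)
        return od(660) / od(940)

    # calibration curve sampled every 20 % SaO2, as in the abstract
    for s in np.arange(0.0, 1.01, 0.2):
        print(f"SaO2 = {s:4.0%}  ODR = {odr(s):.3f}")
    ```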

  7. Mentorship, learning curves, and balance.

    Science.gov (United States)

    Cohen, Meryl S; Jacobs, Jeffrey P; Quintessenza, James A; Chai, Paul J; Lindberg, Harald L; Dickey, Jamie; Ungerleider, Ross M

    2007-09-01

    Professionals working in the arena of health care face a variety of challenges as their careers evolve and develop. In this review, we analyze the role of mentorship, learning curves, and balance in overcoming challenges that all such professionals are likely to encounter. These challenges can exist both in professional and personal life. As any professional involved in health care matures, complex professional skills must be mastered, and new professional skills must be acquired. These skills are both technical and judgmental. In most circumstances, these skills must be learned. In 2007, despite the continued need for obtaining new knowledge and learning new skills, the professional and public tolerance for a "learning curve" is much less than in previous decades. Mentorship is the key to success in these endeavours. The success of mentorship is two-sided, with responsibilities for both the mentor and the mentee. The benefits of this relationship must be bidirectional. It is the responsibility of both the student and the mentor to assure this bidirectional exchange of benefit. This relationship requires time, patience, dedication, and to some degree selflessness. This mentorship will ultimately be the best tool for mastering complex professional skills and maturing through various learning curves. Professional mentorship also requires that mentors identify and explicitly teach their mentees the relational skills and abilities inherent in learning the management of the triad of self, relationships with others, and professional responsibilities.Up to two decades ago, a learning curve was tolerated, and even expected, while professionals involved in healthcare developed the techniques that allowed for the treatment of previously untreatable diseases. Outcomes have now improved to the point that this type of learning curve is no longer acceptable to the public. Still, professionals must learn to perform and develop independence and confidence. The responsibility to

  8. Spectral curve for open strings attached to the Y=0 brane

    International Nuclear Information System (INIS)

    Bajnok, Zoltán; Kim, Minkyoo; Palla, László

    2014-01-01

    The concept of the spectral curve is generalized to open strings in AdS/CFT with integrability-preserving boundary conditions. Our definition is based on the logarithms of the eigenvalues of the open monodromy matrix and makes it possible to determine all the analytic, symmetry and asymptotic properties of the quasimomenta. We work out the details of the whole construction for the Y=0 brane boundary condition. The quasimomenta of open circular strings are explicitly calculated. We use the asymptotic solutions of the Y-system and the boundary Bethe Ansatz equations to recover the spectral curve in the strong coupling scaling limit. Using the curve, the quasiclassical fluctuations of some open string solutions are also studied

  9. Global structure of curves from generalized unitarity cut of three-loop diagrams

    International Nuclear Information System (INIS)

    Hauenstein, Jonathan D.; Huang, Rijun; Mehta, Dhagash; Zhang, Yang

    2015-01-01

    This paper studies the global structure of algebraic curves defined by generalized unitarity cut of four-dimensional three-loop diagrams with eleven propagators. The global structure is a topological invariant that is characterized by the geometric genus of the algebraic curve. We use the Riemann-Hurwitz formula to compute the geometric genus of algebraic curves with the help of techniques involving convex hull polytopes and numerical algebraic geometry. Some interesting properties of genus for arbitrary loop orders are also explored where computing the genus serves as an initial step for integral or integrand reduction of three-loop amplitudes via an algebraic geometric approach.

  10. The Surface Density Profile of the Galactic Disk from the Terminal Velocity Curve

    Science.gov (United States)

    McGaugh, Stacy S.

    2016-01-01

    The mass distribution of the Galactic disk is constructed from the terminal velocity curve and the mass discrepancy-acceleration relation. Mass models numerically quantifying the detailed surface density profiles are tabulated. For R0 = 8 kpc, the models have a stellar mass of ~5 × 10^10 M⊙; the Milky Way appears to be a normal spiral galaxy that obeys scaling relations like the Tully-Fisher relation, the size-mass relation, and the disk maximality-surface brightness relation. The stellar disk is maximal, and the spiral arms are massive. The bumps and wiggles in the terminal velocity curve correspond to known spiral features (e.g., the Centaurus arm is a ~50% overdensity). The slope of the rotation curve switches between positive and negative over scales of hundreds of parsecs, with an rms amplitude ⟨(∂V/∂R)²⟩^(1/2) ≈ 14 km s⁻¹ kpc⁻¹, implying that commonly neglected terms in the Jeans equations may be non-negligible. The spherically averaged local dark matter density is ρ(0,DM) ≈ 0.009 M⊙ pc⁻³ (0.34 GeV cm⁻³). Adiabatic compression of the dark matter halo may help reconcile the Milky Way with the c-V200 relation expected in ΛCDM while also helping to mitigate the too-big-to-fail problem, but it remains difficult to reconcile the inner bulge/bar-dominated region with a cuspy halo. We note that NGC 3521 is a near twin to the Milky Way, having a similar luminosity, scale length, and rotation curve.

  11. Flow characteristics of curved ducts

    Directory of Open Access Journals (Sweden)

    Rudolf P.

    2007-10-01

    Curved channels are very often present in real hydraulic systems, e.g. curved diffusers of hydraulic turbines, S-shaped bulb turbines, fittings, etc. Curvature brings a change of velocity profile, generation of vortices and production of hydraulic losses. Flow simulations using CFD techniques were performed to understand these phenomena. Cases ranging from a single elbow to coupled elbows in U, S and spatial right-angle configurations with circular cross-section were modeled for Re = 60,000. The spatial development of the flow was studied, and it was deduced that minor losses are connected with the transformation of pressure energy into kinetic energy and vice versa. This transformation is a dissipative process and is reflected in the amount of energy irreversibly lost. The smallest loss coefficient is associated with flow in U-shaped elbows, the largest with flow in S-shaped elbows. Finally, the extent of the flow domain influenced by the presence of curvature was examined. This is important for the proper placement of manometers and flowmeters during experimental tests. Simulations were verified against experimental results presented in the literature.

  12. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
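
    Following the construction described above, the statistic is a quadratic form in the difference of the two fitted parameter vectors, referred to a chi-square distribution with two degrees of freedom; the (a, b) parametrization of the straight-line ROC on normal-deviate axes is assumed. A sketch with invented estimates:

    ```python
    import numpy as np
    from scipy import stats

    def roc_difference_test(theta1, cov1, theta2, cov2):
        """Chi-square test (2 dof) for equality of two binormal ROC curves, given
        maximum likelihood estimates theta = (a, b) and their covariance matrices."""
        d = np.asarray(theta1, float) - np.asarray(theta2, float)
        chi2 = float(d @ np.linalg.inv(np.asarray(cov1) + np.asarray(cov2)) @ d)
        return chi2, float(stats.chi2.sf(chi2, df=2))

    # invented parameter estimates from two rating-scale experiments
    t1, S1 = (1.20, 0.90), [[0.020, 0.004], [0.004, 0.010]]
    t2, S2 = (0.95, 1.05), [[0.025, 0.005], [0.005, 0.012]]
    chi2, p = roc_difference_test(t1, S1, t2, S2)
    print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
    ```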

  13. Dynamical scaling in polymer solutions investigated by the neutron spin echo technique

    International Nuclear Information System (INIS)

    Richter, D.; Ewen, B.

    1979-01-01

    Chain dynamics in polymer solutions was investigated by means of the recently developed neutron spin echo spectroscopy. By this technique, it was possible for the first time to verify unambiguously the scaling predictions of the Zimm model in the case of single chain behaviour and to observe the cross over to many chain behaviour. The segmental diffusion of single chains exhibits deviations from a simple exponential law, indicating the importance of memory effects. (orig.) [de

  14. Hidden scale invariance of metals

    DEFF Research Database (Denmark)

    Hummel, Felix; Kresse, Georg; Dyre, Jeppe C.

    2015-01-01

    Density functional theory (DFT) calculations of 58 liquid elements at their triple point show that most metals exhibit near proportionality between the thermal fluctuations of the virial and the potential energy in the isochoric ensemble. This demonstrates a general "hidden" scale invariance of metals, making the condensed part of the thermodynamic phase diagram effectively one dimensional with respect to structure and dynamics. DFT-computed density scaling exponents, related to the Grüneisen parameter, are in good agreement with experimental values for the 16 elements where reliable data were available. Hidden scale invariance is demonstrated in detail for magnesium by showing invariance of structure and dynamics. Computed melting curves of period three metals follow curves with invariance (isomorphs). The experimental structure factor of magnesium is predicted by assuming scale invariant...

  15. Subsea flowlines installation: new techniques are ready for use

    Energy Technology Data Exchange (ETDEWEB)

    Borelli, A.

    1978-01-01

    A discussion of the new offshore pipelaying methods developed and tested by ETPM (together with other French companies) covers a description, the technical characteristics, the conditions of applicability, and data on testing for towing methods, i.e. for surface and subsurface tow techniques which involve onshore construction of pipe segments, towing to the offshore site, and immersion of the segments, e.g. by using the "draw-down" technique, which was developed for Mobil Research and Development Corp. and can be used for pipe sections of up to 10,000 ft at depths of up to 6,000 ft; the near-the-bottom tow technique, tested at full scale in May 1977 by Compagnie Francaise des Petroles in the Mediterranean with 1 km, 8 in. + 4 in. pipe bundles, in which two tugs tow a pipeline section on guide ropes, typically 10 ft above the sea bottom; and the "J-curve" laying method.

  16. Quantum field theory in curved spacetime

    International Nuclear Information System (INIS)

    Gibbons, G.W.

    1978-04-01

    The purpose of this article is to outline what the extension of such a treatment to curved space entails and to discuss what essentially new features arise when one takes into account the quantum mechanical nature of gravitating systems. I shall throughout assume a classical, unquantized gravitational field and confine the discussion to matter fields although similar techniques and ideas may be applied to 'gravitons' - that is linearized perturbations of the metric propagating on some fixed, unperturbed, background. (orig./WL) [de

  17. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  18. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analysing the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday, using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  19. Learning curves for solid oxide fuel cells

    International Nuclear Information System (INIS)

    Rivera-Tinoco, Rodrigo; Schoots, Koen; Zwaan, Bob van der

    2012-01-01

    Highlights: ► We present learning curves for fuel cells based on empirical data. ► We disentangle different cost reduction mechanisms for SOFCs. ► We distinguish between learning-by-doing, R and D, economies-of-scale and automation. - Abstract: In this article we present learning curves for solid oxide fuel cells (SOFCs). With data from fuel cell manufacturers we derive a detailed breakdown of their production costs. We develop a bottom-up model that allows for determining overall SOFC manufacturing costs with their respective cost components, among which are material, energy, labor and capital charges. The results obtained from our model deviate by at most 13% from total cost figures quoted in the literature. For the R and D stage of development and diffusion, we find local learning rates between 13% and 17% and we demonstrate that the corresponding cost reductions result essentially from learning-by-searching effects. When considering periods in time that focus on the pilot and early commercial production stages, we find regional learning rates of 27% and 1%, respectively, which we assume derive mainly from genuine learning phenomena. These figures turn out significantly higher, approximately 44% and 12% respectively, if the effects of economies-of-scale and automation are also included. When combining all production stages we obtain lr = 35%, which represents a mix of cost reduction phenomena. This high learning rate and the potential to scale up production suggest that continued efforts in the development of SOFC manufacturing processes, as well as deployment and use of SOFCs, may lead to substantial further cost reductions.
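
    The quoted learning rates follow from the single-factor experience curve C(x) = C0 * x^(-b), whose learning rate is lr = 1 - 2^(-b), the fractional cost drop per doubling of cumulative production. A sketch with hypothetical cost data chosen so that lr comes out near the combined 35% figure:

    ```python
    import numpy as np

    def learning_rate(cum_production, unit_cost):
        """Fit the experience curve C = C0 * x^(-b) in log2 space and return
        the learning rate lr = 1 - 2^(-b)."""
        b = -np.polyfit(np.log2(cum_production), np.log2(unit_cost), 1)[0]
        return 1.0 - 2.0 ** (-b)

    # hypothetical cumulative production (kW) and unit cost ($/kW)
    x = np.array([10., 20., 40., 80., 160., 320.])
    c = np.array([10000., 6500., 4300., 2800., 1850., 1200.])
    print(f"learning rate = {learning_rate(x, c):.0%}")
    ```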

  20. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale, long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies, applied to large-scale samples and using liquid chromatography coupled with different detector types as the core analytical technique. The main sample preparation methods covered in this paper are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature mainly covers analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included. Copyright © 2017 John Wiley & Sons, Ltd.

  1. Going Beyond, Going Further: The Preparation of Acid-Base Titration Curves.

    Science.gov (United States)

    McClendon, Michael

    1984-01-01

    Background information, list of materials needed, and procedures used are provided for a simple technique for generating mechanically plotted acid-base titration curves. The method is suitable for second-year high school chemistry students. (JN)

  2. Generation of response functions of a NaI detector by using an interpolation technique

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1983-01-01

    A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on interpolation between response curves measured with a detector. The computer programs are constructed for Heath's response spectral library. The principle of the underlying mathematics, reported previously by the author et al., is that response curves can be decomposed into a linear combination of intrinsic component patterns, so that the interpolation of curves reduces to a simple interpolation of the weighting coefficients needed to combine the component patterns. This technique has the advantages of data compression, reduced computation time, and stability of the solution in comparison with the usual functional fitting method. A segmentation method is devised to generate useful and precise response curves: the spectral curve obtained for each γ-ray source is divided into regions defined by the physical processes involved, such as the photopeak area, the Compton continuum area and the backscatter peak area; each segment curve is then interpolated separately; and lastly the estimated curves for the respective areas are joined on one channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum, including not only the photopeak but also the corresponding Compton area, with sufficient accuracy. (author)
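
    The decomposition-and-interpolation idea can be sketched with an SVD standing in for the author's intrinsic-component analysis: decompose a library of measured response curves into component patterns, interpolate the weighting coefficients over energy, and recombine. The 'library' below is a synthetic toy, not Heath's.

    ```python
    import numpy as np
    from scipy.interpolate import interp1d

    def build_interpolator(energies, responses, n_components=3):
        """Decompose measured response curves (rows) into component patterns via
        SVD, then interpolate the weighting coefficients over energy."""
        R = np.asarray(responses)                   # shape (n_energies, n_channels)
        U, s, Vt = np.linalg.svd(R, full_matrices=False)
        W = U[:, :n_components] * s[:n_components]  # weights per measured energy
        w_of_E = interp1d(energies, W, axis=0, kind="cubic")
        def response(E):
            return w_of_E(E) @ Vt[:n_components]    # recombine component patterns
        return response

    # toy 'library': Gaussian photopeaks on a step continuum, stand-ins for spectra
    ch = np.arange(512)
    E_lib = np.array([300., 500., 700., 900., 1100.])
    lib = [np.exp(-0.5 * ((ch - e / 3) / 8) ** 2) + 0.1 * (ch < e / 3) for e in E_lib]
    resp = build_interpolator(E_lib, lib)
    print(resp(600.0).shape)   # interpolated response curve at an unmeasured energy
    ```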

  3. The standard centrifuge method accurately measures vulnerability curves of long-vesselled olive stems.

    Science.gov (United States)

    Hacke, Uwe G; Venturas, Martin D; MacKinnon, Evan D; Jacobsen, Anna L; Sperry, John S; Pratt, R Brandon

    2015-01-01

    The standard centrifuge method has been frequently used to measure vulnerability to xylem cavitation. This method has recently been questioned. It was hypothesized that open vessels lead to exponential vulnerability curves, which were thought to be indicative of a measurement artifact. We tested this hypothesis in stems of olive (Olea europaea) because its long vessels were recently claimed to produce a centrifuge artifact. We evaluated three predictions that followed from the open-vessel artifact hypothesis: shorter stems, with more open vessels, would be more vulnerable than longer stems; standard centrifuge-based curves would be more vulnerable than dehydration-based curves; and open vessels would cause an exponential shape of centrifuge-based curves. Experimental evidence did not support these predictions. Centrifuge curves did not vary when the proportion of open vessels was altered. Centrifuge and dehydration curves were similar. At highly negative xylem pressure, centrifuge-based curves slightly overestimated vulnerability compared to the dehydration curve. This divergence was eliminated by centrifuging each stem only once. The standard centrifuge method produced accurate curves of samples containing open vessels, supporting the validity of this technique and confirming its utility in understanding plant hydraulics. Seven recommendations for avoiding artefacts and standardizing vulnerability curve methodology are provided. © 2014 The Authors. New Phytologist © 2014 New Phytologist Trust.

  4. Analysis of Surface Plasmon Resonance Curves with a Novel Sigmoid-Asymmetric Fitting Algorithm

    Directory of Open Access Journals (Sweden)

    Daeho Jang

    2015-09-01

    The present study introduces a novel curve-fitting algorithm for surface plasmon resonance (SPR) curves using a self-constructed, wedge-shaped beam type angular interrogation SPR spectroscopy technique. Previous fitting approaches such as asymmetric and polynomial equations are still unsatisfactory for analyzing full SPR curves and their use is limited to determining the resonance angle. In the present study, we developed a sigmoid-asymmetric equation that provides excellent curve-fitting for the whole SPR curve over a range of incident angles, including regions of the critical angle and resonance angle. Regardless of the bulk fluid type (i.e., water and air), the present sigmoid-asymmetric fitting exhibited nearly perfect matching with a full SPR curve, whereas the asymmetric and polynomial curve-fitting methods did not. Because the present sigmoid-asymmetric curve-fitting equation can determine the critical angle as well as the resonance angle, the undesired effect caused by the bulk fluid refractive index was excluded by subtracting the critical angle from the resonance angle in real time. In conclusion, the proposed sigmoid-asymmetric curve-fitting algorithm for SPR curves is widely applicable to various SPR measurements, while excluding the effect of bulk fluids on the sensing layer.
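
    The paper's exact sigmoid-asymmetric equation is not reproduced in the abstract, so the sketch below assumes one plausible form: a sigmoidal step at the critical angle plus a resonance dip whose width varies sigmoidally across the dip. All parameters and data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-np.clip(x, -50.0, 50.0)))

    def spr_model(theta, y0, a, th_c, w_c, d, th_r, w_r, asym):
        """Assumed form: sigmoidal step at the critical angle th_c plus a
        resonance dip at th_r whose width varies sigmoidally across the dip."""
        step = a * sigmoid((theta - th_c) / w_c)
        width = w_r * (1.0 + asym * sigmoid((theta - th_r) / w_r))
        dip = d / (1.0 + ((theta - th_r) / width) ** 2)
        return y0 + step - dip

    theta = np.linspace(60.0, 75.0, 300)
    true = (0.05, 0.85, 63.0, 0.15, 0.75, 68.0, 0.6, 1.5)
    refl = spr_model(theta, *true) + np.random.default_rng(2).normal(0.0, 0.004, theta.size)

    p0 = (0.1, 0.8, 62.5, 0.2, 0.7, 68.5, 0.5, 1.0)
    popt, _ = curve_fit(spr_model, theta, refl, p0=p0)
    print(f"critical angle = {popt[2]:.2f} deg, resonance angle = {popt[5]:.2f} deg")
    ```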

  5. [Evaluation of the learning curve of residents in localizing a phantom target with ultrasonography].

    Science.gov (United States)

    Dessieux, T; Estebe, J-P; Bloc, S; Mercadal, L; Ecoffey, C

    2008-10-01

    Little information is available regarding the learning curve in ultrasonography, and even less for ultrasound-guided regional anesthesia. This study aimed to evaluate, within a training program, the learning curve of 12 residents novice in ultrasonography on a phantom. Twelve trainees inexperienced in ultrasonography were given introductory training consisting of didactic instruction on the various components of the portable ultrasound machine (i.e. on/off button, gain, depth, resolution, and image storage). Then, the students performed three trials, in two sets of increasing difficulty, at executing predefined tasks: adjustment of the machine, then localization of a small plastic piece introduced into roasting pork (3 cm below the surface). At the end of the evaluation, the residents were asked to insert a 22 G needle into an exact predetermined target (i.e. a point of fascia intersection). The progression of the needle was continuously controlled by ultrasound visualization using injection of a small volume of water (needle perpendicular to the longitudinal plane of the ultrasound beam). Two groups of two different examiners evaluated the residents' skill on each of the three trials (quality, time taken to perform the machine adjustments, to localize the plastic target, and to hydrolocalize, and the volume used for hydrolocalization). After each trial, the residents rated their performance using a difficulty scale (0: easy, to 10: difficult). All residents performed the adjustments from the last trial of each set, with a learning curve observed in terms of duration. Localization of the plastic piece was achieved by all residents at the 6th trial, with a shorter duration of localization. Hydrolocalization was achieved after the 4th trial by all subjects. The difficulty scale was correlated with the number of trials. All these results were independent of the residents' experience in regional anesthesia. Four trials were necessary to adjust correctly the machine, to localize a target, and to

  6. Stenting for curved lesions using a novel curved balloon: Preliminary experimental study.

    Science.gov (United States)

    Tomita, Hideshi; Higaki, Takashi; Kobayashi, Toshiki; Fujii, Takanari; Fujimoto, Kazuto

    2015-08-01

    Stenting may be a compelling approach to dilating curved lesions in congenital heart diseases. However, balloon-expandable stents, which are commonly used for congenital heart diseases, are usually deployed in a straight orientation. In this study, we evaluated the effect of stenting with a novel curved balloon considered to provide better conformability to curved-angled lesions. In vitro experiments: a Palmaz Genesis® stent (Johnson & Johnson, Cordis Co, Bridgewater, NJ, USA) mounted on the Goku® curve (Tokai Medical Co. Nagoya, Japan) was dilated in vitro to observe directly the behavior of the stent and balloon assembly during expansion. Animal experiment: a short Express® Vascular SD stent (Boston Scientific Co, Marlborough, MA, USA) and a long Express® Vascular LD stent (Boston Scientific) mounted on the curved balloon were deployed in a curved vessel of a pig to observe the effect of stenting in vivo. In vitro experiments: although the stent was dilated in a curved fashion, the stent and balloon assembly also rotated conjointly during expansion of its curved portion. In primary stenting of the short stent, the stent was dilated with rotation of the curved portion. The excised stent conformed to the curved vessel. As the long stent could not be negotiated across the mid-portion with the balloon expanding once it started curving, the mid-portion of the stent failed to expand fully. Furthermore, the balloon, which became entangled with the stent strut, could not be retrieved even after complete deflation. This novel curved balloon catheter might be used for implantation of a short stent in a curved lesion; however, it should not be used for primary stenting of a long stent. Post-dilation to conform the stent to the angled vessel would be safer than primary stenting irrespective of stent length. Copyright © 2014 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  7. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    Science.gov (United States)

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
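
    One concrete instance of the data-depth extension is the modified band depth of López-Pintado and Romo, computed below for a synthetic ensemble; the boxplot rendering then follows from ranking curves by such a depth. This is a sketch of the general idea, not the authors' exact definition.

    ```python
    import numpy as np
    from itertools import combinations

    def modified_band_depth(curves):
        """Modified band depth (j=2): for each curve, the average fraction of
        its domain lying inside the envelope of every pair of curves."""
        Y = np.asarray(curves)                 # shape (n_curves, n_points)
        n = Y.shape[0]
        depth = np.zeros(n)
        for a, b in combinations(range(n), 2):
            lo = np.minimum(Y[a], Y[b])
            hi = np.maximum(Y[a], Y[b])
            inside = (Y >= lo) & (Y <= hi)     # broadcast over all curves
            depth += inside.mean(axis=1)
        return depth / (n * (n - 1) / 2)

    # toy ensemble: noisy sinusoids plus one offset outlier
    t = np.linspace(0.0, 1.0, 200)
    rng = np.random.default_rng(3)
    ens = [np.sin(2 * np.pi * t) + rng.normal(0.0, 0.1) for _ in range(20)]
    ens.append(np.sin(2 * np.pi * t) + 1.5)
    d = modified_band_depth(ens)
    print("median curve:", int(np.argmax(d)), " least-deep curve:", int(np.argmin(d)))
    ```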

  8. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

    This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change: pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  9. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which are capable of providing fast, online and effective detection of plant problems, have been continually developed. One good potential application of radiotracers for troubleshooting in a process plant is the analysis of the Residence Time Distribution (RTD). In this paper, a study of RTD in a large-scale plant facility using the radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using the radiotracer technique in a "larger than laboratory" scale plant setup comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for use in this work owing to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.
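
    The MRT itself is the first moment of the normalized response, MRT = ∫ t·C(t) dt / ∫ C(t) dt; a sketch with a synthetic detector trace:

    ```python
    import numpy as np

    def trapezoid(y, x):
        """Plain trapezoidal rule (avoids NumPy-version-specific trapz names)."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def mean_residence_time(t, c):
        """MRT from a measured tracer response: first moment of the normalized RTD."""
        t = np.asarray(t, float)
        c = np.asarray(c, float)
        e = c / trapezoid(c, t)           # E(t), the normalized RTD
        return trapezoid(t * e, t)

    # synthetic detector trace after a pulse injection (tanks-in-series-like shape)
    t = np.linspace(0.0, 200.0, 401)      # minutes
    c = (t / 25.0) * np.exp(-t / 25.0)
    print(f"MRT = {mean_residence_time(t, c):.1f} min")   # analytic value: 50 min
    ```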

  10. Spectral curves in gauge/string dualities: integrability, singular sectors and regularization

    International Nuclear Information System (INIS)

    Konopelchenko, Boris; Alonso, Luis Martínez; Medina, Elena

    2013-01-01

We study the moduli space of the spectral curves y² = W′(z)² + f(z) which characterize the vacua of N=1 U(n) supersymmetric gauge theories with an adjoint Higgs field and a polynomial tree-level potential W(z). The integrable structure of the Whitham equations is used to determine the spectral curves from their moduli. An alternative characterization of the spectral curves in terms of critical points of a family of polynomial solutions W to Euler–Poisson–Darboux equations is provided. The equations for these critical points are a generalization of the planar limit equations for one-cut random matrix models. Moreover, singular spectral curves with higher order branch points turn out to be described by degenerate critical points of W. As a consequence we propose a multiple scaling limit method of regularization and show that, in the simplest cases, it leads to the Painlevé I equation and its multi-component generalizations. (paper)

  11. Mathematics of quantitative kinetic PCR and the application of standard curves.

    Science.gov (United States)

    Rutledge, R G; Côté, C

    2003-08-15

Fluorescent monitoring of DNA amplification is the basis of real-time PCR, from which target DNA concentration can be determined from the fractional cycle at which a threshold amount of amplicon DNA is produced. Absolute quantification can be achieved using a standard curve constructed by amplifying known amounts of target DNA. In this study, the mathematics of quantitative PCR is examined in detail, from which several fundamental aspects of the threshold method and the application of standard curves are illustrated. The construction of five replicate standard curves for two pairs of nested primers was used to examine the reproducibility and degree of quantitative variation using SYBR Green I fluorescence. Based upon this analysis, the application of a single, well-constructed standard curve could provide an estimated precision of ±6-21%, depending on the number of cycles required to reach threshold. A simplified method for absolute quantification is also proposed, in which quantitative scale is determined by DNA mass at threshold.
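
    The threshold method rests on the threshold cycle Ct being linear in the log of the starting copy number, which is what a standard curve encodes. A minimal sketch of constructing and inverting such a curve; the standard concentrations and Ct values below are illustrative, not the paper's data.

    import numpy as np

    log_n0 = np.log10([1e3, 1e4, 1e5, 1e6, 1e7])   # known standards (copies)
    ct = np.array([29.8, 26.5, 23.1, 19.8, 16.4])  # measured threshold cycles

    slope, intercept = np.polyfit(log_n0, ct, 1)

    # A perfect doubling per cycle gives slope = -1/log10(2) ~ -3.32,
    # i.e. an amplification efficiency E of 1.0.
    efficiency = 10.0 ** (-1.0 / slope) - 1.0

    def quantify(ct_unknown):
        """Read an unknown sample back off the standard curve."""
        return 10.0 ** ((ct_unknown - intercept) / slope)

    print(f"E = {efficiency:.2f}, unknown ~ {quantify(21.9):.3g} copies")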

  12. Statistics for products of traces of high powers of the frobenius class of hyperelliptic curves

    OpenAIRE

    Roditty-Gershon, Edva

    2011-01-01

We study the averages of products of traces of high powers of the Frobenius class of hyperelliptic curves of genus g over a fixed finite field. We show that for increasing genus g, the limiting expectation of these products equals the expectation when the curve varies over the unitary symplectic group USp(2g). We also consider the scaling limit of linear statistics for eigenphases of the Frobenius class of hyperelliptic curves, and show that their first few moments are Gaussian.

  13. Dual Smarandache Curves of a Timelike Curve lying on Unit dual Lorentzian Sphere

    OpenAIRE

    Kahraman, Tanju; Hüseyin Ugurlu, Hasan

    2016-01-01

In this paper, we give the Darboux approximation for dual Smarandache curves of a timelike curve on the unit dual Lorentzian sphere. First, we define the four types of dual Smarandache curves of a timelike curve lying on the dual Lorentzian sphere.

  14. Growth curve registration for evaluating salinity tolerance in barley

    KAUST Repository

Meng, Rui; Saade, Stephanie; Kurtek, Sebastian; Berger, Bettina; Brien, Chris; Pillen, Klaus; Tester, Mark A.; Sun, Ying

    2017-03-23

    Background: Smarthouses capable of non-destructive, high-throughput plant phenotyping collect large amounts of data that can be used to understand plant growth and productivity in extreme environments. The challenge is to apply the statistical tool that best analyzes the data to study plant traits, such as salinity tolerance, or plant-growth-related traits. Results: We derive family-wise salinity sensitivity (FSS) growth curves and use registration techniques to summarize growth patterns of HEB-25 barley families and the commercial variety, Navigator. We account for the spatial variation in smarthouse microclimates and in temporal variation across phenotyping runs using a functional ANOVA model to derive corrected FSS curves. From FSS, we derive corrected values for family-wise salinity tolerance, which are strongly negatively correlated with Na but not significantly with K, indicating that Na content is an important factor affecting salinity tolerance in these families, at least for plants of this age and grown in these conditions. Conclusions: Our family-wise methodology is suitable for analyzing the growth curves of a large number of plants from multiple families. The corrected curves accurately account for the spatial and temporal variations among plants that are inherent to high-throughput experiments.

  16. DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS

    Energy Technology Data Exchange (ETDEWEB)

    Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu [Department of the Geophysical Sciences, University of Chicago, Chicago, IL 60637 (United States)

    2015-03-20

    Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-trough amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.

  17. Long-term Behavior of Hydrocarbon Production Curves

    Science.gov (United States)

    Lovell, A.; Karra, S.; O'Malley, D.; Viswanathan, H. S.; Srinivasan, G.

    2017-12-01

Recovering hydrocarbons (such as natural gas) from naturally-occurring formations with low permeability has had a huge impact on the energy sector; however, recovery rates are low due to poor understanding of recovery and transport mechanisms [1]. The physical mechanisms that control the production of hydrocarbons are only partially understood. Calculations have shown that the short-term peak of the production curve comes from the free hydrocarbons in the fracture networks, but the long-term behavior of these curves is often underpredicted [2]. This behavior is thought to be due to small-scale processes such as matrix diffusion, desorption, and connectivity in the damage region around the large fracture network. In this work, we explore some of these small-scale processes - matrix diffusion, the size of the damage region, and the distribution of free gas between the fracture network and the rock matrix - using discrete fracture networks (DFN) and the toolkit dfnWorks [3]. Individual and combined parameter spaces are explored, and the resulting production curves are compared to experimental site data from the Haynesville formation [4]. We find that matrix diffusion significantly controls the shape of the tail of the production curve, while the distribution of free gas affects the relative magnitude of the peak to the tail. The height of the damage region has no effect on the shape of the tail. Understanding the constraints of the parameter space based on site data is the first step in rigorously quantifying the uncertainties coming from these types of systems, which can in turn optimize and improve hydrocarbon recovery. [1] C. McGlade, et al., (2013) Methods of estimating shale gas resources - comparison, evaluation, and implications, Energy, 59, 116-125 [2] S. Karra, et al., (2015) Effect of advective flow in fractures and matrix diffusion on natural gas production, Water Resources Research, 51(10), 8646-8657 [3] J.D. Hyman, et

  18. NormaCurve: a SuperCurve-based method that simultaneously quantifies and normalizes reverse phase protein array data.

    Directory of Open Access Journals (Sweden)

    Sylvie Troncale

MOTIVATION: Reverse phase protein array (RPPA) is a powerful dot-blot technology that allows studying protein expression levels as well as post-translational modifications in a large number of samples simultaneously. Yet, correct interpretation of RPPA data has remained a major challenge for its broad-scale application and its translation into clinical research. Satisfactory quantification tools are available to assess a relative protein expression level from a serial dilution curve. However, appropriate tools allowing the normalization of the data for external sources of variation are currently missing. RESULTS: Here we propose a new method, called NormaCurve, that allows simultaneous quantification and normalization of RPPA data. For this, we modified the quantification method SuperCurve in order to include normalization for (i) background fluorescence, (ii) variation in the total amount of spotted protein and (iii) spatial bias on the arrays. Using a spike-in design with a purified protein, we test the capacity of different models to properly estimate normalized relative expression levels. The best performing model, NormaCurve, takes into account a negative control array without primary antibody, an array stained with a total protein stain and spatial covariates. We show that this normalization is reproducible and we discuss the number of serial dilutions and the number of replicates that are required to obtain robust data. We thus provide a ready-to-use method for reliable and reproducible normalization of RPPA data, which should facilitate the interpretation and the development of this promising technology. AVAILABILITY: The raw data, the scripts and the normacurve package are available at the following web site: http://microarrays.curie.fr.

  19. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

The main improvements above the modular-arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large torsion.

  20. Benefit and cost curves for typical pollination mutualisms.

    Science.gov (United States)

    Morris, William F; Vázquez, Diego P; Chacoff, Natacha P

    2010-05-01

    Mutualisms provide benefits to interacting species, but they also involve costs. If costs come to exceed benefits as population density or the frequency of encounters between species increases, the interaction will no longer be mutualistic. Thus curves that represent benefits and costs as functions of interaction frequency are important tools for predicting when a mutualism will tip over into antagonism. Currently, most of what we know about benefit and cost curves in pollination mutualisms comes from highly specialized pollinating seed-consumer mutualisms, such as the yucca moth-yucca interaction. There, benefits to female reproduction saturate as the number of visits to a flower increases (because the amount of pollen needed to fertilize all the flower's ovules is finite), but costs continue to increase (because pollinator offspring consume developing seeds), leading to a peak in seed production at an intermediate number of visits. But for most plant-pollinator mutualisms, costs to the plant are more subtle than consumption of seeds, and how such costs scale with interaction frequency remains largely unknown. Here, we present reasonable benefit and cost curves that are appropriate for typical pollinator-plant interactions, and we show how they can result in a wide diversity of relationships between net benefit (benefit minus cost) and interaction frequency. We then use maximum-likelihood methods to fit net-benefit curves to measures of female reproductive success for three typical pollination mutualisms from two continents, and for each system we chose the most parsimonious model using information-criterion statistics. We discuss the implications of the shape of the net-benefit curve for the ecology and evolution of plant-pollinator mutualisms, as well as the challenges that lie ahead for disentangling the underlying benefit and cost curves for typical pollination mutualisms.
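
    To make the saturating-benefit/increasing-cost argument concrete, here is a toy net-benefit curve of the kind fitted by maximum likelihood in the paper; the functional forms and parameter values are our illustrative assumptions, not the fitted models.

    import numpy as np

    def net_benefit(v, b_max=100.0, h=3.0, c=2.0):
        """Net benefit vs. visit number v: benefit saturates because
        a flower's ovules are finite, B(v) = b_max*v/(h+v), while cost
        grows with visits, C(v) = c*v. All values are illustrative."""
        return b_max * v / (h + v) - c * v

    v = np.arange(0, 61)
    nb = net_benefit(v)
    v_star = v[np.argmax(nb)]
    # net benefit peaks at an intermediate visit frequency and can
    # eventually turn negative, i.e. the mutualism tips into antagonism
    print(v_star, nb[v_star], nb[-1])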

  1. Symphysis-fundal height curve in the diagnosis of fetal growth deviations

    Directory of Open Access Journals (Sweden)

    Djacyr Magna Cabral Freire

    2010-12-01

OBJECTIVE: To validate a new symphysis-fundal curve for screening fetal growth deviations and to compare its performance with the standard curve adopted by the Brazilian Ministry of Health. METHODS: Observational study including a total of 753 low-risk pregnant women with gestational age above 27 weeks, carried out from March to October 2006 in the city of João Pessoa, Northeastern Brazil. Symphysis-fundal height was measured using the standard technique recommended by the Brazilian Ministry of Health. Estimated fetal weight assessed through ultrasound using the Brazilian fetal weight chart for gestational age was the gold standard. In a subsample of 122 women, neonatal weight was measured up to seven days after the estimated fetal weight measurement, and the symphysis-fundal classification was compared with the Lubchenco growth reference curve as gold standard. Sensitivity, specificity, positive and negative predictive values were calculated. The McNemar χ2 test was used for comparing the sensitivity of both symphysis-fundal curves studied. RESULTS: The sensitivity of the new curve for detecting small for gestational age fetuses was 51.6%, while that of the Brazilian Ministry of Health reference curve was significantly lower (12.5%). In the subsample using neonatal weight as gold standard, the sensitivity of the new reference curve was 85.7%, while that of the Brazilian Ministry of Health was 42.9% for detecting small for gestational age. CONCLUSIONS: The diagnostic performance of the new curve for detecting small for gestational age fetuses was significantly higher than that of the Brazilian Ministry of Health reference curve.
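
    The reported screening statistics reduce to a standard 2x2 comparison against the gold standard. A minimal sketch; the counts are hypothetical, chosen only so that the sensitivity reproduces the 51.6% reported for the new curve.

    def screening_stats(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 table in
        which the ultrasound (or neonatal weight) result is truth."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # e.g. the new curve flags 16 of 31 truly growth-restricted fetuses
    print(screening_stats(tp=16, fp=40, fn=15, tn=682))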

  2. Shape optimization of self-avoiding curves

    Science.gov (United States)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  3. EMPIRICALLY ESTIMATED FAR-UV EXTINCTION CURVES FOR CLASSICAL T TAURI STARS

    Energy Technology Data Exchange (ETDEWEB)

    McJunkin, Matthew; France, Kevin [Laboratory for Atmospheric and Space Physics, University of Colorado, 600 UCB, Boulder, CO 80303-7814 (United States); Schindhelm, Eric [Southwest Research Institute, 1050 Walnut Street, Suite 300, Boulder, CO 80302 (United States); Herczeg, Gregory [Kavli Institute for Astronomy and Astrophysics, Peking University, Yi He Yuan Lu 5, Haidian Qu, 100871 Beijing (China); Schneider, P. Christian [ESA/ESTEC, Keplerlaan 1, 2201 AZ Noordwijk (Netherlands); Brown, Alex, E-mail: matthew.mcjunkin@colorado.edu [Center for Astrophysics and Space Astronomy, University of Colorado, 593 UCB, Boulder, CO 80309-0593 (United States)

    2016-09-10

Measurements of extinction curves toward young stars are essential for calculating the intrinsic stellar spectrophotometric radiation. This flux determines the chemical properties and evolution of the circumstellar region, including the environment in which planets form. We develop a new technique using H₂ emission lines pumped by stellar Lyα photons to characterize the extinction curve by comparing the measured far-ultraviolet H₂ line fluxes with model H₂ line fluxes. The difference between model and observed fluxes can be attributed to the dust attenuation along the line of sight through both the interstellar and circumstellar material. The extinction curves are fit by a Cardelli et al. (1989) model and the A_V(H₂) for the 10 targets studied with good extinction fits range from 0.5 to 1.5 mag, with R_V values ranging from 2.0 to 4.7. A_V and R_V are found to be highly degenerate, suggesting that one or the other needs to be calculated independently. Column densities and temperatures for the fluorescent H₂ populations are also determined, with averages of log₁₀(N(H₂)) = 19.0 and T = 1500 K. This paper explores the strengths and limitations of the newly developed extinction curve technique in order to assess the reliability of the results and improve the method in the future.

  4. Study on elastic-plastic deformation analysis using a cyclic stress-strain curve

    International Nuclear Information System (INIS)

    Igari, Toshihide; Setoguchi, Katsuya; Yamauchi, Masafumi

    1983-01-01

This paper presents the results of an elastic-plastic deformation analysis using a cyclic stress-strain curve, with the intention of applying this method to predicting the low-cycle fatigue life. Uniaxial plastic cycling tests were performed on 2 1/4Cr-1Mo steel to investigate the correspondence between the cyclic stress-strain curve and the hysteresis loop, and also to determine what mathematical model should be used for the analysis of deformation at stress reversal. Furthermore, a cyclic in-plane bending test was performed on a flat plate to clarify the validity of the theoretical analysis based on the cyclic stress-strain curve. The results obtained are as follows: (1) The cyclic stress-strain curve corresponds nearly to the ascending curve of the hysteresis loop scaled by a factor of 1/2 for both stress and strain. Therefore, the cyclic stress-strain curve can be determined, for simplicity, from the shape of the hysteresis loop. (2) Performing the elastic-plastic deformation analysis using the cyclic stress-strain curve is both practical and effective for predicting the cyclic elastic-plastic deformation of structures at the stage of advanced cycles, and the Masing model can serve as a suitable mathematical model for such a deformation analysis. (author)
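
    Finding (1) is the classical Masing hypothesis. A short sketch, assuming a Ramberg-Osgood form for the cyclic stress-strain curve with invented material constants (not the paper's 2 1/4Cr-1Mo data), shows how the stabilized hysteresis branch follows from the factor-of-2 scaling:

    import numpy as np

    E, K, n = 200e3, 1200.0, 0.12   # MPa; illustrative constants

    def cyclic_curve(stress_amp):
        """Cyclic stress-strain curve: strain amplitude from stress
        amplitude, Ramberg-Osgood form."""
        return stress_amp / E + (stress_amp / K) ** (1.0 / n)

    def hysteresis_branch(delta_sigma):
        """Masing model: the ascending loop branch is the cyclic curve
        magnified by 2 in both stress and strain, the converse of the
        1/2 scaling observed in the paper."""
        return delta_sigma / E + 2.0 * (delta_sigma / (2.0 * K)) ** (1.0 / n)

    for ds in np.linspace(100.0, 800.0, 8):
        assert np.isclose(hysteresis_branch(ds), 2.0 * cyclic_curve(ds / 2.0))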

  5. Analysis of interacting quantum field theory in curved spacetime

    International Nuclear Information System (INIS)

    Birrell, N.D.; Taylor, J.G.

    1980-01-01

A detailed analysis of interacting quantized fields propagating in a curved background spacetime is given. Reduction formulas for S-matrix elements in terms of vacuum Green's functions are derived, special attention being paid to the possibility that the ''in'' and ''out'' vacuum states may not be equivalent. Green's function equations are obtained and a diagrammatic representation for them is given, allowing a formal, diagrammatic renormalization to be effected. Coordinate space techniques for showing renormalizability are developed in Minkowski space for λφ³ field theories in four and six dimensions. The extension of these techniques to curved spacetimes is considered. It is shown that the possibility of field theories becoming nonrenormalizable there cannot be ruled out, although, allowing certain modifications to the theory, λφ³ in four dimensions is proven renormalizable in a large class of spacetimes. Finally, particle production from the vacuum by the gravitational field is discussed with particular reference to Schwarzschild spacetime. We shed some light on the nonlocalizability of the production process and on the definition of the S matrix for such processes.

  6. Technological change in energy systems. Learning curves, logistic curves and input-output coefficients

    International Nuclear Information System (INIS)

    Pan, Haoran; Koehler, Jonathan

    2007-01-01

Learning curves have recently been widely adopted in climate-economy models to incorporate endogenous change of energy technologies, replacing the conventional assumption of an autonomous energy efficiency improvement. However, there has been little consideration of the credibility of the learning curve. The current trend that many important energy and climate change policy analyses rely on the learning curve means that it is of great importance to critically examine its basis. Here, we analyse the use of learning curves in energy technology, usually implemented as a simple power function. We find that the learning curve cannot separate the effects of price and technological change, cannot reflect continuous and qualitative change of both conventional and emerging energy technologies, cannot help to determine the time paths of technological investment, and misses the central role of R&D activity in driving technological change. We argue that a logistic curve of improving performance, modified to include R&D activity as a driving variable, can better describe the cost reductions in energy technologies. Furthermore, we demonstrate that the top-down Leontief technology can incorporate the bottom-up technologies that improve along either the learning curve or the logistic curve, through changing input-output coefficients. An application to UK wind power illustrates that the logistic curve fits the observed data better and implies greater potential for cost reduction than the learning curve does. (author)
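
    The contrast the authors draw can be made concrete by fitting both functional forms to the same cost series. A sketch with invented cost data; for simplicity the logistic form below is driven by cumulative capacity rather than by R&D activity as the paper advocates.

    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([10., 30., 100., 300., 1000., 3000.])        # cumulative MW
    cost = np.array([1400., 1250., 1060., 930., 840., 790.])  # $/kW

    def learning(x, c0, b):
        """Power-law learning curve: cost falls by 1 - 2**(-b) per
        doubling of cumulative capacity."""
        return c0 * x ** (-b)

    def logistic_floor(x, c_min, c_range, k, x0):
        """Logistic decline toward a finite cost floor c_min."""
        return c_min + c_range / (1.0 + np.exp(k * np.log(x / x0)))

    p_learn, _ = curve_fit(learning, x, cost, p0=(1500.0, 0.1))
    p_logis, _ = curve_fit(logistic_floor, x, cost,
                           p0=(700.0, 800.0, 1.0, 100.0), maxfev=20000)

    print(f"learning rate per doubling: {1 - 2 ** (-p_learn[1]):.1%}")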

  7. Regionalisation of low flow frequency curves for the Peninsular Malaysia

    Science.gov (United States)

    Mamun, Abdullah A.; Hashim, Alias; Daoud, Jamal I.

    2010-02-01

Regional maps and equations for the magnitude and frequency of 1-, 7- and 30-day low flows were derived and are presented in this paper. The river gauging stations of neighbouring catchments that produced similar low flow frequency curves were grouped together; on this basis, Peninsular Malaysia was divided into seven low flow regions. Regional equations were developed using the multivariate regression technique. An empirical relationship was developed for mean annual minimum flow as a function of catchment area, mean annual rainfall and mean annual evaporation. The regional equations exhibited good coefficients of determination (R² > 0.90). Three low flow frequency curves showing the low, mean and high limits for each region were proposed based on a graphical best-fit technique. Given the catchment area, mean annual rainfall and evaporation in the region, design low flows of different durations can be easily estimated for ungauged catchments. This procedure is expected to overcome the problem of data unavailability in estimating low flows in Peninsular Malaysia.
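
    A regional equation of the kind described is typically fitted as a log-linear regression. A minimal sketch, with the functional form Q = a * A**b1 * R**b2 * E**b3 inferred from the abstract and purely illustrative catchment data:

    import numpy as np

    A = np.array([120., 450., 980., 2100., 3200.])     # area, km^2
    R = np.array([2400., 2100., 2600., 1900., 2300.])  # rainfall, mm
    E = np.array([1500., 1550., 1450., 1600., 1500.])  # evaporation, mm
    Q = np.array([0.8, 2.4, 6.1, 9.5, 14.0])           # low flow, m^3/s

    # linear least squares in log space: ln Q = ln a + b1 ln A + ...
    X = np.column_stack([np.ones_like(A), np.log(A), np.log(R), np.log(E)])
    coef, *_ = np.linalg.lstsq(X, np.log(Q), rcond=None)

    def design_low_flow(area, rain, evap):
        """Estimate for an ungauged catchment in the same region."""
        return np.exp(coef[0]) * area**coef[1] * rain**coef[2] * evap**coef[3]

    print(design_low_flow(600.0, 2200.0, 1500.0))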

  8. Large Display Interaction via Multiple Acceleration Curves and Multifinger Pointer Control

    Directory of Open Access Journals (Sweden)

    Andrey Esakia

    2014-01-01

Large high-resolution displays combine high pixel density with ample physical dimensions. The combination of these factors creates a multiscale workspace where interactive targeting of on-screen objects requires both high speed for distant targets and high accuracy for small targets. Modern operating systems support implicit dynamic control-display gain adjustment (i.e., a pointer acceleration curve) that helps to maintain both speed and accuracy. However, large high-resolution displays require a broader range of control-display gains than a single acceleration curve can usably enable. Some interaction techniques attempt to solve the problem by utilizing multiple explicit modes of interaction, where different modes provide different levels of pointer precision. Here, we investigate the alternative hypothesis of using a single mode of interaction for continuous pointing that enables both (1) standard implicit granularity control via an acceleration curve and (2) explicit switching between multiple acceleration curves in an efficient and dynamic way. We evaluate a sample solution that augments standard touchpad accelerated pointer manipulation with multitouch capability, where the choice of acceleration curve dynamically changes depending on the number of fingers in contact with the touchpad. Specifically, users can dynamically switch among three different acceleration curves by using one, two, or three fingers on the touchpad.

  9. A learning curve for solar thermal power

    Science.gov (United States)

    Platzer, Werner J.; Dinter, Frank

    2016-05-01

Photovoltaics started its success story by predicting cost degression as a function of cumulative installed capacity. This so-called learning curve was published and used for predictions first for PV modules; predictions of system cost decreases were developed later. This approach is less sensitive to political decisions and changing market situations than predictions on the time axis. Cost degression due to innovation, scaling effects, improved project management, and standardised procedures, including the search for better sites and the optimization of project size, are learning effects which can only be utilised when projects are actually developed. Therefore a presentation of CAPEX versus cumulative installed capacity is proposed in order to show the possible future advancement of the technology to policy-makers and the market. However, it is difficult to derive a learning curve from the wide range of published cost figures for CSP. A logical cost structure for direct and indirect capital expenditure is needed as the basis for further analysis. Using reference costs derived for typical power plant configurations, predictions of future costs have been made. Levelised cost of electricity for solar thermal power plants should be calculated for individual projects with different capacity factors in various locations only on the basis of that cost structure and the learning curve.

  10. Analysis of the radial distribution curves of partially ordered condensed carbon films

    International Nuclear Information System (INIS)

    Palatnik, L.S.; Derevyanchenko, A.S.; Nechitajlo, A.A.; Stetsenko, A.N.; Gorbenko, N.I.

    1977-01-01

A Fourier analysis of the electron scattering curves has been carried out to determine the short-range order structure of carbon condensates. The intensity curves for carbon films condensed in a vacuum of approximately 10⁻⁶ Torr onto a substrate heated to 600 °C were obtained by diffraction techniques with filtration of the inelastically scattered electron background. The radial distribution curve errors were analyzed and quantified with the aid of a computer to determine the short-range order of the condensed carbon. It has been shown that the carbon films consist of regions measuring approximately 20 Å formed by parallel-packed graphite nets whose azimuthal orientation differs from that in ideal graphite crystals.

  11. Dose - Response Curves for Dicentrics and PCC Rings: Preparedness for Radiological Emergency in Thailand

    International Nuclear Information System (INIS)

    Rungsimaphorn, B.; Rerkamnuaychoke, B.; Sudprasert, W.

    2014-01-01

Establishing in-vitro dose calibration curves is important for the reconstruction of radiation dose in exposed individuals. The aim of this pioneering work in Thailand was to generate dose-response curves using conventional biological dosimetry: the dicentric chromosome assay (DCA) and the premature chromosome condensation (PCC) assay. The peripheral blood lymphocytes were irradiated with ¹³⁷Cs at a dose rate of 0.652 Gy/min to doses of 0.1, 0.25, 0.5, 0.75, 1, 2, 3, 4 and 5 Gy for the DCA technique, and 5, 10, 15, 20 and 25 Gy for the PCC technique. The blood samples were cultured and processed following the standard procedure given by the IAEA with slight modifications. At least 500-1,000 metaphases or 100 dicentrics/PCC rings were analyzed using an automated metaphase finder system. The yield of dicentrics with dose was fitted to a linear quadratic model using the Chromosome Aberration Calculation Software (CABAS, version 2.0), whereas the dose-response curve of PCC rings was fitted to a linear relationship. These curves will be useful for in-vitro dose reconstruction and can support preparedness for radiological emergencies in the country.
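
    Outside dedicated software such as CABAS, fitting and inverting the linear quadratic model reduces to a least-squares fit and a quadratic root. A sketch with invented dicentric yields (not the Thai laboratory's data):

    import numpy as np
    from scipy.optimize import curve_fit

    dose = np.array([0.1, 0.25, 0.5, 0.75, 1.0, 2.0, 3.0, 4.0, 5.0])  # Gy
    yields = np.array([0.002, 0.006, 0.02, 0.04, 0.07,
                       0.22, 0.46, 0.78, 1.18])   # dicentrics per cell

    def lq(d, c, alpha, beta):
        """Linear quadratic calibration: Y = c + alpha*D + beta*D^2."""
        return c + alpha * d + beta * d * d

    (c, alpha, beta), _ = curve_fit(lq, dose, yields, p0=(0.001, 0.02, 0.05))

    def dose_from_yield(y):
        """Dose reconstruction: positive root of beta*D^2 + alpha*D + (c-y)."""
        disc = alpha * alpha - 4.0 * beta * (c - y)
        return (-alpha + np.sqrt(disc)) / (2.0 * beta)

    print(dose_from_yield(0.30))   # estimated absorbed dose, Gy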

  12. Incorporating Nonstationarity into IDF Curves across CONUS from Station Records and Implications

    Science.gov (United States)

    Wang, K.; Lettenmaier, D. P.

    2017-12-01

Intensity-duration-frequency (IDF) curves are widely used for the engineering design of storm-affected structures. Current practice is that IDF curves are based on observed precipitation extremes fit to a stationary probability distribution (e.g., the extreme value family). However, there is increasing evidence of nonstationarity in station records. We apply the Mann-Kendall trend test to over 1000 stations across the CONUS at a 0.05 significance level, and find that about 30% of the stations tested have significant nonstationarity for at least one duration (1-, 2-, 3-, 6-, 12-, 24-, and 48-hours). We fit the stations to a GEV distribution with time-varying location and scale parameters using a Bayesian methodology and compare the fit of stationary versus nonstationary GEV distributions to observed precipitation extremes. Within our fitted nonstationary GEV distributions, we compare distributions with a time-varying location parameter versus distributions with both time-varying location and scale parameters. For distributions with two time-varying parameters, we pay particular attention to instances where location and scale trends have opposing directions. Finally, we use the mathematical framework based on the work of Koutsoyiannis to generate IDF curves based on the fitted GEV distributions and discuss the implications that using time-varying parameters may have on simple scaling relationships. We apply the above methods to evaluate how frequency statistics based on a stationary assumption compare to those that incorporate nonstationarity for both short- and long-term projects. Overall, we find that neglecting nonstationarity can lead to under- or over-estimates (depending on the trend for the given duration and region) of important statistics such as the design storm.
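
    A minimal sketch of such a nonstationary GEV fit, using a linearly time-varying location parameter estimated by maximum likelihood rather than the Bayesian machinery of the paper; the record is synthetic and all values illustrative.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import genextreme

    def nll(params, x, t):
        """Negative log-likelihood of a GEV with mu(t) = mu0 + mu1*t
        (scale and shape held fixed; a time-varying scale is a direct
        extension). scipy's genextreme uses c = -xi relative to the
        usual GEV shape convention."""
        mu0, mu1, log_sigma, xi = params
        return -np.sum(genextreme.logpdf(x, c=-xi, loc=mu0 + mu1 * t,
                                         scale=np.exp(log_sigma)))

    t = np.arange(60)                       # years of record
    x = genextreme.rvs(c=-0.1, loc=20 + 0.08 * t, scale=5,
                       random_state=np.random.default_rng(0))

    res = minimize(nll, x0=(20.0, 0.0, np.log(5.0), 0.1), args=(x, t),
                   method="Nelder-Mead")
    print(f"fitted location trend: {res.x[1]:.3f} mm/yr")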

  13. Gold analysis by the gamma absorption technique

    International Nuclear Information System (INIS)

    Kurtoglu, Arzu; Tugrul, A.B.

    2003-01-01

    Gold (Au) analyses are generally performed using destructive techniques. In this study, the Gamma Absorption Technique has been employed for gold analysis. A series of different gold alloys of known gold content were analysed and a calibration curve was obtained. This curve was then used for the analysis of unknown samples. Gold analyses can be made non-destructively, easily and quickly by the gamma absorption technique. The mass attenuation coefficients of the alloys were measured around the K-shell absorption edge of Au. Theoretical mass attenuation coefficient values were obtained using the WinXCom program and comparison of the experimental results with the theoretical values showed generally good and acceptable agreement
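
    The technique reduces to Beer-Lambert attenuation plus a calibration curve. A sketch assuming a locally linear calibration between the measured attenuation coefficient and the gold mass fraction; every number below is invented for illustration.

    import numpy as np

    au_fraction = np.array([0.375, 0.585, 0.750, 0.916, 1.0])  # 9-24 carat
    mu = np.array([0.42, 0.55, 0.66, 0.77, 0.83])  # measured, cm^-1

    slope, intercept = np.polyfit(au_fraction, mu, 1)

    def gold_content(i0, i, thickness_cm):
        """Non-destructive assay: I = I0*exp(-mu*x) gives the linear
        attenuation coefficient, which the calibration curve maps to
        an Au mass fraction."""
        mu_meas = np.log(i0 / i) / thickness_cm
        return (mu_meas - intercept) / slope

    print(gold_content(i0=10000.0, i=5150.0, thickness_cm=1.0))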

  14. In silico sampling reveals the effect of clustering and shows that the log-normal rank abundance curve is an artefact

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

The impact of clustering on rank abundance, species-individual (S-N) and species-area curves was investigated using a computer programme for in silico sampling. In a rank abundance curve the abundances of species are plotted on log-scale against species sequence. In an S-N curve the number of species

  15. A new family of Fisher-curves estimates Fisher's alpha more accurately

    NARCIS (Netherlands)

    Schulte, R.P.O.; Lantinga, E.A.; Hawkins, M.J.

    2005-01-01

    Fisher's alpha is a satisfactory scale-independent indicator of biodiversity. However, alpha may be underestimated in communities in which the spatial arrangement of individuals is strongly clustered, or in which the total number of species does not tend to infinity. We have extended Fisher's curve
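
    For reference, the classical Fisher log-series relation that such extended curves generalize can be solved numerically for alpha:

    import numpy as np
    from scipy.optimize import brentq

    def fishers_alpha(S, N):
        """Solve S = alpha * ln(1 + N/alpha) for Fisher's alpha, given
        S observed species among N individuals."""
        return brentq(lambda a: a * np.log1p(N / a) - S, 1e-6, 1e6)

    print(fishers_alpha(S=45, N=1200))   # illustrative community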

  16. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  17. Gallium Nitride: A Nano scale Study using Electron Microscopy and Associated Techniques

    International Nuclear Information System (INIS)

    Mohammed Benaissa; Vennegues, Philippe

    2008-01-01

We present a complete nanoscale study of GaN thin films doped with Mg, carried out using TEM and associated techniques such as HREM, CBED, EDX and EELS. It was found that the presence of triangular defects (a few nanometers in size) within the GaN:Mg films was at the origin of unexpected electrical and optical behaviors, such as a decrease in the free hole density at high Mg doping. It is shown that these defects are inversion domains bounded by inversion-domain boundaries. (author)

  18. REVISITING EVIDENCE OF CHAOS IN X-RAY LIGHT CURVES: THE CASE OF GRS 1915+105

    Energy Technology Data Exchange (ETDEWEB)

    Mannattil, Manu; Gupta, Himanshu; Chakraborty, Sagar, E-mail: mmanu@iitk.ac.in, E-mail: hiugupta@iitk.ac.in, E-mail: sagarc@iitk.ac.in [Department of Physics, Indian Institute of Technology Kanpur, Uttar Pradesh 208016 (India)

    2016-12-20

Nonlinear time series analysis has been widely used to search for signatures of low-dimensional chaos in light curves emanating from astrophysical bodies. A particularly popular example is the microquasar GRS 1915+105, whose irregular but systematic X-ray variability has been well studied using data acquired by the Rossi X-ray Timing Explorer. With a view to building simpler models of X-ray variability, attempts have been made to classify the light curves of GRS 1915+105 as chaotic or stochastic. Contrary to some of the earlier suggestions, after careful analysis, we find no evidence for chaos or determinism in any of the GRS 1915+105 classes. The dearth of long and stationary data sets representing all the different variability classes of GRS 1915+105 makes it a poor candidate for analysis using nonlinear time series techniques. We conclude that either very exhaustive data analysis with sufficiently long and stationary light curves should be performed, keeping all the pitfalls of nonlinear time series analysis in mind, or alternative schemes of classifying the light curves should be adopted. The generic limitations of the techniques that we point out in the context of GRS 1915+105 affect all similar investigations of light curves from other astrophysical sources.

  19. Heat rate curve approximation for power plants without data measuring devices

    Energy Technology Data Exchange (ETDEWEB)

Poullikkas, Andreas [Electricity Authority of Cyprus, P.O. Box 24506, 1399 Nicosia (Cyprus)]

    2012-07-01

In this work, a numerical method based on the one-dimensional finite difference technique is proposed for the approximation of the heat rate curve, which can be applied to power plants in which no data acquisition is available. Unlike other methods, in which three or more data points are required for the approximation of the heat rate curve, the proposed method can be applied when heat rate data are available only at the maximum and minimum operating capacities of the power plant. The method is applied to a given power system, for which we calculate the electricity cost using the CAPSE (computer aided power economics) algorithm. Comparisons are made with the least squares method. The results indicate that the proposed method gives accurate results.

  20. Functional methods for arbitrary densities in curved spacetime

    International Nuclear Information System (INIS)

    Basler, M.

    1993-01-01

This paper gives an introduction to the technique of functional differentiation and integration in curved spacetime, applied to examples from quantum field theory. Special attention is drawn to the choice of the functional integral measure. Following a suggestion by Toms, fields are chosen as arbitrary scalar, spinorial or vectorial densities. The technique developed by Toms for a purely quadratic Lagrangian is extended to the calculation of the generating functional with external sources. Included are two examples of interacting theories, a self-interacting scalar field and a Yang-Mills theory. For these theories the complete set of Feynman graphs depending on the weight of the variables is derived. (orig.)

  1. Computerised curve deconvolution of TL/OSL curves using a popular spreadsheet program.

    Science.gov (United States)

    Afouxenidis, D; Polymeris, G S; Tsirliganis, N C; Kitis, G

    2012-05-01

This paper exploits the possibility of using commercial software for thermoluminescence and optically stimulated luminescence curve deconvolution analysis. The widely used software package Microsoft Excel, together with its Solver utility, has been used to perform deconvolution analysis of both experimental and reference glow curves resulting from the GLOw Curve ANalysis INtercomparison (GLOCANIN) project. The simple interface of this programme, combined with the powerful Solver utility, allows the analysis of complex stimulated luminescence curves into their components and the evaluation of the associated luminescence parameters.
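
    The least-squares deconvolution that Solver performs can be sketched directly in code. Below, a first-order glow peak in the Kitis et al. (1998) parametrization is fitted to a synthetic two-peak glow curve; all peak parameters are illustrative, not GLOCANIN reference values.

    import numpy as np
    from scipy.optimize import curve_fit

    K_B = 8.617e-5   # Boltzmann constant, eV/K

    def fo_peak(T, i_m, e, t_m):
        """First-order TL peak (Kitis et al. 1998): peak height i_m,
        activation energy e (eV) and peak temperature t_m (K)."""
        x = e / (K_B * T) * (T - t_m) / t_m
        return i_m * np.exp(1.0 + x
                            - (T / t_m) ** 2 * np.exp(x)
                              * (1.0 - 2.0 * K_B * T / e)
                            - 2.0 * K_B * t_m / e)

    def two_peaks(T, i1, e1, t1, i2, e2, t2):
        return fo_peak(T, i1, e1, t1) + fo_peak(T, i2, e2, t2)

    T = np.linspace(350.0, 550.0, 400)
    rng = np.random.default_rng(1)
    glow = (two_peaks(T, 1.0, 1.1, 420.0, 0.6, 1.4, 480.0)
            + 0.01 * rng.standard_normal(T.size))

    popt, _ = curve_fit(two_peaks, T, glow,
                        p0=(0.9, 1.0, 415.0, 0.5, 1.3, 485.0))
    print(popt)   # recovered (i_m, E, T_m) for each component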

  3. Mass transfer and power characteristics of stirred tank with Rushton and curved blade impeller

    Directory of Open Access Journals (Sweden)

    Thiyam Tamphasana Devi

    2017-04-01

The present work compares the mass transfer coefficient (kLa) and power draw capability of a stirred tank employing a Rushton or a curved blade impeller, using computational fluid dynamics (CFD) techniques for single and double impeller cases. A comparative analysis of different boundary conditions and mass transfer models has been performed to assess their suitability. The predicted local kLa is higher for the curved blade impeller than for the Rushton impeller, whereas the stirred tank with double impellers shows little variation due to the low superficial gas velocity. The predicted global kLa is higher for the curved blade impeller than for the Rushton impeller in both the double and single cases. The curved blade impeller also exhibits higher power draw capability than the Rushton impeller. Overall, the stirred tank with the curved blade impeller gives higher efficiency in both single and double cases than the Rushton turbine.

  4. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    Science.gov (United States)

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and for the difference between the covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and for the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate the finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed methods in a real-world data set.

  5. The estimation of branching curves in the presence of subject-specific random effects.

    Science.gov (United States)

    Elmi, Angelo; Ratcliffe, Sarah J; Guo, Wensheng

    2014-12-20

    Branching curves are a technique for modeling curves that change trajectory at a change (branching) point. Currently, the estimation framework is limited to independent data, and smoothing splines are used for estimation. This article aims to extend the branching curve framework to the longitudinal data setting where the branching point varies by subject. If the branching point is modeled as a random effect, then the longitudinal branching curve framework is a semiparametric nonlinear mixed effects model. Given existing issues with using random effects within a smoothing spline, we express the model as a B-spline based semiparametric nonlinear mixed effects model. Simple, clever smoothness constraints are enforced on the B-splines at the change point. The method is applied to Women's Health data where we model the shape of the labor curve (cervical dilation measured longitudinally) before and after treatment with oxytocin (a labor stimulant). Copyright © 2014 John Wiley & Sons, Ltd.

  6. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

The present article relates in general to the digital demodulation of Binary Frequency Shift Keying (BFSK) signals. The objective of the present research is to obtain a new processing method for demodulating BFSK signals in order to reduce hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blocks in order to obtain a digital receiver for proper demodulation of the received signal. The proposed solution is shown to strongly reduce hardware complexity. This part presents the analytical description of the proposed solution and covers in detail the elements needed for properly configuring the system. A second part presents the implementation of the system in FPGA technology and the simulation results that validate the overall performance.
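
    The matched-filter core of such a receiver is compact. A sketch with an assumed sample rate, baud rate and tone pair (none of these values come from the article), correlating each bit interval against both tones and picking the larger correlation magnitude:

    import numpy as np

    FS, BAUD = 8000, 100        # sample rate (Hz) and symbol rate (assumed)
    F0, F1 = 1000.0, 2000.0     # space/mark tone frequencies (assumed)
    SPB = FS // BAUD            # samples per bit

    def demodulate(signal):
        """Matched-filter BFSK demodulation; complex references make
        the decision insensitive to carrier phase."""
        t = np.arange(SPB) / FS
        refs = [np.exp(-2j * np.pi * f * t) for f in (F0, F1)]
        bits = []
        for k in range(len(signal) // SPB):
            chunk = signal[k * SPB:(k + 1) * SPB]
            e0, e1 = (abs(np.dot(chunk, r)) for r in refs)
            bits.append(int(e1 > e0))
        return bits

    # round trip: modulate a few bits, then demodulate them
    bits_tx = [1, 0, 1, 1, 0]
    t = np.arange(SPB) / FS
    tx = np.concatenate([np.cos(2 * np.pi * (F1 if b else F0) * t)
                         for b in bits_tx])
    assert demodulate(tx) == bits_tx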

  7. ArcCN-Runoff: An ArcGIS tool for generating curve number and runoff maps

    Science.gov (United States)

    Zhan, X.; Huang, M.-L.

    2004-01-01

The development and application of the ArcCN-Runoff tool, an extension of ESRI® ArcGIS software, are reported. This tool can be applied to determine curve numbers and to calculate runoff or infiltration for a rainfall event in a watershed. The implementation of GIS techniques such as dissolving, intersecting, and a curve-number reference table improves efficiency. Technical processing time may be reduced from days, if not weeks, to hours for producing spatially varied curve number and runoff maps. An application example for a watershed in Lyon County and Osage County, Kansas, USA, is presented. © 2004 Elsevier Ltd. All rights reserved.
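
    The runoff calculation behind the tool is the standard SCS curve-number relation, which is easy to sketch; the example CN values are typical handbook figures, not taken from the paper.

    def scs_runoff(p_in, cn):
        """SCS-CN direct runoff (inches) for a storm of p_in inches:
        S = 1000/CN - 10 is the potential maximum retention and the
        customary initial abstraction Ia = 0.2*S is assumed."""
        s = 1000.0 / cn - 10.0
        ia = 0.2 * s
        if p_in <= ia:
            return 0.0
        return (p_in - ia) ** 2 / (p_in + 0.8 * s)

    # a 3-inch storm on pasture (CN ~ 69) vs. an urban lot (CN ~ 92)
    print(scs_runoff(3.0, 69), scs_runoff(3.0, 92))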

  8. Optimization In Searching Daily Rule Curve At Mosul Regulating Reservoir, North Iraq Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Thair M. Al-Taiee

    2013-05-01

To obtain optimal operating rules for storage reservoirs, large numbers of simulation and optimization models, which vary significantly in their mechanisms and applications, have been developed over the past several decades. Rule curves are guidelines for long-term reservoir operation, and an efficient technique is required to find the optimal rule curves that can mitigate water shortage in long-term operation. A Genetic Algorithm (GA) technique - an optimization approach based on the mechanics of natural selection, derived from the theory of natural evolution - was investigated and applied to predict the daily rule curve of the Mosul regulating reservoir in Iraq. Recorded daily inflows, outflows and reservoir water levels for 19 years (1986-1990 and 1994-2007) were used in the developed model for assessing the optimal reservoir operation. The objective function is set to minimize the annual sum of squared deviations from the desired downstream release and the desired storage volume in the reservoir. The decision variables are the releases, storage volume, water level and outlet (demand) from the reservoir. The GA model results agreed well with the actual rule curve and the designed rating curve of the reservoir. The simulated results show that GA-derived policies are promising and competitive and can be effectively used for daily reservoir operation, in addition to rational monthly operation, and for predicting reservoir rating curves.

  9. An evaluation of .06 tapered gutta-percha cones for filling of .06 taper prepared curved root canals.

    Science.gov (United States)

    Gordon, M P J; Love, R M; Chandler, N P

    2005-02-01

To compare the area occupied by gutta-percha, sealer, or voids in standardized .06 tapered prepared simulated curved canals and in mesio-buccal canals of extracted maxillary first molars filled with a single .06 gutta-percha point and sealer or with lateral condensation of multiple .02 gutta-percha points and sealer. Simulated canals in resin blocks with either a 30 degrees curve and radius of 10.5 mm (n = 20) or a 58 degrees curve and 4.7 mm radius (n = 20) and curved mesio-buccal canals of extracted maxillary first molars (n = 20) were prepared using .06 ProFiles in a variable tip crown-down sequence to an apical size 35 at 0.5 mm from the canal terminus or apical foramen. Ten 30 degrees and ten 58 degrees curved resin canals and 10 canals in the extracted teeth group were obturated with .02 taper gutta-percha cones and AH 26 sealer using lateral condensation. The time required to obturate was recorded. The remaining canals were obturated with a single .06 taper gutta-percha cone and AH 26 sealer. Excess gutta-percha was removed from the specimens using heat and the warm mass vertically condensed. Horizontal sections were cut at 0.5, 1.5, 2.5, 4.5, 7.5 and 11.5 mm from the canal terminus or apical foramen. Colour photographs were taken using an Olympus 35 mm camera attached to a stereomicroscope set at ×40 magnification, and then digitized using a flatbed scanner. The cross-sectional area of the canal contents was analysed using Adobe PhotoShop. The percentages of gutta-percha, sealer or voids relative to the total root canal area were derived and the data analysed using the unpaired Student's t-test and the Mann-Whitney U-test. In the 30 degrees curved canals the levels had between 94 and 100% of the area filled with gutta-percha, with no significant difference (P > 0.05) between the lateral condensation and single cone techniques. In the 58 degrees curved canals the levels had 92-99% of the area filled with gutta-percha, with the single cone technique having significantly (P < 0.05) more of the area filled with gutta-percha; in the extracted teeth there was no significant difference (P > 0.05) between the two techniques.

  10. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

To understand the degradation of nuclear materials under irradiation, it is essential to know as much as possible about each phenomenon observed at multiple scales: the micro scale at the atomic level, the macro scale of the structure, and the intermediate level. In this study, for application to meso-scale materials (100 Å - 2 μm), computational technology approaching from the micro and macro scales was developed, including modeling and computer applications using computational science and technology methods. An environment of grid technology for multi-scale calculation was also prepared. The software and the MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  11. Tools and Techniques for Basin-Scale Climate Change Assessment

    Science.gov (United States)

    Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.

    2012-12-01

The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies that explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with the application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov Chain techniques. Resampling can also be conditioned on climate change projections from, e.g., downscaled GCM projections to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. The resulting data are imported directly into the decision model. Different model files can represent infrastructure alternatives and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other

  12. A systematic methodology for creep master curve construction using the stepped isostress method (SSM): a numerical assessment

    Science.gov (United States)

    Miranda Guedes, Rui

    2018-02-01

Long-term creep of viscoelastic materials is experimentally inferred through accelerated testing techniques based on the time-temperature superposition principle (TTSP) or on the time-stress superposition principle (TSSP). According to these principles, a given property measured for short times at a higher temperature or higher stress level remains the same as that obtained for longer times at a lower temperature or lower stress level, except that the curves are shifted parallel to the horizontal axis, matching a master curve. These procedures enable the construction of creep master curves from short-term experimental tests. The Stepped Isostress Method (SSM) is an evolution of the classical TSSP method. The SSM technique achieves a greater reduction in the number of test specimens required to obtain the master curve, since only one specimen is necessary. The classical approach, using creep tests, demands at least one specimen per stress level to produce a set of creep curves upon which TSSP is applied to obtain the master curve. This work proposes an analytical method to process the SSM raw data. The method is validated using numerical simulations to reproduce the SSM tests based on two different viscoelastic models. One model represents the viscoelastic behavior of a graphite/epoxy laminate and the other represents an adhesive based on epoxy resin.

  13. Infrared rotational light curves on Jupiter induced by wave activities and cloud patterns, and implications for brown dwarfs

    Science.gov (United States)

    Ge, Huazhi; Zhang, Xi; Fletcher, Leigh; Orton, Glenn S.; Sinclair, James Andrew; Fernandes, Joshua; Momary, Thomas W.; Warren, Ari; Kasaba, Yasumasa; Sato, Takao M.; Fujiyoshi, Takuya

    2017-10-01

    Many brown dwarfs exhibit infrared rotational light curves with amplitudes varying from a few percent to twenty percent (Artigau et al. 2009, ApJ, 701, 1534; Radigan et al. 2012, ApJ, 750, 105). Recently, it was claimed that weather patterns, especially planetary-scale waves in the belts and cloud spots, are responsible for the light curves and their evolution on brown dwarfs (Apai et al. 2017, Science, 357, 683). Here we present a clear relationship between the direct IR emission maps and light curves of Jupiter at multiple wavelengths, which might be similar to that on cold brown dwarfs. Based on infrared disk maps from Subaru/COMICS and VLT/VISIR, we constructed full maps of Jupiter and rotational light curves at different wavelengths in the thermal infrared. We discovered a strong relationship between the light curves and weather patterns on Jupiter. The light curves also exhibit strong multi-band phase shifts and temporal variations, similar to those detected on brown dwarfs. Together with the spectra from TEXES/IRTF, our observations further provide detailed information on the spatial variations of temperature, ammonia clouds and aerosols in the troposphere of Jupiter (Fletcher et al. 2016, Icarus, 278, 128) and their influence on the shapes of the light curves. We conclude that wave activities in Jupiter's belts (Fletcher et al. 2017, GRL, 44, 7140), cloud holes, and long-lived vortices such as the Great Red Spot and ovals control the shapes of IR light curves and multi-wavelength phase shifts on Jupiter. Our finding supports the hypothesis that observed light curves on brown dwarfs are induced by planetary-scale waves and cloud spots.
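
    The step from a longitude-resolved brightness map to a rotational light curve can be illustrated simply: at each rotational phase, sum the map brightness over the visible hemisphere weighted by foreshortening. The sketch below uses a synthetic band-plus-spot map, not the Subaru/COMICS or VLT/VISIR data, and ignores limb darkening and viewing geometry for brevity.

```python
import numpy as np

# Synthetic brightness map I(longitude): uniform band plus one bright "spot".
lon = np.radians(np.arange(360.0))
Imap = np.ones_like(lon)
Imap += 0.3 * np.exp(-0.5 * ((np.degrees(lon) - 200.0) / 15.0) ** 2)

def light_curve(Imap, n_phases=90):
    """Disk-integrated flux vs. rotational phase for an equatorial strip."""
    phases = np.linspace(0.0, 2.0 * np.pi, n_phases, endpoint=False)
    flux = []
    for p in phases:
        mu = np.cos(lon - p)          # foreshortening of each longitude
        visible = mu > 0              # hemisphere facing the observer
        flux.append(np.sum(Imap[visible] * mu[visible]))
    return phases, np.array(flux) / np.mean(flux)

phases, flux = light_curve(Imap)
amplitude = 100.0 * (flux.max() - flux.min()) / 2.0
print(f"rotational light-curve amplitude ~ {amplitude:.1f}% about the mean")
```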

  14. Optimization of curved drift tubes for ultraviolet-ion mobility spectrometry

    Science.gov (United States)

    Ni, Kai; Ou, Guangli; Zhang, Xiaoguo; Yu, Zhou; Yu, Quan; Qian, Xiang; Wang, Xiaohao

    2015-08-01

    Ion mobility spectrometry (IMS) is a key trace detection technique for toxic pollutants and explosives in the atmosphere. Ultraviolet (UV) photoionization is widely used as an ionization source for IMS owing to its high selectivity and non-radioactivity. However, UV-IMS suffers from the problem that UV rays launched into the drift tube cause secondary ionization and photoelectric effects at the Faraday disk. Air is therefore often used as the working gas to reduce the effective distance of the UV rays, but this limits the application areas of UV-IMS. In this paper, we propose a new structure of curved drift tube, which avoids abnormally incident UV rays. Furthermore, using a curved drift tube may increase the length of the drift tube and thus improve the resolution of UV-IMS, according to previous research. We studied the homogeneity of the electric field in the curved drift tube, which determines the performance of UV-IMS. Numerical simulation of the electric field in the curved drift tube was conducted with SIMION. In addition, a modeling method and a homogeneity standard for the electric field are also presented. The influences of key parameters, including the radius of gyration, the gap between electrodes, and the inner diameter of the curved drift tube, on the homogeneity of the electric field were investigated, and some useful laws were summarized. Finally, an optimized curved drift tube was designed to achieve a homogeneous drift electric field. In more than 98.75% of the region inside the curved drift tube, the fluctuation of the electric field strength along the radial direction is less than 0.2% of that along the axial direction.

  15. Evaluation of the a.c. potential drop method to determine J-crack resistance curves for a pressure vessel steel

    International Nuclear Information System (INIS)

    Gibson, G.P.

    1989-01-01

    An evaluation has been carried out of the a.c. potential drop technique for determining J-crack growth resistance curves for a pressure vessel steel. The technique involves passing an alternating current through the specimen and relating the changes in the potential drop across the crack mouth to changes in crack length occurring during the test. The factors investigated were the current and voltage probe positions, the a.c. frequency and the test temperature. In addition, by altering the heat treatment of the material, J-crack resistance curves were obtained under both contained and non-contained yielding conditions. In all situations, accurate J-R curves could be determined. (author)
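
    The data reduction the abstract describes, converting potential-drop readings into crack lengths and pairing them with J, can be hedged into a short sketch. The paper's actual calibration is not given here; the code assumes the simplest common first-order relation, crack length proportional to the normalized potential drop anchored at the known initial crack length. All readings and numbers are illustrative.

```python
import numpy as np

def crack_length_from_pd(V, V0, a0):
    """First-order PD calibration (an assumption, not the paper's relation):
    crack length scales linearly with the normalized potential drop."""
    return a0 * np.asarray(V) / V0

# Illustrative test record: PD readings and J values at the same instants.
V  = np.array([1.00, 1.02, 1.06, 1.12, 1.20])   # a.c. potential drop (a.u.)
J  = np.array([ 50., 120., 210., 310., 420.])   # J-integral (kJ/m^2), given
a0 = 25.0                                       # initial crack length (mm)

a  = crack_length_from_pd(V, V[0], a0)
da = a - a0                                      # crack extension (mm)

# The J-R curve is simply J plotted against crack extension.
for dai, Ji in zip(da, J):
    print(f"da = {dai:5.2f} mm   J = {Ji:6.1f} kJ/m^2")
```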

  16. Renormalization Group scale-setting in astrophysical systems

    Science.gov (United States)

    Domazet, Silvije; Štefančić, Hrvoje

    2011-09-01

    A more general scale-setting procedure for General Relativity with Renormalization Group corrections is proposed. Theoretical aspects of the scale-setting procedure and the interpretation of the Renormalization Group running scale are discussed. The procedure is elaborated for several highly symmetric systems with matter in the form of an ideal fluid and for two models of running of the Newton coupling and the cosmological term. For a static spherically symmetric system with the matter obeying the polytropic equation of state the running scale-setting is performed analytically. The obtained result for the running scale matches the Ansatz introduced in a recent paper by Rodrigues, Letelier and Shapiro which provides an excellent explanation of rotation curves for a number of galaxies. A systematic explanation of the galaxy rotation curves using the scale-setting procedure introduced in this Letter is identified as an important future goal.

  17. Renormalization Group scale-setting in astrophysical systems

    International Nuclear Information System (INIS)

    Domazet, Silvije; Stefancic, Hrvoje

    2011-01-01

    A more general scale-setting procedure for General Relativity with Renormalization Group corrections is proposed. Theoretical aspects of the scale-setting procedure and the interpretation of the Renormalization Group running scale are discussed. The procedure is elaborated for several highly symmetric systems with matter in the form of an ideal fluid and for two models of running of the Newton coupling and the cosmological term. For a static spherically symmetric system with the matter obeying the polytropic equation of state the running scale-setting is performed analytically. The obtained result for the running scale matches the Ansatz introduced in a recent paper by Rodrigues, Letelier and Shapiro which provides an excellent explanation of rotation curves for a number of galaxies. A systematic explanation of the galaxy rotation curves using the scale-setting procedure introduced in this Letter is identified as an important future goal.

  18. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  19. A Literature-Based Analysis of the Learning Curves of Laparoscopic Radical Prostatectomy

    Directory of Open Access Journals (Sweden)

    Daniel W. Good

    2014-05-01

    Full Text Available There is a trend towards increased adoption of minimally invasive techniques of radical prostatectomy (RP) – laparoscopic (LRP) and robotic-assisted (RARP) – over the traditional open radical retropubic prostatectomy (ORP) popularised by Partin et al. Recently there has been a dramatic expansion in the rates of RARP being performed, and many early reports have postulated that the learning curve for RARP is shorter than for LRP. The aim of this study was to review the literature and analyse the length of the LRP learning curves for the various outcome measures: perioperative, oncologic, and functional outcomes. A broad search of the literature was performed in November 2013 using the PubMed database. Only studies of real patients from 2004 until 2013 were included; those on simulators were excluded. In total, 239 studies were identified, of which 13 were included. The learning curve is a heterogeneous entity, depending entirely on the criteria used to define it. There is evidence of multiple learning curves; however, their length depends on the definitions used by the authors. Few studies use the more rigorous definition of plateauing of the curve. The perioperative learning curve takes approximately 150-200 cases to plateau, the oncologic curve approximately 200 cases, and the functional learning curve up to 700 cases (700 for potency, 200 cases for continence). In this review, we have analysed the literature with respect to the learning curve for LRP. It is clear that the learning curve is long. This necessitates centralising LRP to high-volume centres so that surgeons, trainees, and patients are able to utilise the benefits of LRP.

  20. A Simple Technique for Shoulder Arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Berna-Serna, J.D.; Redondo, M.V.; Martinez, F.; Reus, M.; Alonso, J.; Parrilla, A.; Campos, P.A. [Virgen de la Arrixaca Univ. Hospital, El Palmar, Murcia (Spain). Dept. of Radiology

    2006-09-15

    Purpose: To present a systematic approach to teaching a technique for arthrography of the shoulder. Using an adhesive marker-plate with radiopaque coordinates, precise sites for puncture can be identified and the need for fluoroscopic guidance obviated. Material and Methods: Forty-six glenohumeral arthrograms were performed in 45 patients; in 1 case both shoulders were examined. The stages of the technique are described in detail, as are the fundamental aspects of achieving an effective glenohumeral injection. Pain intensity was measured in all patients using a verbal description scale. Results: Shoulder arthrography was successful in all cases. The average time taken for the procedure was 7 min, with no difference in the respective times required by an experienced radiologist and a resident. The procedure was well tolerated by most patients, with slight discomfort being observed in a very few cases. Conclusion: The arthrographic technique used in this study is simple, safe, rapid, and reproducible, and has the advantage of precise localization of the site for puncture without need for fluoroscopic guidance. The procedure described in this study can be of help in teaching residents and can reduce the learning curve for radiologists with no experience in arthrographic methods. It also reduces the time of exposure to fluoroscopy. Keywords: Arthrography, joint, shoulder.

  1. Micro-Scale Thermoacoustics

    Science.gov (United States)

    Offner, Avshalom; Ramon, Guy Z.

    2016-11-01

    Thermoacoustic phenomena - the conversion of heat to acoustic oscillations - may be harnessed for the construction of reliable, practically maintenance-free engines and heat pumps. Specifically, miniaturization of thermoacoustic devices holds great promise for cooling of micro-electronic components. However, as device size is pushed down to the micrometer scale, non-negligible slip effects are expected at the solid-fluid interface. Accordingly, new theoretical models for thermoacoustic engines and heat pumps were derived, accounting for a slip boundary condition. These models are essential for the design of micro-scale thermoacoustic devices that will operate at ultrasonic frequencies. Stability curves for engines - representing the onset of self-sustained oscillations - were calculated with both no-slip and slip boundary conditions, revealing improved performance of engines with slip in the resonance frequency range applicable to micro-scale devices. Curves of the maximum achievable temperature difference for thermoacoustic heat pumps were calculated, revealing the negative effect of slip on the ability to pump heat up a temperature gradient. The authors acknowledge the support from the Nancy and Stephen Grand Technion Energy Program (GTEP).

  2. Experimental study of curved guide tubes for pellet injection

    International Nuclear Information System (INIS)

    Combs, S.K.; Baylor, L.R.; Foust, C.R.; Gouge, M.J.; Jernigan, T.C.; Milora, S.L.

    1997-01-01

    The use of curved guide tubes for transporting frozen hydrogen pellets offers great flexibility for pellet injection into plasma devices. While this technique has been previously employed, an increased interest in its applicability has been generated with the recent ASDEX Upgrade experimental data for magnetic high-field side (HFS) pellet injection. In these innovative experiments, the pellet penetration appeared to be significantly deeper than for the standard magnetic low-field side injection scheme, along with corresponding greater fueling efficiencies. Thus, some of the major experimental fusion devices are planning experiments with HFS pellet injection. Because of the complex geometries of experimental fusion devices, installations with multiple curved guide tube sections will be required for HFS pellet injection. To more thoroughly understand and document the capability of curved guide tubes, an experimental study is under way at the Oak Ridge National Laboratory (ORNL). In particular, configurations and pellet parameters applicable for the DIII-D tokamak and the International Thermonuclear Experimental Reactor (ITER) were simulated in laboratory experiments. Initial test results with nominal 2.7- and 10-mm-diam deuterium pellets are presented and discussed

  3. Bragg Curve Spectroscopy

    International Nuclear Information System (INIS)

    Gruhn, C.R.

    1981-05-01

    An alternative utilization is presented for the gaseous ionization chamber in the detection of energetic heavy ions, called Bragg Curve Spectroscopy (BCS). Conceptually, BCS involves using the maximum information available from the Bragg curve of the stopping heavy ion (HI) for purposes of identifying the particle and measuring its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve the following are determined: the range, from the length of the track; the total energy, from the integral of the specific ionization over the track; dE/dx, from the specific ionization at the beginning of the track; and the Bragg peak, from the maximum of the specific ionization of the HI. This last signal measures the atomic number, Z, of the HI unambiguously.
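
    The four quantities extracted from the Bragg curve reduce to elementary numerics on a digitized specific-ionization profile. The sketch below uses a toy profile, not BCS detector data, and computes the range (track length), total energy (integral), entrance dE/dx (initial value), and Bragg peak (maximum), exactly the four signals named above.

```python
import numpy as np

# Toy specific-ionization (dE/dx) profile versus depth along the track.
depth = np.linspace(0.0, 50.0, 500)                     # mm
dEdx = 2.0 + 0.05 * depth + 8.0 * np.exp(-((depth - 42.0) / 2.5) ** 2)
dEdx[depth > 45.0] = 0.0                                # ion has stopped

track = dEdx > 0
range_mm   = depth[track].max()                 # range: length of the track
energy     = np.trapz(dEdx, depth)              # total energy: integral of dE/dx
entrance   = dEdx[0]                            # dE/dx at the start of the track
bragg_peak = dEdx.max()                         # Bragg peak: maximum ionization

print(f"range      = {range_mm:.1f} mm")
print(f"energy     = {energy:.1f} (a.u.)")
print(f"dE/dx(0)   = {entrance:.2f} (a.u.)")
print(f"Bragg peak = {bragg_peak:.2f} (a.u.)")
```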

  4. Refined tropical curve counts and canonical bases for quantum cluster algebras

    DEFF Research Database (Denmark)

    Mandel, Travis

    We express the (quantizations of the) Gross-Hacking-Keel-Kontsevich canonical bases for cluster algebras in terms of certain (Block-Göttsche) weighted counts of tropical curves. In the process, we obtain via scattering diagram techniques a new invariance result for these Block-Göttsche counts.

  5. How to obtain J-R curve from one test on one sample

    International Nuclear Information System (INIS)

    Roche, Roland.

    1981-01-01

    The operational definition of the J concept is first examined. It is then shown that conventional methods for the experimental determination of J values are based on the following assumption: if the load-deflection curve is known for one value of the crack length, it is possible to know the load-deflection curve for any value of the crack length. This assumption is generalized with the help of scale functions, and formulas giving J are deduced. Attention is given to the effect of crack propagation on J values. The same assumption is used to extract the crack length from the load-deflection curve. As the real crack lengths are known before propagation occurs and at the end of the test, it is possible to achieve a good calibration of the material characteristic. [fr]

  6. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    Science.gov (United States)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

    Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  7. Learning Curve? Which One?

    Directory of Open Access Journals (Sweden)

    Paulo Prochno

    2004-07-01

    Full Text Available Learning curves have been studied for a long time. These studies provided strong support for the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999 for a comprehensive review of learning curve studies). But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000), but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I bring a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.

  8. Fitness of the analysis method of magnesium in drinking water using atomic absorption with quadratic calibration curve

    International Nuclear Information System (INIS)

    Perez-Lopez, Esteban

    2014-01-01

    Quantitative chemical analysis is of importance in research, as well as in areas such as quality control, the sale of analytical services, and other fields of interest. Some instrumental analysis methods for quantification with a linear calibration curve have limitations, because of the short linear dynamic range of the analyte or, sometimes, of the technique itself. This motivated investigating the suitability of quadratic calibration curves for analytical quantification, with the aim of demonstrating that they are a valid calculation model for chemical analysis instruments. The analysis method is based on the technique of atomic absorption spectroscopy, in particular the determination of magnesium in a drinking-water sample from the Tacares sector, north of Grecia. A nonlinear calibration curve was used, specifically a curve with quadratic behavior, and the results were compared with those obtained for the same analysis with a linear calibration curve. The results showed that the methodology is valid for this determination with full confidence, since the concentrations were very similar and, according to the hypothesis tests used, can be considered equal. (author) [es]
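
    The calculation behind a quadratic calibration is easy to reproduce. The sketch below fits absorbance versus concentration for hypothetical magnesium standards with a second-degree polynomial and inverts it for an unknown sample by taking the root that lies inside the calibration range; all standard values are invented, not the paper's data.

```python
import numpy as np

# Hypothetical Mg calibration standards: concentration (mg/L) vs. absorbance.
conc = np.array([0.0, 0.1, 0.2, 0.4, 0.6, 0.8])
absb = np.array([0.002, 0.105, 0.198, 0.370, 0.515, 0.640])  # curvature at high c

# Quadratic calibration: A = a*c^2 + b*c + d
a, b, d = np.polyfit(conc, absb, deg=2)

def concentration(A):
    """Invert the quadratic, keeping the root inside the calibration range."""
    roots = np.roots([a, b, d - A])
    real = roots[np.isreal(roots)].real
    ok = real[(real >= conc.min()) & (real <= conc.max())]
    return ok[0]

print(f"fit: A = {a:.4f} c^2 + {b:.4f} c + {d:.4f}")
print(f"sample with A = 0.300 -> c = {concentration(0.300):.3f} mg/L")
```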

  9. Observation of mass flux through hcp 4He off the melting curve

    International Nuclear Information System (INIS)

    Ray, M W; Hallock, R B

    2009-01-01

    Solid hcp 4He has been created off the melting curve using two growth techniques. In an effort to observe the flow of 4He through the solid, rather than squeezing the solid directly, the experimental apparatus allows injection of 4He atoms from superfluid in porous Vycor directly into the solid. We will describe the apparatus and our observations. Evidence for the transport of mass through a sample cell filled with hcp solid 4He off the melting curve is found. The temperature and pressure dependence of this behavior will be presented.

  10. Impact of the learning curve on outcome after transcatheter mitral valve repair

    DEFF Research Database (Denmark)

    Ledwoch, Jakob; Franke, Jennifer; Baldus, Stephan

    2014-01-01

    AIMS: This analysis from the German Mitral Valve Registry investigates the impact of the learning curve with the MitraClip(®) technique on procedural success and complications. METHODS AND RESULTS: Consecutive patients treated since 2009 in centers that performed more than 50 transcatheter mitral ... not decrease over time. CONCLUSION: A learning curve using the MitraClip(®) device does not appear to significantly affect acute MR reduction, hospital and 30-day mortality. Most likely, the proctor system leads to already high initial procedure success and relatively short procedure time ...

  11. Playing off the curve - testing quantitative predictions of skill acquisition theories in development of chess performance.

    Science.gov (United States)

    Gaschler, Robert; Progscha, Johanna; Smallbone, Kieran; Ram, Nilam; Bilalić, Merim

    2014-01-01

    Learning curves have been proposed as an adequate description of learning processes, no matter whether the processes manifest within minutes or across years. Different mechanisms underlying skill acquisition can lead to differences in the shape of learning curves. In the current study, we analyze the tournament performance data of 1383 chess players who began competing at a young age and played tournaments for at least 10 years. We analyze the performance development with the goal of testing the adequacy of learning curves, and the skill acquisition theories they are based on, for describing and predicting expertise acquisition. On the one hand, we show that the skill acquisition theories implying a negative exponential learning curve do a better job of both describing early performance gains and predicting later trajectories of chess performance than those theories implying a power-function learning curve. On the other hand, the learning curves of a large proportion of players show systematic qualitative deviations from the predictions of either type of skill acquisition theory. While skill acquisition theories predict larger performance gains in early years and smaller gains in later years, a substantial number of players begin to show substantial improvements only after a delay of several years (and no improvement in the first years), deviations not fully accounted for by quantity of practice. The current work adds to the debate on how learning processes on a small time scale combine into large-scale changes.
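
    The model comparison at the heart of such studies is a standard nonlinear fit. The code below fits both a three-parameter power function and a negative exponential to a synthetic performance series and compares residual sums of squares; the functional forms follow the common skill-acquisition literature, while the data and parameters are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law(t, a, b, c):
    # Power-function learning curve: fast early gains, long slow tail.
    return a - b * t ** (-c)

def neg_exponential(t, a, b, c):
    # Negative exponential: gains approach the asymptote at a constant rate.
    return a - b * np.exp(-c * t)

t = np.arange(1, 121)                        # months of tournament play
rng = np.random.default_rng(0)
rating = 2000 - 600 * np.exp(-0.05 * t) + rng.normal(0, 15, t.size)

for name, f in [("power", power_law), ("exponential", neg_exponential)]:
    p, _ = curve_fit(f, t, rating, p0=(2000, 600, 0.1), maxfev=10000)
    rss = np.sum((rating - f(t, *p)) ** 2)
    print(f"{name:12s} RSS = {rss:9.0f}   params = {np.round(p, 3)}")
```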

  12. Sensitivity of the probability of failure to probability of detection curve regions

    International Nuclear Information System (INIS)

    Garza, J.; Millwater, H.

    2016-01-01

    Non-destructive inspection (NDI) techniques have been shown to play a vital role in fracture control plans, structural health monitoring, and ensuring availability and reliability of piping, pressure vessels, mechanical and aerospace equipment. Probabilistic fatigue simulations are often used in order to determine the efficacy of an inspection procedure with the NDI method modeled as a probability of detection (POD) curve. These simulations can be used to determine the most advantageous NDI method for a given application. As an aid to this process, a first order sensitivity method of the probability-of-failure (POF) with respect to regions of the POD curve (lower tail, middle region, right tail) is developed and presented here. The sensitivity method computes the partial derivative of the POF with respect to a change in each region of a POD or multiple POD curves. The sensitivities are computed at no cost by reusing the samples from an existing Monte Carlo (MC) analysis. A numerical example is presented considering single and multiple inspections. - Highlights: • Sensitivities of probability-of-failure to a region of probability-of-detection curve. • The sensitivities are computed with negligible cost. • Sensitivities identify the important region of a POD curve. • Sensitivities can be used as a guide to selecting the optimal POD curve.
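
    The sample-reuse idea can be demonstrated on a toy fatigue problem: draw the Monte Carlo samples once, and note that the first-order derivative of the POF with respect to a uniform POD perturbation over a crack-size region is just a sample average over the same draws. The lognormal POD shape, the growth model, and every number below are invented stand-ins, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 200_000

# Toy fatigue problem: lognormal initial crack size, exponential growth.
a0 = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # mm
a_insp = a0 * np.exp(0.8)        # crack size at the single inspection
a_end  = a0 * np.exp(2.0)        # crack size at end of life
a_crit = 8.0                     # failure threshold (mm)

def pod(a):
    # Assumed lognormal POD curve (illustrative parameters).
    return norm.cdf((np.log(a) - np.log(1.5)) / 0.5)

fails   = a_end > a_crit                      # would fail without detection
miss    = 1.0 - pod(a_insp)                   # probability inspection misses it
pof     = np.mean(fails * miss)

# Region sensitivity, reusing the same samples: d(POF)/d(eps) for a POD curve
# raised by eps only where a_insp falls inside the given size region.
regions = {"lower tail": (0.0, 1.0), "middle": (1.0, 3.0), "right tail": (3.0, np.inf)}
print(f"POF = {pof:.3e}")
for name, (lo, hi) in regions.items():
    in_region = (a_insp >= lo) & (a_insp < hi)
    dpof = -np.mean(fails * in_region)        # first-order derivative, no new samples
    print(f"d(POF)/d(POD) over {name:10s} = {dpof:.3e}")
```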

  13. Contractibility of curves

    Directory of Open Access Journals (Sweden)

    Janusz Charatonik

    1991-11-01

    Full Text Available Results concerning contractibility of curves (equivalently: of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.

  14. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. The number of punches that can normally be obtained from a single specimen card is often, however, insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique for performing large-scale SNP genotyping. We applied a whole-genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that, using the improved technique, it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping, because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole-genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  15. Roc curves for continuous data

    CERN Document Server

    Krzanowski, Wojtek J

    2009-01-01

    Since ROC curves have become ubiquitous in many application areas, the various advances have been scattered across disparate articles and texts. ROC Curves for Continuous Data is the first book solely devoted to the subject, bringing together all the relevant material to provide a clear understanding of how to analyze ROC curves. The fundamental theory of ROC curves: The book first discusses the relationship between the ROC curve and numerous performance measures and then extends the theory into practice by describing how ROC curves are estimated. Further building on the theory, the authors prese...
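
    For continuous scores, the empirical ROC curve itself is a short computation: sweep a threshold over the observed scores and record the true- and false-positive rates at each value. A minimal sketch on synthetic normal scores (the binormal example is ours, not taken from the book):

```python
import numpy as np

def empirical_roc(scores_pos, scores_neg):
    """Empirical ROC: sweep a threshold over all observed continuous scores."""
    thresholds = np.sort(np.concatenate([scores_pos, scores_neg]))[::-1]
    tpr = np.array([(scores_pos >= t).mean() for t in thresholds])
    fpr = np.array([(scores_neg >= t).mean() for t in thresholds])
    return fpr, tpr

rng = np.random.default_rng(6)
pos = rng.normal(1.0, 1.0, 500)     # scores of truly positive cases
neg = rng.normal(0.0, 1.0, 500)     # scores of truly negative cases

fpr, tpr = empirical_roc(pos, neg)
auc = np.trapz(tpr, fpr)            # area under the empirical curve
print(f"empirical AUC = {auc:.3f}") # ~0.76 for unit-variance normals 1 sigma apart
```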

  16. Preparation of Kepler light curves for asteroseismic analyses

    DEFF Research Database (Denmark)

    García, R.A.; Hekker, Saskia; Stello, Dennis

    2011-01-01

    The Kepler mission is providing photometric data of exquisite quality for the asteroseismic study of different classes of pulsating stars. These analyses place particular demands on the pre-processing of the data, over a range of time-scales from minutes to months. Here, we describe processing procedures developed by the Kepler Asteroseismic Science Consortium to prepare light curves that are optimized for the asteroseismic study of solar-like oscillating stars, in which outliers, jumps and drifts are corrected.
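
    A minimal version of this kind of pre-processing (outlier rejection, jump correction, drift removal) fits in a few lines. The sketch below is emphatically not the KASC pipeline: it sigma-clips point outliers with a robust MAD estimate, levels a discontinuity at a known cadence index, and removes a slow drift with a low-order polynomial, all on synthetic data.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(3000) * (29.4 / 60 / 24)            # long-cadence days (approx.)
flux = 1.0 + 2e-4 * np.sin(2 * np.pi * t / 3.0)   # toy stellar signal
flux += 5e-5 * rng.normal(size=t.size)            # noise
flux += 1e-3 * (t / t.max())                      # slow instrumental drift
flux[1500:] += 5e-4                               # a jump (e.g., after a safe mode)
flux[rng.integers(0, t.size, 10)] += 5e-3         # point outliers

# 1. Outliers: flag points deviating > 5 sigma from the median (robust MAD sigma).
med = np.median(flux)
sig = 1.4826 * np.median(np.abs(flux - med))
good = np.abs(flux - med) < 5 * sig

# 2. Jump: align segment means on both sides of a known discontinuity.
jump = np.mean(flux[1500:2000]) - np.mean(flux[1000:1500])
flux[1500:] -= jump

# 3. Drift: subtract a low-order polynomial, keeping the oscillation signal.
coeff = np.polyfit(t[good], flux[good], deg=2)
flux_corr = flux - np.polyval(coeff, t) + 1.0

print(f"kept {good.sum()} of {t.size} cadences after clipping")
```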

  17. Evaluation of the glow curves of a new glass matrix

    International Nuclear Information System (INIS)

    Oliveira, Nathália S.; Souza, Samara P.; Ferreira, Pâmela Z.; Dantas, Noelio O.; Silva, Anielle C.A.; Neves, Lucio P.; Perini, Ana P.; Carrera, Betzabel N.S.; Watanabe, Shigueo

    2017-01-01

    Thermoluminescence is a dosimetric technique which may be used for personal, clinical, environmental and high-dose dosimetry. In this work a new glass matrix, with nominal composition 20Li2CO3·10Al2O3·25BaO·45B2O3 (mol%), was studied by the thermoluminescence technique. The glow curves were analyzed after irradiation of this glass matrix with high doses. The results showed that this new glass matrix has a glow peak at 260 °C, which is ideal for dosimetry applications. (author)

  18. Theoretical study of melting curves on Ta, Mo, and W at high pressures

    Energy Technology Data Exchange (ETDEWEB)

    Xi Feng [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang (China)], E-mail: hawk_0816@yahoo.com.cn; Cai Lingcang [Laboratory for Shock Wave and Detonation Physics Research, Institute of Fluid Physics, P.O. Box 919-102, 621900 Mianyang (China)

    2008-06-01

    The melting curves of tantalum (Ta), molybdenum (Mo), and tungsten (W) are calculated using a dislocation-mediated melting model. The calculated melting curves are in good agreement with shock-wave data, and partially in agreement with wire explosion and piston-cylinder data, but show large discrepancies with diamond-anvil cell (DAC) data. We propose that the melting mechanism caused by shock-wave and laser-heated DAC techniques are probably different, and that a systematic difference exists in the two melting processes.

  19. Statistical determination of significant curved I-girder bridge seismic response parameters

    Science.gov (United States)

    Seo, Junwon

    2013-06-01

    Curved steel bridges are commonly used at interchanges in transportation networks and more of these structures continue to be designed and built in the United States. Though the use of these bridges continues to increase in locations that experience high seismicity, the effects of curvature and other parameters on their seismic behaviors have been neglected in current risk assessment tools. These tools can evaluate the seismic vulnerability of a transportation network using fragility curves. One critical component of fragility curve development for curved steel bridges is the completion of sensitivity analyses that help identify influential parameters related to their seismic response. In this study, an accessible inventory of existing curved steel girder bridges located primarily in the Mid-Atlantic United States (MAUS) was used to establish statistical characteristics used as inputs for a seismic sensitivity study. Critical seismic response quantities were captured using 3D nonlinear finite element models. Influential parameters from these quantities were identified using statistical tools that incorporate experimental Plackett-Burman Design (PBD), which included Pareto optimal plots and prediction profiler techniques. The findings revealed that the potential variation in the influential parameters included number of spans, radius of curvature, maximum span length, girder spacing, and cross-frame spacing. These parameters showed varying levels of influence on the critical bridge response.

  20. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...
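
    The "hardening exponent" and "strength coefficient" mentioned among the captioned properties are the parameters of the usual Hollomon power-law fit to the plastic portion of a stress-strain curve, sigma = K * eps_p**n, which becomes a straight line in log-log space. A minimal fit sketch with invented data:

```python
import numpy as np

# Invented true-stress / plastic-strain pairs from a tension test.
eps_p = np.array([0.002, 0.005, 0.01, 0.02, 0.05, 0.10])
sigma = np.array([310., 345., 375., 410., 465., 510.])   # MPa

# Hollomon power law sigma = K * eps_p**n is linear in log-log coordinates.
n, logK = np.polyfit(np.log(eps_p), np.log(sigma), 1)
K = np.exp(logK)
print(f"hardening exponent n = {n:.3f}, strength coefficient K = {K:.0f} MPa")
```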

  1. Timescale stretch parameterization of Type Ia supernova B-band light curves

    International Nuclear Information System (INIS)

    Goldhaber, G.; Groom, D.E.; Kim, A.; Aldering, G.; Astier, P.; Conley, A.; Deustua, S.E.; Ellis, R.; Fabbro, S.; Fruchter, A.S.; Goobar, A.; Hook, I.; Irwin, M.; Kim, M.; Knop, R.A.; Lidman, C.; McMahon, R.; Nugent, P.E.; Pain, R.; Panagia, N.; Pennypacker, C.R.; Perlmutter, S.; Ruiz-Lapuente, P.; Schaefer, B.; Walton, N.A.; York, T.

    2001-01-01

    R-band intensity measurements along the light curves of Type Ia supernovae discovered by the Supernova Cosmology Project (SCP) are fitted in brightness to templates allowing a free parameter, the time-axis width factor w ≡ s(1 + z). The data points are then individually aligned along the time axis, normalized and K-corrected back to the rest frame, after which the nearly 1300 normalized intensity measurements are found to lie on a well-determined common rest-frame B-band curve, which we call the ''composite curve.'' The same procedure is applied to 18 low-redshift Calan/Tololo SNe with z < 0.11; these nearly 300 B-band photometry points are found to lie on the composite curve equally well. The SCP search technique produces several measurements before maximum light for each supernova. We demonstrate that the linear stretch factor, s, which parameterizes the light-curve timescale, appears independent of z and applies equally well to the declining and rising parts of the light curve. In fact, the B-band template that best fits this composite curve fits the individual supernova photometry data when stretched by a factor s with χ²/DoF ∼ 1, thus as well as any parameterization can, given the current data sets. The measurement of the date of explosion, however, is model dependent and not tightly constrained by the current data. We also demonstrate the (1 + z) light-curve time-axis broadening expected from cosmological expansion. This argues strongly against alternative explanations, such as tired light, for the redshift of distant objects.
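
    The stretch parameterization is easy to demonstrate: the model for one supernova is the rest-frame template evaluated at t/s, and s follows from chi-square minimization. The template shape and photometry below are synthetic stand-ins, not SCP data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def template(t):
    """Toy rest-frame B-band template, normalized to 1 at maximum (t = 0 days)."""
    rise = np.exp(-0.5 * (t / 8.0) ** 2)
    fall = np.exp(t * np.log(0.5) / 20.0)        # roughly half light in 20 days
    return np.where(t < 0, rise, fall)

# Synthetic photometry of one supernova with true stretch s = 1.2.
rng = np.random.default_rng(3)
t_obs = np.linspace(-12, 40, 25)                 # rest-frame epochs (days)
sigma = 0.02
f_obs = template(t_obs / 1.2) + rng.normal(0, sigma, t_obs.size)

def chi2(s):
    return np.sum(((f_obs - template(t_obs / s)) / sigma) ** 2)

fit = minimize_scalar(chi2, bounds=(0.6, 1.6), method="bounded")
print(f"fitted stretch s = {fit.x:.3f}, chi2/DoF = {chi2(fit.x)/(t_obs.size-1):.2f}")
```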

  2. Numerical Characterization of Piezoceramics Using Resonance Curves

    Science.gov (United States)

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875

  3. Numerical Characterization of Piezoceramics Using Resonance Curves

    Directory of Open Access Journals (Sweden)

    Nicolás Pérez

    2016-01-01

    Full Text Available Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM, to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.

  4. Tornado-Shaped Curves

    Science.gov (United States)

    Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio

    2017-01-01

    In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
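
    One natural choice (an illustration; the paper's exact equations are not reproduced here) is a conical helix whose radius grows with height, which plots as a tornado-like funnel:

```python
import numpy as np
import matplotlib.pyplot as plt

# Conical helix: radius grows linearly with height, giving a funnel shape.
t = np.linspace(0, 12 * np.pi, 2000)     # parameter: total winding angle
r = 0.15 * t                             # radius increasing with height
x, y, z = r * np.cos(t), r * np.sin(t), t

ax = plt.figure().add_subplot(projection="3d")
ax.plot(x, y, z, lw=0.8)
ax.set_title("Tornado-shaped parametric curve")
plt.show()
```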

  5. The GO Cygni system: photoelectric observations and light curves analysis

    International Nuclear Information System (INIS)

    Rovithis, P.; Rovithis-Livaniou, H.; Niarchos, P.G.

    1990-01-01

    Photoelectric observations, in B and V, of the system GO Cygni obtained during 1985 at the Kryonerion Astronomical Station of the National Observatory of Greece are given. The corresponding light curves (typical of β Lyrae) are analysed using Frequency Domain techniques. New photoelectric and absolute elements for the system are given, and its period was found to continue increasing.

  6. Tracer concentration curves and residence time analysis in technological flow systems. 2

    International Nuclear Information System (INIS)

    Pippel, W.

    1976-01-01

    Tracer concentration curves measured in flow systems by means of radioactive isotopes are treated as a two-dimensional random process. Comparing them with the family of distribution functions described in Part I, it follows that tracer curves can be considered as age distribution functions only in the case of ergodic behaviour of the system. The concept of ergodicity in residence time systems is explained with the aid of a time function measurable by a special method of radioactive tracer technique and by the mean value of the residence time obtainable from this function. Furthermore, technological consequences of evaluating tracer concentration curves of real flow systems are discussed with respect to supposed ergodic or non-ergodic behaviour. These considerations are of special importance for flow systems with temporary fluctuations in structure. (author)
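
    When the system is ergodic, so that the tracer curve may be read as an age distribution, the mean residence time follows from the first moment of the measured concentration curve, tau = integral(t*C dt) / integral(C dt). A short numerical sketch with a synthetic tanks-in-series response:

```python
import numpy as np

# Synthetic tracer response at the system outlet (e.g., counts vs. time).
t = np.linspace(0, 60, 600)                       # minutes
C = (t / 8.0) * np.exp(-t / 8.0)                  # toy tanks-in-series response

# Mean residence time = first moment of the (normalized) tracer curve.
mrt = np.trapz(t * C, t) / np.trapz(C, t)
print(f"mean residence time = {mrt:.2f} min")     # analytically 16 min here
```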

  7. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In the recent years, structural monitoring of large infrastructures (buildings, dams, bridges or more generally man-made structures has raised an increased attention due to the growing interest about safety and security issues and risk assessment through early detection. In this framework, aim of the paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic fiber sensor based on the Brillouin scattering phenomenon, which allows a spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (meter. The second technique is based on the use of Ground Penetrating Radar (GPR, which can provide detailed images of the inner status of the structure (with a spatial resolution less then tens centimetres, but does not allow a temporal continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  8. A versatile curve-fit model for linear to deeply concave rank abundance curves

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    A new, flexible curve-fit model for linear to concave rank abundance curves was conceptualized and validated using observational data. The model links the geometric-series model and log-series model and can also fit deeply concave rank abundance curves. The model is based - in an unconventional way - ...

  9. RFQ scaling-law implications and examples

    International Nuclear Information System (INIS)

    Wadlinger, E.A.

    1986-01-01

    We demonstrate the utility of the RFQ scaling laws that have been previously derived. These laws are relations between accelerator parameters (electric field, rf frequency, etc.) and beam parameters (current, energy, emittance, etc.) that act as guides for designing radio-frequency quadrupoles (RFQs) by showing the various tradeoffs involved in making RFQ designs. These scaling laws give a unique family of curves, at any given synchronous particle phase, that relates the beam current, emittance, particle mass, and space-charge tune depression with the RFQ frequency and maximum vane-tip electric field when assuming equipartitioning and equal longitudinal and transverse tune depressions. These scaling curves are valid at any point in any given RFQ where there is a bunched and equipartitioned beam. We show several examples for designing RFQs, examine the performance characteristics of an existing device, and study various RFQ performance limitations imposed by the scaling laws.

  10. Use of Monte Carlo Methods for determination of isodose curves in brachytherapy

    International Nuclear Information System (INIS)

    Vieira, Jose Wilson

    2001-08-01

    Brachytherapy is a special form of cancer treatment in which the radioactive source is placed very close to or inside the tumor, with the objective of causing necrosis of the cancerous tissue. The intensity of the cell response to radiation varies according to the tissue type and degree of differentiation. Since malignant cells are less differentiated than normal ones, they are more sensitive to radiation; this is the basis for radiotherapy techniques. Institutes that work with the application of high dose rates use sophisticated computer programs to calculate the dose necessary to achieve necrosis of the tumor while, at the same time, minimizing the irradiation of neighboring tissues and organs. With knowledge of the characteristics of the source and the tumor, it is possible to trace isodose curves with the information necessary for planning brachytherapy in patients. The objective of this work is, using Monte Carlo techniques, to develop a computer program - ISODOSE - which allows determining isodose curves around linear radioactive sources used in brachytherapy. The development of ISODOSE is important because the available commercial programs, in general, are very expensive and practically inaccessible to small clinics. The use of Monte Carlo techniques is viable because they avoid problems inherent to analytic solutions, such as the integration of functions with singularities in their domain. The results of ISODOSE were compared with similar data found in the literature and also with those obtained at the radiotherapy institutes of the 'Hospital do Cancer do Recife' and the 'Hospital Portugues do Recife'. ISODOSE presented good performance, mainly due to the Monte Carlo techniques, which allowed quite detailed drawing of the isodose curves around linear sources. (author)
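
    While ISODOSE itself relies on Monte Carlo transport, the geometry of isodose curves around a linear source can be previewed with a far simpler deterministic sketch: discretize the line source into point sources, sum inverse-square contributions (ignoring the scatter and attenuation that the Monte Carlo treatment handles), and contour the resulting dose grid. Everything below is illustrative, not the ISODOSE code.

```python
import numpy as np
import matplotlib.pyplot as plt

# Linear source of length 4 cm on the z-axis, split into point sources.
zs = np.linspace(-2.0, 2.0, 81)          # source segment positions (cm)

# Dose grid in a plane containing the source axis.
x, z = np.meshgrid(np.linspace(-5, 5, 201), np.linspace(-5, 5, 201))
dose = np.zeros_like(x)
for z0 in zs:
    r2 = x**2 + (z - z0)**2 + 1e-4        # small offset avoids division by zero
    dose += 1.0 / r2                      # inverse-square kernel, no attenuation

dose /= zs.size                           # normalize per unit source strength
levels = np.max(dose) * np.array([0.01, 0.03, 0.1, 0.3])
cs = plt.contour(x, z, dose, levels=levels)
plt.clabel(cs, fmt="%.2f")
plt.xlabel("x (cm)"); plt.ylabel("z (cm)"); plt.title("Isodose curves, line source")
plt.gca().set_aspect("equal"); plt.show()
```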

  11. Estimation of Typhoon Wind Hazard Curves for Nuclear Sites

    Energy Technology Data Exchange (ETDEWEB)

    Choun, Young-Sun; Kim, Min-Kyu [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    The intensity of typhoons that can influence the Korean Peninsula is on an increasing trend owing to a rapid change in the climate of the Northwest Pacific Ocean. Therefore, nuclear facilities should be prepared against future super-typhoons. Currently, the U.S. Nuclear Regulatory Commission requires that a new NPP be designed to endure design-basis hurricane wind speeds corresponding to an annual exceedance frequency of 10⁻⁷ (return period of 10 million years). A typical technique used to estimate typhoon wind speeds is based on sampling the key parameters of typhoon wind models from distribution functions obtained by fitting statistical distributions to the observation data. Thus, the estimated wind speeds for long return periods include an unavoidable uncertainty owing to the limited observations. This study estimates the typhoon wind speeds for nuclear sites using a Monte Carlo simulation and derives wind hazard curves using a logic-tree framework to reduce the epistemic uncertainty. Typhoon wind speeds were estimated for different return periods through a Monte Carlo simulation using the typhoon observation data, and wind hazard curves were derived using a logic-tree framework for three nuclear sites. The hazard curves for the simulated and probable maximum winds were obtained. The mean hazard curves for the simulated and probable maximum winds can be used for the design and risk assessment of an NPP.
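
    The core reduction, turning simulated typhoon events into a hazard curve of annual exceedance frequency versus wind speed, can be sketched directly. The Weibull per-event gust model and all numbers are placeholders, not the study's typhoon wind model; note how the empirical curve runs out of samples at long return periods, which is exactly the limited-observation uncertainty the logic tree is meant to address.

```python
import numpy as np

rng = np.random.default_rng(4)

# Placeholder typhoon wind model: ~3 events/yr; per-event maximum gust at the
# site drawn from a Weibull distribution (parameters invented for illustration).
rate = 3.0                                        # typhoons per year
gusts = rng.weibull(2.0, size=5_000_000) * 25.0   # m/s, per-event maxima

# Hazard curve: annual exceedance frequency (AEF) vs. wind speed.
v = np.linspace(20, 120, 51)
p_event = np.array([(gusts > vi).mean() for vi in v])  # per-event exceedance
aef = rate * p_event                                   # events/yr above speed v

for vi, fi in zip(v[::10], aef[::10]):
    rp = np.inf if fi == 0 else 1.0 / fi
    print(f"v = {vi:5.1f} m/s  AEF = {fi:.2e}/yr  return period ~ {rp:,.0f} yr")
```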

  12. 3D Analysis of D-RaCe and Self-Adjusting File in Removing Filling Materials from Curved Root Canals Instrumented and Filled with Different Techniques

    Directory of Open Access Journals (Sweden)

    Neslihan Simsek

    2014-01-01

    Full Text Available The aim of this study was to compare the efficacy of D-RaCe files and a self-adjusting file (SAF) system in removing filling material from curved root canals instrumented and filled with different techniques, using microcomputed tomography (micro-CT). The mesial roots of 20 extracted mandibular first molars were used. Root canals (mesiobuccal and mesiolingual) were instrumented with SAF or Revo-S. The canals were then filled with gutta-percha and AH Plus sealer using cold lateral compaction or thermoplasticized injectable techniques. The root fillings were first removed with D-RaCe (Step 1), followed by Step 2, in which a SAF system was used to remove the residual fillings in all groups. Micro-CT scans were used to measure the volume of residual filling after root canal filling, reinstrumentation with D-RaCe (Step 1), and reinstrumentation with SAF (Step 2). Data were analyzed using Wilcoxon and Kruskal-Wallis tests. There were no statistically significant differences between filling techniques in the canals instrumented with SAF (P=0.292) and Revo-S (P=0.306). The amount of remaining filling material was similar in all groups (P=0.363); all of the instrumentation techniques left filling residue inside the canals. However, the additional use of SAF was more effective than using D-RaCe alone.

  13. Growth Curve Models and Applications : Indian Statistical Institute

    CERN Document Server

    2017-01-01

    Growth curve models in longitudinal studies are widely used to model population size, body height, biomass, fungal growth, and other variables in the biological sciences, but these statistical methods for modeling growth curves and analyzing longitudinal data also extend to general statistics, economics, public health, demographics, epidemiology, SQC, sociology, nano-biotechnology, fluid mechanics, and other applied areas. There is no one-size-fits-all approach to growth measurement. The selected papers in this volume build on presentations from the GCM workshop held at the Indian Statistical Institute, Giridih, on March 28-29, 2016. They represent recent trends in GCM research on different subject areas, both theoretical and applied. This book includes tools and possibilities for further work through new techniques and modification of existing ones. The volume includes original studies, theoretical findings and case studies from a wide range of applied work, and these contributions have been externally r...

  14. Maximum likelihood fitting of FROC curves under an initial-detection-and-candidate-analysis model

    International Nuclear Information System (INIS)

    Edwards, Darrin C.; Kupinski, Matthew A.; Metz, Charles E.; Nishikawa, Robert M.

    2002-01-01

    We have developed a model for FROC curve fitting that relates the observer's FROC performance not to the ROC performance that would be obtained if the observer's responses were scored on a per image basis, but rather to a hypothesized ROC performance that the observer would obtain in the task of classifying a set of 'candidate detections' as positive or negative. We adopt the assumptions of the Bunch FROC model, namely that the observer's detections are all mutually independent, as well as assumptions qualitatively similar to, but different in nature from, those made by Chakraborty in his AFROC scoring methodology. Under the assumptions of our model, we show that the observer's FROC performance is a linearly scaled version of the candidate analysis ROC curve, where the scaling factors are just given by the FROC operating point coordinates for detecting initial candidates. Further, we show that the likelihood function of the model parameters given observational data takes on a simple form, and we develop a maximum likelihood method for fitting a FROC curve to this data. FROC and AFROC curves are produced for computer vision observer datasets and compared with the results of the AFROC scoring method. Although developed primarily with computer vision schemes in mind, we hope that the methodology presented here will prove worthy of further study in other applications as well
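
    The linear-scaling relation the model establishes is short enough to state in code: given a candidate-analysis ROC curve and the operating point of the initial candidate-detection stage, the FROC curve follows by multiplying the two axes by those coordinates. The binormal ROC and both scale factors below are assumed values for illustration.

```python
import numpy as np
from scipy.stats import norm

# Candidate-analysis ROC: binormal model (illustrative separation d' = 1.5).
fpf = np.linspace(1e-4, 1.0, 200)
tpf = norm.cdf(norm.ppf(fpf) + 1.5)

# FROC as a linearly scaled ROC: the scale factors are the operating point
# of the initial candidate-detection stage (both assumed here).
llf0 = 0.9    # fraction of true lesions captured as initial candidates
nlf0 = 2.0    # false candidates generated per image initially

llf = llf0 * tpf      # lesion localization fraction
nlf = nlf0 * fpf      # non-lesion localizations per image

print(f"at NLF = {nlf[99]:.2f}/image, LLF = {llf[99]:.2f}")
```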

  15. Calibration curves for biological dosimetry; Curvas de calibracion para dosimetria biologica

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero C, C.; Brena V, M. [ININ, A.P. 18-1027, 11801 Mexico D.F. (Mexico)]. E-mail cgc@nuclear.inin.mx

    2004-07-01

    The information generated by investigations in different laboratories around the world, including ININ, establishing that certain classes of chromosomal lesions increase as a function of dose and radiation type, has resulted in calibration curves that are applied in the well-known technique of biological dosimetry. This work presents a summary of the work carried out in the laboratory, including the calibration curves for 60Co gamma radiation and 250 kVp X-rays, examples of presumed exposure to ionizing radiation resolved by means of aberration analysis and the corresponding dose estimates through the equations of the respective curves, and, finally, a comparison between the dose calculations for the people affected by the Ciudad Juarez accident carried out by the Oak Ridge (USA) group and those obtained in this laboratory. (Author)
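
    Dose estimation from such a curve is a small calculation: cytogenetic calibration data are conventionally fitted with the linear-quadratic model Y = c + alpha*D + beta*D^2, and the dose is the positive root of the quadratic. The coefficients below are invented placeholders, not the ININ calibration curves.

```python
import numpy as np

# Linear-quadratic calibration: aberration yield per cell, Y = c + a*D + b*D^2.
# Placeholder coefficients (per cell, D in Gy); real values come from calibration.
c, a, b = 0.001, 0.03, 0.06

def dose_from_yield(Y):
    """Positive root of b*D^2 + a*D + (c - Y) = 0."""
    disc = a * a - 4.0 * b * (c - Y)
    return (-a + np.sqrt(disc)) / (2.0 * b)

# Example: 32 dicentrics observed in 500 scored cells.
Y = 32 / 500
print(f"estimated dose = {dose_from_yield(Y):.2f} Gy")
```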

  16. In-Vehicle Dynamic Curve-Speed Warnings at High-Risk Rural Curves

    Science.gov (United States)

    2018-03-01

    Lane-departure crashes at horizontal curves represent a significant portion of fatal crashes on rural Minnesota roads. Because of this, solutions are needed to aid drivers in identifying upcoming curves and inform them of a safe speed at which they s...

  17. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, the major hurdle for the further development of nanotechnology is finding economic ways to fabricate such nanostructures on a large scale. Here, we demonstrate how to achieve low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nano-fabrication technique that has the ability to fabricate nano-triangle arrays that cover a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus the incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing. Metal nanodisk arrays can be fabricated following metal evaporation and lift-off processes. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate that we can fabricate more complicated nanostructures, such as nanodisk oligomers, by combining several other key technologies, such as angled exposure and deposition; these methods can be modified to obtain various metallic nanostructures. The metallic structures are of high fidelity and large scale. The metallic nanostructures can be transformed into semiconductor nanostructures and be used in several green technology applications.

  18. A deconvolution technique for processing small intestinal transit data

    Energy Technology Data Exchange (ETDEWEB)

    Brinch, K. [Department of Clinical Physiology and Nuclear Medicine, Glostrup Hospital, University Hospital of Copenhagen (Denmark); Larsson, H.B.W. [Danish Research Center of Magnetic Resonance, Hvidovre Hospital, University Hospital of Copenhagen (Denmark); Madsen, J.L. [Department of Clinical Physiology and Nuclear Medicine, Hvidovre Hospital, University Hospital of Copenhagen (Denmark)

    1999-03-01

    The deconvolution technique can be used to compute small intestinal impulse response curves from scintigraphic data. Previously suggested approaches, however, are sensitive to noise in the data. We investigated whether deconvolution based on a new simple iterative convolving technique can be recommended. Eight healthy volunteers ingested a meal that contained indium-111 diethylene triamine penta-acetic acid labelled water and technetium-99m stannous colloid labelled omelette. Imaging was performed at 30-min intervals until all radioactivity was located in the colon. A Fermi function F(t) = (1 + e^(-αβ))/(1 + e^((t-α)β)) was chosen to characterize the small intestinal impulse response function. By changing only two parameters, α and β, it is possible to obtain configurations from nearly a square function to nearly a monoexponential function. The small intestinal input function was obtained from the gastric emptying curve and convolved with the Fermi function. The sum of least squares was used to find the α and β yielding the best fit of the convolved curve to the observed small intestinal time-activity curve. Finally, a small intestinal mean transit time was calculated from the Fermi function referred to. In all cases, we found an excellent fit of the convolved curve to the observed small intestinal time-activity curve, that is, the Fermi function reflected the small intestinal impulse response curve. The small intestinal mean transit time of the liquid marker (median 2.02 h) was significantly shorter than that of the solid marker (median 2.99 h; P<0.02). The iterative convolving technique seems to be an attractive alternative to ordinary approaches for the processing of small intestinal transit data. (orig.) With 2 figs., 13 refs.
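
    The iterative convolving procedure can be reproduced in a compact sketch: take the gastric emptying curve as the small-intestinal input, convolve its inflow rate with a trial Fermi impulse response, and scan alpha and beta for the least-squares best fit; the mean transit time is then the time integral of the fitted Fermi function. Synthetic curves stand in for the scintigraphic data.

```python
import numpy as np

dt = 0.05                                   # hours per sample
t = np.arange(0, 12, dt)

def fermi(t, alpha, beta):
    """Fermi retention function: ~1 at t = 0, falling to 0 around t = alpha."""
    return (1 + np.exp(-alpha * beta)) / (1 + np.exp((t - alpha) * beta))

# Synthetic data: gastric emptying and the 'observed' small-intestinal curve,
# generated with known parameters plus noise (stand-ins for scintigraphy).
gastric = np.exp(-t / 1.5)                  # fraction of marker still in stomach
inflow = -np.gradient(gastric, dt)          # rate of entry into small intestine
rng = np.random.default_rng(5)
observed = (np.convolve(inflow, fermi(t, 2.0, 3.0))[: t.size] * dt
            + rng.normal(0, 0.005, t.size))

# Grid search over (alpha, beta) for the best-fitting convolved curve.
best = (np.inf, None)
for alpha in np.arange(0.5, 4.01, 0.05):
    for beta in np.arange(0.5, 6.01, 0.1):
        model = np.convolve(inflow, fermi(t, alpha, beta))[: t.size] * dt
        sse = np.sum((model - observed) ** 2)
        if sse < best[0]:
            best = (sse, (alpha, beta))

alpha, beta = best[1]
mtt = np.trapz(fermi(t, alpha, beta), t)    # mean transit time of fitted Fermi
print(f"alpha = {alpha:.2f} h, beta = {beta:.2f} 1/h, MTT = {mtt:.2f} h")
```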

  19. Testing the stationarity of white dwarf light-curves

    International Nuclear Information System (INIS)

    Molnar, L; Kollath, Z; Plachy, E; Paparo, M

    2009-01-01

    Long-period white dwarfs show changes in their frequency spectra from one observing season to another, i.e. their light curves cannot be considered stationary multiperiodic variations on long timescales. However, due to the complex frequency spectra of these stars and the narrow frequency spacing, it is still unknown what the shortest time scale is on which real physical modulation exists. We present tests on artificial data resembling the observations, using time-frequency distributions (TFDs), Fourier analysis and the analytical signal method.

  20. Involute Spur Gear Template Development by Parametric Technique ...

    African Journals Online (AJOL)

    There are many methods available for developing profiles of gear and spline teeth. Most of the techniques are inaccurate because they use only an approximation of the involute curve profile. The parametric method developed in this paper provides accurate involute curve creation using formulas and exact geometric ...
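
    For reference, the involute of a base circle has an exact closed parametric form, so no polygonal approximation of the flank is needed; the sketch below uses an illustrative base radius, not a value from the paper.

```python
import numpy as np

def involute_of_circle(r_base, t):
    """Exact parametric involute of a base circle of radius r_base.

    t is the roll angle in radians; the involute is the path traced by
    the end of a taut string unwound from the base circle.
    """
    x = r_base * (np.cos(t) + t * np.sin(t))
    y = r_base * (np.sin(t) - t * np.cos(t))
    return x, y

# Sample the flank of one gear tooth (illustrative base radius in mm).
t = np.linspace(0.0, 0.6, 50)
x, y = involute_of_circle(40.0, t)
```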

  1. Theoretical determination of transit time locus curves for ultrasonic pulse echo testing - ALOK. Pt. 4

    International Nuclear Information System (INIS)

    Grohs, B.

    1983-01-01

    The ALOK technique allows the simultaneous detection of flaws and their evaluation with respect to type, location and dimension by interpreting the transit time behaviour during scanning of the reflector. The accuracy of the information obtained by means of this technique can be further improved, both during interference elimination and during reconstruction, owing to the ability to calculate exactly the possible transit time locus curves of given reflectors. The mathematical solution of the transit time locus curve calculation refers here to pulse echo testing, taking into account the refraction of sound at the interface between the forward wedge and the test object. The method of solving the problem is equivalent to Fermat's principle in optics. (orig.) [de

  2. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  3. The learning curve associated with the introduction of the subcutaneous implantable defibrillator

    NARCIS (Netherlands)

    Knops, Reinoud E.; Brouwer, Tom F.; Barr, Craig S.; Theuns, Dominic A.; Boersma, Lucas; Weiss, Raul; Neuzil, Petr; Scholten, Marcoen; Lambiase, Pier D.; Leon, Angel R.; Hood, Margaret; Jones, Paul W.; Wold, Nicholas; Grace, Andrew A.; Olde Nordkamp, Louise R. A.; Burke, Martin C.

    2016-01-01

    Aims The subcutaneous implantable cardioverter defibrillator (S-ICD) was introduced to overcome complications related to transvenous leads. Adoption of the S-ICD requires implanters to learn a new implantation technique. The aim of this study was to assess the learning curve for S-ICD implanters

  4. The learning curve associated with the introduction of the subcutaneous implantable defibrillator

    NARCIS (Netherlands)

    R.E. Knops (Reinoud); T.F. Brouwer (Tom F.); C.S. Barr (Craig); D.A.M.J. Theuns (Dominic); L. Boersma (Lucas); R. Weiss (Ram); P. Neuzil (Petr); M.F. Scholten (Marcoen); P.D. Lambiase (Pier); A. Leon (Angel); A.M. Hood (Margaret); P. Jones; Wold, N. (Nicholas); Grace, A.A. (Andrew A.); L.R.A. Olde Nordkamp (Louise R.A.); M.C. Burke (Martin)

    2016-01-01

    textabstractAims: The subcutaneous implantable cardioverter defibrillator (S-ICD) was introduced to overcome complications related to transvenous leads. Adoption of the S-ICD requires implanters to learn a new implantation technique. The aim of this study was to assess the learning curve for S-ICD

  5. Application of Learning Curves for Didactic Model Evaluation: Case Studies

    Directory of Open Access Journals (Sweden)

    Felix Mödritscher

    2013-01-01

    Full Text Available The success of (online) courses depends, among other factors, on the underlying didactical models, which have always been evaluated with qualitative and quantitative research methods. Several new evaluation techniques have been developed and established in recent years. One of them is ‘learning curves’, which aim at measuring error rates of users when they interact with adaptive educational systems, thereby enabling the underlying models to be evaluated and improved. In this paper, we report how we have applied this new method to two case studies to show that learning curves are useful for evaluating didactical models and their implementation in educational platforms. Results show that the error rates follow a power-law distribution with each additional attempt if the didactical model of an instructional unit is valid. Furthermore, the initial error rate, the slope of the curve and the goodness of fit of the curve are valid indicators for the difficulty level of a course and the quality of its didactical model. In conclusion, the idea of applying learning curves for evaluating didactical models on the basis of usage data is considered valuable for supporting teachers and learning content providers in improving their online courses.
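
    A minimal sketch of the power-law fit implied above, with hypothetical per-attempt error rates: the initial error rate a, the slope b and the goodness of fit all follow from a linear regression in log-log space.

```python
import numpy as np

# Hypothetical per-attempt error rates from an adaptive course unit.
attempts = np.arange(1, 11)
errors = np.array([0.42, 0.31, 0.25, 0.22, 0.19, 0.18, 0.16, 0.15, 0.145, 0.14])

# A power law E(n) = a * n**(-b) is linear in log-log space.
slope, intercept = np.polyfit(np.log(attempts), np.log(errors), 1)
a, b = np.exp(intercept), -slope           # initial error rate and slope

predicted = a * attempts ** (-b)
ss_res = np.sum((np.log(errors) - np.log(predicted)) ** 2)
ss_tot = np.sum((np.log(errors) - np.log(errors).mean()) ** 2)
r_squared = 1 - ss_res / ss_tot            # goodness of fit of the curve
```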

  6. Scaling Transformation in the Rembrandt Technique

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Leleur, Steen

    2013-01-01

    This paper examines a decision support system (DSS) for the appraisal of complex decision problems using multi-criteria decision analysis (MCDA). The DSS makes use of a structured hierarchical approach featuring the multiplicative AHP also known as the REMBRANDT technique. The paper addresses...... of a conventional AHP calculation in order to examine what impact the choice of progression factors as well as the choice of technique have on the decision making. Based on this a modified progression factor for the calculation of scores for the alternatives in REMBRANDT is suggested while the progression factor...

  7. A method for the measurement of dispersion curves of circumferential guided waves radiating from curved shells: experimental validation and application to a femoral neck mimicking phantom

    Science.gov (United States)

    Nauleau, Pierre; Minonzio, Jean-Gabriel; Chekroun, Mathieu; Cassereau, Didier; Laugier, Pascal; Prada, Claire; Grimal, Quentin

    2016-07-01

    Our long-term goal is to develop an ultrasonic method to characterize the thickness, stiffness and porosity of the cortical shell of the femoral neck, which could enhance hip fracture risk prediction. To this purpose, we proposed to adapt a technique based on the measurement of guided waves. We previously evidenced the feasibility of measuring circumferential guided waves in a bone-mimicking phantom of circular cross-section and even thickness. The goal of this study is to investigate the impact of the complex geometry of the femoral neck on the measurement of guided waves. Two phantoms of elliptical cross-section and one phantom of a realistic cross-section were investigated. A 128-element array was used to record the inter-element response matrix of these waveguides. This experiment was simulated using a custom-made hybrid code. The response matrices were analyzed using a technique based on the physics of wave propagation. This method yields portions of the dispersion curves of the waveguides, which were compared to reference dispersion curves. For the elliptical phantoms, three portions of dispersion curves were determined with good agreement between experiment, simulation and theory. The method was thus validated. The characteristic dimensions of the shell were found to influence the identification of the circumferential wave signals. The method was then applied to the signals backscattered by the superior half of the realistic phantom, which has constant thickness. A cut-off frequency and some portions of modes were measured, in good agreement with the theoretical curves of a plate waveguide. We also observed that the method cannot be applied directly to the signals backscattered by the lower half of the phantom, whose thickness varies. The proposed approach could therefore be considered for evaluating the properties of the superior part of the femoral neck, which is known to be a clinically relevant site.

  8. Reconstruction of an input function from a dynamic PET water image using multiple tissue curves

    Science.gov (United States)

    Kudomi, Nobuyuki; Maeda, Yukito; Yamamoto, Yuka; Nishiyama, Yoshihiro

    2016-08-01

    Quantification of cerebral blood flow (CBF) is important for the understanding of normal and pathologic brain physiology. When CBF is assessed using PET with H{sub 2}{sup 15}O or C{sup 15}O{sub 2}, its calculation requires an arterial input function, which generally requires invasive arterial blood sampling. The aim of the present study was to develop a new technique to reconstruct an image-derived input function (IDIF) from a dynamic H{sub 2}{sup 15}O PET image as a completely non-invasive approach. Our technique consists of a formula that expresses the input using a tissue curve together with a rate-constant parameter. For multiple tissue curves extracted from the dynamic image, the rate constants were estimated so as to minimize the sum of the differences between the inputs reproduced from the extracted tissue curves. The estimated rates were used to express the inputs, and the mean of the estimated inputs was used as the IDIF. The method was tested in human subjects (n = 29) and was compared to the blood sampling method. Simulation studies were performed to examine the magnitude of potential biases in CBF and to optimize the number of tissue curves used for the input reconstruction. In the PET study, the estimated IDIFs reproduced the measured input functions well, and the difference between the CBF values calculated by the two methods was small. This suggests the possibility of using a completely non-invasive technique to assess CBF in patho-physiological studies.
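
    A rough Python sketch of the consistency principle behind this IDIF method, assuming the standard one-tissue-compartment model for radiolabelled water and a hypothetical partition coefficient; all curves are simulated and this is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize

p = 0.8  # assumed tissue/blood partition coefficient (hypothetical)

def input_from_tissue(ct, t, k1):
    # One-tissue-compartment model for radiolabelled water:
    #   dCt/dt = K1*Ca - (K1/p)*Ct  =>  Ca = (dCt/dt + (K1/p)*Ct) / K1
    dct = np.gradient(ct, t)
    return (dct + (k1 / p) * ct) / k1

def disagreement(k1s, curves, t):
    # Sum of squared differences between the inputs reproduced from
    # every tissue curve: a common input implies mutual consistency.
    inputs = [input_from_tissue(c, t, k) for c, k in zip(curves, k1s)]
    mean_in = np.mean(inputs, axis=0)
    return sum(np.sum((ca - mean_in) ** 2) for ca in inputs)

# Simulate three regional tissue curves from one synthetic input.
t = np.linspace(0.05, 3.0, 60)            # minutes
true_input = 50 * t * np.exp(-3 * t)
curves = []
for k1 in (0.3, 0.5, 0.8):                # three synthetic regions
    ct = np.zeros_like(t)
    for i in range(1, len(t)):            # crude Euler integration
        dt = t[i] - t[i - 1]
        ct[i] = ct[i - 1] + dt * (k1 * true_input[i - 1] - (k1 / p) * ct[i - 1])
    curves.append(ct)

res = minimize(disagreement, x0=[0.4, 0.4, 0.4], args=(curves, t),
               method="Nelder-Mead")
idif = np.mean([input_from_tissue(c, t, k)
                for c, k in zip(curves, res.x)], axis=0)
```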

  9. Flow in curved ducts of varying cross-section

    Science.gov (United States)

    Sotiropoulos, F.; Patel, V. C.

    1992-07-01

    Two numerical methods for solving the incompressible Navier-Stokes equations are compared with each other by applying them to calculate laminar and turbulent flows through curved ducts of regular cross-section. Detailed comparisons, between the computed solutions and experimental data, are carried out in order to validate the two methods and to identify their relative merits and disadvantages. Based on the conclusions of this comparative study a numerical method is developed for simulating viscous flows through curved ducts of varying cross-sections. The proposed method is capable of simulating the near-wall turbulence using fine computational meshes across the sublayer in conjunction with a two-layer k-epsilon model. Numerical solutions are obtained for: (1) a straight transition duct geometry, and (2) a hydroturbine draft-tube configuration at model scale Reynolds number for various inlet swirl intensities. The report also provides a detailed literature survey that summarizes all the experimental and computational work in the area of duct flows.

  10. The curve shortening problem

    CERN Document Server

    Chou, Kai-Seng

    2001-01-01

    Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost-convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigators, but a superb reference that presents intriguing new results for those already active in the field.

  11. Development of a large area, curved two-dimensional detector for single-crystal neutron diffraction studies

    International Nuclear Information System (INIS)

    Moon, Myung-Kook; Lee, Chang-Hee; Kim, Shin-Ae; Noda, Yukio

    2013-01-01

    A new type of two-dimensional curved position-sensitive neutron detector has been developed for a high-throughput single-crystal neutron diffractometer, which was designed to cover 110° horizontally and 56° vertically. A prototype curved detector covering 70° horizontally and 45° vertically was first developed to test the technical feasibility of the detector parameters, the internal anode and cathode structures for the curved shape, technical difficulties in the assembly procedure, and so on. Then, based on this experience, a full-scale curved detector with twice the active area of the prototype was fabricated with newly modified anode and cathode planes and optimized design parameters in terms of mechanical and electrical properties. The detector was installed in a dedicated diffractometer at the ST3 beam port of the research reactor HANARO. In this paper, the fabrication and application of the prototype and of the new larger-area curved position-sensitive neutron detector for single-crystal diffraction are presented

  12. Soil Water Retention Curve

    Science.gov (United States)

    Johnson, L. E.; Kim, J.; Cifelli, R.; Chandra, C. V.

    2016-12-01

    Potential water retention, S, is one of the parameters commonly used in hydrologic modeling for soil moisture accounting. Physically, S indicates the total amount of water which can be stored in soil and is expressed in units of depth. S can be represented as a change of soil moisture content and in this context is commonly used to estimate direct runoff, especially in the Soil Conservation Service (SCS) curve number (CN) method. Generally, both lumped and distributed hydrologic models can easily use the SCS-CN method to estimate direct runoff. Changes in potential water retention have been used in previous SCS-CN studies; however, these studies have focused on long-term hydrologic simulations where S is allowed to vary at the daily time scale. While useful for hydrologic events that span multiple days, the resolution is too coarse for short-term applications such as flash flood events, where S may not recover its full potential. In this study, a new method for estimating a time-variable potential water retention at hourly time scales is presented. The methodology is applied to the Napa River basin, California. The streamflow gage at St Helena, located in the upper reaches of the basin, is used as the control gage site to evaluate the model performance, as it is minimally influenced by reservoirs and diversions. Rainfall events from 2011 to 2012 are used for estimating the event-based SCS CN to transfer to S. As a result, we have derived the potential water retention curve, which is classified into three sections depending on the relative change in S. The first is a negative-slope section arising from the difference in the rate of water moving through the soil column, the second is a zero-change section representing the initial recovery of the potential water retention, and the third is a positive-change section representing the full recovery of the potential water retention. Also, we found that the soil water movement shows a traffic-jam effect within 24 hours after the first event has finished
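
    For context, the standard SCS-CN relation between the potential retention S and direct runoff can be written in a few lines; the CN and rainfall values below are illustrative only, not taken from the Napa River study.

```python
def scs_direct_runoff(p_mm, cn):
    """SCS curve-number direct runoff (all depths in mm).

    S is the potential water retention derived from CN; the standard
    initial abstraction Ia = 0.2*S is assumed here.
    """
    s = 25400.0 / cn - 254.0            # potential retention from CN
    ia = 0.2 * s                        # initial abstraction
    if p_mm <= ia:
        return 0.0                      # all rainfall abstracted
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

# Example: a 60 mm storm over a soil with CN = 75 (hypothetical).
q = scs_direct_runoff(60.0, 75)
```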

  13. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic-type state estimators for energy management in electric power systems. Various dynamic-type estimators have been developed, but have never been implemented, primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to model the power system dynamics appropriately, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements, tied to the specifics of high-voltage electric transmission systems, are also suggested.

  14. IMAGE-PLANE ANALYSIS OF n-POINT-MASS LENS CRITICAL CURVES AND CAUSTICS

    Energy Technology Data Exchange (ETDEWEB)

    Danek, Kamil; Heyrovský, David, E-mail: kamil.danek@utf.mff.cuni.cz, E-mail: heyrovsky@utf.mff.cuni.cz [Institute of Theoretical Physics, Faculty of Mathematics and Physics, Charles University in Prague (Czech Republic)

    2015-06-10

    The interpretation of gravitational microlensing events caused by planetary systems or multiple stars is based on the n-point-mass lens model. The first planets detected by microlensing were well described by the two-point-mass model of a star with one planet. By the end of 2014, four events involving three-point-mass lenses had been announced. Two of the lenses were stars with two planetary companions each; two were binary stars with a planet orbiting one component. While the two-point-mass model is well understood, the same cannot be said for lenses with three or more components. Even the range of possible critical-curve topologies and caustic geometries of the three-point-mass lens remains unknown. In this paper we provide new tools for mapping the critical-curve topology and caustic cusp number in the parameter space of n-point-mass lenses. We perform our analysis in the image plane of the lens. We show that all contours of the Jacobian are critical curves of re-scaled versions of the lens configuration. Utilizing this property further, we introduce the cusp curve to identify cusp-image positions on all contours simultaneously. In order to track cusp-number changes in caustic metamorphoses, we define the morph curve, which pinpoints the positions of metamorphosis-point images along the cusp curve. We demonstrate the usage of both curves on simple two- and three-point-mass lens examples. For the three simplest caustic metamorphoses we illustrate the local structure of the image and source planes.
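
    A minimal image-plane sketch of the Jacobian-contour idea for a hypothetical unequal binary lens in normalized units (not the paper's configurations): in complex notation the critical curves are exactly the det J = 0 contours, i.e. the contours where the magnitude of the shear-like sum equals one.

```python
import numpy as np
import matplotlib.pyplot as plt

# n-point-mass lens in complex notation: masses m_i at positions z_i
# (normalized units; an invented unequal binary lens for illustration).
masses = np.array([0.7, 0.3])
positions = np.array([-0.5 + 0j, 0.6 + 0j])

x, y = np.meshgrid(np.linspace(-2, 2, 800), np.linspace(-2, 2, 800))
z = x + 1j * y

# f(z) = sum_i m_i / (z - z_i)**2; the Jacobian of the lens mapping is
# det J = 1 - |f(z)|**2, so critical curves are its zero contours.
f = sum(m / (z - zi) ** 2 for m, zi in zip(masses, positions))
det_j = 1.0 - np.abs(f) ** 2

plt.contour(x, y, det_j, levels=[0.0])   # image-plane critical curves
plt.gca().set_aspect("equal")
plt.show()
```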

  15. Cubic Bezier Curve Approach for Automated Offline Signature Verification with Intrusion Identification

    Directory of Open Access Journals (Sweden)

    Arun Vijayaragavan

    2014-01-01

    Full Text Available Authentication is a process of identifying a person's rights over a system. Many authentication types are used in various systems, wherein biometric authentication systems are of special concern. Signature verification is a basic biometric authentication technique used widely. Signature matching algorithms that use image correlation and graph matching can produce false rejections or acceptances. We propose a model that compares knowledge derived from the signature. Intrusion into the signature repository system results in a copy of the signature, which leads to false acceptance. Our approach uses a Bezier curve algorithm to identify the curve points and uses the behaviors of the signature for verification. An analyzing mobile agent is used to identify the input signature parameters and compare them with a reference signature repository. It identifies duplication of the signature arising from intrusion and rejects it. Experiments are conducted on a database with thousands of signature images from various sources and the results are favorable.
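
    For illustration, a cubic Bezier segment is evaluated from four control points with the standard Bernstein form; the control points below are hypothetical, not taken from a real signature stroke.

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter values t in [0, 1]."""
    t = np.asarray(t)[:, None]          # column vector for broadcasting
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

# Hypothetical control points standing in for one signature stroke.
p0, p1, p2, p3 = map(np.array, ([0, 0], [1, 2], [3, 3], [4, 1]))
points = cubic_bezier(p0, p1, p2, p3, np.linspace(0, 1, 100))
```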

  16. Multi-binding site model-based curve-fitting program for the computation of RIA data

    International Nuclear Information System (INIS)

    Malan, P.G.; Ekins, R.P.; Cox, M.G.; Long, E.M.R.

    1977-01-01

    In this paper, a comparison is made between model-based and empirical curve-fitting procedures. The implementation of a multiple binding-site curve-fitting model is described which successfully fits a wide range of assay data and which can be run on a mini-computer. This sophisticated model also provides estimates of binding-site concentrations and the values of the respective equilibrium constants; the latter have been used for refining assay conditions using computer optimisation techniques. (orig./AJ) [de

  17. No evidence for an open vessel effect in centrifuge-based vulnerability curves of a long-vesselled liana (Vitis vinifera).

    Science.gov (United States)

    Jacobsen, Anna L; Pratt, R Brandon

    2012-06-01

    Vulnerability to cavitation curves are used to estimate xylem cavitation resistance and can be constructed using multiple techniques. It was recently suggested that a technique that relies on centrifugal force to generate negative xylem pressures may be susceptible to an open vessel artifact in long-vesselled species. Here, we used custom centrifuge rotors to measure different sample lengths of 1-yr-old stems of grapevine to examine the influence of open vessels on vulnerability curves, thus testing the hypothesized open vessel artifact. These curves were compared with a dehydration-based vulnerability curve. Although samples differed significantly in the number of open vessels, there was no difference in the vulnerability to cavitation measured on 0.14- and 0.271-m-long samples of Vitis vinifera. Dehydration and centrifuge-based curves showed a similar pattern of declining xylem-specific hydraulic conductivity (K(s)) with declining water potential. The percentage loss in hydraulic conductivity (PLC) differed between dehydration and centrifuge curves and it was determined that grapevine is susceptible to errors in estimating maximum K(s) during dehydration because of the development of vessel blockages. Our results from a long-vesselled liana do not support the open vessel artifact hypothesis. © 2012 The Authors. New Phytologist © 2012 New Phytologist Trust.

  18. Evaluation of PCR and high-resolution melt curve analysis for differentiation of Salmonella isolates.

    Science.gov (United States)

    Saeidabadi, Mohammad Sadegh; Nili, Hassan; Dadras, Habibollah; Sharifiyazdi, Hassan; Connolly, Joanne; Valcanis, Mary; Raidal, Shane; Ghorashi, Seyed Ali

    2017-06-01

    Consumption of poultry products contaminated with Salmonella is one of the major causes of foodborne diseases worldwide and therefore detection and differentiation of Salmonella spp. in poultry is important. In this study, oligonucleotide primers were designed from hemD gene and a PCR followed by high-resolution melt (HRM) curve analysis was developed for rapid differentiation of Salmonella isolates. Amplicons of 228 bp were generated from 16 different Salmonella reference strains and from 65 clinical field isolates mainly from poultry farms. HRM curve analysis of the amplicons differentiated Salmonella isolates and analysis of the nucleotide sequence of the amplicons from selected isolates revealed that each melting curve profile was related to a unique DNA sequence. The relationship between reference strains and tested specimens was also evaluated using a mathematical model without visual interpretation of HRM curves. In addition, the potential of the PCR-HRM curve analysis was evaluated for genotyping of additional Salmonella isolates from different avian species. The findings indicate that PCR followed by HRM curve analysis provides a rapid and robust technique for genotyping of Salmonella isolates to determine the serovar/serotype.

  19. FN-curves: preliminary estimation of severe accident risks after Fukushima

    International Nuclear Information System (INIS)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Costa, Antonio Carlos Lopes da

    2015-01-01

    Doubts of whether the risks related to severe accidents in nuclear reactors are indeed very low were raised after the nuclear accident at Fukushima Daiichi in 2011. Risk estimations of severe accidents in nuclear power plants involve both probability and consequence assessment of such events. Among the ways to display risks, risk curves are tools that express the frequency of exceeding a certain magnitude of consequence. Societal risk is often represented graphically in a FN-curve, a type of risk curve, which displays the probability of having N or more fatalities per year, as a function of N, on a double logarithmic scale. The FN-curve, originally introduced for the assessment of the risks in the nuclear industry through the U.S.NRC Reactor Safety Study WASH-1400 (1975), is used in various countries to express and limit risks of hazardous activities. This first study estimated an expected rate of core damage equal to 5x10{sup -5} by reactor-year and suggested an upper bound of 3x10{sup -4} by reactor-year. A more recent report issued by the Electric Power Research Institute - EPRI (2008) estimates a figure of the order of 2x10{sup -5} by reactor-year. The Fukushima nuclear accident apparently implies that the observed core damage frequency is higher than that predicted by these probabilistic safety assessments. Therefore, this paper presents a preliminary analysis of the FN-curves related to severe nuclear reactor accidents, taking into account a combination of available data from past accidents, probability modelling to estimate frequencies, and expert judgments. (author)

  20. FN-curves: preliminary estimation of severe accident risks after Fukushima

    Energy Technology Data Exchange (ETDEWEB)

    Vasconcelos, Vanderley de; Soares, Wellington Antonio; Costa, Antonio Carlos Lopes da, E-mail: vasconv@cdtn.br, E-mail: soaresw@cdtn.br, E-mail: aclc@cdtn.br [Centro de Desenvolvimento da Tecnologia Nuclear (CDTN/CNEN-MG), Belo Horizonte, MG (Brazil)

    2015-07-01

    Doubts of whether the risks related to severe accidents in nuclear reactors are indeed very low were raised after the nuclear accident at Fukushima Daiichi in 2011. Risk estimations of severe accidents in nuclear power plants involve both probability and consequence assessment of such events. Among the ways to display risks, risk curves are tools that express the frequency of exceeding a certain magnitude of consequence. Societal risk is often represented graphically in a FN-curve, a type of risk curve, which displays the probability of having N or more fatalities per year, as a function of N, on a double logarithmic scale. The FN-curve, originally introduced for the assessment of the risks in the nuclear industry through the U.S.NRC Reactor Safety Study WASH-1400 (1975), is used in various countries to express and limit risks of hazardous activities. This first study estimated an expected rate of core damage equal to 5x10{sup -5} by reactor-year and suggested an upper bound of 3x10{sup -4} by reactor-year. A more recent report issued by the Electric Power Research Institute - EPRI (2008) estimates a figure of the order of 2x10{sup -5} by reactor-year. The Fukushima nuclear accident apparently implies that the observed core damage frequency is higher than that predicted by these probabilistic safety assessments. Therefore, this paper presents a preliminary analysis of the FN-curves related to severe nuclear reactor accidents, taking into account a combination of available data from past accidents, probability modelling to estimate frequencies, and expert judgments. (author)
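
    A short sketch of how an FN-curve is assembled from event data (the fatality counts and frequencies below are invented for illustration): for each N, the curve plots the cumulative annual frequency of events with N or more fatalities on double logarithmic axes.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical accident records: (fatalities N, annual frequency).
events = [(1, 1e-2), (10, 2e-3), (50, 3e-4), (200, 2e-5), (1000, 1e-6)]

ns = np.array(sorted(n for n, _ in events))
# F(N) = cumulative frequency of events with N or more fatalities.
f_of_n = [sum(f for n, f in events if n >= n0) for n0 in ns]

plt.loglog(ns, f_of_n, drawstyle="steps-post")
plt.xlabel("N (fatalities)")
plt.ylabel("Frequency of N or more fatalities per year")
plt.show()
```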

  1. Part 5: Receiver Operating Characteristic Curve and Area under the Curve

    Directory of Open Access Journals (Sweden)

    Saeed Safari

    2016-04-01

    Full Text Available Multiple diagnostic tools are used by emergency physicians every day. In addition, new tools are evaluated to obtain more accurate methods and to reduce the time or cost of conventional ones. In the previous parts of this educational series, we described diagnostic performance characteristics of diagnostic tests, including sensitivity, specificity, positive and negative predictive values, and likelihood ratios. The receiver operating characteristic (ROC) curve is a graphical presentation of screening characteristics. The ROC curve is used to determine the best cutoff point and to compare two or more tests or observers by measuring the area under the curve (AUC). In this part of our educational series, we explain the ROC curve and two methods to determine the best cutoff value.
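
    A bare-bones numeric sketch of the ROC construction and AUC described above, with hypothetical scores and labels (a real analysis would typically use a statistics package); Youden's J statistic is used here as one common definition of the best cutoff.

```python
import numpy as np

def roc_curve_points(scores, labels):
    """Return (FPR, TPR) pairs swept over all score thresholds."""
    order = np.argsort(-scores)                  # descending score
    labels = labels[order]
    tpr = np.cumsum(labels) / labels.sum()       # true-positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false-positive rate
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

# Hypothetical test scores for diseased (1) and healthy (0) patients.
scores = np.array([0.9, 0.8, 0.7, 0.6, 0.55, 0.5, 0.4, 0.3])
labels = np.array([1, 1, 0, 1, 0, 1, 0, 0])

fpr, tpr = roc_curve_points(scores, labels)
auc = np.trapz(tpr, fpr)            # area under the ROC curve
best = np.argmax(tpr - fpr)         # Youden's J picks the best cutoff point
```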

  2. Learning Curves of Virtual Mastoidectomy in Distributed and Massed Practice.

    Science.gov (United States)

    Andersen, Steven Arild Wuyts; Konge, Lars; Cayé-Thomasen, Per; Sørensen, Mads Sølvsten

    2015-10-01

    Repeated and deliberate practice is crucial in surgical skills training, and virtual reality (VR) simulation can provide self-directed training of basic surgical skills to meet the individual needs of the trainee. Assessment of the learning curves of surgical procedures is pivotal in understanding skills acquisition and best-practice implementation and organization of training. To explore the learning curves of VR simulation training of mastoidectomy and the effects of different practice sequences with the aim of proposing the optimal organization of training. A prospective trial with a 2 × 2 design was conducted at an academic teaching hospital. Participants included 43 novice medical students. Of these, 21 students completed time-distributed practice from October 14 to November 29, 2013, and a separate group of 19 students completed massed practice on May 16, 17, or 18, 2014. Data analysis was performed from June 6, 2014, to March 3, 2015. Participants performed 12 repeated virtual mastoidectomies using a temporal bone surgical simulator in either a distributed (practice blocks spaced in time) or massed (all practice in 1 day) training program with randomization for simulator-integrated tutoring during the first 5 sessions. Performance was assessed using a modified Welling Scale for final product analysis by 2 blinded senior otologists. Compared with the 19 students in the massed practice group, the 21 students in the distributed practice group were older (mean age, 25.1 years), more often male (15 [62%]), and had slightly higher mean gaming frequency (2.3 on a 1-5 Likert scale). Learning curves were established and distributed practice was found to be superior to massed practice, reported as mean end score (95% CI) of 15.7 (14.4-17.0) in distributed practice vs. 13.0 (11.9-14.1) with massed practice (P = .002). Simulator-integrated tutoring accelerated the initial performance, with mean score for tutored sessions of 14.6 (13.9-15.2) vs. 13.4 (12.8-14.0) for

  3. Volume changes at macro- and nano-scale in epoxy resins studied by PALS and PVT experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Somoza, A. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina) and CICPBA, Pinto 399, B7000GHG Tandil (Argentina)]. E-mail: asomoza@exa.unicen.edu.ar; Salgueiro, W. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina); Goyanes, S. [LPMPyMC, Depto. de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Ramos, J. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain); Mondragon, I. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain)

    2007-02-15

    A systematic study of changes in the volumes at macro- and nano-scale in epoxy systems cured with selected aminic hardeners at different pre-cure temperatures is presented. Free and macroscopic specific volumes were measured by PALS and pressure-volume-temperature techniques, respectively. An analysis of the relation between the macro- and nano-scale behaviour of the thermosetting networks developed by the different chemical structures is shown. The results obtained indicate that the structure of the hardeners governs the packing of the molecular chains of the epoxy network.

  4. Could CCI or FBCI Fully Eliminate the Impact of Curve Flexibility When Evaluating the Surgery Outcome for Thoracic Curve Idiopathic Scoliosis Patient? A Retrospective Study.

    Science.gov (United States)

    Yang, Changwei; Sun, Xiaofei; Li, Chao; Ni, Haijian; Zhu, Xiaodong; Yang, Shichang; Li, Ming

    2015-01-01

    To clarify whether CCI or FBCI can fully eliminate the influence of curve flexibility on the coronal correction rate, we reviewed the medical records of all thoracic-curve AIS cases undergoing posterior spinal fusion with all-pedicle-screw systems from June 2011 to July 2013. Radiographical data were collected and calculated. Student's t test, Pearson correlation analysis and linear regression analysis were used to analyze the data. Sixty patients were included in this study. The mean age was 14.7 y (10-18 y), with 10 males (17%) and 50 females (83%). The average Risser sign was 2.7. The mean thoracic Cobb angle before operation was 51.9°. The mean bending Cobb angle was 27.6° and the mean fulcrum bending Cobb angle was 17.4°. The mean Cobb angle at 2 weeks after surgery was 16.3°. The Pearson correlation coefficient r between CCI and BFR was -0.856 (P < 0.05), indicating that neither CCI nor FBCI can fully eliminate the impact of curve flexibility on the outcome of correction. A modified CCI or FBCI can better evaluate the corrective effects of different surgical techniques or instruments.

  5. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...

  6. Method of construction spatial transition curve

    Directory of Open Access Journals (Sweden)

    S.V. Didanov

    2013-04-01

    Full Text Available Purpose. The movement of rail transport (rolling stock speed, traffic safety, etc. depends largely on the quality of the track. A special role is played by the transition curve, which ensures a smooth transition from a linear to a circular section of road. The article deals with modeling of a spatial transition curve based on a parabolic distribution of the curvature and torsion, continuing research conducted by the authors on the spatial modeling of curved contours. Methodology. The spatial transition curve is constructed by numerical methods for solving nonlinear integral equations, where the initial data are the coordinates of the starting and ending points of the future curve, together with the inclination of the tangent and the deviation of the curve from the tangent plane at these points. The solutions of the numerical method are the partial derivatives of the equations with respect to the unknown parameters of the law of change of torsion and the length of the transition curve. Findings. The parametric equations of the spatial transition curve are calculated by finding the unknown coefficients of the parabolic distribution of the curvature and torsion, as well as the spatial length of the transition curve. Originality. A method for constructing the spatial transition curve is devised, and on this basis software is developed for the geometric modeling of spatial transition curves of railway track with specified deviations of the curve from the tangent plane. Practical value. The resulting curve can be applied in any sector of the economy where it is necessary to ensure a smooth transition from a linear to a circular section of a curved spatial bypass. Examples include the transition curve in the construction of a railway line, road, pipe, profile, flat section of the working blades of a turbine and compressor, ship, plane, car, etc.
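
    A simplified planar sketch of the curvature-integration idea (the full method is spatial, includes torsion, and solves nonlinear integral equations; here the curvature simply ramps polynomially and the tangent angle is integrated numerically). All parameter values are illustrative.

```python
import numpy as np

def transition_curve(kappa_end, length, n=1000, power=2):
    """Planar transition curve with a polynomial curvature distribution.

    kappa(s) ramps from 0 to kappa_end as (s/L)**power; power=1 gives a
    clothoid, power=2 a parabolic distribution.  The curve follows from
    integrating the tangent angle theta(s) = integral of kappa ds.
    """
    s = np.linspace(0.0, length, n)
    ds = s[1] - s[0]
    kappa = kappa_end * (s / length) ** power
    theta = np.cumsum(kappa) * ds       # tangent angle
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return x, y

# Ease from straight track into a 500 m radius curve over 80 m.
x, y = transition_curve(1.0 / 500.0, 80.0)
```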

  7. Efficacy of Reciproc(®) and Profile(®) Instruments in the Removal of Gutta-Percha from Straight and Curved Root Canals ex Vivo.

    Science.gov (United States)

    Marfisi, Karem; Mercadé, Montserrat; Plotino, Gianluca; Clavel, Tatiana; Duran-Sindreu, Fernando; Roig, Miguel

    2015-01-01

    To compare the efficacy of Reciproc(®) (VDW GmbH) and ProFile(®) (Dentsply Maillefer) instruments at removing gutta-percha from straight and curved root canals ex vivo filled using the cold lateral condensation and GuttaMaster(®) (VDW GmbH) techniques. Forty mesial roots of mandibular molars with two curved canals and 80 single-rooted teeth with straight root canals, a total of 160 root canals, were randomly assigned to eight groups (20 canals per group) according to filling technique, retreatment instrument and root canal curvature as follows: Group I, cold lateral condensation/ProFile(®)/straight; Group II, cold lateral condensation/ProFile(®)/curved; Group III, cold lateral condensation/Reciproc(®)/straight; Group IV, cold lateral condensation/Reciproc(®)/curved; Group V, GuttaMaster(®)/ProFile(®)/straight; Group VI, GuttaMaster(®)/ProFile(®)/curved; Group VII, GuttaMaster(®)/Reciproc(®)/straight; and Group VIII, GuttaMaster(®)/Reciproc(®)/curved. The following data were recorded: procedural errors, retreatment duration and canal wall cleanliness. Means and standard deviations were calculated and analysed using the Kruskal-Wallis test, one-way analysis of variance and Tukey's test (P < 0.05). Reciproc(®) instruments were statistically more effective than ProFile(®) instruments in removing cold lateral condensation fillings from both straight (P = 0.0001) and curved (P = 0.0003) root canals. Reciproc(®) instruments were also statistically more effective than ProFile(®) instruments in removing GuttaMaster(®) from straight root canals (P = 0.021). Regardless of filling technique or retreatment instrument, gutta-percha was removed more rapidly from curved than from straight root canals (P = 0.0001). Neither system completely removed filling material from the root canals. Compared with ProFile(®) instruments, Reciproc(®) instruments removed GuttaMaster(®) filling material from straight and curved root canals more rapidly.

  8. Photoelectic BV Light Curves of Algol and the Interpretations of the Light Curves

    Directory of Open Access Journals (Sweden)

    Ho-Il Kim

    1985-06-01

    Full Text Available Standardized B and V photoelectric light curves of Algol were made from observations obtained during 1982-84 with the 40-cm and 61-cm reflectors of Yonsei University Observatory. These light curves show asymmetry between the ascending and descending shoulders: the ascending shoulder is 0.02 mag brighter than the descending shoulder in the V light curve and 0.03 mag brighter in the B light curve. These asymmetric light curves are interpreted as the result of an inhomogeneous energy distribution on the surface of one star of the eclipsing pair rather than of a gaseous stream flowing from the K0IV to the B8V star. The 180-year periodicity, the so-called great inequality, is most likely explained by the mechanism proposed by Kim et al. (1983), namely that abrupt and discrete mass losses from the cooler component may be the cause of this orbital change. The amount of mass loss deduced from these discrete period changes turned out to be of the order of 10{sup -6}-10{sup -5} M{sub solar}.

  9. A Journey Between Two Curves

    Directory of Open Access Journals (Sweden)

    Sergey A. Cherkis

    2007-03-01

    Full Text Available A typical solution of an integrable system is described in terms of a holomorphic curve and a line bundle over it. The curve provides the action variables while the time evolution is a linear flow on the curve's Jacobian. Even though the system of Nahm equations is closely related to the Hitchin system, the curves appearing in these two cases have very different nature. The former can be described in terms of some classical scattering problem while the latter provides a solution to some Seiberg-Witten gauge theory. This note identifies the setup in which one can formulate the question of relating the two curves.

  10. Experience curve for natural gas production by hydraulic fracturing

    International Nuclear Information System (INIS)

    Fukui, Rokuhei; Greenfield, Carl; Pogue, Katie; Zwaan, Bob van der

    2017-01-01

    From 2007 to 2012 shale gas production in the US expanded at an astounding average growth rate of over 50%/yr, and thereby increased nearly tenfold over this short time period alone. Hydraulic fracturing technology, or “fracking”, as well as new directional drilling techniques, played key roles in this shale gas revolution, by allowing for extraction of natural gas from previously unviable shale resources. Although hydraulic fracturing technology had been around for decades, it only recently became commercially attractive for large-scale implementation. As the production of shale gas rapidly increased in the US over the past decade, the wellhead price of natural gas dropped substantially. In this paper we express the relationship between wellhead price and cumulative natural gas output in terms of an experience curve, and obtain a learning rate of 13% for the industry using hydraulic fracturing technology. This learning rate represents a measure for the know-how and skills accumulated thus far by the US shale gas industry. The use of experience curves for renewable energy options such as solar and wind power has allowed analysts, practitioners, and policy makers to assess potential price reductions, and underlying cost decreases, for these technologies in the future. The reasons for price reductions of hydraulic fracturing are fundamentally different from those behind renewable energy technologies – hence they cannot be directly compared – and hydraulic fracturing may soon reach, or maybe has already attained, a lower bound for further price reductions, for instance as a result of its water requirements or environmental footprint. Yet, understanding learning-by-doing phenomena as expressed by an industry-wide experience curve for shale gas production can be useful for strategic planning in the gas sector, as well as assist environmental policy design, and serve more broadly as input for projections of energy system developments. - Highlights: • Hydraulic
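
    A minimal sketch of fitting an experience curve P = P0*(Q/Q0)**(-b) and converting the elasticity b into a learning rate; the price and cumulative-production pairs below are invented, chosen only so the fit lands near the 13% rate reported above.

```python
import numpy as np

# Hypothetical (cumulative production, wellhead price) pairs.
cum_q = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # e.g. trillion cubic feet
price = np.array([7.0, 6.1, 5.3, 4.6, 4.0])    # e.g. $/MMBtu

# The experience curve is linear in log-log space; fit its slope.
b = -np.polyfit(np.log2(cum_q), np.log2(price), 1)[0]
learning_rate = 1.0 - 2.0 ** (-b)  # price drop per doubling of output
```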

  11. Bond yield curve construction

    Directory of Open Access Journals (Sweden)

    Kožul Nataša

    2014-01-01

    Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (the ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of the interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given, and their usage in determining curve discount factors is described. Their usage in deriving forward rates and pricing related derivative instruments is also discussed.
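
    As a sketch of one standard construction step, discount factors can be bootstrapped recursively from annual par yields; the rates below are hypothetical, and a real curve build would also handle day counts, coupon frequency and interpolation.

```python
def bootstrap_discount_factors(par_yields):
    """Bootstrap discount factors from annual par yields.

    par_yields[i] is the par coupon rate for maturity i+1 years.  A par
    bond prices at face value, so for maturity n:
        1 = c_n * sum(df[1..n]) + df[n]
        => df[n] = (1 - c_n * sum(df[1..n-1])) / (1 + c_n)
    """
    dfs = []
    for c in par_yields:
        dfs.append((1.0 - c * sum(dfs)) / (1.0 + c))
    return dfs

# Hypothetical par yields for 1-4 year maturities.
dfs = bootstrap_discount_factors([0.02, 0.025, 0.03, 0.032])
# Implied zero-coupon yields: df = (1 + z)**(-n).
zero_rates = [df ** (-1.0 / (i + 1)) - 1.0 for i, df in enumerate(dfs)]
```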

  12. Folding of non-Euclidean curved shells

    Science.gov (United States)

    Bende, Nakul; Evans, Arthur; Innes-Gold, Sarah; Marin, Luis; Cohen, Itai; Santangelo, Christian; Hayward, Ryan

    2015-03-01

    Origami-based folding of 2D sheets has been of recent interest for a variety of applications ranging from deployable structures to self-folding robots. Though folding of planar sheets follows well-established principles, folding of curved shells involves an added level of complexity due to the inherent influence of curvature on mechanics. In this study, we use principles from differential geometry and thin shell mechanics to establish fundamental rules that govern folding of prototypical creased shells. In particular, we show how the normal curvature of a crease line controls whether the deformation is smooth or discontinuous, and investigate the influence of shell thickness and boundary conditions. We show that snap-folding of shells provides a route to rapid actuation on time-scales dictated by the speed of sound. The simple geometric design principles developed can be applied at any length-scale, offering potential for bio-inspired soft actuators for tunable optics, microfluidics, and robotics. This work was funded by the National Science Foundation through EFRI ODISSEI-1240441 with additional support to S.I.-G. through the UMass MRSEC DMR-0820506 REU program.

  13. The statistical background to proposed ASME/MPC fracture toughness reference curves

    International Nuclear Information System (INIS)

    Oldfield, W.

    1981-01-01

    The ASME Pressure Vessel Codes define, in Sec. XI, lower-bound fracture toughness curves. These curves are used to predict the lower-bound fracture toughness on the basis of the RT test procedure. This test is used to remove heat-to-heat differences, by permitting the lower-bound (reference) curve to be moved along the temperature scale according to the measured RT. Numerous objections have been raised to the procedure, and a Subcommittee (the ASME/MPC Working Group on Reference Toughness) is currently revising the codified procedures for fracture toughness prediction. The task has required a substantial amount of statistical work, since the new procedures are to have a statistical basis. Using initiation fracture toughness (J-integral R-curve procedures in the ductile domain), it was shown that when CVN energy data are properly transformed they are highly correlated with valid fracture toughness measurements. A single functional relationship can be used to predict the mean fracture toughness for a sample of steel from a set of CVN energy measurements, and the coefficients of the function have been tabulated. More importantly, the approximate lower statistical bounds to the initiation fracture toughness behaviour can be similarly predicted, and coefficients for selected bounds have also been tabulated. (orig.)

  14. Curve Digitizer – A software for multiple curves digitizing

    Directory of Open Access Journals (Sweden)

    Florentin ŞPERLEA

    2010-06-01

    Full Text Available The Curve Digitizer is software that extracts data from an image file representing a graphic and returns them as pairs of numbers which can then be used for further analysis and applications. Numbers can be read on a computer screen, stored in files or copied on paper. The final result is a data set that can be used with other tools such as MS EXCEL. Curve Digitizer provides a useful tool for any researcher or engineer interested in quantifying data displayed graphically. The image file can be obtained by scanning a document.

  15. Growth Curve and Structural Equation Modeling : Topics from the Indian Statistical Institute

    CERN Document Server

    2015-01-01

    This book describes some recent trends in GCM research on different subject areas, both theoretical and applied. This includes tools and possibilities for further work through new techniques and modification of existing ones. A growth curve is an empirical model of the evolution of a quantity over time. Growth curves in longitudinal studies are used in disciplines including biology, statistics, population studies, economics, biological sciences, sociology, nano-biotechnology, and fluid mechanics. The volume includes original studies, theoretical findings and case studies from a wide range of applied work. This volume builds on presentations from a GCM workshop held at the Indian Statistical Institute, Giridih, January 18-19, 2014. This book follows the volume Advances in Growth Curve Models, published by Springer in 2013. The results have meaningful application in health care, prediction of crop yield, child nutrition, poverty measurements, estimation of growth rate, and other research areas.

  16. D-Branes in Curved Space

    Energy Technology Data Exchange (ETDEWEB)

    McGreevy, John Austen; /Stanford U., Phys. Dept.

    2005-07-06

    This thesis is a study of D-branes in string compactifications. In this context, D-branes are relevant as an important component of the nonperturbative spectrum, as an incisive probe of these backgrounds, and as a natural stringy tool for localizing gauge interactions. In the first part of the thesis, we discuss half-BPS D-branes in compactifications of type II string theory on Calabi-Yau threefolds. The results we describe for these objects are pertinent both in their role as stringy brane-worlds, and in their role as solitonic objects. In particular, we determine couplings of these branes to the moduli determining the closed-string geometry, both perturbatively and non-perturbatively in the worldsheet expansion. We provide a local model for transitions in moduli space where the BPS spectrum jumps, and discuss the extension of mirror symmetry between Calabi-Yau manifolds to the case when D-branes are present. The next section is an interlude which provides some applications of D-branes to other curved backgrounds of string theory. In particular, we discuss a surprising phenomenon in which fundamental strings moving through background Ramond-Ramond fields dissolve into large spherical D3-branes. This mechanism is used to explain a previously-mysterious fact discovered via the AdS-CFT correspondence. Next, we make a connection between type IIA string vacua of the type discussed in the first section and M-theory compactifications on manifolds of G{sub 2} holonomy. Finally we discuss constructions of string vacua which do not have large radius limits. In the final part of the thesis, we develop techniques for studying the worldsheets of open strings ending on the curved D-branes studied in the first section. More precisely, we formulate a large class of massive two-dimensional gauge theories coupled to boundary matter, which flow in the infrared to the relevant boundary conformal field theories. Along with many other applications, these techniques are used to describe

  17. National proficiency-gain curves for minimally invasive gastrointestinal cancer surgery.

    Science.gov (United States)

    Mackenzie, H; Markar, S R; Askari, A; Ni, M; Faiz, O; Hanna, G B

    2016-01-01

    Minimal access surgery for gastrointestinal cancer has short-term benefits but is associated with a proficiency-gain curve. The aim of this study was to define national proficiency-gain curves for minimal access colorectal and oesophagogastric surgery, and to determine the impact on clinical outcomes. All adult patients undergoing minimal access oesophageal, colonic and rectal surgery between 2002 and 2012 were identified from the Hospital Episode Statistics database. Proficiency-gain curves were created using risk-adjusted cumulative sum analysis. Change points were identified, and bootstrapping was performed with 1000 iterations to identify a confidence level. The primary outcome was 30-day mortality; secondary outcomes were 90-day mortality, reintervention, conversion and length of hospital stay. Some 1696, 15 008 and 16 701 minimal access oesophageal, rectal and colonic cancer resections were performed during the study period. The change point in the proficiency-gain curve for 30-day mortality for oesophageal, rectal and colonic surgery was 19 (confidence level 98·4 per cent), 20 (99·2 per cent) and three (99·5 per cent) procedures; the mortality rate fell from 4·0 to 2·0 per cent (relative risk reduction (RRR) 0·50, P = 0·033) and from 2·1 to 1·2 per cent (RRR 0·43), with a corresponding fall for colonic surgery. The change point in the proficiency-gain curve for reintervention in oesophageal, rectal and colonic resection was 19 (98·1 per cent), 32 (99·5 per cent) and 26 (99·2 per cent) procedures respectively. There were also significant proficiency-gain curves for 90-day mortality, conversion and length of stay. The introduction of minimal access gastrointestinal cancer surgery has been associated with a proficiency-gain curve for mortality and major morbidity at a national level. Unnecessary patient harm should be avoided by appropriate training and monitoring of new surgical techniques. © 2015 BJS Society Ltd Published by John Wiley & Sons Ltd.
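
    A simplified observed-minus-expected variant of the risk-adjusted CUSUM idea used above (the study's exact formulation is not reproduced here; outcomes and predicted risks are simulated, and the change-point estimate is deliberately crude).

```python
import numpy as np

def risk_adjusted_cusum(outcomes, predicted_risks):
    """Cumulative observed-minus-expected failures, case by case.

    outcomes[i] is 1 for death within 30 days, 0 otherwise;
    predicted_risks[i] is the case-mix-adjusted expected probability.
    A falling curve indicates performance better than predicted; the
    change point is where the slope of the curve turns over.
    """
    return np.cumsum(np.asarray(outcomes, float) - np.asarray(predicted_risks))

# Hypothetical series: the first 20 cases carry excess risk, later ones not.
rng = np.random.default_rng(0)
risks = np.full(100, 0.02)
outcomes = rng.random(100) < np.where(np.arange(100) < 20, 0.04, 0.02)

curve = risk_adjusted_cusum(outcomes, risks)
change_point = int(np.argmax(curve))   # crude change-point estimate
```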

  18. Recent progress in multichamber deposition of high-quality amorphous silicon solar cells on planar and compound curved substrates at GSI

    Energy Technology Data Exchange (ETDEWEB)

    Schropp, R.E.I.; Roedern, B. von; Klose, P.; Hollingsworth, R.E.; Xi, J.; Cueto, J. del; Chatham, H.; Bhat, P.K. (Glasstech Solar, Inc. (GSI), Wheat Ridge, CO (USA))

    1989-10-15

    We present recent advances obtained at GSI in scaling-up multichamber fabrication of solar cells. We have successfully adopted the multichamber approach in the development of large-area compound curved semitransparent modules. For this application, a new semitransparent electrode was developed, together with an innovative three-dimensional laser patterning technique. A fully automated 1.5 MW annual production line with a ''glass-in-panel-out'' approach has been completed for the Government of India, and another 3 MW plant is presently under construction. GSI's research effort using its multichamber R and D system has achieved single-junction conversion efficiencies of 11.3% at low intrinsic layer deposition rates and of 9.7% at a rate of 18 A s{sup -1}. (orig.).

  19. Polynomial curve fitting for control rod worth using least square numerical analysis

    International Nuclear Information System (INIS)

    Muhammad Husamuddin Abdul Khalil; Mark Dennis Usang; Julia Abdul Karim; Mohd Amin Sharifuldin Salleh

    2012-01-01

    RTP must have sufficient excess reactivity to compensate for negative reactivity feedback effects, such as those caused by the fuel temperature and power defects of reactivity and by fuel burn-up, and to allow full-power operation for a predetermined period of time. To compensate for this excess reactivity, it is necessary to introduce an amount of negative reactivity by adjusting or controlling the control rods at will. Control rod worth depends largely upon the value of the neutron flux at the location of the rod and is reflected by a polynomial curve. The purpose of this paper is to carry out the polynomial curve fitting using least-squares numerical techniques via a MATLAB-compatible language. (author)
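
    A minimal least-squares polynomial fit of an integral rod-worth curve; the positions and worths below are hypothetical, and Python's numpy stands in for the MATLAB-compatible language mentioned in the record.

```python
import numpy as np

# Hypothetical integral rod-worth measurements: position (% withdrawn)
# versus reactivity worth (pcm), showing the usual S-shaped dependence.
pos = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100])
worth = np.array([0, 15, 60, 150, 280, 430, 580, 710, 800, 850, 865])

# Least-squares polynomial fit; degree 3 captures the S-shape driven by
# the axial flux profile at the rod location.
coeffs = np.polyfit(pos, worth, 3)
worth_at = np.poly1d(coeffs)
print(worth_at(55.0))   # interpolated worth at 55 % withdrawal
```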

  20. TWO METHODS OF ESTIMATING SEMIPARAMETRIC COMPONENT IN THE ENVIRONMENTAL KUZNET'S CURVE (EKC)

    OpenAIRE

    Paudel, Krishna P.; Zapata, Hector O.

    2004-01-01

    This study compares parametric and semiparametric smoothing techniques to estimate the environmental Kuznet curve. The ad hoc functional form where income is related either as a square or a cubic function to environmental quality is relaxed in search of a better nonlinear fit to the pollution-income relationship for panel data.

  1. VizieR Online Data Catalog: Praesepe members light curves (Kovacs+, 2014)

    Science.gov (United States)

    Kovacs, G.; Hartman, J. D.; Bakos, G. A.; Quinn, S. N.; Penev, K.; Latham, D. W.; Bhatti, W.; Csubry, Z.; de Val-Borro, M.

    2014-07-01

    Light curves used in the time series analysis of Praesepe are presented. There are 381 light curves on the instrumental Sloan r' magnitude scale with the zero points determined by the 2MASS magnitudes according to Eq. (1) of the paper. We present two types of magnitudes: a) external parameter decorrelated (EPD) magnitudes, and b) the ones obtained after the application of a trend filtering algorithm (TFA) on the EPD time series. These two methods are briefly described in the paper and in detail in the references therein. Here we just note that both methods are intended to filter out systematics due to environmental effects (instrumental, weather, etc.). For TFA filtering we used 600 templates and did not apply signal reconstruction. (5 data files).

  2. Very large scale characterization of graphene mechanical devices using a colorimetry technique.

    Science.gov (United States)

    Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer

    2017-06-08

    We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automatized image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D⁴/g³. Moreover, we extract a median adhesion energy of Γ = 0.9 J m⁻² between the membrane and the native SiO₂ at the bottom of the cavities.

  3. Developing an empirical Environmental Kuznets Curve

    Directory of Open Access Journals (Sweden)

    Ferry Purnawan

    2015-04-01

    This study aims to develop a model of the Environmental Kuznets Curve (EKC) that relates the environmental pollution level to the prosperity level in Tangerang City. The method uses two models of pooled data regression, namely the Random Effect Model (REM) and the Fixed Effects Model (FEM), both quadratic and cubic. The period of observation is 2002-2012. The results suggest that the relationship between per capita income and the level of environmental quality, reflected by the BOD (biochemical oxygen demand) and COD (chemical oxygen demand) concentrations, can be explained by the quadratic FEM model and follows the EKC hypothesis even though the turning point is not identified.

  4. Nonequilibrium recombination after a curved shock wave

    Science.gov (United States)

    Wen, Chihyung; Hornung, Hans

    2010-02-01

    The effect of nonequilibrium recombination after a curved two-dimensional shock wave in a hypervelocity dissociating flow of an inviscid Lighthill-Freeman gas is considered. An analytical solution is obtained with the effective shock values derived by Hornung (1976) [5] and the assumption that the flow is ‘quasi-frozen’ after a thin dissociating layer near the shock. The solution gives the expression of dissociation fraction as a function of temperature on a streamline. A rule of thumb can then be provided to check the validity of binary scaling for experimental conditions and a tool to determine the limiting streamline that delineates the validity zone of binary scaling. The effects on the nonequilibrium chemical reaction of the large difference in free stream temperature between free-piston shock tunnel and equivalent flight conditions are discussed. Numerical examples are presented and the results are compared with solutions obtained with two-dimensional Euler equations using the code of Candler (1988) [10].

  5. Estimating reaction rate constants: comparison between traditional curve fitting and curve resolution

    NARCIS (Netherlands)

    Bijlsma, S.; Boelens, H. F. M.; Hoefsloot, H. C. J.; Smilde, A. K.

    2000-01-01

    A traditional curve fitting (TCF) algorithm is compared with a classical curve resolution (CCR) approach for estimating reaction rate constants from spectral data obtained in time of a chemical reaction. In the TCF algorithm, reaction rate constants are estimated from the absorbance-versus-time data.
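    In the spirit of the TCF approach, a rate constant can be estimated by fitting a kinetic model directly to absorbance-versus-time data. The sketch below assumes a simple first-order decay of a single absorbing species, which is not necessarily the reaction model used in the paper; the data are synthetic.

        import numpy as np
        from scipy.optimize import curve_fit

        def absorbance(t, a0, k):
            """First-order decay of a single absorbing species (illustrative model)."""
            return a0 * np.exp(-k * t)

        t = np.linspace(0, 60, 30)                      # time, s
        rng = np.random.default_rng(1)
        y = absorbance(t, 0.8, 0.05) + rng.normal(0, 0.01, t.size)

        popt, pcov = curve_fit(absorbance, t, y, p0=[1.0, 0.01])
        print("k =", popt[1], "+/-", np.sqrt(pcov[1, 1]))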

  6. A catalog of special plane curves

    CERN Document Server

    Lawrence, J Dennis

    2014-01-01

    Among the largest, finest collections available, illustrated not only once for each curve, but also for various values of any parameters present. Covers general properties of curves and types of derived curves. Curves illustrated by a CalComp digital incremental plotter. 12 illustrations.

  7. Intersection numbers of spectral curves

    CERN Document Server

    Eynard, B.

    2011-01-01

    We compute the symplectic invariants of an arbitrary spectral curve with only one branch point in terms of integrals of characteristic classes in the moduli space of curves. Our formula associates to any spectral curve a characteristic class, which is determined by the Laplace transform of the spectral curve. This is a hint to the key role of the Laplace transform in mirror symmetry. When the spectral curve is y = \sqrt{x}, the formula gives the Kontsevich-Witten intersection numbers; when the spectral curve is chosen to be the Lambert function \exp{x} = y\exp{-y}, the formula gives the ELSV formula for Hurwitz numbers; and when one chooses the mirror of C^3 with framing f, i.e. \exp{-x} = \exp{-yf}(1-\exp{-y}), the formula gives the Marino-Vafa formula, i.e. the generating function of Gromov-Witten invariants of C^3. In some sense this formula generalizes the ELSV formula, the Marino-Vafa formula, and Mumford's formula.

  8. Photogrammetric techniques for across-scale soil erosion assessment

    OpenAIRE

    Eltner, Anette

    2016-01-01

    Soil erosion is a complex geomorphological process with varying influences of different impacts at different spatio-temporal scales. To date, measurement of soil erosion is predominantly realisable at specific scales, thereby detecting separate processes, e.g. interrill erosion contrary to rill erosion. It is difficult to survey soil surface changes at larger areal coverage such as field scale with high spatial resolution. Either net changes at the system outlet or remaining traces after the ...

  9. Bootstrap-based procedures for inference in nonparametric receiver-operating characteristic curve regression analysis.

    Science.gov (United States)

    Rodríguez-Álvarez, María Xosé; Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Tahoces, Pablo G

    2018-03-01

    Prior to using a diagnostic test in a routine clinical setting, the rigorous evaluation of its diagnostic accuracy is essential. The receiver-operating characteristic curve is the measure of accuracy most widely used for continuous diagnostic tests. However, the possible impact of extra information about the patient (or even the environment) on diagnostic accuracy also needs to be assessed. In this paper, we focus on an estimator for the covariate-specific receiver-operating characteristic curve based on direct regression modelling and nonparametric smoothing techniques. This approach defines the class of generalised additive models for the receiver-operating characteristic curve. The main aim of the paper is to offer new inferential procedures for testing the effect of covariates on the conditional receiver-operating characteristic curve within the above-mentioned class. Specifically, two different bootstrap-based tests are suggested to check (a) the possible effect of continuous covariates on the receiver-operating characteristic curve and (b) the presence of factor-by-curve interaction terms. The validity of the proposed bootstrap-based procedures is supported by simulations. To facilitate the application of these new procedures in practice, an R-package, known as npROCRegression, is provided and briefly described. Finally, data derived from a computer-aided diagnostic system for the automatic detection of tumour masses in breast cancer is analysed.
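    The npROCRegression package implements the paper's specific covariate tests; the sketch below only illustrates the underlying bootstrap-on-ROC idea, computing a Mann-Whitney estimate of the AUC with a percentile bootstrap confidence interval on toy scores. All data and names are hypothetical.

        import numpy as np

        def empirical_auc(scores_pos, scores_neg):
            """Mann-Whitney estimate of the area under the empirical ROC curve."""
            diff = scores_pos[:, None] - scores_neg[None, :]
            return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

        rng = np.random.default_rng(2)
        pos = rng.normal(1.0, 1.0, 80)   # diseased test scores (toy data)
        neg = rng.normal(0.0, 1.0, 120)  # healthy test scores (toy data)

        # Percentile bootstrap: resample each group with replacement
        boot = [empirical_auc(rng.choice(pos, pos.size), rng.choice(neg, neg.size))
                for _ in range(2000)]
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"AUC = {empirical_auc(pos, neg):.3f}, 95% bootstrap CI ({lo:.3f}, {hi:.3f})")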

  10. A direct method to solve optimal knots of B-spline curves: An application for non-uniform B-spline curves fitting.

    Directory of Open Access Journals (Sweden)

    Van Than Dung

    B-spline functions are widely used in many industrial applications such as computer graphic representations, computer aided design, computer aided manufacturing, computer numerical control, etc. Recently, there have been demands, e.g. in the reverse engineering (RE) area, to employ B-spline curves for non-trivial cases that include curves with discontinuous points, cusps or turning points in the sampled data. The most challenging task in these cases is the identification of the number of knots and their respective locations in non-uniform space at the lowest computational cost. This paper presents a new strategy for fitting any form of curve by B-spline functions via a local algorithm. A new two-step method for fast knot calculation is proposed. In the first step, the data is split using a bisecting method with a predetermined allowable error to obtain coarse knots. Secondly, the knots are optimized, for both locations and continuity levels, by employing a non-linear least squares technique. The B-spline function is, therefore, obtained by solving the ordinary least squares problem. The performance of the proposed method is validated using various numerical experimental data, with and without simulated noise, generated by a B-spline function and deterministic parametric functions. This paper also discusses the benchmarking of the proposed method against existing methods in the literature. The proposed method is shown to be able to reconstruct B-spline functions from sampled data within acceptable tolerance. It is also shown that the proposed method can be applied for fitting any type of curve, ranging from smooth curves to discontinuous ones. In addition, the method does not require excessive computational cost, which allows it to be used in automatic reverse engineering applications.
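    The two-step method itself (bisection plus non-linear knot optimization) is more elaborate than anything shown here, but the following sketch conveys the flavor of knot-number selection for least-squares B-spline fitting, using scipy's LSQUnivariateSpline with uniformly placed interior knots and a crude refine-until-tolerance loop. The cusped test curve and the tolerance are invented for illustration.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        def fit_with_knot_refinement(x, y, tol=1e-2, max_knots=50):
            """Crudely add uniform interior knots until the RMS residual meets tol."""
            for n in range(1, max_knots):
                knots = np.linspace(x[0], x[-1], n + 2)[1:-1]  # interior knots only
                spline = LSQUnivariateSpline(x, y, knots, k=3)
                rms = np.sqrt(np.mean((spline(x) - y) ** 2))
                if rms < tol:
                    return spline, knots
            return spline, knots

        x = np.linspace(0, 1, 400)
        y = np.sign(x - 0.5) * np.sqrt(np.abs(x - 0.5))   # curve with a cusp at 0.5
        spline, knots = fit_with_knot_refinement(x, y)
        print(len(knots), "interior knots used")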

  11. Comparison of the Fullerton Advanced Balance Scale, Mini-BESTest, and Berg Balance Scale to Predict Falls in Parkinson Disease.

    Science.gov (United States)

    Schlenstedt, Christian; Brombacher, Stephanie; Hartwigsen, Gesa; Weisser, Burkhard; Möller, Bettina; Deuschl, Günther

    2016-04-01

    The correct identification of patients with Parkinson disease (PD) at risk for falling is important to initiate appropriate treatment early. This study compared the Fullerton Advanced Balance (FAB) scale with the Mini-Balance Evaluation Systems Test (Mini-BESTest) and Berg Balance Scale (BBS) to identify individuals with PD at risk for falls and to analyze which of the items of the scales best predict future falls. This was a prospective study to assess predictive criterion-related validity. The study was conducted at a university hospital in an urban community. Eighty-five patients with idiopathic PD (Hoehn and Yahr stages: 1-4) participated in the study. Measures were number of falls (assessed prospectively over 6 months), FAB scale, Mini-BESTest, BBS, and Unified Parkinson's Disease Rating Scale. The FAB scale, Mini-BESTest, and BBS showed similar accuracy to predict future falls, with values for area under the curve (AUC) of the receiver operating characteristic (ROC) curve of 0.68, 0.65, and 0.69, respectively. A model combining the items "tandem stance," "rise to toes," "one-leg stance," "compensatory stepping backward," "turning," and "placing alternate foot on stool" had an AUC of 0.84 of the ROC curve. There was a dropout rate of 19/85 participants. The FAB scale, Mini-BESTest, and BBS provide moderate capacity to distinguish "fallers" (people with one or more falls) from "nonfallers." Only some items of the 3 scales contribute to the detection of future falls. Clinicians should particularly focus on the item "tandem stance" along with the items "one-leg stance," "rise to toes," "compensatory stepping backward," "turning 360°," and "placing foot on stool" when analyzing postural control deficits related to fall risk. Future research should analyze whether balance training including the aforementioned items is effective in reducing fall risk. © 2016 American Physical Therapy Association.

  12. Aspherical Supernovae: Effects on Early Light Curves

    Science.gov (United States)

    Afsariardchi, Niloufar; Matzner, Christopher D.

    2018-04-01

    Early light from core-collapse supernovae, now detectable in high-cadence surveys, holds clues to a star and its environment just before it explodes. However, effects that alter the early light have not been fully explored. We highlight the possibility of nonradial flows at the time of shock breakout. These develop in sufficiently nonspherical explosions if the progenitor is not too diffuse. When they do develop, nonradial flows limit ejecta speeds and cause ejecta–ejecta collisions. We explore these phenomena and their observational implications using global, axisymmetric, nonrelativistic FLASH simulations of simplified polytropic progenitors, which we scale to representative stars. We develop a method to track photon production within the ejecta, enabling us to estimate band-dependent light curves from adiabatic simulations. Immediate breakout emission becomes hidden as an oblique flow develops. Nonspherical effects lead the shock-heated ejecta to release a more constant luminosity at a higher, evolving color temperature at early times, effectively mixing breakout light with the early light curve. Collisions between nonradial ejecta thermalize a small fraction of the explosion energy; we will address emission from these collisions in a subsequent paper.

  13. Curved crystals for high-resolution focusing of X and gamma rays through a Laue lens

    Science.gov (United States)

    Guidi, Vincenzo; Bellucci, Valerio; Camattari, Riccardo; Neri, Ilaria

    2013-08-01

    Crystals with curved diffracting planes have been investigated as high-efficiency optical components for the realization of a Laue lens for satellite-borne experiments in astrophysics. At Sensor and Semiconductor Laboratory (Ferrara, Italy) a research and development plan to implement Si and Ge curved crystals by surface grooving technique has been undertaken. The method of surface grooving allows obtaining Si and Ge curved crystals with self-standing curvature, i.e., with no need for external bending device, which is a mandatory issue in satellite-borne experiments. Si and Ge grooved crystals have been characterized by X-ray diffraction at ESRF and ILL to prove their functionality for a high-reflectivity Laue lens.

  14. Elliptic curves for applications (Tutorial)

    NARCIS (Netherlands)

    Lange, T.; Bernstein, D.J.; Chatterjee, S.

    2011-01-01

    More than 25 years ago, elliptic curves over finite fields were suggested as a group in which the Discrete Logarithm Problem (DLP) can be hard. Since then many researchers have scrutinized the security of the DLP on elliptic curves, with the result that for suitably chosen curves only exponential-time attacks are known.

  15. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem in conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  16. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Large, chronically implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (order of 100) neurons. While the desire for cortically based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data, and allow visualization of single-trial neural behavior. PMID:18093826

  17. Models of genus one curves

    OpenAIRE

    Sadek, Mohammad

    2010-01-01

    In this thesis we give insight into the minimisation problem of genus one curves defined by equations other than Weierstrass equations. We are interested in genus one curves given as double covers of P1, plane cubics, or complete intersections of two quadrics in P3. By minimising such a curve we mean making the invariants associated to its defining equations as small as possible using a suitable change of coordinates. We study the non-uniqueness of minimisations of the genus one curves des...

  18. The crime kuznets curve

    OpenAIRE

    Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando

    2014-01-01

    We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...

  19. Supplementary Material for: Growth curve registration for evaluating salinity tolerance in barley

    KAUST Repository

    Meng, Rui

    2017-01-01

    Abstract Background Smarthouses capable of non-destructive, high-throughput plant phenotyping collect large amounts of data that can be used to understand plant growth and productivity in extreme environments. The challenge is to apply the statistical tool that best analyzes the data to study plant traits, such as salinity tolerance, or plant-growth-related traits. Results We derive family-wise salinity sensitivity (FSS) growth curves and use registration techniques to summarize growth patterns of HEB-25 barley families and the commercial variety, Navigator. We account for the spatial variation in smarthouse microclimates and in temporal variation across phenotyping runs using a functional ANOVA model to derive corrected FSS curves. From FSS, we derive corrected values for family-wise salinity tolerance, which are strongly negatively correlated with Na but not significantly with K, indicating that Na content is an important factor affecting salinity tolerance in these families, at least for plants of this age and grown in these conditions. Conclusions Our family-wise methodology is suitable for analyzing the growth curves of a large number of plants from multiple families. The corrected curves accurately account for the spatial and temporal variations among plants that are inherent to high-throughput experiments.

  20. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    Science.gov (United States)

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  1. Thermal Molding of Organic Thin-Film Transistor Arrays on Curved Surfaces.

    Science.gov (United States)

    Sakai, Masatoshi; Watanabe, Kento; Ishimine, Hiroto; Okada, Yugo; Yamauchi, Hiroshi; Sadamitsu, Yuichi; Kudo, Kazuhiro

    2017-12-01

    In this work, a thermal molding technique is proposed for the fabrication of plastic electronics on curved surfaces, enabling the preparation of plastic films with freely designed shapes. The induced strain distribution observed in poly(ethylene naphthalate) films when planar sheets were deformed into hemispherical surfaces clearly indicated that natural thermal contraction played an important role in the formation of the curved surface. A fingertip-shaped organic thin-film transistor array molded from a real human finger was fabricated, and slight deformation induced by touching an object was detected from the drain current response. This type of device will lead to the development of robot fingers equipped with a sensitive tactile sense for precision work such as palpation or surgery.

  2. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Empirical decline curve analysis of oil production data gives reasonable answers in hyperbolic type curve situations; however, the methodology has limitations in fitting real historical production data in the presence of unusual observations due to the effect of treatments applied to the well in order to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data using decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting (lmRobMM) approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. For the case study, we use the oil production data at TBA Field, West Java. The results demonstrate that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
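    lmRobMM is an R routine; as a rough Python counterpart, a robust loss in scipy's least_squares can down-weight unusual observations when fitting a hyperbolic (Arps-type) decline curve. The production data, starting values and f_scale below are illustrative, not taken from the TBA Field study.

        import numpy as np
        from scipy.optimize import least_squares

        def arps_hyperbolic(t, qi, di, b):
            """Arps hyperbolic decline rate at time t."""
            return qi / (1.0 + b * di * t) ** (1.0 / b)

        def residuals(theta, t, q):
            return arps_hyperbolic(t, *theta) - q

        rng = np.random.default_rng(3)
        t = np.linspace(0, 36, 37)                       # months
        q = arps_hyperbolic(t, 1000.0, 0.10, 0.8) + rng.normal(0, 15, t.size)
        q[10] += 400.0                                   # outlier from a well treatment

        # soft_l1 loss down-weights the outlier, analogous in spirit to lmRobMM
        fit = least_squares(residuals, x0=[800.0, 0.05, 0.5], args=(t, q),
                            loss="soft_l1", f_scale=30.0,
                            bounds=([0, 1e-4, 1e-3], [np.inf, 5.0, 2.0]))
        print("qi, di, b =", fit.x)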

  3. Three gradients and the perception of flat and curved surfaces.

    Science.gov (United States)

    Cutting, J E; Millard, R T

    1984-06-01

    Researchers of visual perception have long been interested in the perceived slant of a surface and in the gradients that purportedly specify it. Slant is the angle between the line of sight and the tangent to the planar surface at any point, also called the surface normal. Gradients are the sources of information that grade, or change, with visual angle as one looks from one's feet upward to the horizon. The present article explores three gradients--perspective, compression, and density--and the phenomenal impression of flat and curved surfaces. The perspective gradient is measured at right angles to the axis of tilt at any point in the optic array; that is, when looking down a hallway at the tiles of a floor receding in the distance, perspective is measured by the x-axis width of each tile projected on the image plane orthogonal to the line of sight. The compression gradient is the ratio of y/x axis measures on the projected plane. The density gradient is measured by the number of tiles per unit solid visual angle. For flat surfaces and many others, perspective and compression gradients decrease with distance, and the density gradient increases. We discuss the manner in which these gradients change for various types of surfaces. Each gradient is founded on a different assumption about textures on the surfaces around us. In Experiment 1, viewers assessed the three-dimensional character of projections of flat and curved surfaces receding in the distance. They made pairwise judgments of preference and of dissimilarity among eight stimuli in each of four sets. The presence of each gradient was manipulated orthogonally such that each stimulus had zero, one, two, or three gradients appropriate for either a flat surface or a curved surface. Judgments were made for surfaces with both regularly shaped and irregularly shaped textures scattered on them. All viewer assessments were then scaled in one dimension. Multiple correlation and regression on the scale values

  4. NEW CONCEPTS AND TEST METHODS OF CURVE PROFILE AREA DENSITY IN SURFACE: ESTIMATION OF AREAL DENSITY ON CURVED SPATIAL SURFACE

    OpenAIRE

    Hong Shen

    2011-01-01

    The concepts of curve profile, curve intercept, curve intercept density, curve profile area density, intersection density in containing intersection (or intersection density relied on intersection reference), curve profile intersection density in surface (or curve intercept intersection density relied on intersection of containing curve), and curve profile area density in surface (AS) were defined. AS expressed the amount of curve profile area of Y phase in the unit containing surface area, S...

  5. M-curves and symmetric products

    Indian Academy of Sciences (India)

    Indranil Biswas

    2017-08-03

    ...is bounded above by g + 1, where g is the genus of X [11]. Curves which have exactly the maximum number (i.e., g + 1) of components of the real part are called M-curves. Classifying real algebraic curves up to homeomorphism is straightforward; however, classifying even planar non-singular real ...

  6. Sample dimensions effect on prediction of soil water retention curve and saturated hydraulic conductivity

    Science.gov (United States)

    Soil water retention curve (SWRC) and saturated hydraulic conductivity (SHC) are key hydraulic properties for unsaturated zone hydrology and groundwater. Not only are the SWRC and SHC measurements time-consuming, their results are scale dependent. Although prediction of the SWRC and SHC from availab...

  7. CONFIRMATION OF HOT JUPITER KEPLER-41b VIA PHASE CURVE ANALYSIS

    International Nuclear Information System (INIS)

    Quintana, Elisa V.; Rowe, Jason F.; Caldwell, Douglas A.; Christiansen, Jessie L.; Jenkins, Jon M.; Morris, Robert L.; Smith, Jeffrey C.; Thompson, Susan E.; Barclay, Thomas; Howell, Steve B.; Borucki, William J.; Sanderfer, Dwight T.; Still, Martin; Ciardi, David R.; Demory, Brice-Olivier; Klaus, Todd C.; Fulton, Benjamin J.; Shporer, Avi

    2013-01-01

    We present high precision photometry of Kepler-41, a giant planet in a 1.86 day orbit around a G6V star that was recently confirmed through radial velocity measurements. We have developed a new method to confirm giant planets solely from the photometric light curve, and we apply this method herein to Kepler-41 to establish the validity of this technique. We generate a full phase photometric model by including the primary and secondary transits, ellipsoidal variations, Doppler beaming, and reflected/emitted light from the planet. Third light contamination scenarios that can mimic a planetary transit signal are simulated by injecting a full range of dilution values into the model, and we re-fit each diluted light curve model to the light curve. The resulting constraints on the maximum occultation depth and stellar density combined with stellar evolution models rules out stellar blends and provides a measurement of the planet's mass, size, and temperature. We expect about two dozen Kepler giant planets can be confirmed via this method.

  8. Determination of water retention curves of concrete

    International Nuclear Information System (INIS)

    Villar, M.V.; Romero, F.J.

    2015-01-01

    The water retention curves of concrete and mortar obtained with two different techniques and following wetting and drying paths were determined. The material was the same used to manufacture the disposal cells of the Spanish surface facility of El Cabril. The water retention capacity of mortar is clearly higher than that of concrete when expressed as gravimetric water content, but the difference reduces when it is expressed as degree of saturation. Hysteresis between wetting and drying was observed for both materials, particularly for mortar. The tests went on for very long periods of time, and concerns were raised about the geochemical, mineralogical and porosity changes occurring in the materials during the determinations (changes in dry mass, grain density and sample volume) and their repercussion on the results obtained (water content and degree of saturation computation). Also, the fact of having used techniques applying total and matric suction could have affected the results. (authors)

  9. First results of saturation curve measurements of heat-resistant steel using GEANT4 and MCNP5 codes

    International Nuclear Information System (INIS)

    Hoang, Duc-Tam; Tran, Thien-Thanh; Le, Bao-Tran; Vo, Hoang-Nguyen; Chau, Van-Tao; Tran, Kim-Tuyet; Huynh, Dinh-Chuong

    2015-01-01

    A gamma backscattering technique is applied to calculate the saturation curve and the effective mass attenuation coefficient of a material. A NaI(Tl) detector collimated by a collimator of large diameter is modeled by the Monte Carlo technique using both the MCNP5 and GEANT4 codes. The results show good agreement in the response functions of the scattering spectra for the two codes. Based on such spectra, the saturation curve of heat-resistant steel is determined. The results strongly confirm that it is appropriate to use a detector collimator of large diameter to obtain the scattering spectra, and this work also forms the basis of an experimental set-up for determining the thickness of a material. (author)

  10. Titration Curves: Fact and Fiction.

    Science.gov (United States)

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
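    As an example of using computing power to predict the shape of a titration curve, the sketch below solves the charge-balance equation for a weak acid titrated with a strong base at a few titrant volumes. The acid, concentrations and volumes are illustrative choices, not taken from the article.

        import numpy as np
        from scipy.optimize import brentq

        Ka, Kw = 1.8e-5, 1e-14          # acetic acid and water (25 °C)
        Ca, Va = 0.10, 25.0             # acid: mol/L and mL
        Cb = 0.10                       # NaOH titrant, mol/L

        def charge_balance(h, vb):
            """f(h) = 0 at the true [H+] for titrant volume vb (mL)."""
            ca = Ca * Va / (Va + vb)     # diluted analytical acid concentration
            na = Cb * vb / (Va + vb)     # Na+ concentration
            acetate = ca * Ka / (Ka + h) # [A-] from the dissociation equilibrium
            return h + na - acetate - Kw / h

        for vb in (0.0, 12.5, 25.0, 30.0):
            h = brentq(charge_balance, 1e-14, 1.0, args=(vb,))
            print(f"V_base = {vb:5.1f} mL  pH = {-np.log10(h):.2f}")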

  11. An angiographic technique for coronary fractional flow reserve measurement: in vivo validation.

    Science.gov (United States)

    Takarada, Shigeho; Zhang, Zhang; Molloi, Sabee

    2013-03-01

    Fractional flow reserve (FFR) is an important prognostic determinant in a clinical setting. However, its measurement currently requires the use of an invasive pressure wire, while an angiographic technique based on first-pass distribution analysis and scaling laws can be used to measure FFR using only image data. Eight anesthetized swine were instrumented with a flow probe on the proximal segment of the left anterior descending (LAD) coronary arteries. Volumetric blood flow from the flow probe (Qp), coronary pressure (Pa) and right atrium pressure (Pv) were continuously recorded. Flow-probe-based FFR (FFRq) was measured from the ratio of flow with and without stenosis. To determine the angiography-based FFR (FFRa), the ratio of blood flow in the presence of a stenosis (QS) to theoretically normal blood flow (QN) was calculated. A region of interest in the LAD arterial bed was drawn to generate time-density curves using angiographic images. QS was measured using a time-density curve and the assumption that blood was momentarily replaced with contrast agent during the injection. QN was estimated from the total coronary arterial volume using scaling laws. Pressure-wire measurements of FFR (FFRp), calculated from the ratio of distal coronary pressure (Pd) to proximal pressure (Pa), were continuously obtained during the study. A total of 54 measurements of FFRa, FFRp, and FFRq were taken. FFRa showed a good correlation with FFRq (FFRa = 0.97 FFRq + 0.06, r² = 0.80, p < 0.001), although FFRp overestimated FFRq (FFRp = 0.657 FFRq + 0.313, r² = 0.710, p < 0.0001). Additionally, the Bland-Altman analysis showed close agreement between FFRa and FFRq. This angiographic technique for measuring FFR can potentially be used for both anatomical and physiological assessment of a coronary stenosis during routine diagnostic cardiac catheterization, requiring no pressure wires.

  12. Inverse Diffusion Curves Using Shape Optimization.

    Science.gov (United States)

    Zhao, Shuang; Durand, Fredo; Zheng, Changxi

    2018-07-01

    The inverse diffusion curve problem focuses on automatic creation of diffusion curve images that resemble user provided color fields. This problem is challenging since the 1D curves have a nonlinear and global impact on resulting color fields via a partial differential equation (PDE). We introduce a new approach complementary to previous methods by optimizing curve geometry. In particular, we propose a novel iterative algorithm based on the theory of shape derivatives. The resulting diffusion curves are clean and well-shaped, and the final image closely approximates the input. Our method provides a user-controlled parameter to regularize curve complexity, and generalizes to handle input color fields represented in a variety of formats.

  13. Analyzing Multiple-Choice Questions by Model Analysis and Item Response Curves

    Science.gov (United States)

    Wattanakasiwich, P.; Ananta, S.

    2010-07-01

    In physics education research, the main goal is to improve physics teaching so that most students understand physics conceptually and be able to apply concepts in solving problems. Therefore many multiple-choice instruments were developed to probe students' conceptual understanding in various topics. Two techniques including model analysis and item response curves were used to analyze students' responses from Force and Motion Conceptual Evaluation (FMCE). For this study FMCE data from more than 1000 students at Chiang Mai University were collected over the past three years. With model analysis, we can obtain students' alternative knowledge and the probabilities for students to use such knowledge in a range of equivalent contexts. The model analysis consists of two algorithms—concentration factor and model estimation. This paper only presents results from using the model estimation algorithm to obtain a model plot. The plot helps to identify a class model state whether it is in the misconception region or not. Item response curve (IRC) derived from item response theory is a plot between percentages of students selecting a particular choice versus their total score. Pros and cons of both techniques are compared and discussed.
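    An item response curve of the kind described can be sketched in a few lines: bin students by total score and compute, within each bin, the fraction selecting each answer choice. The toy data below are synthetic, not actual FMCE responses.

        import numpy as np

        def item_response_curves(responses, scores, n_choices, n_bins=8):
            """Fraction of students picking each choice, binned by total score.

            responses : (n_students,) chosen option index for one item
            scores    : (n_students,) total test score
            """
            bins = np.linspace(scores.min(), scores.max(), n_bins + 1)
            which = np.clip(np.digitize(scores, bins) - 1, 0, n_bins - 1)
            curves = np.zeros((n_choices, n_bins))
            for b in range(n_bins):
                in_bin = which == b
                if in_bin.any():
                    counts = np.bincount(responses[in_bin], minlength=n_choices)
                    curves[:, b] = counts / in_bin.sum()
            return bins, curves

        # Toy data: choice 2 is correct and grows more popular with total score
        rng = np.random.default_rng(4)
        scores = rng.integers(0, 33, 1000)
        p_correct = scores / 32.0
        responses = np.where(rng.random(1000) < p_correct, 2,
                             rng.integers(0, 5, 1000))
        bins, curves = item_response_curves(responses, scores, n_choices=5)
        print(np.round(curves[2], 2))   # correct-answer curve rises with score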

  14. Retrograde curves of solidus and solubility

    International Nuclear Information System (INIS)

    Vasil'ev, M.V.

    1979-01-01

    The investigation was concerned with constitutional diagrams of the eutectic type with a "retrograde solidus" and "retrograde solubility curve", which must be considered as diagrams with a degenerate monotectic transformation. The solidus and the solubility curves form a retrograde curve with a common retrograde point representing the solubility maximum. The two branches of the retrograde curve can be described with the aid of two similar equations. Corresponding equations are presented for the Cd-Zn system, and the possibility of predicting the course of the solubility curve is shown.

  15. [Customized and non-customized French intrauterine growth curves. II - Comparison with existing curves and benefits of customization].

    Science.gov (United States)

    Ego, A; Prunet, C; Blondel, B; Kaminski, M; Goffinet, F; Zeitlin, J

    2016-02-01

    Our aim is to compare the new French EPOPé intrauterine growth curves, developed to address the guidelines 2013 of the French College of Obstetricians and Gynecologists, with reference curves currently used in France, and to evaluate the consequences of their adjustment for fetal sex and maternal characteristics. Eight intrauterine and birthweight curves, used in France were compared to the EPOPé curves using data from the French Perinatal Survey 2010. The influence of adjustment on the rate of SGA births and the characteristics of these births was analysed. Due to their birthweight values and distribution, the selected intrauterine curves are less suitable for births in France than the new curves. Birthweight curves led to low rates of SGA births from 4.3 to 8.5% compared to 10.0% with the EPOPé curves. The adjustment for maternal and fetal characteristics avoids the over-representation of girls among SGA births, and reclassifies 4% of births. Among births reclassified as SGA, the frequency of medical and obstetrical risk factors for growth restriction, smoking (≥10 cigarettes/day), and neonatal transfer is higher than among non-SGA births (P<0.01). The EPOPé curves are more suitable for French births than currently used curves, and their adjustment improves the identification of mothers and babies at risk of growth restriction and poor perinatal outcomes. Copyright © 2015 Elsevier Masson SAS. All rights reserved.

  16. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
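    A minimal sketch of the proposed first-derivative-versus-temperature view: numerically differentiate a synthetic cooling curve and look for arrest points where dT/dt approaches zero. The curve shape and threshold are invented for illustration.

        import numpy as np

        # Synthetic cooling curve: steady cooling plus a latent-heat "plateau"
        t = np.linspace(0, 600, 1200)                             # time, s
        T = 700 - 0.5 * t + 40 * np.exp(-((t - 300) / 40) ** 2)   # temperature, degC

        dTdt = np.gradient(T, t)   # first derivative of the cooling curve

        # The traditional view plots dT/dt versus time; the alternative proposed
        # here plots dT/dt versus T. Arrest points appear where dT/dt nears zero.
        arrest_T = T[np.abs(dTdt) < 0.05]
        print("arrest temperatures near:", np.round(arrest_T[:3], 1), "degC")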

  17. Sediment Curve Uncertainty Estimation Using GLUE and Bootstrap Methods

    Directory of Open Access Journals (Sweden)

    aboalhasan fathabadi

    2017-02-01

    Introduction: In order to implement watershed practices to decrease the effects of soil erosion, it is necessary to estimate the sediment output of the watershed. The sediment rating curve is the most conventional tool used to estimate sediment. Owing to sampling errors and short records, there are uncertainties in estimating sediment using sediment rating curves. In this research, bootstrap and Generalized Likelihood Uncertainty Estimation (GLUE) resampling techniques were used to calculate suspended sediment loads by means of sediment rating curves. Materials and Methods: The total drainage area of the Sefidrood watershed is about 560 000 km². In this study, uncertainty in suspended sediment rating curves was estimated at four stations, including Motorkhane, Miyane Tonel Shomare 7, Stor and Glinak, constructed on the Ayghdamosh, Ghrangho, Ghezel Ozan and Shahrod rivers, respectively. Data were randomly divided into a training data set (80 per cent) and a test set (20 per cent) by Latin hypercube random sampling. Different suspended sediment rating curve equations were fitted to log-transformed values of sediment concentration and discharge, and the best-fit models were selected based on the lowest root mean square error (RMSE) and the highest coefficient of determination (R²). In the GLUE methodology, different parameter sets were sampled randomly from a prior probability distribution. For each station, using the sampled parameter sets and the selected suspended sediment rating curve equation, suspended sediment concentration values were estimated several times (100 000 to 400 000 times). With respect to the likelihood function and a certain subjective threshold, parameter sets were divided into behavioral and non-behavioral parameter sets. Finally, using the behavioral parameter sets, the 95% confidence intervals for suspended sediment concentration due to parameter uncertainty were estimated. In the bootstrap methodology, observed suspended sediment and discharge vectors were resampled with replacement B (set to
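    A compressed sketch of the GLUE step for a power-law sediment rating curve C = aQ^b follows: sample (a, b) from wide priors, retain the "behavioral" sets via an informal likelihood threshold, and read uncertainty bands off the behavioral predictions. The synthetic data, priors, likelihood measure and threshold are all illustrative choices.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic "observed" discharge Q (m3/s) and sediment concentration C (mg/L)
        Q = rng.uniform(5, 500, 120)
        C = 0.02 * Q ** 1.6 * rng.lognormal(0.0, 0.35, Q.size)

        # GLUE: sample rating-curve parameters (a, b) from wide priors
        n = 20_000
        a = rng.uniform(0.001, 0.1, n)
        b = rng.uniform(1.0, 2.5, n)
        pred = a[:, None] * Q[None, :] ** b[:, None]
        sse = np.sum((np.log(pred) - np.log(C)) ** 2, axis=1)
        like = np.exp(-0.5 * sse / sse.min())        # informal likelihood measure
        behavioral = like > np.quantile(like, 0.99)  # keep the best 1% of samples

        # 95% uncertainty band for C at a chosen discharge, from behavioral sets
        c0 = a[behavioral] * 100.0 ** b[behavioral]
        print("95% band at Q = 100:", np.percentile(c0, [2.5, 97.5]))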

  18. Estimation of error on the cross-correlation, phase and time lag between evenly sampled light curves

    Science.gov (United States)

    Misra, R.; Bora, A.; Dewangan, G.

    2018-04-01

    Temporal analysis of radiation from astrophysical sources like Active Galactic Nuclei, X-ray binaries and gamma-ray bursts provides information on the geometry and sizes of the emitting regions. Establishing that two light curves in different energy bands are correlated, and measuring the phase and time lag between them, is an important and frequently used temporal diagnostic. Generally the estimates are obtained by dividing the light curves into a large number of adjacent intervals to find the variance, or by using numerically expensive simulations. In this work we present alternative expressions for the estimate of the errors on the cross-correlation, phase and time lag between two shorter light curves when they cannot be divided into segments. The estimates presented here thus allow for the analysis of light curves with a relatively small number of points, as well as for obtaining information on the longest time-scales available. The expressions have been tested using 200 light curves simulated from both white and 1/f stochastic processes with measurement errors. We also present an application to the XMM-Newton light curves of the Active Galactic Nucleus Akn 564. The example shows that the estimates presented here allow for the analysis of light curves with a relatively small (~1000) number of points.
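    The paper's contribution is the analytic error expressions themselves, which are not reproduced here; the sketch below only shows the basic measurement they attach to, estimating the time lag between two evenly sampled light curves from the peak of the normalized cross-correlation. The simulated "soft" and "hard" band curves and the injected lag are synthetic.

        import numpy as np

        def cross_correlation(x, y):
            """Normalized cross-correlation of two evenly sampled light curves."""
            x = (x - x.mean()) / x.std()
            y = (y - y.mean()) / y.std()
            lags = np.arange(-len(x) + 1, len(x))
            ccf = np.correlate(x, y, mode="full") / len(x)
            return lags, ccf

        rng = np.random.default_rng(6)
        n, true_lag = 1000, 7
        s = np.convolve(rng.normal(size=n + true_lag), np.ones(20) / 20, mode="same")
        hard = s[true_lag:] + 0.2 * rng.normal(size=n)   # leading "hard band"
        soft = s[:n] + 0.2 * rng.normal(size=n)          # delayed "soft band"

        lags, ccf = cross_correlation(soft, hard)
        print("peak lag:", lags[np.argmax(ccf)], "bins")  # close to true_lag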

  19. A neural network for the recognition of synthetic Bragg curves

    Energy Technology Data Exchange (ETDEWEB)

    Reynoso V, M R; Vega C, J J; Fernandez A, J; Belmont M, E; Policroniades R, R; Moreno B, E [Instituto Nacional de Investigaciones Nucleares, Salazar, Edo. de Mexico. (Mexico)

    1997-12-31

    An ionization chamber technique named Bragg curve spectroscopy was employed. The Bragg peak amplitude is a monotonically increasing function of Z, which permits elements to be identified through its measurement. A better approach to this measurement is to use neural networks for the recognition of the Bragg curve. (Author).

  20. Residual stress measurement by X-ray diffraction with the Gaussian curve method and its automation

    International Nuclear Information System (INIS)

    Kurita, M.

    1987-01-01

    An X-ray technique with the Gaussian curve method and its automation are described for rapid and nondestructive measurement of residual stress. A simplified equation for measuring the stress by the Gaussian curve method is derived, because in its previous form this method required laborious calculation. The residual stress can be measured in a few minutes, depending on the material, using an automated X-ray stress analyzer with a microcomputer, which was developed in the laboratory. The residual stress distribution of a partially induction-hardened and tempered (at 280 °C) steel bar was measured with the Gaussian curve method. A sharp residual tensile stress peak of 182 MPa appeared right outside the hardened region, at which fatigue failure is liable to occur.
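    The core of the Gaussian curve method is locating the diffraction peak by fitting a Gaussian — equivalently, a parabola in log intensity — to the top of the profile; the stress then follows from how the peak position shifts with specimen tilt (sin²ψ). The sketch below implements only the peak-location step on a synthetic profile; the angles, intensities and the 20% window are invented.

        import numpy as np

        def gaussian_peak_center(two_theta, intensity):
            """Peak position by the Gaussian curve method: a least-squares parabola
            fitted to log(intensity) near the top of the diffraction peak."""
            top = intensity > 0.8 * intensity.max()   # use the upper 20% of the peak
            c2, c1, _ = np.polyfit(two_theta[top], np.log(intensity[top]), 2)
            return -c1 / (2.0 * c2)                   # vertex of the parabola

        rng = np.random.default_rng(7)
        tt = np.linspace(155.0, 157.0, 81)            # 2-theta (deg), hypothetical
        prof = 1000 * np.exp(-((tt - 156.05) / 0.25) ** 2) + rng.normal(0, 10, tt.size)
        print(f"peak at 2-theta = {gaussian_peak_center(tt, prof):.3f} deg")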

  1. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantages of obtaining a reduced-order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady-state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  2. Precision Scaling Relations for Disk Galaxies in the Local Universe

    Science.gov (United States)

    Lapi, A.; Salucci, P.; Danese, L.

    2018-05-01

    We build templates of rotation curves as a function of the I-band luminosity via the mass modeling (by the sum of a thin exponential disk and a cored halo profile) of suitably normalized, stacked data from wide samples of local spiral galaxies. We then exploit such templates to determine fundamental stellar and halo properties for a sample of about 550 local disk-dominated galaxies with high-quality measurements of the optical radius R opt and of the corresponding rotation velocity V opt. Specifically, we determine the stellar M ⋆ and halo M H masses, the halo size R H and velocity scale V H, and the specific angular momenta of the stellar j ⋆ and dark matter j H components. We derive global scaling relationships involving such stellar and halo properties both for the individual galaxies in our sample and for their mean within bins; the latter are found to be in pleasing agreement with previous determinations by independent methods (e.g., abundance matching techniques, weak-lensing observations, and individual rotation curve modeling). Remarkably, the size of our sample and the robustness of our statistical approach allow us to attain an unprecedented level of precision over an extended range of mass and velocity scales, with 1σ dispersion around the mean relationships of less than 0.1 dex. We thus set new standard local relationships that must be reproduced by detailed physical models, which offer a basis for improving the subgrid recipes in numerical simulations, that provide a benchmark to gauge independent observations and check for systematics, and that constitute a basic step toward the future exploitation of the spiral galaxy population as a cosmological probe.

  3. Tempo curves considered harmful

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1993-01-01

    In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression

  4. A scaled underwater launch system accomplished by stress wave propagation technique

    International Nuclear Information System (INIS)

    Wei Yanpeng; Wang Yiwei; Huang Chenguang; Fang Xin; Duan Zhuping

    2011-01-01

    A scaled underwater launch system based on stress wave theory and the split Hopkinson pressure bar (SHPB) technique is developed to study the phenomenon of cavitation and other hydrodynamic features of high-speed submerged bodies. The present system can achieve a transient acceleration in the water instead of a long-time acceleration outside the water. The projectile can reach a maximum speed of 30 m/s in about 200 μs with the SHPB launcher. The cavitation characteristics in the stages of acceleration and deceleration are captured by a high-speed camera. The processes of cavitation inception, development and collapse are also simulated with the commercial software FLUENT, and the results are in good agreement with experiment. There is about 20-30% energy loss during the launching process; the mechanism of energy loss is preliminarily investigated by measuring the energy of the incident bar and the projectile. (authors)

  5. Patterns and sources of adult personality development: growth curve analyses of the NEO PI-R scales in a longitudinal twin study.

    Science.gov (United States)

    Bleidorn, Wiebke; Kandler, Christian; Riemann, Rainer; Spinath, Frank M; Angleitner, Alois

    2009-07-01

    The present study examined the patterns and sources of 10-year stability and change of adult personality assessed by the 5 domains and 30 facets of the Revised NEO Personality Inventory. Phenotypic and biometric analyses were performed on data from 126 identical and 61 fraternal twins from the Bielefeld Longitudinal Study of Adult Twins (BiLSAT). Consistent with previous research, LGM analyses revealed significant mean-level changes in domains and facets suggesting maturation of personality. There were also substantial individual differences in the change trajectories of both domain and facet scales. Correlations between age and trait changes were modest and there were no significant associations between change and gender. Biometric extensions of growth curve models showed that 10-year stability and change of personality were influenced by both genetic as well as environmental factors. Regarding the etiology of change, the analyses uncovered a more complex picture than originally stated, as findings suggest noticeable differences between traits with respect to the magnitude of genetic and environmental effects. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  6. Analysis of glow curves of TL readouts of CaSO4:Dy teflon based TLD badge in semiautomatic TLD badge reader

    International Nuclear Information System (INIS)

    Pradhan, S.M.; Sneha, C.; Adtani, M.M.

    2010-01-01

    The facility of glow curve storage and recall provided in the reader software is helpful for manual screening of the glow curves; however, no further analysis is possible due to the absence of numerical TL data at the sampling intervals. In the present study, glow curves were digitized by modifying the reader software and then normalized to make them independent of the dose. The normalized glow curves were then analyzed by dividing them into five equal parts on the time scale. This method of analysis is used to correlate the variation of the total TL counts of the three discs with the time elapsed post-irradiation.

  7. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    Science.gov (United States)

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the ability of VASs to overcome the limitations of response styles in Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as the fact that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic methods of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, reduced response-style bias, and improved parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.

  8. Study of a new glass matrix by the thermoluminescence technique

    International Nuclear Information System (INIS)

    Ferreira, Pamela Z.; Vedovato, Uly P.; Cunha, Diego M. da; Dantas, Noelio O.; Silva, Anielle C.A.; Neves, Lucio P.; Perini, Ana P.; Carrera, Betzabel N.S.; Watanabe, Shigueo

    2015-01-01

    The thermoluminescence technique is widely used both for personal and for high-dose dosimetry. In this work, the thermoluminescence technique was utilized to study a new glass matrix, with nominal composition of 20Li₂CO₃·10Al₂O₃·20BaO·50B₂O₃ (mol%), irradiated with different doses in a ⁶⁰Co source. The glow curves and the dose-response curve were obtained for radiation doses between 50 Gy and 900 Gy. The results showed that this new glass matrix presents potential for use in high-dose dosimetry. (author)

  9. A procedure for the improvement in the determination of a TXRF spectrometer sensitivity curve

    International Nuclear Information System (INIS)

    Bennun, Leonardo; Sanhueza, Vilma

    2010-01-01

    A simple procedure is proposed to determine the total reflection X-ray fluorescence (TXRF) spectrometer sensitivity curve; this procedure provides better precision and accuracy than the standard established method. It uses individual pure substances instead of vendor-certified reference calibration standards, which are expensive and lack any method to check their quality. This method avoids problems like uncertainties in the determination of the sensitivity curve according to different standards. It also avoids the need for validation studies between different techniques to assure the quality of TXRF results. (author)

  10. [Chinese neonatal birth weight curve for different gestational age].

    Science.gov (United States)

    Zhu, Li; Zhang, Rong; Zhang, Shulian; Shi, Wenjing; Yan, Weili; Wang, Xiaoli; Lyu, Qin; Liu, Ling; Zhou, Qin; Qiu, Quanfang; Li, Xiaoying; He, Haiying; Wang, Jimei; Li, Ruichun; Lu, Jiarong; Yin, Zhaoqing; Su, Ping; Lin, Xinzhu; Guo, Fang; Zhang, Hui; Li, Shujun; Xin, Hua; Han, Yanqing; Wang, Hongyun; Chen, Dongmei; Li, Zhankui; Wang, Huiqin; Qiu, Yinping; Liu, Huayan; Yang, Jie; Yang, Xiaoli; Li, Mingxia; Li, Wenjing; Han, Shuping; Cao, Bei; Yi, Bin; Zhang, Yihui; Chen, Chao

    2015-02-01

    Since 1986, the reference of birth weight for gestational age has not been updated. The aim of this study was to set up a Chinese neonatal network to investigate the current situation of birth weight in China, especially preterm birth weight, and to develop a new reference and curve for birth weight for gestational age. A nationwide neonatology network was established in China. The survey was carried out in 63 hospitals of 23 provinces, municipalities and autonomous regions. We continuously collected the information on live births in participating hospitals during the study period of 2011-2014. Data describing birth weight and gestational age were collected prospectively. Each newborn's birth weight was measured with an electronic scale within 2 hours after birth, with the baby undressed. The evaluation of gestational age was based on the combination of the mother's last menstrual period, ultrasound in the first trimester and gestational age estimation by a gestational age scoring system. The growth curve was drawn using the LMSP method, implemented in the GAMLSS 1.9-4 package in R 2.11.1. A total of 159 334 newborn infants were enrolled in this study, 84 447 male and 74 907 female. The mean birth weight was (3 232 ± 555) g; the mean birth weight of male newborns was (3 271 ± 576) g and that of female newborns (3 188 ± 528) g. Testing of the variables' distributions suggested that gestational age and birth weight did not follow a normal distribution; the optimal distribution for them was the BCT distribution. The Q-Q plot and worm plot tests suggested that this curve fitted the distribution optimally. The male and female neonatal birth weight curves were developed using the same method. The GAMLSS method was thus used to establish a nationwide neonatal birth weight curve, updating the birth weight reference for the first time in 28 years.

  11. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a means of physically simulating the long-term mechanical response of deep-ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the scaling factor squared. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Existing centrifuges are presently being modified to permit scale-model testing, which will start next year.

  12. Anterior Overgrowth in Primary Curves, Compensatory Curves and Junctional Segments in Adolescent Idiopathic Scoliosis.

    Science.gov (United States)

    Schlösser, Tom P C; van Stralen, Marijn; Chu, Winnie C W; Lam, Tsz-Ping; Ng, Bobby K W; Vincken, Koen L; Cheng, Jack C Y; Castelein, René M

    2016-01-01

    Although much attention has been given to the global three-dimensional aspect of adolescent idiopathic scoliosis (AIS), the accurate three-dimensional morphology of the primary and compensatory curves, as well as the intervening junctional segments, in the scoliotic spine has not been described before. A unique series of 77 AIS patients with high-resolution CT scans of the spine, acquired for surgical planning purposes, was included and compared to 22 healthy controls. Non-idiopathic curves were excluded. Endplate segmentation and determination of the local longitudinal axis in the endplate plane enabled semi-automatic geometric analysis of the complete three-dimensional morphology of the spine, taking inter-vertebral rotation, intra-vertebral torsion and coronal and sagittal tilt into account. Intraclass correlation coefficients for interobserver reliability were 0.98-1.00. Coronal deviation, axial rotation and the exact length discrepancies in the reconstructed sagittal plane, as defined per vertebra and disc, were analyzed for each primary and compensatory curve as well as for the junctional segments in-between. The anterior-posterior difference of spinal length, based on "true" anterior and posterior points on the endplates, was +3.8% for thoracic and +9.4% for (thoraco)lumbar curves, while the junctional segments were almost straight. This differed significantly from control group thoracic kyphosis (-4.1%; P<0.001) and lumbar lordosis (+7.8%; P<0.001). For all primary as well as compensatory curves, we observed linear correlations between the coronal Cobb angle, axial rotation and the anterior-posterior length difference (r≥0.729 for thoracic curves; r≥0.485 for (thoraco)lumbar curves). Excess anterior length of the spine in AIS has been described as a generalized growth disturbance, causing relative anterior spinal overgrowth. This study is the first to demonstrate that this anterior overgrowth is not a generalized phenomenon: it is confined to the primary and compensatory curves.

  13. Development of a statistically-based lower bound fracture toughness curve (K_IR curve)

    International Nuclear Information System (INIS)

    Wullaert, R.A.; Server, W.L.; Oldfield, W.; Stahlkopf, K.E.

    1977-01-01

    A program of initiation fracture toughness measurements on fifty heats of nuclear pressure vessel production materials (including weldments) was used to develop a methodology for establishing a revised reference toughness curve. The new methodology was statistically developed and provides a predefined confidence limit (or tolerance limit) for fracture toughness based upon many heats of a particular type of material. Overall reference curves were developed for seven specific materials using large specimen static and dynamic fracture toughness results. The heat-to-heat variation was removed by normalizing both the fracture toughness and temperature data with the precracked Charpy tanh curve coefficients for each particular heat. The variance and distribution about the curve were determined, and lower bounds of predetermined statistical significance were drawn based upon a Pearson distribution in the lower shelf region (since the data were skewed to high values) and a t-distribution in the transition temperature region (since the data were normally distributed)

  14. Prehospital Acute Stroke Severity Scale to Predict Large Artery Occlusion: Design and Comparison With Other Scales.

    Science.gov (United States)

    Hastrup, Sidsel; Damgaard, Dorte; Johnsen, Søren Paaske; Andersen, Grethe

    2016-07-01

    We designed and validated a simple prehospital stroke scale to identify emergent large vessel occlusion (ELVO) in patients with acute ischemic stroke and compared the scale to other published scales for prediction of ELVO. A national historical test cohort of 3127 patients with information on intracranial vessel status (angiography) before reperfusion therapy was identified. National Institutes of Health Stroke Scale (NIHSS) items with the highest predictive value for occlusion of a large intracranial artery were identified, and the most optimal combination meeting predefined criteria to ensure usefulness in the prehospital phase was determined. The predictive performance of the Prehospital Acute Stroke Severity (PASS) scale was compared with other published scales for ELVO. The PASS scale was composed of 3 NIHSS scores: level of consciousness (month/age), gaze palsy/deviation, and arm weakness. In the derivation of PASS, two-thirds of the test cohort was used, showing an accuracy (area under the curve) of 0.76 for detecting large arterial occlusion. The optimal cut point of ≥2 abnormal scores showed: sensitivity=0.66 (95% CI, 0.62-0.69), specificity=0.83 (0.81-0.85), and area under the curve=0.74 (0.72-0.76). Validation on the remaining third of the test cohort showed similar performance. Patients with a large artery occlusion on angiography and PASS ≥2 had a median NIHSS score of 17 (interquartile range=6), as opposed to a median NIHSS score of 6 (interquartile range=5) for PASS <2. The PASS scale performed as well as other scales predicting ELVO while being simpler. The PASS scale is simple and has promising accuracy for prediction of ELVO in the field. © 2016 American Heart Association, Inc.
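
    A minimal sketch of how such a cut-point scale is evaluated (synthetic data, not the Danish cohort; the item-positive rates are assumptions): form the PASS score from three dichotomized items and summarize discrimination with the ROC area.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(1)
      n = 1000
      elvo = rng.random(n) < 0.3                  # true large vessel occlusion (synthetic labels)
      p = np.where(elvo[:, None], 0.7, 0.2)       # per-item abnormality rates (assumed)
      items = rng.random((n, 3)) < p              # LOC, gaze palsy/deviation, arm weakness
      pass_score = items.sum(axis=1)              # PASS = number of abnormal items (0-3)

      print("AUC:", round(roc_auc_score(elvo, pass_score), 2))
      pred = pass_score >= 2                      # the published cut point
      sens = (pred & elvo).sum() / elvo.sum()
      spec = (~pred & ~elvo).sum() / (~elvo).sum()
      print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")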

  15. Automated curved planar reformation of 3D spine images

    International Nuclear Information System (INIS)

    Vrtovec, Tomaz; Likar, Bostjan; Pernus, Franjo

    2005-01-01

    Traditional techniques for visualizing anatomical structures are based on planar cross-sections from volume images, such as images obtained by computed tomography (CT) or magnetic resonance imaging (MRI). However, planar cross-sections taken in the coordinate system of the 3D image often do not provide sufficient or sufficiently high-quality diagnostic information, because planar cross-sections cannot follow curved anatomical structures (e.g. arteries, colon, spine, etc.). Therefore, not all of the important details can be shown simultaneously in any planar cross-section. To overcome this problem, reformatted images in the coordinate system of the inspected structure must be created. This operation is usually referred to as curved planar reformation (CPR). In this paper we propose an automated method for CPR of 3D spine images, which is based on the image transformation from the standard image-based to a novel spine-based coordinate system. The axes of the proposed spine-based coordinate system are determined on the curve that represents the vertebral column, and the rotation of the vertebrae around the spine curve, both of which are described by polynomial models. The optimal polynomial parameters are obtained in an image analysis based optimization framework. The proposed method was qualitatively and quantitatively evaluated on five CT spine images. The method performed well on both normal and pathological cases and was consistent with manually obtained ground truth data. The proposed spine-based CPR benefits from reduced structural complexity in favour of improved feature perception of the spine. The reformatted images are diagnostically valuable and enable easier navigation, manipulation and orientation in 3D space. Moreover, reformatted images may prove useful for segmentation and other image analysis tasks.

  16. Academia-industry collaboration feeds exponential growth curve

    CERN Document Server

    Jones Bey Hassaun, A

    2004-01-01

    The use of silicon strip detectors in high-energy particle tracking is discussed. The functional strength of silicon for high-energy particle physics as well as astrophysics lies in the ability to detect passage of charged particles with micron-scale spatial resolution. In addition to vertex detection, silicon strip detectors also provide full tracking detection to include momentum determination of particles in the magnetic field. Even if silicon detectors for basic science applications do not continue to grow larger, exponential growth of the technology for terrestrial commercial applications is likely to follow a healthy growth curve, as researchers continue to adapt silicon detector technology for low- dose medical x-ray imaging. (Edited abstract)

  17. Melting curve of helium from 4 to 25 K

    Energy Technology Data Exchange (ETDEWEB)

    Krause, J K; Swenson, C A [Ames Lab., Iowa (USA)

    1976-07-01

    New data for the melting curve of 4He for temperatures between 4 and 25 K agree with earlier results by Mills and Grilly (Phys. Rev.; 99:480 (1955)) and by Crawford and Daniels (J. Chem. Phys.; 55:5651 (1971)), but disagree with the relationship given by Glassford and Smith (Cryogenics; 6:193 (1966)), who carried out equation-of-state studies on fluid and solid 4He. The reasons for the disagreement are not clear, but can be interpreted in terms of temperature-scale inaccuracies which could have influenced all of their results.

  18. SILC for SILC: Single Institution Learning Curve for Single-Incision Laparoscopic Cholecystectomy

    Directory of Open Access Journals (Sweden)

    Chee Wei Tay

    2013-01-01

    Objectives. We report the single-incision laparoscopic cholecystectomy (SILC) learning experience of 2 hepatobiliary surgeons and the factors that could influence the learning curve of SILC. Methods. Patients who underwent SILC by Surgeons A and B were studied retrospectively. Operating time, conversion rate, reason for conversion, identity of first assistants, and their experience with previous laparoscopic cholecystectomy (LC) were analysed. CUSUM analysis was used to identify the learning curve. Results. One hundred and nineteen SILC cases were performed by Surgeons A and B. Eight cases required an additional port. In the CUSUM analysis, most conversions occurred during the first 19 cases. Operating time was significantly lower (62.5 versus 90.6 min, P = 0.04) after the learning curve had been overcome. Operating time decreased as experience increased, especially for Surgeon B. Most conversions were due to adhesions at Calot's triangle. Acute cholecystitis, patients' BMI, and previous surgery did not appear to influence the conversion rate. Mean operating times of cases assisted by a first assistant with and without LC experience were 48 and 74 minutes, respectively (P = 0.004). Conclusion. Nineteen cases are needed to overcome the learning curve of SILC. Team work, an assistant with conventional LC experience, and appropriate equipment and technique are the important factors in performing SILC.
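
    CUSUM learning-curve analysis of operating times can be sketched as follows (synthetic times and an illustrative target level, not the authors' data): a sustained upward slope of the cumulative sum marks the learning phase, and the turning point approximates the case at which the curve is overcome.

      import numpy as np

      rng = np.random.default_rng(2)
      # Synthetic operating times (min): longer early cases, then a plateau.
      times = np.concatenate([rng.normal(90, 15, 19), rng.normal(62, 10, 40)])
      target = times.mean()                 # reference level for the CUSUM

      # Cumulative sum of deviations from the target: rises while cases run
      # slower than average, flattens/declines once proficiency is reached.
      cusum = np.cumsum(times - target)
      turning_point = int(np.argmax(cusum)) + 1
      print("learning curve overcome at case ~", turning_point)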

  19. To the calculation technique and interpretation of atom radial distribution curves in ternary alloy systems

    International Nuclear Information System (INIS)

    Dutchak, Ya.I.; Frenchko, V.S.; Voznyak, O.M.

    1975-01-01

    Several models of the structure of three-component melts are considered: the "quasi-eutectic" model, the model of statistical distribution of atoms and the "polystructural" model. Analytical expressions are given for the area under the first maximum of the curve describing the radial distribution of atoms for certain versions of the "polystructural" model. Using the In-Ga-Ga and Bi-Cd-Sn eutectic melts as examples, the possibility of estimating the nature of atomic ordering in three-component melts by testing the models under consideration is demonstrated.

  20. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, has never been proven. This paper proves the equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
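
    Under the usual lognormal assumptions (an assumption here, not quoted from the paper), the family is p(a | A_m) = Φ(ln(a/A_m)/β_R) with ln A_m ~ N(ln Â, β_U²), and the composite curve Φ(ln(a/Â)/√(β_R² + β_U²)) coincides with the mean curve by the normal identity ∫Φ(α + βz)φ(z)dz = Φ(α/√(1+β²)). A quick numerical check of that equivalence, with illustrative parameters:

      import numpy as np
      from scipy.stats import norm

      beta_R, beta_U, A_hat = 0.3, 0.4, 1.0          # illustrative fragility parameters
      a = np.linspace(0.2, 3.0, 15)                  # ground-motion levels

      # Mean over the family: average Phi(ln(a/A_m)/beta_R) with lognormal A_m,
      # using an equal-probability sample of the underlying normal variable z.
      z = norm.ppf(np.linspace(0.001, 0.999, 4001))
      A_m = A_hat * np.exp(beta_U * z)
      mean_curve = norm.cdf(np.log(a[:, None] / A_m[None, :]) / beta_R).mean(axis=1)

      # Composite single curve with combined log-standard deviation.
      composite = norm.cdf(np.log(a / A_hat) / np.hypot(beta_R, beta_U))

      print(np.max(np.abs(mean_curve - composite)))  # ~1e-3: the two curves agree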

  1. Sex- and Site-Specific Normative Data Curves for HR-pQCT.

    Science.gov (United States)

    Burt, Lauren A; Liang, Zhiying; Sajobi, Tolulope T; Hanley, David A; Boyd, Steven K

    2016-11-01

    The purpose of this study was to develop age-, site-, and sex-specific centile curves for common high-resolution peripheral quantitative computed tomography (HR-pQCT) and finite-element (FE) parameters for males and females older than 16 years. Participants (n = 866) from the Calgary cohort of the Canadian Multicentre Osteoporosis Study (CaMos) between the ages of 16 and 98 years were included in this study. Participants' nondominant radius and left tibia were scanned using HR-pQCT. Standard and automated segmentation methods were performed, and FE analysis estimated apparent bone strength. Centile curves were generated for males and females at the tibia and radius using the generalized additive models for location, scale, and shape (GAMLSS) package in R. After GAMLSS analysis, age-, sex-, and site-specific centiles (10th, 25th, 50th, 75th, 90th) for total bone mineral density and trabecular number, as well as failure load, were calculated. Clinicians and researchers can use these reference curves as a tool to assess bone health and changes in bone quality. © 2016 American Society for Bone and Mineral Research.

  2. MICA: Multiple interval-based curve alignment

    Science.gov (United States)

    Mann, Martin; Kahle, Hans-Peter; Beck, Matthias; Bender, Bela Johannes; Spiecker, Heinrich; Backofen, Rolf

    2018-01-01

    MICA enables the automatic synchronization of discrete data curves. To this end, characteristic points of the curves' shapes are identified. These landmarks are used within a heuristic curve registration approach to align profile pairs by mapping similar characteristics onto each other. In combination with a progressive alignment scheme, this enables the computation of multiple curve alignments. Multiple curve alignments are needed to derive meaningful representative consensus data of measured time or data series. MICA was already successfully applied to generate representative profiles of tree growth data based on intra-annual wood density profiles or cell formation data. The MICA package provides a command-line and graphical user interface. The R interface enables the direct embedding of multiple curve alignment computation into larger analyses pipelines. Source code, binaries and documentation are freely available at https://github.com/BackofenLab/MICA

  3. Power forward curves: a managerial perspective

    International Nuclear Information System (INIS)

    Nagarajan, Shankar

    1999-01-01

    This chapter concentrates on managerial applications of power forward curves, and examines the determinants of electricity prices, such as transmission constraints, electricity's inability to be stored conventionally, its seasonality and weather dependence, the generation stack, and swing risk. The electricity forward curve, classical arbitrage, construction of a forward curve, volatilities, and electricity forward curve models such as the jump-diffusion model, the mean-reverting heteroscedastic volatility model, and an econometric model of forward prices are examined. A managerial perspective on the applications of the forward curve is presented, covering plant valuation, capital budgeting, performance measurement, product pricing and structuring, asset optimisation, valuation of transmission options, and risk management.

  4. An Improved Ant Colony Matching by Using Discrete Curve Evolution

    OpenAIRE

    Saadi, Younes; Sari, Eka; Herawan, Tutut

    2014-01-01

    Part 1: Information & Communication Technology-EurAsia Conference 2014, ICT-EurAsia 2014. In this paper we present an improved Ant Colony Optimization (ACO) for contour matching, which can be used to match 2D shapes. The Discrete Curve Evolution (DCE) technique is used to simplify the extracted contour. In order to find the best correspondence between shapes, the matching process is formulated as a Quadratic Assignment Problem (QAP) and resolved by using Ant Colony Optimization.

  5. Template-assisted electrodeposition of Ni and Ni/Au nanowires on planar and curved substrates

    Science.gov (United States)

    Guiliani, Jason; Cadena, John; Monton, Carlos

    2018-02-01

    We present a variant of the template-assisted electrodeposition method that enables the synthesis of large arrays of nanowires (NWs) on flat and curved substrates. This method uses ultra-thin (50 nm-10 μm) anodic aluminum oxide membranes as a template. We have developed a procedure that uses a two-polymer protective layer to transfer these templates onto almost any surface. We have applied this technique to the fabrication of large arrays of Ni and segmented-composition Ni/Au NWs on silicon wafers, Cu tapes, and thin (0.2 mm) Cu wires. In all cases, complete coverage with NWs is achieved. The magnetic properties of these samples show an accentuated in-plane anisotropy, which is affected by the form of the substrate (flat or curved) and the length of the NWs. Unlike current lithography techniques, the fabrication method proposed here allows the integration of complex nanostructures into devices, which can be fabricated on unconventional surfaces.

  6. Holographic grating relaxation technique for soft matter science

    Energy Technology Data Exchange (ETDEWEB)

    Lesnichii, Vasilii, E-mail: vasilii.lesnichii@physchem.uni-freiburg.de [Institute of Physical Chemistry, Albertstraße 21, Institute of Macromolecular Chemistry, Stefan-Meier-Str. 31, Albert-Ludwigs Universität, Freiburg im Breisgau 79104 (Germany); ITMO University, Kronverksky prospekt 49, Saint-Petersburg 197101 (Russian Federation); Kiessling, Andy [Institute of Physical Chemistry, Albertstraße 21, Institute of Macromolecular Chemistry, Stefan-Meier-Str. 31, Albert-Ludwigs Universität, Freiburg im Breisgau 79104 (Germany); Current address: Illinois Institute of Technology, 10 West 33rd Street, Chicago,IL60616 (United States); Bartsch, Eckhard [Institute of Physical Chemistry, Albertstraße 21, Institute of Macromolecular Chemistry, Stefan-Meier-Str. 31, Albert-Ludwigs Universität, Freiburg im Breisgau 79104 (Germany); Veniaminov, Andrey, E-mail: veniaminov@phoi.ifmo.ru [ITMO University, Kronverksky prospekt 49, Saint-Petersburg 197101 (Russian Federation)

    2016-06-17

    The holographic grating relaxation technique, also known as forced Rayleigh scattering, consists basically of writing a holographic grating in the specimen of interest and monitoring its diffraction efficiency as a function of time, from which valuable information on mass or heat transfer and photoinduced transformations can be extracted. In more detail, the shape of the relaxation curve and the relaxation rate as a function of the grating period were found to be affected by the architecture of the diffusing species (molecular probes) that constitute the grating, as well as that of the environment they diffuse in, thus making it possible to access and study the spatial heterogeneity of materials and different modes of, e.g., polymer motion. The minimum displacements and spatial domains approachable by the technique are in the nanometer range, well below the spatial periods of holographic gratings. In the present paper, several cases of holographic relaxation in heterogeneous media and complex motions are exemplified. Nano- to micro-structures or inhomogeneities comparable in spatial scale with the holographic gratings manifest themselves in relaxation experiments via non-exponential decay (stepwise or stretched), a spatial-period-dependent apparent diffusion coefficient, or an unusual dependence of the diffusion coefficient on the molecular volume of the diffusing probes.

  7. Application of fracture toughness scaling models to the ductile-to- brittle transition

    International Nuclear Information System (INIS)

    Link, R.E.; Joyce, J.A.

    1996-01-01

    An experimental investigation of fracture toughness in the ductile-brittle transition range was conducted. A large number of ASTM A533, Grade B steel, bend and tension specimens with varying crack lengths were tested throughout the transition region. Cleavage fracture toughness scaling models were utilized to correct the data for the loss of constraint in short-crack specimens and tension geometries. The toughness scaling models were effective in reducing the scatter in the data, but tended to over-correct the results for the short-crack bend specimens. A proposed ASTM Test Practice for Fracture Toughness in the Transition Range, which employs a master curve concept, was applied to the results. The proposed master curve over-predicted the fracture toughness in the mid-transition, and a modified master curve was developed that more accurately modeled the transition behavior of the material. Finally, the modified master curve and the fracture toughness scaling models were combined to predict the as-measured fracture toughness of the short-crack bend and the tension specimens. It was shown that when the scaling models over-correct the data for loss of constraint, they can also lead to non-conservative estimates of the increase in toughness for low-constraint geometries.

  8. Biomechanical study of the funnel technique applied in thoracic ...

    African Journals Online (AJOL)

    ... a vertebral injury model of the anterior and central columns was made ... data were collected to eliminate creep and relaxation of soft tissues ... [Figure 3: pullout strength curves for the Magerl technique (A) and the funnel technique (B)]

  9. The training and learning process of transseptal puncture using a modified technique.

    Science.gov (United States)

    Yao, Yan; Ding, Ligang; Chen, Wensheng; Guo, Jun; Bao, Jingru; Shi, Rui; Huang, Wen; Zhang, Shu; Wong, Tom

    2013-12-01

    As transseptal (TS) puncture has become an integral part of many types of cardiac interventional procedures, the technique, first reported for measurement of left atrial pressure in the 1950s, continues to evolve. Our laboratory adopted a modified technique which uses only a coronary sinus catheter as the landmark for accomplishing TS punctures under fluoroscopy. The aim of this study was to prospectively evaluate the training and learning process for TS puncture guided by this modified technique. Guided by the training protocol, TS puncture was performed in 120 consecutive patients by three trainees without previous personal experience in TS catheterization, with one experienced trainer as a controller. We analysed the following parameters: one-puncture success rate, total procedure time, fluoroscopic time, and radiation dose. The learning curve was analysed using curve-fitting methodology. The first attempt at TS crossing was successful in 74 patients (82%), a second attempt was successful in 11 (12%), and in 5 patients the interatrial septum could not be punctured. The average starting process time was 4.1 ± 0.8 min, and the estimated mean learning plateau was 1.2 ± 0.2 min. The estimated mean learning rate for process time was 25 ± 3 cases. Important aspects of the learning curve can be estimated by fitting inverse curves for TS puncture. The study demonstrated that this technique is a simple, safe, economical, and effective approach for learning TS puncture. Based on the statistical analysis, approximately 29 TS punctures will be needed for a trainee to pass the steepest area of the learning curve.

  10. Development of a novel once-through flow visualization technique for kinetic study of bulk and surface scaling

    Science.gov (United States)

    Sanni, O.; Bukuaghangin, O.; Huggan, M.; Kapur, N.; Charpentier, T.; Neville, A.

    2017-10-01

    There is considerable interest in investigating surface crystallization in order to gain a full mechanistic understanding of how layers of sparingly soluble salts (scale) build up on component surfaces. Despite much recent attention, a suitable methodology is still needed to improve the understanding of precipitation/deposition systems and enable the construction of an accurate surface-deposition kinetic model. In this work, an experimental flow rig and associated methodology for studying mineral scale deposition are developed. The once-through flow rig allows mineral scale precipitation and surface deposition to be followed in situ and in real time. The rig enables assessment of the effects of various parameters, such as brine chemistry and scaling indices, temperature, flow rate, and scale-inhibitor concentration, on scaling kinetics. Calcium carbonate (CaCO3) scaling at different values of the saturation ratio (SR) is evaluated using image-analysis procedures that enable assessment of surface coverage and of the nucleation and growth of the particles with time. The turbidity measured in the flow cell is zero for all SR values considered. The residence time from the mixing point to the sample is shorter than the induction time for bulk precipitation; therefore, there are no crystals in the bulk solution as the flow passes through the sample. The study shows that surface scaling is not always the result of pre-precipitated crystals in the bulk solution. The technique enables precipitation and surface deposition of scale to be decoupled and the surface deposition process to be studied in real time under constant conditions.

  11. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets). As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  12. Water Stress Scatters Nitrogen Dilution Curves in Wheat

    Directory of Open Access Journals (Sweden)

    Marianne Hoogmoed

    2018-04-01

    Nitrogen dilution curves relate a crop's critical nitrogen concentration (%Nc) to biomass (W) according to the allometric model %Nc = a·W^(-b). This model has a strong theoretical foundation, and parameters a and b show little variation for well-watered crops. Here we explore the robustness of this model for water-stressed crops. We established experiments to examine the combined effects of water stress, phenology, partitioning of biomass, and water-soluble carbohydrates (WSC), as driven by environment and variety, on the %Nc of wheat crops. We compared models where %Nc was plotted against biomass, growth stage and thermal time; the models were similarly scattered. Residuals of the %Nc-biomass model at anthesis were positively related to biomass, stem:biomass ratio, Δ13C and water supply, and negatively related to ear:biomass ratio and concentration of WSC. These are physiologically meaningful associations explaining the scatter of biomass-based dilution curves. Residuals of the thermal-time model showed less consistent associations with these variables. The biomass dilution model developed for well-watered crops overestimates the nitrogen deficiency of water-stressed crops, and a biomass-based model is conceptually more justified than developmental models. This has implications for diagnostics and modelling. As theory is lagging, a greater degree of empiricism might be useful to capture environmental (chiefly water) and genotype-dependent traits in the determination of critical nitrogen for diagnostic purposes. Sensitivity analysis would help to decide whether scaling nitrogen dilution curves for crop water status and genotype-dependent parameters is needed.
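
    A minimal sketch of fitting the allometric dilution model %Nc = a·W^(-b) to biomass/nitrogen data (synthetic values; a and b below are not the published wheat parameters):

      import numpy as np
      from scipy.optimize import curve_fit

      def dilution(W, a, b):
          # Critical N concentration (%) as a power function of shoot biomass (t/ha).
          return a * W ** (-b)

      W = np.array([1.0, 2.0, 3.5, 5.0, 7.0, 9.0, 12.0])    # biomass, t/ha (synthetic)
      Nc = np.array([4.4, 3.6, 3.0, 2.7, 2.4, 2.2, 2.0])    # critical %N (synthetic)

      (a, b), _ = curve_fit(dilution, W, Nc, p0=(5.0, 0.4))
      print(f"%Nc = {a:.2f} * W^-{b:.2f}")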

  13. The estimation of I–V curves of PV panel using manufacturers’ I–V curves and evolutionary strategy

    International Nuclear Information System (INIS)

    Barukčić, M.; Hederić, Ž.; Špoljarić, Ž.

    2014-01-01

    Highlights: • The approximation of an I–V curve by two linear functions and a sigmoid function is proposed. • The sigmoid function is used to estimate the knee of the I–V curve. • Dependence of the sigmoid function parameters on irradiance and temperature is proposed. • The sigmoid function is used to estimate the maximum power point (MPP). - Abstract: A method for estimating the I–V curves of a photovoltaic (PV) panel by analytic expression is presented in this paper. The problem is defined in the form of an optimization problem whose objective is based on data from I–V curves obtained from manufacturers or from measurements. In order to estimate the PV panel parameters, the optimization problem is solved using an evolutionary strategy. The proposed method is tested for different PV panel technologies using data sheets. The I–V curve is approximated with two linear functions and a sigmoid function, and a method for estimating the knee of the I–V curve and the maximum power point at any irradiance and temperature is proposed.
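
    The piecewise form described here, two linear branches bridged by a sigmoid around the knee, can be sketched as a single smooth blend. All parameter names and values below are illustrative assumptions, not the paper's fitted quantities.

      import numpy as np

      def iv_curve(V, Isc=8.5, Voc=37.0, m1=-0.01, m2=-1.2, Vk=30.0, w=1.5):
          """Approximate PV I-V curve: a flat linear branch near short circuit,
          a steep linear branch near open circuit, blended by a sigmoid weight
          centred at the knee voltage Vk with width w (all values assumed)."""
          s = 1.0 / (1.0 + np.exp((V - Vk) / w))   # sigmoid weight for the knee
          flat = Isc + m1 * V                      # current-source-like region
          steep = m2 * (V - Voc)                   # voltage-source-like region
          return s * flat + (1.0 - s) * steep

      V = np.linspace(0.0, 37.0, 200)
      I = iv_curve(V)
      P = V * I
      print("estimated MPP: V=%.1f V, P=%.0f W" % (V[P.argmax()], P.max()))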

  14. Assessing neural activity related to decision-making through flexible odds ratio curves and their derivatives.

    Science.gov (United States)

    Roca-Pardiñas, Javier; Cadarso-Suárez, Carmen; Pardo-Vazquez, Jose L; Leboran, Victor; Molenberghs, Geert; Faes, Christel; Acuña, Carlos

    2011-06-30

    It is well established that neural activity is stochastically modulated over time. Therefore, direct comparisons across experimental conditions and determination of change points or maximum firing rates are not straightforward. This study sought to compare temporal firing probability curves that may vary across groups defined by different experimental conditions. Odds-ratio (OR) curves were used as a measure of comparison, and the main goal was to provide a global test to detect significant differences of such curves through the study of their derivatives. An algorithm is proposed that enables ORs based on generalized additive models, including factor-by-curve-type interactions to be flexibly estimated. Bootstrap methods were used to draw inferences from the derivatives curves, and binning techniques were applied to speed up computation in the estimation and testing processes. A simulation study was conducted to assess the validity of these bootstrap-based tests. This methodology was applied to study premotor ventral cortex neural activity associated with decision-making. The proposed statistical procedures proved very useful in revealing the neural activity correlates of decision-making in a visual discrimination task. Copyright © 2011 John Wiley & Sons, Ltd.

  15. Different scale land subsidence and ground fissure monitoring with multiple InSAR techniques over Fenwei basin, China

    Directory of Open Access Journals (Sweden)

    C. Zhao

    2015-11-01

    Fenwei basin, China, composed of several sub-basins, has suffered severe geohazards over the last 60 years, including large-scale land subsidence and small-scale ground fissures, which have caused serious infrastructure damage and property losses. In this paper, we apply different InSAR techniques with different SAR data to monitor these hazards. Firstly, a combination of the small baseline subset (SBAS) InSAR method and the persistent scatterer (PS) InSAR method is applied to multi-track Envisat ASAR data to retrieve the large-scale land subsidence covering the entire Fenwei basin, from which the differing subsidence magnitudes of the sub-basins are analyzed. Secondly, the PS-InSAR method is used to monitor the small-scale ground fissure deformation in Yuncheng basin, where different spatial deformation gradients can be clearly discerned. Lastly, SAR data from different tracks are combined to retrieve the two-dimensional deformation in a region of both land subsidence and ground fissures, Xi'an, China, which is beneficial for explaining the occurrence of ground fissures and the correlation between land subsidence and ground fissures.

  16. A longitudinal study of interethnic contacts in Germany: estimates from a multilevel growth curve model

    NARCIS (Netherlands)

    Martinovic, B.; van Tubergen, F.; Maas, I.

    2015-01-01

    Interethnic ties are considered important for the cohesion in society. Previous research has studied the determinants of interethnic ties with cross-sectional data or lagged panel designs. This study improves on prior research by applying multilevel growth curve modelling techniques with lagged

  17. A longitudinal study of interethnic contacts in Germany : Estimates from a multilevel growth curve model

    NARCIS (Netherlands)

    Martinovic, Borja|info:eu-repo/dai/nl/304822752; van Tubergen, Frank|info:eu-repo/dai/nl/271429534; Maas, Ineke|info:eu-repo/dai/nl/075229390

    2015-01-01

    Interethnic ties are considered important for the cohesion in society. Previous research has studied the determinants of interethnic ties with cross-sectional data or lagged panel designs. This study improves on prior research by applying multilevel growth curve modelling techniques with lagged

  18. A random matrix model for elliptic curve L-functions of finite conductor

    International Nuclear Information System (INIS)

    Dueñez, E; Huynh, D K; Keating, J P; Snaith, N C; Miller, S J

    2012-01-01

    We propose a random-matrix model for families of elliptic curve L-functions of finite conductor. A repulsion of the critical zeros of these L-functions away from the centre of the critical strip was observed numerically by Miller (2006 Exp. Math. 15 257–79); such behaviour deviates qualitatively from the conjectural limiting distribution of the zeros (for large conductors this distribution is expected to approach the one-level density of eigenvalues of orthogonal matrices after appropriate rescaling). Our purpose here is to provide a random-matrix model for Miller’s surprising discovery. We consider the family of even quadratic twists of a given elliptic curve. The main ingredient in our model is a calculation of the eigenvalue distribution of random orthogonal matrices whose characteristic polynomials are larger than some given value at the symmetry point in the spectra. We call this sub-ensemble of SO(2N) the excised orthogonal ensemble. The sieving-off of matrices with small values of the characteristic polynomial is akin to the discretization of the central values of L-functions implied by the formulae of Waldspurger and Kohnen–Zagier. The cut-off scale appropriate to modelling elliptic curve L-functions is exponentially small relative to the matrix size N. The one-level density of the excised ensemble can be expressed in terms of that of the well-known Jacobi ensemble, enabling the former to be explicitly calculated. It exhibits an exponentially small (on the scale of the mean spacing) hard gap determined by the cut-off value, followed by soft repulsion on a much larger scale. Neither of these features is present in the one-level density of SO(2N). When N → ∞ we recover the limiting orthogonal behaviour. Our results agree qualitatively with Miller’s discrepancy. Choosing the cut-off appropriately gives a model in good quantitative agreement with the number-theoretical data. (paper)

  19. Global determination of rating curves in the Amazon basin from satellite altimetry

    Science.gov (United States)

    Paris, Adrien; Paiva, Rodrigo C. D.; Santos da Silva, Joecila; Medeiros Moreira, Daniel; Calmant, Stéphane; Collischonn, Walter; Bonnet, Marie-Paule; Seyler, Frédérique

    2014-05-01

    The Amazon basin is the largest hydrological basin in the world. Over the past few years, it has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. One of the major issues in understanding such events is obtaining discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation over the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gauge data, run from 1998 to 2009. The stage dataset is made of ~900 altimetry series at ENVISAT and Jason-2 virtual stations, sampling the stages over more than a hundred rivers in the basin. The altimetry series span 2002 to 2011. In the present work we present the benefits of using stochastic methods instead of deterministic ones to determine a dataset of rating curve parameters which are hydrologically meaningful throughout the entire Amazon basin. The rating curve parameters were computed using an optimization technique based on a Markov chain Monte Carlo sampler and a Bayesian inference scheme. This technique provides an estimate of the best value for the parameters together with their posterior probability distribution, allowing the determination of a credibility interval for the calculated discharge. The error over discharge estimates from the MGB-IPH model is also included in the rating curve determination; these MGB-IPH errors come from either errors in the discharge derived from the gauge readings or errors in the satellite rainfall estimates. The present experiment shows that the stochastic approach
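
    A minimal Metropolis sampler for a standard power-law rating curve Q = a(h - h0)^b (a common form assumed here; this is not the authors' full MGB-IPH error model, and the priors and data are synthetic) illustrates the Bayesian approach:

      import numpy as np

      rng = np.random.default_rng(3)
      h = np.linspace(1.0, 8.0, 40)                                     # stage, m (synthetic)
      Q_obs = 20.0 * (h - 0.5) ** 1.6 * rng.lognormal(0, 0.1, h.size)   # discharge, m3/s

      def log_post(theta):
          a, h0, b = theta
          if a <= 0 or b <= 0 or h0 >= h.min():
              return -np.inf                                  # flat priors with support constraints
          resid = np.log(Q_obs) - np.log(a * (h - h0) ** b)   # lognormal errors, sigma=0.1 assumed
          return -0.5 * np.sum((resid / 0.1) ** 2)

      theta = np.array([10.0, 0.0, 1.5])
      lp = log_post(theta)
      chain = []
      for _ in range(20000):
          prop = theta + rng.normal(0, [0.5, 0.05, 0.02])     # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.random()) < lp_prop - lp:             # Metropolis accept/reject
              theta, lp = prop, lp_prop
          chain.append(theta)
      chain = np.array(chain[5000:])                          # discard burn-in
      print("posterior means a, h0, b:", chain.mean(axis=0).round(2))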

  20. MR Imaging-derived Oxygen-Hemoglobin Dissociation Curves and Fetal-Placental Oxygen-Hemoglobin Affinities.

    Science.gov (United States)

    Avni, Reut; Golani, Ofra; Akselrod-Ballin, Ayelet; Cohen, Yonni; Biton, Inbal; Garbow, Joel R; Neeman, Michal

    2016-07-01

    Purpose To generate magnetic resonance (MR) imaging-derived oxygen-hemoglobin dissociation curves and to map fetal-placental oxygen-hemoglobin affinity in pregnant mice noninvasively by combining blood oxygen level-dependent (BOLD) T2* and oxygen-weighted T1 contrast mechanisms under different respiration challenges. Materials and Methods All procedures were approved by the Weizmann Institutional Animal Care and Use Committee. Pregnant mice were analyzed with MR imaging at 9.4 T on embryonic days 14.5 (eight dams and 58 fetuses; ICR strain) and 17.5 (21 dams and 158 fetuses) under respiration challenges ranging from hyperoxia to hypoxia (10 levels of oxygenation, 100%-10%; total imaging time, 100 minutes). A shorter protocol from normoxia to hyperoxia was also performed (five levels of oxygenation, 20%-100%; total imaging time, 60 minutes). Fast spin-echo anatomic images were obtained, followed by sequential acquisition of three-dimensional gradient-echo T2*- and T1-weighted images. Automated registration was applied to align regions of interest of the entire placenta, fetal liver, and maternal liver. Results were compared by using a two-tailed unpaired Student t test. R1 and R2* values were derived for each tissue. MR imaging-based oxygen-hemoglobin dissociation curves were constructed by nonlinear least-squares fitting of 1 minus the change in R2* divided by the baseline R2* as a function of R1 to a sigmoid-shaped curve. The apparent P50 (oxygen tension at which hemoglobin is 50% saturated) was derived from the curves, calculated as the scaled R1 value (x) at which the scaled change in R2* divided by the baseline R2* (y) equals 0.5. Results The apparent P50 values were significantly lower in fetal liver than in maternal liver at both gestation stages (day 14.5: 21% ± 5 [P = .04]; day 17.5: 41% ± 7 [P ...]). The feasibility of deriving oxygen-hemoglobin dissociation curves with a shorter protocol that excluded the hypoxic periods was demonstrated. Conclusion MR imaging
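
    The curve construction amounts to fitting y = 1 - ΔR2*/R2*(baseline) against x = R1 with a sigmoid and reading off the x at which y = 0.5. A hedged sketch with a Hill-type form and synthetic data (the functional form and values are assumptions, not the paper's fit):

      import numpy as np
      from scipy.optimize import curve_fit

      def hill(x, x50, n):
          # Sigmoid (Hill-type) saturation curve; x plays the role of scaled R1.
          return x ** n / (x50 ** n + x ** n)

      x = np.linspace(0.1, 1.0, 10)                   # scaled R1 (synthetic)
      y = hill(x, 0.45, 3.0) + np.random.default_rng(4).normal(0, 0.02, x.size)

      (p50, n), _ = curve_fit(hill, x, y, p0=(0.5, 2.0))
      print(f"apparent P50 (scaled R1 at half saturation): {p50:.2f}")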

  1. Experimental and statistical requirements for developing a well-defined K_IR curve. Final report

    International Nuclear Information System (INIS)

    Server, W.L.; Oldfield, W.; Wullaert, R.A.

    1977-05-01

    Further development of a statistically well-defined reference fracture toughness curve, to verify and complement the K_IR curve presently specified in Appendix G, Section III of the ASME Code, was accomplished by performing critical experiments in small-specimen fracture mechanics and improving techniques for statistical analysis of the data. Except for cleavage-initiated fracture, crack initiation was observed to occur prior to maximum load for all of the materials investigated. Initiation fracture toughness values (K_Jc) based on R-curve heat-tinting studies were up to 50 percent less than the previously reported equivalent-energy values (K*_d). At upper-shelf temperatures, the initiation fracture toughness (K_Jc) generally increased with stress intensification rate. Both the K_Jc-Charpy V-notch and K_Ic-specimen strength ratio correlations are promising methods for predicting thick-section behavior from small specimens. The previously developed tanh curve-fitting procedure was improved to permit estimates of the variances and covariances of the regression coefficients to be computed. The distribution of the fracture toughness data was determined as a function of temperature. Instrumented precracked Charpy results were used to normalize the larger-specimen fracture toughness data. The transformed large-specimen fracture toughness data are used to generate statistically based lower-bound fracture toughness curves for either static or dynamic test results. A comparison of these lower-bound curves with the K_IR curve shows that the K_IR curve is more conservative over most of its range. 143 figures, 26 tables
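
    The tanh fit referred to here has the standard form K(T) = A + B·tanh((T - T0)/C), with lower shelf A - B, upper shelf A + B, and a transition centred at T0 of width C. A sketch of fitting it to toughness-temperature data (synthetic values, not the report's heats):

      import numpy as np
      from scipy.optimize import curve_fit

      def tanh_curve(T, A, B, T0, C):
          # Lower shelf A-B, upper shelf A+B, transition centred at T0, width C.
          return A + B * np.tanh((T - T0) / C)

      T = np.array([-150, -100, -75, -50, -25, 0, 25, 75, 150.0])   # deg C (synthetic)
      K = np.array([40, 45, 60, 90, 130, 160, 180, 195, 200.0])     # MPa*sqrt(m) (synthetic)

      p, _ = curve_fit(tanh_curve, T, K, p0=(120, 80, -40, 50))
      print("A=%.0f  B=%.0f  T0=%.0f  C=%.0f" % tuple(p))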

  2. Comparison of embrittlement trend curves to high fluence surveillance results

    International Nuclear Information System (INIS)

    Bogaert, A.S.; Gerard, R.; Chaouadi, R.

    2011-01-01

    In the regulatory justification of the integrity of reactor pressure vessels (RPV) for long-term operation, use is made of predictive formulas (also called trend curves) to evaluate RPV embrittlement (expressed in terms of RT_NDT shifts) as a function of fluence, chemical composition and, in some cases, temperature, neutron flux or product form. It has been shown recently that some of the existing or proposed trend curves tend to underpredict high-dose embrittlement. Due to the scarcity of representative surveillance data at high dose, some test reactor results were used in these evaluations, raising the issue of the representativeness of accelerated test reactor irradiations (dose-rate effects). In Belgium the surveillance capsule withdrawal schedule was modified in the nineties in order to obtain results corresponding to 60 years of operation or more with the initial surveillance program. Some of these results are already available and offer a good opportunity to test the validity of the predictive formulas at high dose. In addition, advanced surveillance methods are used in Belgium, such as the Master Curve, additional tensile tests, and microstructural investigations. These techniques made it possible to show the conservatism of the regulatory approach and to demonstrate increased margins, especially for the first-generation units. In this paper the surveillance results are compared to different predictive formulas, as well as to an engineering hardening model developed at SCK.CEN. Generally accepted property-to-property correlations are critically revisited. Conclusions are drawn on the reliability and applicability of the embrittlement trend curves. (authors)

  3. Design of an Elliptic Curve Cryptography processor for RFID tag chips.

    Science.gov (United States)

    Liu, Zilong; Liu, Dongsheng; Zou, Xuecheng; Lin, Hui; Cheng, Jian

    2014-09-26

    Radio Frequency Identification (RFID) is an important technique for wireless sensor networks and the Internet of Things. Recently, considerable research has been performed on the combination of public key cryptography and RFID. In this paper, an efficient Elliptic Curve Cryptography (ECC) processor architecture for RFID tag chips is presented. We adopt a new inversion algorithm which requires fewer registers to store variables than the traditional schemes. A new method for coordinate swapping is proposed, which can reduce the complexity of the controller and effectively shorten the time of the iterative calculation. A modified circular shift register architecture is presented, which is an effective way to reduce the area of the register files. Clock gating and an asynchronous counter are exploited to reduce the power consumption. The simulation and synthesis results show that the time needed for one elliptic curve scalar point multiplication over GF(2^163) is 176.7 K clock cycles and the gate area is 13.8 K with UMC 0.13 μm Complementary Metal Oxide Semiconductor (CMOS) technology. Moreover, the low power and low cost make the Elliptic Curve Cryptography Processor (ECP) a prospective candidate for application in RFID tag chips.
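
    The core operation such a processor accelerates is elliptic curve scalar point multiplication. As a hedged illustration of the classic double-and-add method (plain Python over a small prime field; the chip itself works over GF(2^163), and the toy curve and base point below are assumptions chosen for readability):

      # Toy short-Weierstrass curve y^2 = x^3 + 2x + 3 over GF(97), illustrative only.
      P_MOD, A = 97, 2

      def inv(x):
          return pow(x, P_MOD - 2, P_MOD)           # modular inverse via Fermat's little theorem

      def add(p, q):
          # Affine point addition; None represents the point at infinity.
          if p is None: return q
          if q is None: return p
          (x1, y1), (x2, y2) = p, q
          if x1 == x2 and (y1 + y2) % P_MOD == 0:
              return None
          if p == q:
              lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD   # tangent slope (doubling)
          else:
              lam = (y2 - y1) * inv(x2 - x1) % P_MOD          # chord slope (addition)
          x3 = (lam * lam - x1 - x2) % P_MOD
          return (x3, (lam * (x1 - x3) - y1) % P_MOD)

      def scalar_mult(k, p):
          # Double-and-add: one doubling per key bit, one addition per set bit.
          result = None
          while k:
              if k & 1:
                  result = add(result, p)
              p = add(p, p)
              k >>= 1
          return result

      base = (3, 6)                 # on the curve: 6^2 = 36 = 27 + 6 + 3 (mod 97)
      print(scalar_mult(20, base))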

  4. Dual Smarandache Curves and Smarandache Ruled Surfaces

    OpenAIRE

    Tanju KAHRAMAN; Mehmet ÖNDER; H. Hüseyin UGURLU

    2013-01-01

    In this paper, by considering dual geodesic trihedron (dual Darboux frame) we define dual Smarandache curves lying fully on dual unit sphere S^2 and corresponding to ruled surfaces. We obtain the relationships between the elements of curvature of dual spherical curve (ruled surface) x(s) and its dual Smarandache curve (Smarandache ruled surface) x1(s) and we give an example for dual Smarandache curves of a dual spherical curve.

  5. External validation and comparison of three pediatric clinical dehydration scales.

    Science.gov (United States)

    Jauregui, Joshua; Nelson, Daniel; Choo, Esther; Stearns, Branden; Levine, Adam C; Liebmann, Otto; Shah, Sachita P

    2014-01-01

    To prospectively validate three popular clinical dehydration scales and overall physician gestalt in children with vomiting or diarrhea relative to the criterion standard of percent weight change with rehydration. We prospectively enrolled a non-consecutive cohort of children ≤ 18 years of age with an acute episode of diarrhea or vomiting. Patient weight, clinical scale variables and physician clinical impression, or gestalt, were recorded before and after fluid resuscitation in the emergency department and upon hospital discharge. The percent weight change from presentation to discharge was used to calculate the degree of dehydration, with a weight change of ≥ 5% considered significant dehydration. Receiver operating characteristic (ROC) curves were constructed for each of the three clinical scales and for physician gestalt. Sensitivity and specificity were calculated based on the best cut-points of the ROC curve. We approached 209 patients; of those, 148 were enrolled and 113 had complete data for analysis. Of these, 10.6% had significant dehydration based on our criterion standard. The Clinical Dehydration Scale (CDS) and Gorelick scales both had an area under the ROC curve (AUC) statistically different from the reference line, with AUCs of 0.72 (95% CI 0.60, 0.84) and 0.71 (95% CI 0.57, 0.85), respectively. The World Health Organization (WHO) scale and physician gestalt had AUCs of 0.61 (95% CI 0.45, 0.77) and 0.61 (0.44, 0.78), respectively, which were not statistically significant. The Gorelick scale and Clinical Dehydration Scale were fair predictors of dehydration in children with diarrhea or vomiting. The World Health Organization scale and physician gestalt were not helpful predictors of dehydration in our cohort.

  6. Explaining the experience curve: Cost reductions of Brazilian ethanol from sugarcane

    International Nuclear Information System (INIS)

    van den Wall Bake, J.D.; Junginger, M.; Faaij, A.; Poot, T.; Walter, A.

    2009-01-01

    Production costs of bio-ethanol from sugarcane in Brazil have declined continuously over the last three decades. The aims of this study are to determine the underlying reasons behind these cost reductions, and to assess whether the experience curve concept can be used to describe the development of feedstock costs and industrial production costs. The analysis was performed using average national cost data, a number of prices (as a proxy for production costs) and data on annual Brazilian production volumes. Results show that the progress ratio (PR) is 0.68 for feedstock costs and 0.81 for industrial costs (excluding feedstock costs). The experience curve of total production costs results in a PR of 0.80. Cost breakdowns of sugarcane production show that all sub-processes contributed to the total, but that increasing yields have been the main driving force. Industrial costs mainly decreased because of increasing scales of the ethanol plants. Total production costs at present are approximately 340 US$/m3 ethanol (16 US$/GJ). Based on the experience curves for feedstock and industrial costs, total ethanol production costs in 2020 are estimated at between US$ 200 and 260/m3 (9.4-12.2 US$/GJ). We conclude that using disaggregated experience curves for feedstock and industrial processing costs provides more insight into the factors that lowered costs in the past, and allows more accurate estimates of future cost developments. (author)
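
    An experience curve has the form C = C0·(V/V0)^(-b), where V is cumulative production, with progress ratio PR = 2^(-b) (the cost multiplier per doubling of output). A short sketch estimating b from cost/volume series and projecting a future cost; the numbers are synthetic, not the study's data:

      import numpy as np

      # Synthetic cumulative production and unit cost series (illustrative units).
      V = np.array([50, 100, 200, 400, 800.0])
      C = np.array([800, 640, 512, 410, 328.0])

      b = -np.polyfit(np.log2(V), np.log2(C), 1)[0]    # slope of the log-log line
      PR = 2.0 ** (-b)
      print(f"progress ratio = {PR:.2f}")              # cost ratio per doubling of output

      V_future = 1600.0                                # assumed future cumulative output
      print("projected cost:", round(C[0] * (V_future / V[0]) ** (-b)))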

  7. Kummel Disease Treatment by Unipedicular Vertebral Augmentation Using Curved Injection Cannula

    International Nuclear Information System (INIS)

    Masala, Salvatore; Nano, Giovanni; Mammucari, Matteo; Simonetti, Giovanni

    2011-01-01

    Purpose: This study was designed to evaluate the efficacy of the blunt-tipped curved injection needle (BCN) AVAflex (Care Fusion) for vertebral augmentation in cases of Kummel’s disease. Methods: We performed 25 vertebral augmentation procedures on 25 consecutive patients (11 men/14 women; mean age, 67 years) with Kummel’s disease using the blunt-tipped curved injection needle with PMMA cement. We performed all 25 procedures via a unipedicular left approach with patients in the prone position under local anesthesia and mild sedation. In all cases, an intravertebral cleft was evident on preprocedural imaging. We evaluated pain intensity by Visual Analogue Scale (VAS) before the procedure and at the first day, 6 months, and 1 year after it. Results: In all cases the curved injection cannula permitted the filling of the clefts and surrounding cancellous bone without any complication. A significant reduction of the kyphotic deformities of the treated vertebral bodies was evident. A significant decrease in VAS values at 1 year was also evident (mean decrease 7.2). On plain dynamic postprocedural X-ray checks, there was no sign of pathologic intravertebral motion, as evidence of optimal stabilization. Conclusions: BCN AVAflex is a safe and effective device for targeted vertebral augmentation in cases of Kummel’s disease. Its distinctive characteristic is the curved injection cannula, which enables targeting the cement injection to areas far off the trajectory of the straight access cannula, thus providing excellent cement spread throughout the entire volume of the vertebral body.

  8. Enhancement of global flood damage assessments using building material based vulnerability curves

    Science.gov (United States)

    Englhardt, Johanna; de Ruiter, Marleen; de Moel, Hans; Aerts, Jeroen

    2017-04-01

    This study discusses the development of an enhanced approach to flood damage and risk assessment using vulnerability curves that are based on building material information. The approach draws upon common practice in earthquake vulnerability assessment and is an alternative to the land-use or building-occupancy approaches of flood risk assessment models. The approach is of particular importance for studies where there is large variation in building materials, such as large-scale studies or studies in developing countries. A case study of Ethiopia is used to demonstrate the impact of the different methodological approaches on direct damage assessments due to flooding. Generally, flood damage assessments use damage curves for different land-use or occupancy types (i.e. urban or residential and commercial classes). However, these categories do not necessarily relate directly to vulnerability to damage by flood waters; for this, the construction type and building material may be more important, as is recognized in earthquake risk assessments. For this study, we use the building material classification data of the PAGER project to define new building-material-based vulnerability classes for flood damage. This approach is compared to the widely applied land-use-based vulnerability curves such as those used by De Moel et al. (2011). The case of Ethiopia demonstrates and compares the feasibility of this novel flood vulnerability method at the country level, which holds the potential to be scaled up to the global level. The study shows that flood vulnerability based on building material also allows for better differentiation between flood damage in urban and rural settings, opening doors to better links to poverty studies when such exposure data are available. Furthermore, this new approach paves the road to enhanced multi-risk assessments, as the method enables the comparison of vulnerability across different natural hazard types that also use material-based vulnerability curves.

  9. The writhe of open and closed curves

    International Nuclear Information System (INIS)

    Berger, Mitchell A; Prior, Chris

    2006-01-01

    Twist and writhe measure basic geometric properties of a ribbon or tube. While these measures have applications in molecular biology, materials science, fluid mechanics and astrophysics, they are under-utilized because they are often considered difficult to compute. In addition, many applications involve curves with endpoints (open curves); but for these curves the definition of writhe can be ambiguous. This paper provides simple expressions for the writhe of closed curves, and provides a new definition of writhe for open curves. The open curve definition is especially appropriate when the curve is anchored at endpoints on a plane or stretches between two parallel planes. This definition can be especially useful for magnetic flux tubes in the solar atmosphere, and for isotropic rods with ends fixed to a plane
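
    For a closed polygonal curve the writhe is commonly evaluated by discretizing the Gauss double integral Wr = (1/4π) ∮∮ (t1 × t2)·(r1 - r2)/|r1 - r2|^3 ds1 ds2. A minimal sketch of that generic midpoint discretization (not the paper's closed-form expressions or its open-curve definition):

      import numpy as np

      def writhe(points):
          """Crude midpoint-rule evaluation of the Gauss writhe integral
          for a closed polygon given as an (n, 3) array of vertices."""
          n = len(points)
          seg = np.roll(points, -1, axis=0) - points   # edge vectors (include ds)
          mid = points + 0.5 * seg                     # edge midpoints
          total = 0.0
          for i in range(n):
              for j in range(n):
                  if abs(i - j) in (0, 1, n - 1):
                      continue                         # skip self and adjacent edges
                  r = mid[i] - mid[j]
                  d = np.linalg.norm(r)
                  total += np.dot(np.cross(seg[i], seg[j]), r) / d ** 3
          return total / (4.0 * np.pi)                 # ordered pairs match the double loop

      # Test: a slightly perturbed circle is nearly planar, so its writhe is near 0.
      t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
      curve = np.c_[np.cos(t), np.sin(t), 0.1 * np.sin(2 * t)]
      print(round(writhe(curve), 3))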

  10. F(α) curves: Experimental results

    International Nuclear Information System (INIS)

    Glazier, J.A.; Gunaratne, G.; Libchaber, A.

    1988-01-01

    We study the transition to chaos at the golden and silver means for forced Rayleigh-Benard (RB) convection in mercury. We present f(α) curves below, at, and above the transition, and provide comparisons to the curves calculated for the one-dimensional circle map. We find good agreement at both the golden and silver means. This confirms our earlier observation that, for low-amplitude forcing, forced RB convection is well described by the one-dimensional circle map, and indicates that the f(α) curve is a good measure of the approach to criticality. For selected subcritical experimental data sets we calculate the degree of subcriticality. We also present both experimental and calculated results for f(α) in the presence of a third frequency. Again we obtain agreement: the presence of random noise or a third frequency narrows the right-hand (negative q) side of the f(α) curve, while subcriticality results in symmetrically narrowed curves. We can also distinguish these cases by examining the power spectra and Poincare sections of the time series.
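
    For readers unfamiliar with f(α), the sketch below computes the spectrum for the binomial multiplicative measure, whose τ(q) is known in closed form, via the Legendre transform. It is a stand-in for the circle-map analysis above, which instead requires partition sums over an experimental attractor.

    ```python
    import numpy as np

    # f(alpha) via the Legendre transform of tau(q) for the binomial
    # multiplicative measure with weights p and 1-p; for this cascade the
    # partition sum obeys Z(q) ~ l**tau(q) with tau in closed form.
    p = 0.3
    q = np.linspace(-10, 10, 401)
    tau = -np.log2(p**q + (1 - p)**q)

    alpha = np.gradient(tau, q)        # alpha(q) = d tau / d q
    f = q * alpha - tau                # Legendre transform

    # alpha spans [-log2(0.7), -log2(0.3)]; the peak of f equals D0 = 1
    print("alpha range:", alpha.min(), alpha.max())
    print("max f(alpha):", f.max())
    ```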

  11. Investigation of learning and experience curves

    Energy Technology Data Exchange (ETDEWEB)

    Krawiec, F.; Thornton, J.; Edesess, M.

    1980-04-01

    The applicability of learning and experience curves for predicting future costs of solar technologies is assessed, with the production economics of heliostats as the major test case. Alternative methods for estimating cost reductions in systems manufacture are discussed, and procedures for using learning and experience curves to predict costs are outlined. Because adequate production data often do not exist, production histories of analogous products/processes are analyzed, and learning and aggregate cost curves for these surrogates are estimated. If the surrogate learning curves apply, they can be used to estimate solar technology costs. The steps involved in generating these cost estimates are given. Second-generation glass-steel and inflated-bubble heliostat design concepts, developed by MDAC and GE, respectively, are described; a costing scenario for 25,000 units/yr is detailed; surrogates for cost analysis are chosen; and learning and aggregate cost curves for the GE and MDAC designs are estimated. However, an approach that combines a neoclassical production function with a learning-by-doing hypothesis is needed to yield a cost relation compatible with the historical learning curve and the traditional cost function of economic theory.
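
    The log-linear learning-curve model underlying such analyses is compact enough to state directly: each doubling of cumulative output multiplies unit cost by a constant progress ratio. The first-unit cost and progress ratio below are hypothetical, not the MDAC or GE estimates.

    ```python
    import numpy as np

    def unit_cost(n, first_unit_cost, progress_ratio):
        """Classic log-linear learning curve: C_n = C_1 * n**b with
        b = log2(progress_ratio), so cost falls by the progress ratio
        at every doubling of cumulative output n."""
        b = np.log2(progress_ratio)
        return first_unit_cost * np.asarray(n, dtype=float) ** b

    # Hypothetical heliostat example: $1,000 at unit 1 on an 85% curve,
    # evaluated out to the 25,000 units/yr scenario mentioned above.
    units = np.array([1, 100, 1_000, 25_000])
    print(unit_cost(units, 1_000.0, 0.85))
    ```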

  12. Dissolution glow curve in LLD

    International Nuclear Information System (INIS)

    Haverkamp, U.; Wiezorek, C.; Poetter, R.

    1990-01-01

    Lyoluminescence dosimetry is based upon light emission during dissolution of previously irradiated dosimetric materials. The lyoluminescence signal is expressed in the dissolution glow curve. These curves begin, depending on the dissolution system, with a high peak followed by an exponentially decreasing intensity. System parameters that influence the shape of the dissolution glow curve are, for example, the injection speed, the temperature and pH value of the solution, and the design of the dissolution cell. The initial peak does not significantly correlate with the absorbed dose; it is mainly an effect of the injection. The decay of the curve consists of two exponential components: one fast and one slow. The components depend on the absorbed dose and the dosimetric materials used. In particular, the slow component correlates with the absorbed dose. In contrast to the fast component, the argument of the exponential function of the slow component is independent of the dosimetric materials investigated: trehalose, glucose and mannitol. The maximum value following the peak of the curve and the integral light output are measures of the absorbed dose. The reason for the different light outputs of various dosimetric materials after irradiation with the same dose is their differing solubility. The character of the dissolution glow curves is the same following irradiation with photons, electrons or neutrons. (author)
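
    The two-component decay described above can be recovered from a measured curve by nonlinear least squares. The sketch below fits a double exponential to synthetic data standing in for a dissolution glow curve; the amplitudes, rate constants, and noise level are invented.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def glow_decay(t, a_fast, k_fast, a_slow, k_slow):
        """Two-component exponential decay following the injection peak."""
        return a_fast * np.exp(-k_fast * t) + a_slow * np.exp(-k_slow * t)

    # Synthetic stand-in for the post-peak part of a dissolution glow curve
    t = np.linspace(0, 60, 300)
    rng = np.random.default_rng(0)
    y = glow_decay(t, 5.0, 0.5, 1.0, 0.05) + 0.02 * rng.normal(size=t.size)

    p0 = (4.0, 0.3, 0.8, 0.03)    # rough initial guesses keep the fit stable
    popt, _ = curve_fit(glow_decay, t, y, p0=p0)
    print("fitted (a_fast, k_fast, a_slow, k_slow):", popt)

    # Dose metrics named in the abstract: integral light output (trapezoids)
    integral = np.sum((y[1:] + y[:-1]) / 2 * np.diff(t))
    print("integral light output:", integral)
    ```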

  13. Reconstructing the Curve-Skeletons of 3D Shapes Using the Visual Hull.

    Science.gov (United States)

    Livesu, Marco; Guggeri, Fabio; Scateni, Riccardo

    2012-11-01

    Curve-skeletons are the most important descriptors for shapes, capable of capturing the most relevant features in a synthetic manner. They are useful for many different applications: from shape matching and retrieval, to medical imaging, to animation. This has led, over the years, to the development of several different extraction techniques, each trying to comply with specific goals. We propose a novel technique which stems from the intuition of reproducing what a human being does to deduce the shape of an object: holding it in his or her hand and rotating it. To accomplish this, we use the formal definitions of epipolar geometry and the visual hull. We show how it is possible to infer the curve-skeleton of a broad class of 3D shapes, along with an estimation of the radii of the maximal inscribed balls, by gathering information about the medial axes of their projections on the image planes of stereographic vision. It is worth pointing out that our method works equally well on (even unoriented) polygonal meshes, voxel models, and point clouds. Moreover, it is insensitive to noise, pose-invariant, resolution-invariant, and robust when applied to incomplete data sets.

  14. Stress analysis in curved composites due to thermal loading

    Science.gov (United States)

    Polk, Jared Cornelius

    of such a problem. It was shown that the general, non-modified (original) version of classical lamination theory cannot be used to obtain an analytical solution for a simply curved beam or any other structure that would require rotations of laminates out of their planes in space. Finite element analysis was used to ascertain stress variations in a simply curved beam. It was verified that these solutions reduce to the flat-beam solutions as the radius of curvature of the beam tends to infinity. MATLAB was used to conduct the classical lamination theory numerical analysis. A MATLAB program was written to conduct the finite element analysis for the flat and curved beams, both isotropic and composite. It does not require the incompatibility techniques used in the mechanics of isotropic materials for indeterminate structures that are equivalent to fixed-beam problems. Finally, it enables the user to define and create unique elements not accessible in commercial software, and to modify finite element procedures to take advantage of new paradigms.
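
    For orientation, the flat-laminate quantities that classical lamination theory does provide, the A, B, and D stiffness matrices, can be assembled as below; the curved-beam case discussed above is precisely where this flat formulation stops being sufficient. The material constants and layup are generic carbon/epoxy values, purely illustrative.

    ```python
    import numpy as np

    # Reduced stiffness Q for a single orthotropic ply (generic values, Pa)
    E1, E2, G12, nu12 = 150e9, 10e9, 5e9, 0.3
    nu21 = nu12 * E2 / E1
    denom = 1 - nu12 * nu21
    Q = np.array([[E1 / denom, nu12 * E2 / denom, 0],
                  [nu12 * E2 / denom, E2 / denom, 0],
                  [0, 0, G12]])

    def qbar(theta):
        """Transform Q into laminate axes: Qbar = T^-1 Q R T R^-1."""
        c, s = np.cos(theta), np.sin(theta)
        T = np.array([[c*c, s*s, 2*c*s],
                      [s*s, c*c, -2*c*s],
                      [-c*s, c*s, c*c - s*s]])
        R = np.diag([1.0, 1.0, 2.0])   # engineering-strain correction
        return np.linalg.inv(T) @ Q @ R @ T @ np.linalg.inv(R)

    plies = [0, 90, 90, 0]             # symmetric cross-ply layup, degrees
    t = 0.125e-3                       # ply thickness, m
    z = np.linspace(-len(plies) * t / 2, len(plies) * t / 2, len(plies) + 1)

    A = sum(qbar(np.radians(a)) * (z[k+1] - z[k]) for k, a in enumerate(plies))
    B = sum(qbar(np.radians(a)) * (z[k+1]**2 - z[k]**2) / 2 for k, a in enumerate(plies))
    D = sum(qbar(np.radians(a)) * (z[k+1]**3 - z[k]**3) / 3 for k, a in enumerate(plies))
    print("B should vanish for a symmetric layup:\n", np.round(B, 6))
    ```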

  15. Pusher curving technique for preventing tilt of femoral Geunther Tulip inferior vena cava filter: in vitro study

    International Nuclear Information System (INIS)

    Xiao Liang; Shen Jing; Huang Desheng; Xu Ke

    2011-01-01

    Objective: To determine whether adjustment of the pusher of the Geunther Tulip filter (GTF) is useful for decreasing the degree of tilting of the femoral GTF in an in vitro caval model. Methods: The caval model was constructed by placing one 25 mm × 100 mm and two 10 mm × 200 mm Dacron grafts inside a transparent bifurcate glass tube. The study consisted of two groups: a left straight group (GLS, n = 100) and a left curved group (GLC, n = 100). In GLC, a 10° to 20° curve was formed on the introducer. The distance (DCH) between the caval right wall and the hook was measured. The degree of tilting (DT) was classified into 5 grades and recorded. Before and after release of the GTF, the angle (ACM1,2) between the axis of the IVC and the metal mount, the distance (DCM1) between the caval right wall and the metal mount, the angle (ACF) between the axis of the IVC and the axis of the filter, and the diameter of the IVC (DIVC) were measured. The data were analyzed with the chi-square test, t test, rank-sum test, and Pearson correlation test. Results: The degree of GTF tilting in the two groups revealed divergent tendencies. In GLC the apex of the filter tended toward grade III compared with GLS (χ2 value 37.491, P < 0.01). The differences between GLS and GLC were statistically significant (16.60° vs. 3.05°, 20.60° vs. 3.50°, -3.90° vs. -0.40°, 2.98 mm vs. 10.40 mm, -10.95° vs. -0.485°, 13.17 mm vs. 10.06 mm, -1.70° vs. 0.70°; t or Z values -12.187, -12.188, -8.545, -51.834, -11.395, 9.562, -3.596; P < 0.01). Significant correlations were found between DCM1 and ACF and between ACM1 - ACM2 and DCH1 - DCH2 in each group (r values 0.978, 0.344, 0.879, 0.627), and between DCH1 and ACF in each group and ACP and ACF in GLC (r values -0.974, -0.322, -0.702, P < 0.01). Conclusion: The technique of adjusting the orientation of the filter pusher can decrease the degree of tilting of the femoral GTF.

  16. Development of the curve of Spee.

    Science.gov (United States)

    Marshall, Steven D; Caspersen, Matthew; Hardinger, Rachel R; Franciscus, Robert G; Aquilino, Steven A; Southard, Thomas E

    2008-09-01

    Ferdinand Graf von Spee is credited with characterizing human occlusal curvature viewed in the sagittal plane. This naturally occurring phenomenon has clinical importance in orthodontics and restorative dentistry, yet we have little understanding of when, how, or why it develops. The purpose of this study was to expand our understanding by examining the development of the curve of Spee longitudinally in a sample of untreated subjects with normal occlusion from the deciduous dentition to adulthood. Records of 16 male and 17 female subjects from the Iowa Facial Growth Study were selected and examined. The depth of the curve of Spee was measured on their study models at 7 time points from ages 4 (deciduous dentition) to 26 (adult dentition) years. The Wilcoxon signed rank test was used to compare changes in curve of Spee depth between time points. For each subject, the relative eruption of the mandibular teeth was measured from corresponding cephalometric radiographs, and its contribution to the developing curve of Spee was ascertained. In the deciduous dentition, the curve of Spee is minimal. At mean ages of 4.05 and 5.27 years, the average curve of Spee depths are 0.24 and 0.25 mm, respectively. With the change to the transitional dentition, corresponding to the eruption of the mandibular permanent first molars and central incisors (mean age, 6.91 years), the curve of Spee depth increases significantly (P < 0.0001) to a mean maximum depth of 1.32 mm. The curve of Spee then remains essentially unchanged until eruption of the second molars (mean age, 12.38 years), when the depth increases (P < 0.0001) to a mean maximum depth of 2.17 mm. In the adolescent dentition (mean age, 16.21 years), the depth decreases slightly (P = 0.0009) to a mean maximum depth of 1.98 mm, and, in the adult dentition (mean age, 26.98 years), the curve remains unchanged (P = 0.66), with a mean maximum depth of 2.02 mm. No significant differences in curve of Spee development were found between male and female subjects.

  17. GLOBAL AND STRICT CURVE FITTING METHOD

    NARCIS (Netherlands)

    Nakajima, Y.; Mori, S.

    2004-01-01

    To find a global and smooth curve fitting, the cubic B-spline method and gathering-line methods are investigated. When segmenting and recognizing a contour curve of a character shape, some global method is required. If we want to connect contour curves around a singular point like crossing points,
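
    A minimal sketch of global smooth contour fitting with a cubic B-spline, in the spirit of the first method named above, using SciPy's parametric spline routines on a synthetic closed contour (the gathering-line method is not shown):

    ```python
    import numpy as np
    from scipy.interpolate import splprep, splev

    # Synthetic noisy closed contour standing in for a character outline
    theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
    rng = np.random.default_rng(7)
    r = 1.0 + 0.3 * np.cos(3 * theta) + 0.02 * rng.normal(size=theta.size)
    x, y = r * np.cos(theta), r * np.sin(theta)

    # Global cubic B-spline fit: s controls smoothing, per=True closes the curve
    tck, u = splprep([x, y], s=0.05, per=True, k=3)
    xs, ys = splev(np.linspace(0, 1, 400), tck)
    print("fitted", len(xs), "points along the smoothed contour")
    ```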

  18. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  19. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  20. The Use of System Codes in Scaling Studies: Relevant Techniques for Qualifying NPP Nodalizations for Particular Scenarios

    Directory of Open Access Journals (Sweden)

    V. Martinez-Quiroga

    2014-01-01

    System codes along with necessary nodalizations are valuable tools for thermal-hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed at small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of Nuclear Power Plant nodalizations by means of scale disquisitions. The techniques presented include the so-called Kv-scaled calculation approach as well as the use of “hybrid nodalizations” and “scaled-up nodalizations.” These methods have proven very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology and show the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed in the PKL-III and ROSA-2 OECD/NEA Projects. Together, the two articles provide the complete description of the methodology, which has been developed in the framework of the use of NPP nodalizations in support of plant operation and control.

  1. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    Science.gov (United States)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity: to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely-sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
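
    A minimal sketch of the just-in-time, server-side computation model described above, using the Earth Engine Python client. It assumes the earthengine-api package is installed and the account is authenticated, and that the public-catalog collection and band IDs below are current; nothing is computed until a small result is requested.

    ```python
    import ee

    # Newer accounts may need ee.Initialize(project="your-cloud-project")
    ee.Initialize()

    # Cloud-light Landsat 8 surface-reflectance median composite over one year
    l8 = (ee.ImageCollection("LANDSAT/LC08/C02/T1_L2")
            .filterDate("2013-01-01", "2014-01-01")
            .filter(ee.Filter.lt("CLOUD_COVER", 20)))
    composite = l8.median()

    # NDVI from the SR bands; this only builds a computation graph
    ndvi = composite.normalizedDifference(["SR_B5", "SR_B4"]).rename("NDVI")

    # Pull back a single statistic as a small client-side result
    region = ee.Geometry.Point([-122.08, 37.42]).buffer(10_000)
    mean = ndvi.reduceRegion(ee.Reducer.mean(), region, scale=30)
    print(mean.getInfo())
    ```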

  2. Test-Anchored Vibration Response Predictions for an Acoustically Energized Curved Orthogrid Panel with Mounted Components

    Science.gov (United States)

    Frady, Gregory P.; Duvall, Lowery D.; Fulcher, Clay W. G.; Laverde, Bruce T.; Hunt, Ronald A.

    2011-01-01

    using a TBL model were demonstrated, and wind tunnel tests have been proposed to anchor the predictions and provide new insight into modeling approaches for this environment. Finally, design load factors were developed from the measured and predicted responses and compared with those derived from traditional techniques such as historical Mass Acceleration Curves and Barrett scaling methods for acreage and component-loaded panels.

  3. Genome scale engineering techniques for metabolic engineering.

    Science.gov (United States)

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  4. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

    Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. This paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a great amount of data that can be applied to drug repositioning.
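
    A volcano plot of this kind reduces to one point per drug-event pair: x is the log reporting odds ratio from the pair's 2x2 contingency table and y is the -log10 p-value. The sketch below computes those coordinates for a few invented report counts; it illustrates the idea, not the paper's actual procedure.

    ```python
    import numpy as np
    from scipy.stats import fisher_exact

    # Hypothetical 2x2 contingency counts from a spontaneous-report database:
    # a = reports with drug and event, b = drug without event,
    # c = other drugs with event, d = other drugs without event.
    pairs = {
        "drugX/rash":   (40, 960, 200, 48_800),
        "drugX/nausea": (12, 988, 600, 48_400),
        "drugY/rash":   (5, 1_995, 235, 47_765),
    }

    for name, (a, b, c, d) in pairs.items():
        ror = (a / b) / (c / d)                  # reporting odds ratio
        _, p = fisher_exact([[a, b], [c, d]])    # two-sided Fisher exact test
        print(f"{name}: x = {np.log2(ror):+.2f}, y = {-np.log10(p):.1f}")
    ```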

  5. ARPEFS as an analytic technique

    International Nuclear Information System (INIS)

    Schach von Wittenau, A.E.

    1991-04-01

    Two modifications to the ARPEFS technique are introduced. These are studied using p(2 x 2)S/Cu(001) as a model system. The first modification is the obtaining of ARPEFS χ(k) curves at temperatures as low as our equipment will permit. While adding to the difficulty of the experiment, this modification is shown to almost double the signal-to-noise ratio of normal emission p(2 x 2)S/Cu(001) χ(k) curves. This is shown by visual comparison of the raw data and by the improved precision of the extracted structural parameters. The second change is the replacement of manual fitting of the Fourier-filtered χ(k) curves by the use of the simplex algorithm for parameter determination. Again using p(2 x 2)S/Cu(001) data, this is shown to result in better agreement between experimental χ(k) curves and curves calculated based on model structures. The improved ARPEFS is then applied to p(2 x 2)S/Ni(111) and (√3 x √3)R30 degree S/Ni(111). For p(2 x 2)S/Cu(001) we find a S-Cu bond length of 2.26 Angstrom, with the S adatom 1.31 Angstrom above the fourfold hollow site. The second Cu layer appears to be corrugated. Analysis of the p(2 x 2)S/Ni(111) data indicates that the S adatom adsorbs onto the FCC threefold hollow site 1.53 Angstrom above the Ni surface. The S-Ni bond length is determined to be 2.13 Angstrom, indicating an outwards shift of the first layer Ni atoms. We are unable to assign a unique structure to (√3 x √3)R30 degree S/Ni(111). An analysis of the strengths and weaknesses of ARPEFS as an experimental and analytic technique is presented, along with a summary of problems still to be addressed.
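
    The simplex (Nelder-Mead) replacement for manual fitting can be sketched as follows, with a toy single-path model χ(k) = A sin(2kd + φ) standing in for the real ARPEFS calculation; the model form, starting values, and noise are all assumptions made for illustration.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def model_chi(k, amplitude, path_length, phase):
        """Toy single scattering-path chi(k) curve (hypothetical form)."""
        return amplitude * np.sin(2.0 * k * path_length + phase)

    def misfit(params, k, chi_exp):
        """Sum-of-squares disagreement between model and experiment."""
        return np.sum((model_chi(k, *params) - chi_exp) ** 2)

    # Synthetic "experimental" curve with noise (k in 1/Angstrom)
    k = np.linspace(4.0, 12.0, 200)
    rng = np.random.default_rng(1)
    chi_exp = model_chi(k, 0.8, 2.26, 0.3) + 0.05 * rng.normal(size=k.size)

    # Nelder-Mead is the simplex algorithm named in the abstract
    result = minimize(misfit, x0=[1.0, 2.0, 0.0], args=(k, chi_exp),
                      method="Nelder-Mead")
    print("fitted amplitude, path length (Angstrom), phase:", result.x)
    ```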

  6. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    International Nuclear Information System (INIS)

    Hola, Marketa; Kalvoda, Jiri; Novakova, Hana; Skoda, Radek; Kanicky, Viktor

    2011-01-01

    LA-ICP-MS and solution-based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis, which was tested on fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all the mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with the EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  7. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    Energy Technology Data Exchange (ETDEWEB)

    Hola, Marketa [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Kalvoda, Jiri, E-mail: jkalvoda@centrum.cz [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Novakova, Hana [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Skoda, Radek [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Kanicky, Viktor [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic)

    2011-01-01

    LA-ICP-MS and solution-based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales, which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis, which was tested on fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all the mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution-based ICP-MS and EMP analyses. The fact that the results of depth profiling are in good agreement both with the EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  8. Domain walls of gauged supergravity, M-branes and algebraic curves

    CERN Document Server

    Bakas, I.; Sfetsos, K.

    1999-01-01

    We provide an algebraic classification of all supersymmetric domain wall solutions of maximal gauged supergravity in four and seven dimensions, in the presence of non-trivial scalar fields in the coset SL(8,R)/SO(8) and SL(5,R)/SO(5) respectively. These solutions satisfy first-order equations, which can be obtained using the method of Bogomol'nyi. From an eleven-dimensional point of view they correspond to various continuous distributions of M2- and M5-branes. The Schwarz-Christoffel transformation and the uniformization of the associated algebraic curves are used in order to determine the Schrodinger potential for the scalar and graviton fluctuations on the corresponding backgrounds. In many cases we explicitly solve the Schrodinger problem by employing techniques of supersymmetric quantum mechanics. The analysis is parallel to the construction of domain walls of five-dimensional gauged supergravity, with scalar fields in the coset SL(6,R)/SO(6), using algebraic curves or continuous distributions of D3-brane...

  9. Mode Identification of Guided Waves in a Curved Pipe

    International Nuclear Information System (INIS)

    Eom, Heung-Seop; Lim, Sa-Hoe; Kim, Jae-Hee

    2006-01-01

    Ultrasonic guided wave techniques have been widely employed for the long-range inspection of structures such as plates and pipes because guided waves can propagate over long distances. In the nuclear power field, a need has recently emerged for on-line nondestructive monitoring that can be employed during the operation stage of power plants. As ultrasonic guided waves have shown promise for on-line monitoring of power plants, much work has been done at institutes and universities on this matter. In the case of detecting defects in simple straight pipes, the dispersion curves obtained from the modeling processes are closely akin to the experimental results. But the modeling of wave propagation in some structures, such as the elbow region of a pipe, is not practical due to elbow echoes and unpredictable interface conditions. This paper presents an experimental approach to identify the most dominant modes of guided waves in the curved region of a pipe, which is a key factor in detecting flaws in a pipe.

  10. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

    pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, the analytical calculations and the Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with one another. Attempts to validate the codes against experimental data for thick absorbers face some difficulties: only a few data sets are available, and they have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of the reduced calibration curve, i.e. the range-energy dependence normalized on the range scale by the full projected CSDA range for a given initial proton energy in a given material, taken from the NIST PSTAR database, and on the final proton energy scale by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
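
    The reduced-curve idea can be sketched with the Bragg-Kleeman rule R(E) = αE^p standing in for tabulated PSTAR ranges; the α and p below are rough values for water, for illustration only.

    ```python
    import numpy as np

    ALPHA, P = 0.0022, 1.77    # rough water values: R in cm for E in MeV

    def csda_range(energy_mev):
        """Bragg-Kleeman approximation to the CSDA range."""
        return ALPHA * energy_mev ** P

    def reduced_curve(e0_mev, e_final_mev):
        """Depth normalized by the full CSDA range R(E0), paired with the
        final energy normalized by E0; in this form the curve is nearly
        material- and energy-independent."""
        depth = csda_range(e0_mev) - csda_range(e_final_mev)
        return depth / csda_range(e0_mev), e_final_mev / e0_mev

    e0 = 200.0    # a typical pCT initial energy, MeV
    for e in (180.0, 120.0, 60.0, 10.0):
        x, y = reduced_curve(e0, e)
        print(f"normalized depth {x:.3f} -> normalized energy {y:.3f}")
    ```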

  11. Use of tidal breathing curves for evaluating expiratory airway obstruction in infants.

    Science.gov (United States)

    Hevroni, Avigdor; Goldman, Aliza; Blank-Brachfeld, Miriam; Abu Ahmad, Wiessam; Ben-Dov, Lior; Springer, Chaim

    2018-01-15

    To evaluate tidal breathing (TB) flow-volume and flow-time curves for identification of expiratory airway obstruction in infants, pulmonary function tests were analyzed retrospectively in 156 infants aged 3-24 months with persistent or recurrent respiratory complaints. Parameters derived from TB curves were compared to maximal expiratory flow at functional residual capacity (V'maxFRC) measured by the rapid thoracoabdominal compression technique. Analyzed parameters were: inspiratory time (tI), expiratory time (tE), tidal volume, peak tidal expiratory flow (PTEF), time to peak tidal expiratory flow (tPTEF), expiratory flow when 50% and 25% of tidal volume remains in the lungs (FEF50 and FEF25, respectively), and the ratios tPTEF/tE, tI/tE, FEF50/PTEF, and FEF25/PTEF. Statistical comparisons between flow indices and TB parameters were performed using mean squared error and Pearson's sample correlation coefficient. The study population was also divided into two groups based on severity of expiratory obstruction (above or below a V'maxFRC z-score of -2) to generate receiver operating characteristic (ROC) curves and calculate discriminatory values between the groups. The TB parameters best correlated to V'maxFRC were tPTEF/tE, FEF50/PTEF, and FEF25/PTEF, with r = 0.61, 0.67, and 0.65, respectively (p < 0.0001 for all). ROC curves for FEF50/PTEF, FEF25/PTEF, and tPTEF/tE showed areas under the curve of 0.813, 0.797, and 0.796, respectively. Cutoff z-scores of -0.35, -0.34, and -0.43 for these three parameters, respectively, showed an 86% negative predictive value for severe airway obstruction. TB curves can assist in ruling out severe expiratory airway obstruction in infants.
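
    The ROC construction used above is standard and easy to reproduce. The sketch below computes an AUC and a negative predictive value for tPTEF/tE z-scores on synthetic data; the group sizes and score distributions are invented, not the study's.

    ```python
    import numpy as np
    from sklearn.metrics import roc_curve, roc_auc_score

    # Synthetic tPTEF/tE z-scores for infants with and without severe
    # expiratory obstruction (label 1 = obstructed, by the V'maxFRC cutoff)
    rng = np.random.default_rng(0)
    z_obstructed = rng.normal(-1.2, 0.8, 40)
    z_normal = rng.normal(0.2, 0.8, 116)
    scores = np.r_[z_obstructed, z_normal]
    labels = np.r_[np.ones(40), np.zeros(116)]

    # Lower z-scores indicate obstruction, so negate scores for the ROC
    auc = roc_auc_score(labels, -scores)
    fpr, tpr, thresholds = roc_curve(labels, -scores)
    print(f"AUC = {auc:.3f} over {len(thresholds)} ROC points")

    # Negative predictive value at a candidate cutoff, e.g. z = -0.43
    cutoff = -0.43
    negatives = scores > cutoff               # test-negative infants
    npv = np.mean(labels[negatives] == 0)
    print(f"NPV at z > {cutoff}: {npv:.2f}")
    ```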

  12. Development of Future Rule Curves for Multipurpose Reservoir Operation Using Conditional Genetic and Tabu Search Algorithms

    Directory of Open Access Journals (Sweden)

    Anongrit Kangrang

    2018-01-01

    Optimal rule curves are necessary guidelines in reservoir operation; they are used to assess the performance of a reservoir in satisfying water supply, irrigation, industrial, hydropower, and environmental conservation requirements. This study coupled the conditional genetic algorithm (CGA) and the conditional tabu search algorithm (CTSA) with a reservoir simulation model in order to search for optimal reservoir rule curves. The Ubolrat Reservoir, located in the northeast region of Thailand, served as an illustrative application, with historic monthly inflow, future inflow generated by the SWAT hydrological model using 50 years of future climate data from the PRECIS regional climate model under the IPCC SRES B2 emission scenario, water demand, hydrologic data, and physical reservoir data. The future and synthetic inflow data were used to simulate the reservoir system and evaluate the water situation. Water shortage and excess water situations were expressed in terms of frequency, magnitude, and duration. The results show that the optimal rule curves from the CGA and CTSA coupled with the simulation model can mitigate drought and flood situations better than the existing rule curves. The optimal future rule curves were also more suitable for future situations than the other rule curves.
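
    The role of the simulation model in scoring candidate rule curves can be sketched as a monthly water balance that counts shortage and spill months. All volumes, the demand, and the operating policy below are invented simplifications; a CGA or CTSA search would repeatedly call such a simulator with candidate upper/lower curves and keep the best-scoring pair.

    ```python
    import numpy as np

    CAPACITY, DEAD = 2_263.0, 581.0      # storage bounds, MCM (illustrative)
    DEMAND = 120.0                       # target monthly release, MCM

    def simulate(inflows, upper, lower):
        """Release the demand, but cut back to 50% below the lower rule
        curve and spill above the upper one; return (shortage, spill) counts."""
        storage, shortage, spills = 0.6 * CAPACITY, 0, 0
        for month, q in enumerate(inflows):
            m = month % 12
            release = DEMAND if storage > lower[m] else 0.5 * DEMAND
            storage = storage + q - release
            if storage < DEAD:           # cannot sustain even reduced release
                shortage += 1
                storage = DEAD
            if storage > upper[m]:       # excess water is spilled
                spills += 1
                storage = upper[m]
        return shortage, spills

    rng = np.random.default_rng(42)
    inflows = rng.gamma(2.0, 60.0, 600)  # 50 synthetic years of monthly inflow
    upper = np.full(12, 0.90 * CAPACITY)
    lower = np.full(12, 0.35 * CAPACITY)
    print(simulate(inflows, upper, lower))
    ```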

  13. Trigonometric Characterization of Some Plane Curves

    Indian Academy of Sciences (India)

    IAS Admin

    (Figure 1). A relation between tan θ and tan ψ gives the trigonometric equation of the family of curves. In this article, trigonometric equations of some known plane curves are deduced, and it is shown that these equations reveal some geometric characteristics of the families of the curves under consideration.

  14. Knitting Technologies And Tensile Properties Of A Novel Curved Flat-Knitted Three-Dimensional Spacer Fabrics

    Directory of Open Access Journals (Sweden)

    Li Xiaoying

    2015-09-01

    This paper introduces a knitting technique for making innovative curved three-dimensional (3D) spacer fabrics on a computerized flat-knitting machine. During manufacturing, a number of reinforcement yarns made of aramid fibres are inserted into the 3D spacer fabrics along the weft direction to enhance the fabric tensile properties. Curved, flat-knitted 3D spacer fabrics with different angles (in the warp direction) were also developed. Tensile tests were carried out in the weft and warp directions for the two spacer fabrics (with and without reinforcement yarns), and their stress–strain curves were compared. The results showed that the reinforcement yarns can reduce fabric deformation and improve the tensile stress and dimensional stability of 3D spacer fabrics. This research can support the further study of 3D spacer fabrics applied to composites.

  15. Calibration curves for quantifying praseodymium by UV-VIS

    International Nuclear Information System (INIS)

    Gonzalez M, R.; Lopez G, H.; Rojas H, A.

    2007-01-01

    UV-Vis spectroscopy was used to determine the concentration-dependent absorption bands of praseodymium solutions at pH 3. The most suitable bands were at 215 nm for concentrations of 0.0001-0.026 M; at 481 nm, 468 nm, and 443 nm for concentrations of 0.026-0.325 M; and at 589 nm for concentrations of 0.026-0.65 M. Calibration curves were determined at these wavelengths, with correlation coefficients between 0.9976 and 0.9999, except for the 589 nm band, which gave R² = 0.9014. (Author)
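
    The calibration step itself is a linear least-squares fit of absorbance against concentration (Beer-Lambert), with R² reported per wavelength. The sketch below uses synthetic data spanning one of the concentration ranges quoted above; the slope, intercept, and noise are invented.

    ```python
    import numpy as np

    # Synthetic absorbance readings over the 481 nm concentration range
    conc = np.linspace(0.026, 0.325, 8)             # mol/L
    rng = np.random.default_rng(3)
    absorbance = 2.1 * conc + 0.01 + 0.005 * rng.normal(size=conc.size)

    # Linear calibration curve A = slope*c + intercept, with R**2
    slope, intercept = np.polyfit(conc, absorbance, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((absorbance - pred) ** 2)
    ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    print(f"A = {slope:.3f}*c + {intercept:.4f},  R^2 = {r2:.4f}")

    # Inverting the curve quantifies an unknown sample from its absorbance
    unknown_abs = 0.35
    print("estimated concentration:", (unknown_abs - intercept) / slope, "mol/L")
    ```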

  16. Management of the learning curve

    DEFF Research Database (Denmark)

    Pedersen, Peter-Christian; Slepniov, Dmitrij

    2016-01-01

    Purpose – This paper focuses on the management of the learning curve in overseas capacity expansions. The purpose of this paper is to unravel the direct as well as indirect influences on the learning curve and to advance the understanding of how these affect its management. Design/methodology/approach – ... the dimensions of the learning process involved in a capacity expansion project and identified the direct and indirect labour influences on the production learning curve. On this basis, the study proposes solutions to managing learning curves in overseas capacity expansions. Furthermore, the paper concludes ... with measures that have the potential to significantly reduce the non-value-added time when establishing new capacities overseas. Originality/value – The paper uses a longitudinal in-depth case study of a Danish wind turbine manufacturer and goes beyond a simplistic treatment of the lead time and learning...

  17. Numerical generation of boundary-fitted curvilinear coordinate systems for arbitrarily curved surfaces

    International Nuclear Information System (INIS)

    Takagi, T.; Miki, K.; Chen, B.C.J.; Sha, W.T.

    1985-01-01

    A new method is presented for numerically generating boundary-fitted coordinate systems for arbitrarily curved surfaces. The three-dimensional surface is expressed by functions of two parameters using the geometrical modeling techniques of computer graphics. This leads to new quasi-one- and two-dimensional elliptic partial differential equations for the coordinate transformation. Since the equations involve the derivatives of the surface expressions, the grids generated by the equations are distributed over the surface according to its slope and curvature. A computer program, GRID-CS, based on the method was developed and applied to a second-order surface, a torus, and the surface of a primary containment vessel for a nuclear reactor. These applications confirm that GRID-CS is a convenient and efficient tool for grid generation on arbitrarily curved surfaces.
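
    The core of the elliptic approach can be sketched in two dimensions: interior grid points are relaxed as solutions of Laplace equations while boundary points stay fixed to the curved edge. This omits the metric terms of the paper's surface-fitted equations and is only a minimal analogue.

    ```python
    import numpy as np

    NI, NJ, SWEEPS = 21, 11, 500

    # Boundary-fitted initial grid on an annular sector (a "curved" domain)
    u = np.linspace(0.0, np.pi / 2, NI)
    v = np.linspace(1.0, 2.0, NJ)
    U, V = np.meshgrid(u, v, indexing="ij")
    x, y = V * np.cos(U), V * np.sin(U)

    # Jacobi relaxation of Laplace's equation for the interior coordinates;
    # the first/last rows and columns (the boundary) are never modified.
    for _ in range(SWEEPS):
        x[1:-1, 1:-1] = 0.25 * (x[2:, 1:-1] + x[:-2, 1:-1]
                                + x[1:-1, 2:] + x[1:-1, :-2])
        y[1:-1, 1:-1] = 0.25 * (y[2:, 1:-1] + y[:-2, 1:-1]
                                + y[1:-1, 2:] + y[1:-1, :-2])

    print("grid spacing near inner boundary:",
          np.hypot(x[1, 1] - x[0, 1], y[1, 1] - y[0, 1]))
    ```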

  18. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurement of the solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, the spout diameter, and the fountain height) and radioactive particle tracking (RPT) (for the measurement of the 3D solids flow field, velocity, turbulence parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two-fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design, and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  19. An hp-adaptive strategy for the solution of the exact kernel curved wire Pocklington equation

    NARCIS (Netherlands)

    D.J.P. Lahaye (Domenico); P.W. Hemker (Piet)

    2007-01-01

    In this paper we introduce an adaptive method for the numerical solution of the Pocklington integro-differential equation with exact kernel for the current induced in a smoothly curved thin wire antenna. The hp-adaptive technique is based on the representation of the discrete solution,

  20. Curve-of-growth analysis of a red giant in the globular cluster M13

    International Nuclear Information System (INIS)

    Griffin, R.

    1979-01-01

    A coude spectrogram of a red giant (L973) in the globular cluster M13 is analysed, with respect to α Boo, by the differential curve-of-growth technique. The overall metal abundance is found to be approximately one-tenth of that of α Boo, or one-fortieth that of the Sun. (author)