WorldWideScience

Sample records for curve scale technique

  1. Image scaling curve generation

    NARCIS (Netherlands)

    2012-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then…

  2. Image scaling curve generation.

    NARCIS (Netherlands)

    2011-01-01

    The present invention relates to a method of generating an image scaling curve, where local saliency is detected in a received image. The detected local saliency is then accumulated in the first direction. A final scaling curve is derived from the detected local saliency and the image is then…

  3. Fermat's Technique of Finding Areas under Curves

    Science.gov (United States)

    Staples, Ed

    2004-01-01

    Perhaps next time teachers head towards the fundamental theorem of calculus in their classroom, they may wish to consider Fermat's technique of finding expressions for areas under curves, beautifully outlined in Boyer's History of Mathematics. Pierre de Fermat (1601-1665) developed some important results in the journey toward the discovery of the…

  4. Asymptotic scalings of developing curved pipe flow

    Science.gov (United States)

    Ault, Jesse; Chen, Kevin; Stone, Howard

    2015-11-01

    Asymptotic velocity and pressure scalings are identified for the developing curved pipe flow problem in the limit of small pipe curvature and high Reynolds numbers. The continuity and Navier-Stokes equations in toroidal coordinates are linearized about Dean's analytical curved pipe flow solution (Dean 1927). Applying appropriate scaling arguments to the perturbation pressure and velocity components and taking the limits of small curvature and large Reynolds number yields a set of governing equations and boundary conditions for the perturbations, independent of any Reynolds number and pipe curvature dependence. Direct numerical simulations are used to confirm these scaling arguments. Fully developed straight pipe flow is simulated entering a curved pipe section for a range of Reynolds numbers and pipe-to-curvature radius ratios. The maximum values of the axial and secondary velocity perturbation components along with the maximum value of the pressure perturbation are plotted along the curved pipe section. The results collapse when the scaling arguments are applied. The numerically solved decay of the velocity perturbation is also used to determine the entrance/development lengths for the curved pipe flows, which are shown to scale linearly with the Reynolds number.

  5. Learning curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, Jussi K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year
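
    As a rough, hypothetical illustration of the maximum-likelihood fitting idea described in this record (not the paper's actual model, data, or equations), the Python sketch below fits an exponentially decaying accident rate to a handful of synthetic event times; the rate form, the event times, and the observation window are all assumptions made for the example.

```python
# Hypothetical illustration: fit an exponentially decaying ("learning") accident
# rate lam(t) = lam0 * exp(-b * t) to accident occurrence times by maximum
# likelihood for an inhomogeneous Poisson process. All numbers are synthetic.
import numpy as np
from scipy.optimize import minimize

# cumulative reactor-years at which accidents were observed (invented values)
event_times = np.array([30., 80., 150., 260., 400., 700., 1100., 1800., 2600.])
T_total = 4000.0  # total cumulative reactor-years of observation

def neg_log_likelihood(params):
    lam0, b = params
    if lam0 <= 0 or b <= 0:
        return np.inf
    log_rate_sum = np.sum(np.log(lam0) - b * event_times)
    expected_events = lam0 / b * (1.0 - np.exp(-b * T_total))  # integral of the rate
    return -(log_rate_sum - expected_events)

res = minimize(neg_log_likelihood, x0=[0.01, 1e-4], method="Nelder-Mead")
lam0_hat, b_hat = res.x
print(f"initial rate ~ {lam0_hat:.4f} per reactor-year, learning constant ~ {b_hat:.2e}")
```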

  6. Trends in scale and shape of survival curves.

    Science.gov (United States)

    Weon, Byung Mook; Je, Jung Ho

    2012-01-01

    The ageing of the population is an issue in wealthy countries worldwide because of increasing costs for health care and welfare. Survival curves taken from demographic life tables may help shed light on the hypotheses that humans are living longer and that human populations are growing older. We describe a methodology that enables us to obtain separate measurements of scale and shape variances in survival curves. Specifically, 'living longer' is associated with the scale variance of survival curves, whereas 'growing older' is associated with the shape variance. We show how the scale and shape of survival curves have changed over time during recent decades, based on period and cohort female life tables for selected wealthy countries. Our methodology will be useful for performing better tracking of ageing statistics and it is possible that this methodology can help identify the causes of current trends in human ageing.
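
    A minimal sketch of how 'scale' and 'shape' can be read off a survival curve, assuming a Weibull-type survival function s(x) = exp(-(x/alpha)^beta); the authors' own decomposition may differ, and the life-table values below are synthetic.

```python
# Fit a Weibull-type survival function to (synthetic) life-table data:
# alpha acts as the scale ("living longer"), beta as the shape
# ("growing older", i.e. rectangularization of the curve).
import numpy as np
from scipy.optimize import curve_fit

def weibull_survival(x, alpha, beta):
    return np.exp(-(x / alpha) ** beta)

age = np.linspace(1.0, 100.0, 21)
rng = np.random.default_rng(0)
survival = np.clip(weibull_survival(age, 85.0, 9.0)
                   + 0.005 * rng.normal(size=age.size), 1e-6, 1.0)

(alpha_hat, beta_hat), _ = curve_fit(weibull_survival, age, survival,
                                     p0=[80.0, 8.0], bounds=(1e-3, np.inf))
print(f"scale (characteristic life) ~ {alpha_hat:.1f} yr, shape ~ {beta_hat:.2f}")
```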

  7. Scaling of counter-current imbibition recovery curves using artificial neural networks

    Science.gov (United States)

    Jafari, Iman; Masihi, Mohsen; Nasiri Zarandi, Masoud

    2018-06-01

    The scaling of imbibition curves is of great importance in the characterization and simulation of oil production from naturally fractured reservoirs. Different parameters such as matrix porosity and permeability, oil and water viscosities, matrix dimensions, and oil/water interfacial tension have an effect on the imbibition process. Studies on scaling imbibition curves under different assumptions have resulted in various scaling equations. In this work, using an artificial neural network (ANN) method, a novel technique is presented for scaling imbibition recovery curves, which can be used for scaling experimental and field-scale imbibition cases. The imbibition recovery curves for training and testing the neural network were gathered through the simulation of different scenarios using a commercial reservoir simulator. In this ANN-based method, six parameters were assumed to have an effect on the imbibition process and were considered as the inputs for training the network. Using the ‘Bayesian regularization’ training algorithm, the network was trained and tested. Training and testing phases showed superior results in comparison with the other scaling methods. It is concluded that the new technique is useful for scaling imbibition recovery curves, especially for complex cases for which the common scaling methods are not designed.
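
    A hypothetical sketch of the ANN idea described above: six matrix/fluid parameters plus a dimensionless time are mapped to imbibition recovery with a small neural network. The data are synthetic, and ordinary L2 regularization stands in for the paper's Bayesian regularization, which scikit-learn does not provide.

```python
# Toy ANN scaling of imbibition recovery: inputs are porosity, permeability,
# oil and water viscosities, matrix size, interfacial tension and a
# dimensionless time; the target is a synthetic recovery value.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.uniform([0.05, 1, 1, 0.3, 0.05, 10], [0.35, 500, 20, 1.0, 1.0, 40], size=(n, 6))
t_d = rng.uniform(0.01, 10.0, size=(n, 1))                 # dimensionless time
rate = X[:, 1] * X[:, 5] / (X[:, 2] * X[:, 4] ** 2 * 1e3)  # invented rate group
y = 1.0 - np.exp(-rate * t_d[:, 0])                        # toy recovery curve

features = np.hstack([X, t_d])
X_tr, X_te, y_tr, y_te = train_test_split(features, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(20, 20), alpha=1e-3,
                     max_iter=2000, random_state=0)
model.fit(X_tr, y_tr)
print("R^2 on held-out cases:", round(model.score(X_te, y_te), 3))
```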

  8. Machine Learning Techniques for Stellar Light Curve Classification

    Science.gov (United States)

    Hinners, Trisha A.; Tat, Kevin; Thorp, Rachel

    2018-07-01

    We apply machine learning techniques in an attempt to predict and classify stellar properties from noisy and sparse time-series data. We preprocessed over 94 GB of Kepler light curves from the Mikulski Archive for Space Telescopes (MAST) to classify according to 10 distinct physical properties using both representation learning and feature engineering approaches. Studies using machine learning in the field have been primarily done on simulated data, making our study one of the first to use real light-curve data for machine learning approaches. We tuned our data using previous work with simulated data as a template and achieved mixed results between the two approaches. Representation learning using a long short-term memory recurrent neural network produced no successful predictions, but our work with feature engineering was successful for both classification and regression. In particular, we were able to achieve values for stellar density, stellar radius, and effective temperature with low error (∼2%–4%) and good accuracy (∼75%) for classifying the number of transits for a given star. The results show promise for improvement for both approaches upon using larger data sets with a larger minority class. This work has the potential to provide a foundation for future tools and techniques to aid in the analysis of astrophysical data.
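
    A toy feature-engineering sketch in the spirit of this record: a few summary statistics computed from synthetic light curves feed a random-forest classifier. Real work would use Kepler photometry from MAST and a far richer feature set; every value below is invented.

```python
# Classify whether a (synthetic) light curve contains transits using
# hand-crafted summary features and a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def synthetic_curve(has_transit):
    t = np.linspace(0, 30, 1500)                      # days
    flux = 1.0 + 2e-4 * rng.normal(size=t.size)       # noisy baseline
    if has_transit:
        period, depth, width = rng.uniform(3, 10), rng.uniform(5e-4, 3e-3), 0.1
        flux[(t % period) < width] -= depth           # box-shaped transits
    return flux

def features(flux):
    return [flux.std(), np.ptp(flux), np.percentile(flux, 1),
            np.mean(np.abs(np.diff(flux))),
            int((flux < flux.mean() - 3 * flux.std()).sum())]

labels = rng.integers(0, 2, size=400)
X = np.array([features(synthetic_curve(bool(lbl))) for lbl in labels])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean().round(3))
```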

  9. Characteristics of soil water retention curve at macro-scale

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    Scale-adaptable hydrological models have attracted more and more attention in the hydrological modeling research community, and the constitutive relationship at the macro-scale is one of the most important issues, on which there has not yet been enough research. Taking a constitutive relationship of soil water movement--the soil water retention curve (SWRC)--as an example, this study extends the definition of the SWRC at the micro-scale to the macro-scale, and aided by the Monte Carlo method we demonstrate that soil properties and the spatial distribution of soil moisture greatly affect the features of the SWRC. Furthermore, we assume that the spatial distribution of soil moisture is the result of self-organization of climate, soil, groundwater and soil water movement under the specific boundary conditions, and we also carry out numerical experiments of soil water movement in the vertical direction in order to explore the relationship between the SWRC at the macro-scale and the combinations of climate, soil, and groundwater. The results show that SWRCs at the macro-scale and micro-scale present totally different features, e.g., the essential hysteresis phenomenon, which is exaggerated with increasing aridity index and rising groundwater table. Soil properties play an important role in the shape of the SWRC, which can even take a rectangular shape under drier conditions, and the power-function form of the SWRC widely adopted in hydrological models might need to be revised for most situations at the macro-scale.

  10. Momentum-subtraction renormalization techniques in curved space-time

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.

    1987-10-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should.

  11. Momentum-subtraction renormalization techniques in curved space-time

    International Nuclear Information System (INIS)

    Foda, O.

    1987-01-01

    Momentum-subtraction techniques, specifically BPHZ and Zimmermann's Normal Product algorithm, are introduced as useful tools in the study of quantum field theories in the presence of background fields. In a model of a self-interacting massive scalar field, conformally coupled to a general asymptotically-flat curved space-time with a trivial topology, momentum-subtractions are shown to respect invariance under general coordinate transformations. As an illustration, general expressions for the trace anomalies are derived, and checked by explicit evaluation of the purely gravitational contributions in the free field theory limit. Furthermore, the trace of the renormalized energy-momentum tensor is shown to vanish at the Gell-Mann Low eigenvalue as it should

  12. Learning-curve estimation techniques for nuclear industry

    Energy Technology Data Exchange (ETDEWEB)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year.

  13. Learning curve estimation techniques for the nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  14. Learning-curve estimation techniques for nuclear industry

    International Nuclear Information System (INIS)

    Vaurio, J.K.

    1983-01-01

    Statistical techniques are developed to estimate the progress made by the nuclear industry in learning to prevent accidents. Learning curves are derived for accident occurrence rates based on actuarial data, predictions are made for the future, and compact analytical equations are obtained for the statistical accuracies of the estimates. Both maximum likelihood estimation and the method of moments are applied to obtain parameters for the learning models, and results are compared to each other and to earlier graphical and analytical results. An effective statistical test is also derived to assess the significance of trends. The models used associate learning directly to accidents, to the number of plants and to the cumulative number of operating years. Using as a data base nine core damage accidents in electricity-producing plants, it is estimated that the probability of a plant to have a serious flaw has decreased from 0.1 to 0.01 during the developmental phase of the nuclear industry. At the same time the frequency of accidents has decreased from 0.04 per reactor year to 0.0004 per reactor year

  15. Guidelines for using the Delphi Technique to develop habitat suitability index curves

    Science.gov (United States)

    Crance, Johnie H.

    1987-01-01

    Habitat Suitability Index (SI) curves are one method of presenting species habitat suitability criteria. The curves are often used with the Habitat Evaluation Procedures (HEP) and are necessary components of the Instream Flow Incremental Methodology (IFIM) (Armour et al. 1984). Bovee (1986) described three categories of SI curves or habitat suitability criteria based on the procedures and data used to develop the criteria. Category I curves are based on professional judgment, with little or no empirical data. Both Category II (utilization criteria) and Category III (preference criteria) curves have as their source data collected at locations where target species are observed or collected. Having Category II and Category III curves for all species of concern would be ideal. In reality, no SI curves are available for many species, and SI curves that require intensive field sampling often cannot be developed under prevailing constraints on time and costs. One alternative under these circumstances is the development and interim use of SI curves based on expert opinion. The Delphi technique (Pill 1971; Delbecq et al. 1975; Linstone and Turoff 1975) is one method used for combining the knowledge and opinions of a group of experts. The purpose of this report is to describe how the Delphi technique may be used to develop expert-opinion-based SI curves.

  16. Heterotic superstring and curved, scale-invariant superspace

    International Nuclear Information System (INIS)

    Kuusk, P.K.

    1988-01-01

    It is shown that the modified heterotic superstring [R. E. Kallosh, JETP Lett. 43, 456 (1986); Phys. Lett. 176B, 50 (1986)] demands a scale-invariant superspace for its existence. Explicit expressions are given for the connection, the torsion, and the curvature of an extended scale-invariant superspace with 506 bosonic and 16 fermionic coordinates

  17. Fourier techniques for an analysis of eclipsing binary light curves. Pt. 6b

    International Nuclear Information System (INIS)

    Demircan, O.

    1980-01-01

    This is a continuation of a previous paper which appeared in this journal (Demircan, 1980b) and aims at ascertaining some other relations between the integral transforms of the light curves of eclipsing binary systems. The appropriate use of these relations should facilitate the numerical computations for an analysis of eclipsing binary light curves by different Fourier techniques. (orig.)

  18. Phonon transport across nano-scale curved thin films

    Energy Technology Data Exchange (ETDEWEB)

    Mansoor, Saad B.; Yilbas, Bekir S., E-mail: bsyilbas@kfupm.edu.sa

    2016-12-15

    Phonon transport across a curved thin silicon film due to temperature disturbance at the film edges is examined. The equation of phonon radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature; in this case, phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced in the region near the high-temperature edge.

  19. Phonon transport across nano-scale curved thin films

    International Nuclear Information System (INIS)

    Mansoor, Saad B.; Yilbas, Bekir S.

    2016-01-01

    Phonon transport across a curved thin silicon film due to temperature disturbance at the film edges is examined. The equation of phonon radiative transport is considered, incorporating the Boltzmann transport equation for the energy transfer. The effect of the thin film curvature on phonon transport characteristics is assessed. In the analysis, the film arc length along the film centerline is kept constant and the film arc angle is varied to obtain various film curvatures. An equivalent equilibrium temperature is introduced to assess the phonon intensity distribution inside the curved thin film. It is found that the equivalent equilibrium temperature decay along the arc length is sharper than that in the radial direction, which is more pronounced in the region close to the film inner radius. Reducing the film arc angle increases the film curvature; in this case, phonon intensity decay becomes sharp in the region close to the high-temperature edge. The equivalent equilibrium temperature demonstrates a non-symmetric distribution along the radial direction, which is more pronounced in the region near the high-temperature edge.

  20. Unraveling the photovoltaic technology learning curve by incorporation of input price changes and scale effects

    International Nuclear Information System (INIS)

    Yu, C.F.; van Sark, W.G.J.H.M.; Alsema, E.A.

    2011-01-01

    In a large number of energy models, the use of learning curves for estimating technological improvements has become popular. This is based on the assumption that technological development can be monitored by following cost development as a function of market size. However, recent data show that in some stages of photovoltaic technology (PV) production, the market price of PV modules stabilizes even though the cumulative capacity increases. This implies that no technological improvement takes place in these periods: the cost predicted by the learning curve in the PV study is lower than the market one. We propose that this bias results from ignoring the effects of input prices and scale effects, and that incorporating the input prices and scale effects into the learning curve theory is an important issue in making cost predictions more reliable. In this paper, a methodology is described to incorporate the scale and input-prices effect as the additional variables into the one factor learning curve, which leads to the definition of the multi-factor learning curve. This multi-factor learning curve is not only derived from economic theories, but also supported by an empirical study. The results clearly show that input prices and scale effects are to be included, and that, although market prices are stabilizing, learning is still taking place. (author)
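
    A hedged sketch of what a multi-factor learning curve can look like in practice: module price modelled as a power law in cumulative capacity, an input price, and plant scale, fitted by ordinary least squares in log space. The functional form, data, and exponents below are illustrative assumptions, not the authors' estimates.

```python
# Fit ln(price) = ln(a) + b*ln(cumulative capacity) + c*ln(input price) + d*ln(scale)
# to synthetic data and report the learning rate implied by the capacity exponent.
import numpy as np

rng = np.random.default_rng(3)
n = 40
cum_cap = np.logspace(2, 5, n)                  # MW, cumulative installed capacity
si_price = 50 * (1 + 0.3 * rng.random(n))       # $/kg, silicon input price
scale = np.logspace(0, 2, n)                    # MW/yr, typical plant size
price = 30 * cum_cap**-0.25 * si_price**0.4 * scale**-0.1 \
        * np.exp(0.05 * rng.normal(size=n))     # synthetic module price

A = np.column_stack([np.ones(n), np.log(cum_cap), np.log(si_price), np.log(scale)])
coef, *_ = np.linalg.lstsq(A, np.log(price), rcond=None)
learning_rate = 1 - 2 ** coef[1]                # from the capacity exponent
print("capacity exponent %.3f -> learning rate %.1f%%" % (coef[1], 100 * learning_rate))
print("input-price exponent %.3f, scale exponent %.3f" % (coef[2], coef[3]))
```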

  1. Scale effect on the water retention curve of a volcanic ash

    Science.gov (United States)

    Damiano, Emilia; Comegna, Luca; Greco, Roberto; Guida, Andrea; Olivares, Lucio; Picarelli, Luciano

    2015-04-01

    During the last decades, a number of flowslides and debris flows triggered by intense rainfall affected a wide mountainous area surrounding the "Campania Plain" (southern Italy). The involved slopes are constituted by shallow unsaturated air-fall deposits of pyroclastic nature, whose stability is guaranteed by the contribution of suction to shear strength. To reliably predict the onset of slope failure triggered by critical precipitation, it is essential to understand the infiltration process and the soil suction distribution in such granular deposits. The paper presents the results of a series of investigations performed at different scales to determine the soil water retention curve (SWRC) of a volcanic ash, which is an essential element in the analysis of the infiltration processes. The soil, a silty sand, was taken at the Cervinara hillslope, 30 km East of Naples, just beside an area that had been subjected to a catastrophic flowslide. The SWRC was obtained through: standard tests in a suction-controlled triaxial apparatus (SCTX), in a pressure plate and by the Wind technique (1968) on small natural and reconstituted soil samples (sample dimensions in the order of 1×10⁻⁶ m³); infiltration tests on small-scale model slopes reconstituted in an instrumented flume (sample dimensions in the order of 5×10⁻³ m³); and suction and water content monitoring at the automatic station installed along the Cervinara hillslope. The experimental points were generally defined by coupling suction measurements through jet-fill tensiometers and water content measurements through TDR probes installed close to each other. The data sets obtained identify three different curves characterized by different shapes in the transition zone: larger volume element dimensions correspond to curves that exhibit steeper slopes and lower values of the water content in the transition zone. This result confirms the great role of the volume element dimensions in the determination of hydraulic characteristics.

  2. Local gray level S-curve transformation - A generalized contrast enhancement technique for medical images.

    Science.gov (United States)

    Gandhamal, Akash; Talbar, Sanjay; Gajre, Suhas; Hani, Ahmad Fadzil M; Kumar, Dileep

    2017-04-01

    Most medical images suffer from inadequate contrast and brightness, which leads to blurred or weak edges (low contrast) between adjacent tissues, resulting in poor segmentation and errors in the classification of tissues. Thus, contrast enhancement to improve visual information is extremely important in the development of computational approaches for obtaining quantitative measurements from medical images. In this research, a contrast enhancement algorithm that applies a gray-level S-curve transformation locally in medical images obtained from various modalities is investigated. The S-curve transformation is an extended gray-level transformation technique that results in a curve similar to a sigmoid function through a pixel-to-pixel transformation. This curve essentially increases the difference between the minimum and maximum gray values and the image gradient locally, thereby strengthening edges between adjacent tissues. The performance of the proposed technique is determined by measuring several parameters, namely edge content (improvement in image gradient), enhancement measure (degree of contrast enhancement), absolute mean brightness error (luminance distortion caused by the enhancement), and feature similarity index measure (preservation of the original image features). Based on medical image datasets comprising 1937 images from various modalities such as ultrasound, mammograms, fluorescent images, fundus, X-ray radiographs and MR images, it is found that the local gray-level S-curve transformation outperforms existing techniques in terms of improved contrast and brightness, resulting in clear and strong edges between adjacent tissues. The proposed technique can be used as a preprocessing tool for effective segmentation and classification of tissue structures in medical images. Copyright © 2017 Elsevier Ltd. All rights reserved.
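
    A minimal sketch of a local S-curve (sigmoid) gray-level transform, assuming a simple tile-by-tile implementation: each tile is remapped through a sigmoid centred on its local mean, which widens the spread between the local minimum and maximum gray values. The tile size and gain are illustrative and are not the parameters of the published method.

```python
# Local S-curve contrast enhancement: remap each image tile through a sigmoid
# centred on the tile mean, stretching gray values toward the local min/max.
import numpy as np

def local_s_curve(img, tile=32, gain=8.0):
    out = img.astype(float)
    for i in range(0, img.shape[0], tile):
        for j in range(0, img.shape[1], tile):
            block = out[i:i + tile, j:j + tile]
            lo, hi, mean = block.min(), block.max(), block.mean()
            if hi - lo < 1e-6:
                continue                          # flat region: nothing to stretch
            x = (block - mean) / (hi - lo)        # roughly in [-1, 1], centred on 0
            s = 1.0 / (1.0 + np.exp(-gain * x))   # sigmoid (S-curve) remapping
            out[i:i + tile, j:j + tile] = lo + (hi - lo) * s
    return out.astype(img.dtype)

# toy usage on a synthetic low-contrast image
img = (120 + 10 * np.random.default_rng(4).random((256, 256))).astype(np.uint8)
enhanced = local_s_curve(img)
print("mean gradient before/after:",
      np.abs(np.diff(img.astype(float))).mean().round(3),
      np.abs(np.diff(enhanced.astype(float))).mean().round(3))
```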

  3. Measurement of scintillation decay curves by a single photon counting technique

    International Nuclear Information System (INIS)

    Noguchi, Tsutomu

    1978-01-01

    An improved apparatus suitable for the measurement of spectroscopic scintillation decay curves has been developed by combination of a single photon counting technique and a delayed coincidence method. The time resolution of the apparatus is improved up to 1.16 nsec (FWHM), which is obtained from the resolution function of the system for very weak Cherenkov light flashes. Systematic measurement of scintillation decay curves is made for liquid and crystal scintillators including PPO-toluene, PBD-xylene, PPO-POPOP-toluene, anthracene and stilbene. (auth.)
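
    As a rough illustration of how a decay constant can be extracted from single-photon timing data (not the authors' apparatus or analysis), the sketch below histograms synthetic start-stop delays and fits an exponential plus a flat background.

```python
# Build a decay histogram from synthetic photon delays and fit
# amp * exp(-t / tau) + background; the 2.0 ns decay time is arbitrary.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(5)
tau_true = 2.0                                                  # ns
delays = rng.exponential(tau_true, size=50_000)                 # scintillation photons
delays = np.concatenate([delays, rng.uniform(0, 50, 2_000)])    # random coincidences

counts, edges = np.histogram(delays, bins=200, range=(0, 50))
centers = 0.5 * (edges[:-1] + edges[1:])

def model(t, amp, tau, bkg):
    return amp * np.exp(-t / tau) + bkg

popt, _ = curve_fit(model, centers, counts, p0=[counts.max(), 3.0, 10.0])
print(f"fitted decay time: {popt[1]:.2f} ns (true value {tau_true} ns)")
```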

  4. A location-scale model for non-crossing expectile curves

    NARCIS (Netherlands)

    Schnabel, S.K.; Eilers, P.H.C.

    2013-01-01

    In quantile smoothing, crossing of the estimated curves is a common nuisance, in particular with small data sets and dense sets of quantiles. Similar problems arise in expectile smoothing. We propose a novel method to avoid crossings. It is based on a location-scale model for expectiles and…

  5. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    Science.gov (United States)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor, and not the printer, as the target output device. When users print images from a camera, they need to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The corrected images exhibited an improved tonal scale and were visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.

  6. Novel hybrid (magnet plus curve grasper) technique during transumbilical cholecystectomy: initial experience of a promising approach.

    Science.gov (United States)

    Millan, Carolina; Bignon, Horacion; Bellia, Gaston; Buela, Enrique; Rabinovich, Fernando; Albertal, Mariano; Martinez Ferro, Marcelo

    2013-10-01

    The use of magnets in transumbilical cholecystectomy (TUC) improves triangulation and achieves an optimal critical view. Nonetheless, the tendency of the magnets to collide hinders the process. In order to simplify the surgical technique, we developed a hybrid model with a single magnet and a curved grasper. All TUCs performed with a hybrid strategy in our pediatric population between September 2009 and July 2012 were retrospectively reviewed. Of 260 surgical procedures in which at least one magnet was used, 87 were TUCs. Of those, 62 were hybrid: 33 in adults and 29 in pediatric patients. The technique combines a magnet and a curved grasper. Through a transumbilical incision, we placed a 12-mm trocar and another flexible 5-mm trocar. The laparoscope with the working channel used the 12-mm trocar. The magnetic grasper was introduced into the abdominal cavity through the working channel to provide cephalic retraction of the gallbladder fundus. Through the flexible trocar, the assistant manipulated the curved grasper to mobilize the infundibulum. The surgeon operated through the working channel of the laparoscope. In this pediatric population, the mean age was 14 years (range, 4-17 years), and mean weight was 50 kg (range, 18-90 kg); 65% were girls. Mean operative time was 62 minutes. All procedures achieved a critical view of safety with no instrument collision. There were no intraoperative or postoperative complications. The hospital stay was 1.4±0.6 days, and the median follow-up was 201 days. A hybrid technique, combining magnets and a curved grasper, simplifies transumbilical surgery. It seems feasible and safe for TUC and potentially reproducible.

  7. Nonlinear Filtering Effects of Reservoirs on Flood Frequency Curves at the Regional Scale

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Wei; Li, Hong-Yi; Leung, Lai-Yung; Yigzaw, Wondmagegn Y.; Zhao, Jianshi; Lu, Hui; Deng, Zhiqun; Demissie, Yonas; Bloschl, Gunter

    2017-10-01

    Anthropogenic activities, e.g., reservoir operation, may alter the characteristics of the Flood Frequency Curve (FFC) and challenge the basic assumption of stationarity used in flood frequency analysis. This paper presents a combined data-modeling analysis of the nonlinear filtering effects of reservoirs on the FFCs over the contiguous United States. A dimensionless Reservoir Impact Index (RII), defined as the total upstream reservoir storage capacity normalized by the annual streamflow volume, is used to quantify reservoir regulation effects. Analyses are performed for 388 river stations with an average record length of 50 years. The first two moments of the FFC, the mean annual maximum flood (MAF) and the coefficient of variation (CV), are calculated for the pre- and post-dam periods and compared to elucidate the reservoir regulation effects as a function of RII. It is found that MAF generally decreases with increasing RII but stabilizes when RII exceeds a threshold value, and CV increases with RII until a threshold value beyond which CV decreases with RII. The processes underlying the nonlinear threshold behavior of MAF and CV are investigated using three reservoir models with different levels of complexity. All models capture the nonlinear relationships of MAF and CV with RII, suggesting that the basic flood control function of reservoirs is key to the nonlinear relationships. The relative roles of reservoir storage capacity, operation objectives, available storage prior to a flood event, and reservoir inflow pattern are systematically investigated. Our findings may help improve flood-risk assessment and mitigation in regulated river systems at the regional scale.
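
    A small sketch, under invented data, of the two quantities this record compares: a Reservoir Impact Index computed as upstream storage capacity over annual streamflow volume, and the first two flood-frequency moments (MAF and CV) evaluated separately for pre- and post-dam years.

```python
# Compute RII and the pre-/post-dam MAF and CV from a synthetic series of
# annual maximum floods; the regulation effect is imposed by hand.
import numpy as np

rng = np.random.default_rng(6)
years = np.arange(1950, 2011)
annual_max = rng.gumbel(loc=800, scale=250, size=years.size)   # m^3/s
annual_max[years >= 1980] *= 0.7                               # crude post-dam attenuation

upstream_storage = 4.0e8                                       # m^3 (assumed)
mean_annual_volume = 1.2e9                                     # m^3/yr (assumed)
rii = upstream_storage / mean_annual_volume

for name, q in [("pre-dam", annual_max[years < 1980]),
                ("post-dam", annual_max[years >= 1980])]:
    maf, cv = q.mean(), q.std(ddof=1) / q.mean()
    print(f"{name}: MAF = {maf:.0f} m^3/s, CV = {cv:.2f}")
print(f"RII = {rii:.2f}")
```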

  8. Spotted star light curve numerical modeling technique and its application to HII 1883 surface imaging

    Science.gov (United States)

    Kolbin, A. I.; Shimansky, V. V.

    2014-04-01

    We developed a code for imaging the surfaces of spotted stars by a set of circular spots with a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passing of spots behind the visible stellar limb, limb darkening, and overlapping of spots. Modeling of light curves includes the use of recent results of the theory of stellar atmospheres needed to take into account the temperature dependence of flux intensity and limb darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.

  9. Estimating GHG emission mitigation supply curves of large-scale biomass use on a country level

    International Nuclear Information System (INIS)

    Dornburg, Veronika; Dam, Jinke van; Faaij, Andre

    2007-01-01

    This study evaluates the possible influences of a large-scale introduction of biomass material and energy systems and their market volumes on land, material and energy market prices and their feedback to greenhouse gas (GHG) emission mitigation costs. GHG emission mitigation supply curves for large-scale biomass use were compiled using a methodology that combines a bottom-up analysis of biomass applications, biomass cost supply curves and market prices of land, biomaterials and bioenergy carriers. These market prices depend on the scale of biomass use and the market volume of materials and energy carriers and were estimated using own-price elasticities of demand. The methodology was demonstrated for a case study of Poland in the year 2015, applying different scenarios for economic development and trade in Europe. For the key technologies considered, i.e. medium density fibreboard, polylactic acid, electricity and methanol production, GHG emission mitigation costs increase strongly with the scale of biomass production. Large-scale introduction of biomass use decreases the GHG emission reduction potential at costs below 50 Euro/Mg CO2eq by about 13-70%, depending on the scenario. Biomaterial production accounts for only a small part of this GHG emission reduction potential due to relatively small material markets and the subsequent strong decrease of biomaterial market prices at large scales of production. GHG emission mitigation costs depend strongly on biomass supply curves, the own-price elasticity of land and the market volumes of bioenergy carriers. The analysis shows that these influences should be taken into account when developing biomass implementation strategies.

  10. Spatial reflection patterns of iridescent wings of male pierid butterflies: curved scales reflect at a wider angle than flat scales.

    Science.gov (United States)

    Pirih, Primož; Wilts, Bodo D; Stavenga, Doekele G

    2011-10-01

    The males of many pierid butterflies have iridescent wings, which presumably function in intraspecific communication. The iridescence is due to nanostructured ridges of the cover scales. We have studied the iridescence in the males of a few members of Coliadinae, Gonepteryx aspasia, G. cleopatra, G. rhamni, and Colias croceus, and in two members of the Colotis group, Hebomoia glaucippe and Colotis regina. Imaging scatterometry demonstrated that the pigmentary colouration is diffuse whereas the structural colouration creates a directional, line-shaped far-field radiation pattern. Angle-dependent reflectance measurements demonstrated that the directional iridescence distinctly varies among closely related species. The species-dependent scale curvature determines the spatial properties of the wing iridescence. Narrow beam illumination of flat scales results in a narrow far-field iridescence pattern, but curved scales produce broadened patterns. The restricted spatial visibility of iridescence presumably plays a role in intraspecific signalling.

  11. Peak and Tail Scaling of Breakthrough Curves in Hydrologic Tracer Tests

    Science.gov (United States)

    Aquino, T.; Aubeneau, A. F.; Bolster, D.

    2014-12-01

    Power law tails, a marked signature of anomalous transport, have been observed in solute breakthrough curves time and time again in a variety of hydrologic settings, including in streams. However, due to the low concentrations at which they occur they are notoriously difficult to measure with confidence. This leads us to ask if there are other associated signatures of anomalous transport that can be sought. We develop a general stochastic transport framework and derive an asymptotic relation between the tail scaling of a breakthrough curve for a conservative tracer at a fixed downstream position and the scaling of the peak concentration of breakthrough curves as a function of downstream position, demonstrating that they provide equivalent information. We then quantify the relevant spatiotemporal scales for the emergence of this asymptotic regime, where the relationship holds, in the context of a very simple model that represents transport in an idealized river. We validate our results using random walk simulations. The potential experimental benefits and limitations of these findings are discussed.
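
    A hedged sketch of the kind of random-walk simulation mentioned above: particles advance one step at a time but wait Pareto-distributed times between steps (a simple continuous-time random walk), which produces a power-law tail in the breakthrough curve at the outlet. The exponent, step count, and thresholds are illustrative choices.

```python
# CTRW-style random walk: arrival time at the outlet is the sum of heavy-tailed
# waiting times; the late-time density tail should scale roughly as t^-(1+alpha).
import numpy as np

rng = np.random.default_rng(7)
n_particles, n_steps, alpha = 200_000, 100, 1.5     # alpha: waiting-time tail exponent
arrival = np.zeros(n_particles)
for _ in range(n_steps):                            # one heavy-tailed wait per step
    arrival += rng.pareto(alpha, n_particles) + 1.0

counts, edges = np.histogram(arrival, bins=np.logspace(2, 5, 60), density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
late = (centers > 5 * np.median(arrival)) & (counts > 0)
slope = np.polyfit(np.log(centers[late]), np.log(counts[late]), 1)[0]
print(f"late-time tail slope ~ {slope:.2f} (expected about {-(1 + alpha):.1f})")
```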

  12. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    International Nuclear Information System (INIS)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie; Huang, De Sheng

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  13. Introducer Curving Technique for the Prevention of Tilting of Transfemoral Gunther Tulip Inferior Vena Cava Filter

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Liang; Shen, Jing; Tong, Jia Jie [The First Hospital of China Medical University, Shenyang (China); Huang, De Sheng [College of Basic Medical Science, China Medical University, Shenyang (China)

    2012-07-15

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was measured via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10 degrees) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ² = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ² = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  14. Numerical Integration Techniques for Curved-Element Discretizations of Molecule–Solvent Interfaces

    Science.gov (United States)

    Bardhan, Jaydeep P.; Altman, Michael D.; Willis, David J.; Lippow, Shaun M.; Tidor, Bruce; White, Jacob K.

    2012-01-01

    Surface formulations of biophysical modeling problems offer attractive theoretical and computational properties. Numerical simulations based on these formulations usually begin with discretization of the surface under consideration; often, the surface is curved, possessing complicated structure and possibly singularities. Numerical simulations are commonly based on approximate, rather than exact, discretizations of these surfaces. To assess the strength of the dependence of simulation accuracy on the fidelity of surface representation, we have developed methods to model several important surface formulations using exact surface discretizations. Following and refining Zauhar’s work (J. Comp.-Aid. Mol. Des. 9:149-159, 1995), we define two classes of curved elements that can exactly discretize the van der Waals, solvent-accessible, and solvent-excluded (molecular) surfaces. We then present numerical integration techniques that can accurately evaluate nonsingular and singular integrals over these curved surfaces. After validating the exactness of the surface discretizations and demonstrating the correctness of the presented integration methods, we present a set of calculations that compare the accuracy of approximate, planar-triangle-based discretizations and exact, curved-element-based simulations of surface-generalized-Born (sGB), surface-continuum van der Waals (scvdW), and boundary-element method (BEM) electrostatics problems. Results demonstrate that continuum electrostatic calculations with BEM using curved elements, piecewise-constant basis functions, and centroid collocation are nearly ten times more accurate than planar-triangle BEM for basis sets of comparable size. The sGB and scvdW calculations give exceptional accuracy even for the coarsest obtainable discretized surfaces. The extra accuracy is attributed to the exact representation of the solute–solvent interface; in contrast, commonly used planar-triangle discretizations can only offer improved…

  15. Evaluation of J-R curve testing of nuclear piping materials using the direct current potential drop technique

    International Nuclear Information System (INIS)

    Hackett, E.M.; Kirk, M.T.; Hays, R.A.

    1986-08-01

    A method is described for developing J-R curves for nuclear piping materials using the DC Potential Drop (DCPD) technique. Experimental calibration curves were developed for both three point bend and compact specimen geometries using ASTM A106 steel, a type 304 stainless steel and a high strength aluminum alloy. These curves were fit with a power law expression over the range of crack extension encountered during J-R curve tests (0.6 a/W to 0.8 a/W). The calibration curves were insensitive to both material and sidegrooving and depended solely on specimen geometry and lead attachment points. Crack initiation in J-R curve tests using DCPD was determined by a deviation from a linear region on a plot of COD vs. DCPD. The validity of this criterion for ASTM A106 steel was determined by a series of multispecimen tests that bracketed the initiation region. A statistical differential slope procedure for determination of the crack initiation point is presented and discussed. J-R curve tests were performed on ASTM A106 steel and type 304 stainless steel using both the elastic compliance and DCPD techniques to assess R-curve comparability. J-R curves determined using the two approaches were found to be in good agreement for ASTM A106 steel. The applicability of the DCPD technique to type 304 stainless steel and high rate loading of ferromagnetic materials is discussed. 15 refs., 33 figs
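
    Two small pieces of the DCPD procedure sketched with synthetic data (not the report's calibrations or records): a power-law fit between normalized crack length a/W and normalized potential V/V0, and crack initiation picked as the first departure of the COD-versus-DCPD record from its initial linear trend.

```python
import numpy as np

# (1) power-law calibration a/W = A * (V/V0)^n over 0.6 <= a/W <= 0.8
rng = np.random.default_rng(8)
a_over_w = np.linspace(0.60, 0.80, 15)
v_ratio = (a_over_w / 0.6) ** 2.2 * (1 + 0.002 * rng.normal(size=15))
n_exp, log_A = np.polyfit(np.log(v_ratio), np.log(a_over_w), 1)
print(f"calibration: a/W = {np.exp(log_A):.3f} * (V/V0)^{n_exp:.3f}")

# (2) initiation as deviation of COD vs DCPD from the initial linear region
dcpd = np.linspace(1.00, 1.20, 200)
cod = 0.5 * (dcpd - 1.0)                                     # linear (blunting) part
cod[dcpd > 1.08] += 2.0 * (dcpd[dcpd > 1.08] - 1.08) ** 1.5  # crack growth kicks in
fit = np.polyfit(dcpd[:40], cod[:40], 1)                     # fit the early linear part
residual = cod - np.polyval(fit, dcpd)
initiation_index = int(np.argmax(residual > 0.002))          # first clear deviation
print(f"crack initiation near V/V0 = {dcpd[initiation_index]:.3f}")
```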

  16. Renormalization and scaling behavior of non-Abelian gauge fields in curved spacetime

    International Nuclear Information System (INIS)

    Leen, T.K.

    1983-01-01

    In this article we discuss the one-loop renormalization and scaling behavior of non-Abelian gauge field theories in a general curved spacetime. A generating functional is constructed which forms the basis for both the perturbation expansion and the Ward identities. Local momentum space representations for the vector and ghost particles are developed and used to extract the divergent parts of Feynman integrals. The one-loop diagrams for the ghost propagator and the vector-ghost vertex are shown to have no divergences not present in Minkowski space. The Ward identities ensure that this is true for the vector propagator as well. It is shown that the above renormalizations render the three- and four-vector vertices finite. Finally, a renormalization group equation valid in curved spacetimes is derived. Its solution is given and the theory is shown to be asymptotically free as in Minkowski space

  17. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Directory of Open Access Journals (Sweden)

    Gaetano Luglio

    2015-06-01

    Conclusion: Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon.

  18. Optimum conditions for the determination of ionization potentials, appearance potentials and fine structure in ionization efficiency curves using edd technique

    International Nuclear Information System (INIS)

    Selim, Ezzat T.; El-Kholy, S.B.; Zahran, Nagwa F.

    1978-01-01

    The optimum conditions for determining ionization potentials as well as fine structure in electron impact ionization efficiency curves are studied using the energy distribution difference (EDD) technique. Applying these conditions to Ar+, Kr+, CO2+ and N+ from N2, very good agreement is obtained when compared with results determined by other techniques, including UV spectroscopy. The merits and limitations of the technique are also discussed

  19. Polygonal approximation and scale-space analysis of closed digital curves

    CERN Document Server

    Ray, Kumar S

    2013-01-01

    This book covers the most important topics in the area of pattern recognition, object recognition, computer vision, robot vision, medical computing, computational geometry, and bioinformatics systems. Students and researchers will find a comprehensive treatment of polygonal approximation and its real-life applications. The book not only explains the theoretical aspects but also presents applications with detailed design parameters. The systematic development of the concept of polygonal approximation of digital curves and its scale-space analysis are useful and attractive to scholars in many fields.

  20. Fluid flow profile in a packed bead column using residence time curves and radiotracer techniques

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Ana Paula F. de; Gonçalves, Eduardo Ramos; Brandão, Luis Eduardo B.; Salgado, Cesar M., E-mail: anacamiqui@gmail.com, E-mail: egoncalves@con.ufrj.br, E-mail: brandao@ien.gov.br, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2017-07-01

    Packed columns are extremely important in the chemical industry and are used for purification, separation and treatment processes of gas or liquid mixtures. The objective of this work is to study the hydrodynamics of the fluid in order to characterize the aqueous-phase flow patterns in the packed column, using the Residence Time Distribution (RTD) curve methodology to analyze and fit theoretical models describing the operating conditions of the column. The RTD can be obtained by using the pulse-stimulus response technique, which is characterized by the instantaneous injection of a radiotracer into the system input. In this work, 68Ga was used as the radiotracer. Five shielded and collimated NaI(Tl) 1 x 1″ scintillator detectors were suitably positioned to record the passage of the radiotracer along the conveying line and the packed column, making possible the analysis of the RTD curve in the regions of interest. With the data generated by the NaI(Tl) detectors during the passage of the radiotracer in the transport line and inside the column, it was possible to evaluate the flow profile of the aqueous phase and to identify operational failures, such as internal channeling and the existence of a retention zone inside the column. Theoretical models were used for the different ideal flows: piston flow and perfect mixing. (author)
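
    A minimal sketch of the model comparison mentioned above, using a synthetic residence time distribution: the mean residence time is estimated from the curve itself and the measured E(t) is compared against the perfectly mixed limit, with piston flow as the delta-function limit. All numbers are invented.

```python
# Compare a synthetic RTD against the ideal perfectly-mixed model E(t) = exp(-t/tau)/tau.
import numpy as np

t = np.linspace(0, 300, 601)                                  # s
dt = t[1] - t[0]
# synthetic tracer response: a dispersed pulse plus some tailing
E_meas = np.exp(-0.5 * ((t - 90) / 18.0) ** 2) + 0.15 * np.exp(-(t - 90) / 120.0) * (t > 90)
E_meas /= E_meas.sum() * dt                                   # normalize to unit area

tau = (t * E_meas).sum() * dt                                 # mean residence time
E_cstr = np.exp(-t / tau) / tau                               # perfect-mixing model
rmse = np.sqrt(np.mean((E_meas - E_cstr) ** 2))
print(f"mean residence time ~ {tau:.0f} s")
print(f"RMSE against the perfect-mixing model: {rmse:.4f}")
print(f"the piston-flow limit would be a delta function at t = {tau:.0f} s")
```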

  1. Fluid flow profile in a packed bead column using residence time curves and radiotracer techniques

    International Nuclear Information System (INIS)

    Almeida, Ana Paula F. de; Gonçalves, Eduardo Ramos; Brandão, Luis Eduardo B.; Salgado, Cesar M.

    2017-01-01

    Packed columns are extremely important in the chemical industry and are used for purification, separation and treatment processes of gas or liquid mixtures. The objective of this work is to study the hydrodynamics of the fluid in order to characterize the aqueous-phase flow patterns in the packed column, using the Residence Time Distribution (RTD) curve methodology to analyze and fit theoretical models describing the operating conditions of the column. The RTD can be obtained by using the pulse-stimulus response technique, which is characterized by the instantaneous injection of a radiotracer into the system input. In this work, 68Ga was used as the radiotracer. Five shielded and collimated NaI(Tl) 1 x 1″ scintillator detectors were suitably positioned to record the passage of the radiotracer along the conveying line and the packed column, making possible the analysis of the RTD curve in the regions of interest. With the data generated by the NaI(Tl) detectors during the passage of the radiotracer in the transport line and inside the column, it was possible to evaluate the flow profile of the aqueous phase and to identify operational failures, such as internal channeling and the existence of a retention zone inside the column. Theoretical models were used for the different ideal flows: piston flow and perfect mixing. (author)

  2. A comparison of two different techniques for deriving the quiet day curve from SARINET riometer data

    Directory of Open Access Journals (Sweden)

    J. Moro

    2012-08-01

    In this work, an upgrade of the technique for estimating the Quiet Day Curve (QDC) as proposed by Tanaka et al. (2007) is suggested. To validate our approach, the QDC is estimated from data acquired by the Imaging Riometer for Ionospheric Studies (IRIS) installed at the Southern Space Observatory (SSO/CRS/CCR/INPE – MCT, 29°4´ S, 53°8´ W, 480 m a.s.l., São Martinho da Serra, Brazil). The evaluation was performed by comparing the QDCs derived using our upgraded technique with those obtained using the technique proposed by Tanaka et al. (2007). The results are discussed in terms of the seasonal variability and the level of magnetic disturbance. In addition, cosmic noise absorption (CNA) images for the IRIS data recorded at SSO were built using both techniques in order to check the implications of the change in the QDC determination method for the resulting CNA.

  3. The composition-explicit distillation curve technique: Relating chemical analysis and physical properties of complex fluids.

    Science.gov (United States)

    Bruno, Thomas J; Ott, Lisa S; Lovestead, Tara M; Huber, Marcia L

    2010-04-16

    The analysis of complex fluids such as crude oils, fuels, vegetable oils and mixed waste streams poses significant challenges arising primarily from the multiplicity of components, the different properties of the components (polarity, polarizability, etc.) and matrix properties. We have recently introduced an analytical strategy that simplifies many of these analyses, and provides the added potential of linking compositional information with physical property information. This aspect can be used to facilitate equation of state development for the complex fluids. In addition to chemical characterization, the approach provides the ability to calculate thermodynamic properties for such complex heterogeneous streams. The technique is based on the advanced distillation curve (ADC) metrology, which separates a complex fluid by distillation into fractions that are sampled, and for which thermodynamically consistent temperatures are measured at atmospheric pressure. The collected sample fractions can be analyzed by any method that is appropriate. The analytical methods we have applied include gas chromatography (with flame ionization, mass spectrometric and sulfur chemiluminescence detection), thin layer chromatography, FTIR, corrosivity analysis, neutron activation analysis and cold neutron prompt gamma activation analysis. By far, the most widely used analytical technique we have used with the ADC is gas chromatography. This has enabled us to study finished fuels (gasoline, diesel fuels, aviation fuels, rocket propellants), crude oils (including a crude oil made from swine manure) and waste oils streams (used automotive and transformer oils). In this special issue of the Journal of Chromatography, specifically dedicated to extraction technologies, we describe the essential features of the advanced distillation curve metrology as an analytical strategy for complex fluids. Published by Elsevier B.V.

  4. Generation of large-scale PV scenarios using aggregated power curves

    DEFF Research Database (Denmark)

    Nuño Martinez, Edgar; Cutululis, Nicolaos Antonio

    2017-01-01

    The contribution of solar photovoltaic (PV) power to generation is becoming more relevant in modern power systems. Therefore, there is a need to model the variability of large-scale PV generation accurately. This paper presents a novel methodology to generate regional PV scenarios based on aggregated power curves rather than traditional physical PV conversion models. Our approach is based on hourly mesoscale reanalysis irradiation data and power measurements and does not require additional variables such as ambient temperature or wind speed. It was used to simulate the PV generation of the German system between 2012 and 2015, showing high levels of correlation with actual measurements (93.02–97.60%) and small deviations from the expected capacity factors (0.02–1.80%). Therefore, we are confident about the ability of the proposed model to accurately generate realistic large-scale PV scenarios.

  5. Mapping the Extinction Curve in 3D: Structure on Kiloparsec Scales

    Energy Technology Data Exchange (ETDEWEB)

    Schlafly, E. F. [Lawrence Berkeley National Laboratory, One Cyclotron Road, Berkeley, CA 94720 (United States); Peek, J. E. G. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Finkbeiner, D. P. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Green, G. M. [Kavli Institute for Particle Astrophysics and Cosmology, Physics and Astrophysics Building, 452 Lomita Mall, Stanford, CA 94305 (United States)

    2017-03-20

    Near-infrared spectroscopy from APOGEE and wide-field optical photometry from Pan-STARRS1 have recently made precise measurements of the shape of the extinction curve possible for tens of thousands of stars, parameterized by R(V). These measurements revealed structures in R(V) with large angular scales, which are challenging to explain in existing dust paradigms. In this work, we combine three-dimensional maps of dust column density with R(V) measurements to constrain the three-dimensional distribution of R(V) in the Milky Way. We find that the variations in R(V) are correlated on kiloparsec scales. In particular, most of the dust within one kiloparsec in the outer Galaxy, including many local molecular clouds (Orion, Taurus, Perseus, California, and Cepheus), has a significantly lower R(V) than more distant dust in the Milky Way. These results provide new input to models of dust evolution and processing, and complicate the application of locally derived extinction curves to more distant regions of the Milky Way and to other galaxies.

  6. Scaling Transformation in the Rembrandt Technique

    DEFF Research Database (Denmark)

    Barfod, Michael Bruhn; Leleur, Steen

    2013-01-01

    This paper examines a decision support system (DSS) for the appraisal of complex decision problems using multi-criteria decision analysis (MCDA). The DSS makes use of a structured hierarchical approach featuring the multiplicative AHP also known as the REMBRANDT technique. The paper addresses...... of a conventional AHP calculation in order to examine what impact the choice of progression factors as well as the choice of technique have on the decision making. Based on this a modified progression factor for the calculation of scores for the alternatives in REMBRANDT is suggested while the progression factor...
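    To make the multiplicative (REMBRANDT) calculation concrete, the sketch below maps integer gradations to ratios with a configurable progression factor and aggregates them by geometric means; the factor values (square root of 2 for alternatives, 2 for criteria) follow the REMBRANDT literature, and the gradation matrix is invented.

        import numpy as np

        def rembrandt_scores(delta, progression_factor=np.sqrt(2)):
            """Scores from a matrix of integer gradations delta[i, j] (how much
            alternative i is preferred over j), using the multiplicative AHP /
            REMBRANDT geometric scale r_ij = factor**delta_ij and geometric-mean
            row aggregation."""
            r = progression_factor ** np.asarray(delta, dtype=float)
            scores = np.prod(r, axis=1) ** (1.0 / r.shape[1])
            return scores / scores.sum()

        # three alternatives, gradations on a -8..8 scale (illustrative)
        delta = np.array([[0,  2,  4],
                          [-2, 0,  2],
                          [-4, -2, 0]])
        print(rembrandt_scores(delta))                           # factor sqrt(2)
        print(rembrandt_scores(delta, progression_factor=2.0))   # factor 2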

  7. Femtosecond laser-assisted cataract surgery with bimanual technique: learning curve for an experienced cataract surgeon.

    Science.gov (United States)

    Cavallini, Gian Maria; Verdina, Tommaso; De Maria, Michele; Fornasari, Elisa; Volpini, Elisa; Campi, Luca

    2017-11-29

    To describe the intraoperative complications and the learning curve of microincision cataract surgery assisted by femtosecond laser (FLACS) with bimanual technique performed by an experienced surgeon. It is a prospective, observational, comparative case series. A total of 120 eyes which underwent bimanual FLACS by the same experienced surgeon during his first experience were included in the study; we considered the first 60 cases as Group A and the second 60 cases as Group B. In both groups, only nuclear sclerosis of grade 2 or 3 was included; an intraocular lens was implanted through a 1.4-mm incision. Best-corrected visual acuity (BCVA), surgically induced astigmatism (SIA), central corneal thickness and endothelial cell loss (ECL) were evaluated before and at 1 and 3 months after surgery. Intraoperative parameters and intra- and post-operative complications were recorded. In Group A, we had femtosecond laser-related minor complications in 11 cases (18.3%) and post-operative complications in 2 cases (3.3%); in Group B, we recorded 2 cases (3.3%) of femtosecond laser-related minor complications with no post-operative complications. Mean effective phaco time (EPT) was 5.32 ± 3.68 s in Group A and 4.34 ± 2.39 s in Group B with a significant difference (p = 0.046). We recorded a significant mean BCVA improvement at 3 months in both groups (p < 0.05). Finally, we found significant ECL in both groups with a significant difference between the two groups (p = 0.042). FLACS with bimanual technique and low-energy LDV Z8 is associated with a necessary initial learning curve. After the first adjustments in the surgical technique, this technology seems to be safe and effective with rapid visual recovery and it helps surgeons to standardize the crucial steps of cataract surgery.

  8. Genome scale engineering techniques for metabolic engineering.

    Science.gov (United States)

    Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T

    2015-11-01

    Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.

  9. Reflection curves—new computation and rendering techniques

    Directory of Open Access Journals (Sweden)

    Dan-Eugen Ulmet

    2004-05-01

    Reflection curves on surfaces are important tools for free-form surface interrogation. They are essential for industrial 3D CAD/CAM systems and for rendering purposes. In this note, new approaches regarding the computation and rendering of reflection curves on surfaces are introduced. These approaches are designed to take advantage of the graphics libraries of recent releases of commercial systems such as the OpenInventor toolkit (developed by Silicon Graphics) or Matlab (developed by The MathWorks). A new relation between reflection curves and contour curves is derived; this theoretical result is used for a straightforward Matlab implementation of reflection curves. A new type of reflection curve is also generated using the OpenInventor texture and environment mapping implementations. This allows the computation, rendering, and animation of reflection curves at interactive rates, which makes it particularly useful for industrial applications.

  10. Characterizing Synergistic Water and Energy Efficiency at the Residential Scale Using a Cost Abatement Curve Approach

    Science.gov (United States)

    Stillwell, A. S.; Chini, C. M.; Schreiber, K. L.; Barker, Z. A.

    2015-12-01

    Energy and water are two increasingly correlated resources. Electricity generation at thermoelectric power plants requires cooling such that large water withdrawal and consumption rates are associated with electricity consumption. Drinking water and wastewater treatment require significant electricity inputs to clean, disinfect, and pump water. Due to this energy-water nexus, energy efficiency measures might be a cost-effective approach to reducing water use and water efficiency measures might support energy savings as well. This research characterizes the cost-effectiveness of different efficiency approaches in households by quantifying the direct and indirect water and energy savings that could be realized through efficiency measures, such as low-flow fixtures, energy and water efficient appliances, distributed generation, and solar water heating. Potential energy and water savings from these efficiency measures were analyzed in a product-lifetime adjusted economic model comparing efficiency measures to conventional counterparts. Results were displayed as cost abatement curves indicating the most economical measures to implement for a target reduction in water and/or energy consumption. These cost abatement curves are useful in supporting market innovation and investment in residential-scale efficiency.
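    A cost abatement curve of the kind described is assembled by ranking measures by their cost per unit of resource saved and accumulating the savings; the sketch below, with invented measures and figures, shows the construction.

        # Minimal construction of a cost abatement curve: each measure has an
        # annualised net cost and an annual saving; rank by cost per unit saved
        # and accumulate the savings (all figures below are invented).
        measures = [
            ("low-flow fixtures",        -15.0, 30.0),   # (name, $/yr net cost, m3/yr saved)
            ("efficient clothes washer",  10.0, 20.0),
            ("solar water heating",       60.0, 15.0),
            ("efficient dishwasher",       5.0,  8.0),
        ]

        ranked = sorted(measures, key=lambda m: m[1] / m[2])  # $ per m3 abated
        cumulative = 0.0
        for name, cost, saving in ranked:
            cumulative += saving
            print(f"{name:28s} {cost / saving:7.2f} $/m3   cumulative {cumulative:6.1f} m3/yr")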

  11. The Rayleigh curve as a model for effort distribution over the life of medium scale software systems. M.S. Thesis - Maryland Univ.

    Science.gov (United States)

    Picasso, G. O.; Basili, V. R.

    1982-01-01

    It is noted that previous investigations into the applicability of the Rayleigh curve model to medium-scale software development efforts have met with mixed results. The results of these investigations are confirmed by analyses of runs and smoothing. The reasons for the model's failure are found in the subcycle effort data. There are four contributing factors: the uniqueness of the environment studied, the influence of holidays, varying management techniques, and differences in the data studied.
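    For reference, the Rayleigh (Norden-Putnam) staffing model discussed above gives the effort rate y(t) = 2Kat·exp(-at^2), with total life-cycle effort K and a = 1/(2·t_peak^2); the parameters in the sketch below are invented.

        import numpy as np

        def rayleigh_effort_rate(t, K, t_peak):
            """Norden/Putnam Rayleigh staffing curve: effort rate at time t for a
            total effort K (person-months) peaking at t_peak (months)."""
            a = 1.0 / (2.0 * t_peak ** 2)
            return 2.0 * K * a * t * np.exp(-a * t ** 2)

        def rayleigh_cumulative(t, K, t_peak):
            """Cumulative effort expended up to time t."""
            a = 1.0 / (2.0 * t_peak ** 2)
            return K * (1.0 - np.exp(-a * t ** 2))

        t = np.linspace(0, 36, 73)            # months
        rate = rayleigh_effort_rate(t, K=200.0, t_peak=10.0)
        done = rayleigh_cumulative(t, K=200.0, t_peak=10.0)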

  12. New measurement technique of ductility curve for ductility-dip cracking susceptibility in Alloy 690 welds

    Energy Technology Data Exchange (ETDEWEB)

    Kadoi, Kota, E-mail: kadoi@hiroshima-u.ac.jp [Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527 (Japan); Uegaki, Takanori; Shinozaki, Kenji; Yamamoto, Motomichi [Graduate School of Engineering, Hiroshima University, 1-4-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8527 (Japan)

    2016-08-30

    The coupling of a hot tensile test with a novel in situ observation technique using a high-speed camera was investigated as a high-accuracy quantitative evaluation method for ductility-dip cracking (DDC) susceptibility. Several types of Alloy 690 filler wire were tested in this study owing to the alloy's susceptibility to DDC. The developed test method was used to directly measure the critical strain for DDC and high-temperature ductility curves with a gauge length of 0.5 mm. Minimum critical strains of 1.3%, 4.0%, and 3.9% were obtained for ERNiCrFe-7, ERNiCrFe-13, and ERNiCrFe-15, respectively. The DDC susceptibilities of ERNiCrFe-13 and ERNiCrFe-15 were nearly the same and quite low compared with that of ERNiCrFe-7. This was likely caused by the tortuosity of the grain boundaries arising from the niobium content of around 2.5% in the former samples. Moreover, ERNiCrFe-13 and ERNiCrFe-15 showed higher minimum critical strains even though these specimens contain more sulfur and phosphorus than ERNiCrFe-7. Thus, niobium additions appear to be more effective in reducing DDC susceptibility than the sulfur and phosphorus contents in this alloy system.

  13. New measurement technique of ductility curve for ductility-dip cracking susceptibility in Alloy 690 welds

    International Nuclear Information System (INIS)

    Kadoi, Kota; Uegaki, Takanori; Shinozaki, Kenji; Yamamoto, Motomichi

    2016-01-01

    The coupling of a hot tensile test with a novel in situ observation technique using a high-speed camera was investigated as a high-accuracy quantitative evaluation method for ductility-dip cracking (DDC) susceptibility. Several types of Alloy 690 filler wire were tested in this study owing to the alloy's susceptibility to DDC. The developed test method was used to directly measure the critical strain for DDC and high-temperature ductility curves with a gauge length of 0.5 mm. Minimum critical strains of 1.3%, 4.0%, and 3.9% were obtained for ERNiCrFe-7, ERNiCrFe-13, and ERNiCrFe-15, respectively. The DDC susceptibilities of ERNiCrFe-13 and ERNiCrFe-15 were nearly the same and quite low compared with that of ERNiCrFe-7. This was likely caused by the tortuosity of the grain boundaries arising from the niobium content of around 2.5% in the former samples. Moreover, ERNiCrFe-13 and ERNiCrFe-15 showed higher minimum critical strains even though these specimens contain more sulfur and phosphorus than ERNiCrFe-7. Thus, niobium additions appear to be more effective in reducing DDC susceptibility than the sulfur and phosphorus contents in this alloy system.

  14. Multilayer Strip Dipole Antenna Using Stacking Technique and Its Application for Curved Surface

    Directory of Open Access Journals (Sweden)

    Charinsak Saetiaw

    2013-01-01

    This paper presents the design of a multilayer strip dipole antenna made by stacking flexible copper-clad laminates, intended for curved surfaces on cylindrical objects. The designed antenna reduces the effects of curving through the relative length changes in each stacked flexible copper-clad laminate layer. Because the curvature differs in each layer of the antenna, the resonance frequency resulting from the extended antenna provides better frequency response stability than a conventional antenna when it is curved or attached to cylindrical objects. The multilayer antenna is designed at 920 MHz for UHF RFID applications.

  15. Contact mechanics at nanometric scale using nanoindentation technique for brittle and ductile materials.

    Science.gov (United States)

    Roa, J J; Rayon, E; Morales, M; Segarra, M

    2012-06-01

    In recent years, nanoindentation, or the Instrumented Indentation Technique, has become a powerful tool to study mechanical properties at the micro/nanometric scale (commonly hardness, elastic modulus and the stress-strain curve). In this review, the different contact mechanisms (elastic and elasto-plastic) are discussed, the recent patents for each mechanism are summarized in detail, and the basic equations employed to describe the mechanical behaviour of brittle and ductile materials are presented.

  16. Measurement of activated rCBF by the 133Xe inhalation technique: a comparison of total versus partial curve analysis

    International Nuclear Information System (INIS)

    Leli, D.A.; Katholi, C.R.; Hazelrig, J.B.; Falgout, J.C.; Hannay, H.J.; Wilson, E.M.; Wills, E.L.; Halsey, J.H. Jr.

    1985-01-01

    An initial assessment of the differential sensitivity of total versus partial curve analysis in estimating task related focal changes in cortical blood flow measured by the 133Xe inhalation technique was accomplished by comparing the patterns during the performance of two sensorimotor tasks by normal subjects. The validity of these patterns was evaluated by comparing them to the activation patterns expected from activation studies with the intra-arterial technique and the patterns expected from neuropsychological research literature. Subjects were 10 young adult nonsmoking healthy male volunteers. They were administered two tasks having identical sensory and cognitive components but different response requirements (oral versus manual). The regional activation patterns produced by the tasks varied with the method of curve analysis. The activation produced by the two tasks was very similar to that predicted from the research literature only for total curve analysis. To the extent that the predictions are correct, these data suggest that the 133Xe inhalation technique is more sensitive to regional flow changes when flow parameters are estimated from the total head curve. The utility of the total head curve analysis will be strengthened if similar sensitivity is demonstrated in future studies assessing normal subjects and patients with neurological and psychiatric disorders.

  17. Improvements in scaling of counter-current imbibition recovery curves using a shape factor including permeability anisotropy

    Science.gov (United States)

    Abbasi, Jassem; Sarafrazi, Shiva; Riazi, Masoud; Ghaedi, Mojtaba

    2018-02-01

    Spontaneous imbibition is the main oil production mechanism in the water invaded zone of a naturally fractured reservoir (NFR). Different scaling equations have been presented in the literature for upscaling of core scale imbibition recovery curves to field scale matrix blocks. Various scale dependent parameters such as gravity effects and boundary influences are required to be considered in the upscaling process. Fluid flow from matrix blocks to the fracture system is highly dependent on the permeability value in the horizontal and vertical directions. The purpose of this study is to include permeability anisotropy in the available scaling equations to improve the prediction of imbibition assisted oil production in NFRs. In this paper, a commercial reservoir simulator was used to obtain imbibition recovery curves for different scenarios. Then, the effect of permeability anisotropy on imbibition recovery curves was investigated, and the weakness of the existing scaling equations for anisotropic rocks was demonstrated. Consequently, an analytical shape factor was introduced that can better scale all the curves related to anisotropic matrix blocks.
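    For context, a widely used isotropic starting point for such scaling is the Ma-Morrow-Zhang dimensionless time, t_D = t·sqrt(k/phi)·sigma/(sqrt(mu_w·mu_o)·Lc^2); the anisotropic shape factor proposed in the record above is not reproduced here, and the inputs below are illustrative.

        import numpy as np

        def dimensionless_time(t, k, phi, sigma, mu_w, mu_o, Lc):
            """Ma-Morrow-Zhang scaling of spontaneous-imbibition time (SI units):
            t_D = t * sqrt(k/phi) * sigma / (sqrt(mu_w*mu_o) * Lc**2)."""
            return t * np.sqrt(k / phi) * sigma / (np.sqrt(mu_w * mu_o) * Lc ** 2)

        # illustrative core-scale inputs
        t_D = dimensionless_time(
            t=np.linspace(0, 3600 * 24, 100),  # s
            k=100e-15,                         # m^2 (about 100 mD)
            phi=0.20,
            sigma=0.03,                        # N/m
            mu_w=1e-3, mu_o=5e-3,              # Pa.s
            Lc=0.05,                           # m, characteristic length
        )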

  18. Fabricating small-scale, curved, polymeric structures with convex and concave menisci through interfacial free energy equilibrium.

    Science.gov (United States)

    Cheng, Chao-Min; Matsuura, Koji; Wang, I-Jan; Kuroda, Yuka; LeDuc, Philip R; Naruse, Keiji

    2009-11-21

    Polymeric curved structures are widely used in imaging systems including optical fibers and microfluidic channels. Here, we demonstrate that small-scale, poly(dimethylsiloxane) (PDMS)-based, curved structures can be fabricated through controlling interfacial free energy equilibrium. Resultant structures have a smooth, symmetric, curved surface, and may be convex or concave in form based on surface tension balance. Their curvatures are controlled by surface characteristics (i.e., hydrophobicity and hydrophilicity) of the molds and semi-liquid PDMS. In addition, these structures are shown to be biocompatible for cell culture. Our system provides a simple, efficient and economical method for generating integrateable optical components without costly fabrication facilities.

  19. Lightweight and Statistical Techniques for Petascale Debugging

    Energy Technology Data Exchange (ETDEWEB)

    Miller, Barton

    2014-06-30

    This project investigated novel techniques for debugging scientific applications on petascale architectures. In particular, we developed lightweight tools that narrow the problem space when bugs are encountered. We also developed techniques that either limit the number of tasks and the code regions to which a developer must apply a traditional debugger or that apply statistical techniques to provide direct suggestions of the location and type of error. We extended previous work on the Stack Trace Analysis Tool (STAT), which has already demonstrated scalability to over one hundred thousand MPI tasks. We also extended statistical techniques developed to isolate programming errors in widely used sequential or threaded applications in the Cooperative Bug Isolation (CBI) project to large scale parallel applications. Overall, our research substantially improved productivity on petascale platforms through a tool set for debugging that complements existing commercial tools. Previously, Office of Science application developers relied either on primitive manual debugging techniques based on printf or on tools, such as TotalView, that do not scale beyond a few thousand processors. However, bugs often arise at scale and substantial effort and computation cycles are wasted in either reproducing the problem in a smaller run that can be analyzed with the traditional tools or in repeated runs at scale that use the primitive techniques. New techniques that work at scale and automate the process of identifying the root cause of errors were needed. These techniques significantly reduced the time spent debugging petascale applications, thus leading to a greater overall amount of time for application scientists to pursue the scientific objectives for which the systems are purchased. We developed a new paradigm for debugging at scale: techniques that reduced the debugging scenario to a scale suitable for traditional debuggers, e.g., by narrowing the search for the root-cause analysis.

  20. Final Aperture Superposition Technique applied to fast calculation of electron output factors and depth dose curves

    International Nuclear Information System (INIS)

    Faddegon, B.A.; Villarreal-Barajas, J.E.

    2005-01-01

    The Final Aperture Superposition Technique (FAST) is described and applied to accurate, near instantaneous calculation of the relative output factor (ROF) and central axis percentage depth dose curve (PDD) for clinical electron beams used in radiotherapy. FAST is based on precalculation of dose at select points for the two extreme situations of a fully open final aperture and a final aperture with no opening (fully shielded). This technique is different from conventional superposition of dose deposition kernels: the precalculated dose is differential in position of the electron or photon at the downstream surface of the insert. The calculation for a particular aperture (x-ray jaws or MLC, insert in electron applicator) is done with superposition of the precalculated dose data, using the open field data over the open part of the aperture and the fully shielded data over the remainder. The calculation takes explicit account of all interactions in the shielded region of the aperture except the collimator effect: particles that pass from the open part into the shielded part, or vice versa. For the clinical demonstration, FAST was compared to full Monte Carlo simulation of 10x10, 2.5x2.5, and 2x8 cm² inserts. Dose was calculated to 0.5% precision in 0.4x0.4x0.2 cm³ voxels, spaced at 0.2 cm depth intervals along the central axis, using detailed Monte Carlo simulation of the treatment head of a commercial linear accelerator for six different electron beams with energies of 6-21 MeV. Each simulation took several hours on a personal computer with a 1.7 GHz processor. The calculation for the individual inserts, done with superposition, was completed in under a second on the same PC. Since simulations for the precalculation are only performed once, higher precision and resolution can be obtained without increasing the calculation time for individual inserts. Fully shielded contributions were largest for small fields and high beam energy, at the surface, reaching a maximum
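    The superposition step can be illustrated schematically: precalculated open-field and fully shielded contributions, differential in position on the aperture plane, are summed using the aperture opening as a mask. The kernels and aperture below are invented placeholders, not the precalculated Monte Carlo data of the record above.

        import numpy as np

        def fast_superposition(open_contrib, shielded_contrib, aperture_open):
            """Dose at one calculation point as the superposition of precalculated
            per-aperture-pixel contributions: open-field values over the opening,
            fully shielded values over the blocked region (the collimator effect
            is ignored, as in the FAST approximation)."""
            mask = aperture_open.astype(bool)
            return open_contrib[mask].sum() + shielded_contrib[~mask].sum()

        # placeholder 64x64 aperture-plane grids for a single dose point
        ny = nx = 64
        yy, xx = np.mgrid[0:ny, 0:nx]
        open_contrib = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / 200.0)   # invented kernel
        shielded_contrib = 0.02 * open_contrib                              # invented leakage
        aperture_open = (np.abs(xx - 32) < 10) & (np.abs(yy - 32) < 10)     # small square insert
        dose = fast_superposition(open_contrib, shielded_contrib, aperture_open)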

  1. Establishment of 60Co dose calibration curve using fluorescent in situ hybridization assay technique: Result of preliminary study

    International Nuclear Information System (INIS)

    Rahimah Abdul Rahim; Noriah Jamal; Noraisyah Mohd Yusof; Juliana Mahamad Napiah; Nelly Bo Nai Lee

    2010-01-01

    This study aims at establishing an in-vitro 60Co dose calibration curve using the Fluorescent In-Situ Hybridization assay technique for the Malaysian National Biodosimetry Laboratory. Blood samples collected from a healthy female donor were irradiated with several doses of 60Co radiation. Following culturing of the lymphocytes, microscopic slides were prepared, denatured and hybridized. The frequencies of translocations were estimated in the metaphases. A calibration curve was then generated using a regression technique; it shows a good fit to a linear-quadratic model. The results of this study might be useful for retrospectively estimating the absorbed dose of an individual exposed to ionizing radiation. This information may serve as a guide for medical treatment in the assessment of possible health consequences. (author)
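    A linear-quadratic calibration curve of the kind described, Y = c + alpha·D + beta·D^2, can be fitted by least squares and inverted to estimate dose from an observed translocation yield; the yields below are invented for illustration.

        import numpy as np

        # invented dose points (Gy) and translocation yields per cell
        dose = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
        yield_obs = np.array([0.001, 0.004, 0.009, 0.025, 0.08, 0.17, 0.29])

        # least-squares fit of Y = c + alpha*D + beta*D^2
        beta, alpha, c = np.polyfit(dose, yield_obs, 2)
        print(f"Y = {c:.4f} + {alpha:.4f} D + {beta:.4f} D^2")

        def dose_estimate(y):
            """Invert the calibration curve (positive root of the quadratic)."""
            disc = alpha ** 2 - 4.0 * beta * (c - y)
            return (-alpha + np.sqrt(disc)) / (2.0 * beta)

        print(dose_estimate(0.05))   # estimated absorbed dose for an observed yield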

  2. Millennial-scale climate variability recorded by gamma logging curve in Chaidam Basin

    International Nuclear Information System (INIS)

    Yuan Linwang; Chen Ye; Liu Zechun

    2000-01-01

    Using a natural gamma-ray logging curve of the Dacan-1 core to invert paleoclimate changes in the Chaidam Basin, the process of environmental change over the past 150,000 years has been revealed. Heinrich events and D-O cycles were identified and can be matched well with those recorded in the Greenland ice core. This suggests that the GR curve can identify tectonic and climatic events and is a sensitive proxy indicator of environmental and climatic changes.

  3. New digital demodulator with matched filters and curve segmentation techniques for BFSK demodulation: Analytical description

    Directory of Open Access Journals (Sweden)

    Jorge Torres Gómez

    2015-09-01

    The present article relates in general to the digital demodulation of Binary Frequency Shift Keying (BFSK). The objective of the present research is to obtain a new processing method for demodulating BFSK signals in order to reduce hardware complexity in comparison with other reported methods. The solution proposed here makes use of matched filter theory and curve segmentation algorithms. This paper describes the integration and configuration of a Sampler Correlator and curve segmentation blocks in order to obtain a digital receiver for proper demodulation of the received signal. The proposed solution is shown to strongly reduce hardware complexity. This part presents the analytical description of the proposed solution and covers in detail the elements needed for properly configuring the system. A second part presents the FPGA implementation of the system and the simulation results that validate the overall performance.
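    The matched-filter portion of such a receiver can be sketched as correlating each bit interval of the received signal against quadrature references at the two tone frequencies and choosing the larger energy; the sampling rate, tones and bit rate below are assumptions, and the curve-segmentation stage of the cited design is not reproduced.

        import numpy as np

        def bfsk_matched_filter(rx, fs, f0, f1, bit_rate):
            """Non-coherent matched-filter BFSK demodulation: correlate each bit
            interval against quadrature references at f0 and f1 and compare energies."""
            n = int(fs / bit_rate)                     # samples per bit
            t = np.arange(n) / fs
            refs = [(np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)) for f in (f0, f1)]
            bits = []
            for k in range(len(rx) // n):
                seg = rx[k * n:(k + 1) * n]
                e = [np.dot(seg, c) ** 2 + np.dot(seg, s) ** 2 for c, s in refs]
                bits.append(int(e[1] > e[0]))
            return np.array(bits)

        # illustrative parameters and a noisy test signal
        fs, f0, f1, bit_rate = 48000, 1200, 2200, 300
        tx_bits = np.random.default_rng(2).integers(0, 2, 20)
        t = np.arange(len(tx_bits) * fs // bit_rate) / fs
        freq = np.where(np.repeat(tx_bits, fs // bit_rate) == 1, f1, f0)
        rx = np.cos(2 * np.pi * freq * t) + 0.3 * np.random.default_rng(3).normal(size=t.size)
        print((bfsk_matched_filter(rx, fs, f0, f1, bit_rate) == tx_bits).mean())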

  4. Depth dose curves from 90Sr+90Y clinical applicators using the thermoluminescent technique

    International Nuclear Information System (INIS)

    Antonio, Patricia L.; Caldas, Linda V.E.; Oliveira, Mercia L.

    2009-01-01

    The 90Sr+90Y beta-ray sources widely used in brachytherapy applications were developed in the 1950s. Many of these sources, called clinical applicators, are still routinely used in several Brazilian radiotherapy clinics for the treatment of superficial lesions in the skin and eyes, although they are not commercialized anymore. These applicators have to be periodically calibrated, according to international recommendations, because these sources have to be very well specified in order to ensure the traceability of calibration standards. In the case of beta-ray sources, the recommended quantity is the absorbed dose rate in water at a reference distance from the source. Moreover, there are other important quantities, such as the depth dose curves and the source uniformity for beta-ray plaque sources. In this work, depth dose curves of five dermatological applicators were obtained and studied, using thin thermoluminescent dosimeters of CaSO4:Dy and phantoms of PMMA with different thicknesses (between 1.0 mm and 5.0 mm) positioned between each applicator and the TL pellets. The depth dose curves obtained presented the expected attenuation response in PMMA, and the results were compared with data obtained for a 90Sr+90Y standard source reported by the IAEA, and they were considered satisfactory. (author)

  5. The learning curve of the three-port two-instrument complete thoracoscopic lobectomy for lung cancer—A feasible technique worthy of popularization

    Directory of Open Access Journals (Sweden)

    Yu-Jen Cheng

    2015-07-01

    Conclusion: Three-port complete thoracoscopic lobectomy with the two-instrument technique is feasible for lung cancer treatment. The learning curve consisted of 28 cases. This TPTI technique should be popularized.

  6. To the calculation technique and interpretation of atom radial distribution curves in ternary alloy systems

    International Nuclear Information System (INIS)

    Dutchak, Ya.I.; Frenchko, V.S.; Voznyak, O.M.

    1975-01-01

    Certain models of the structure of three-component melts are considered: the "quasi-eutectic" model, the model of statistical distribution of atoms, and the "polystructural" model. Analytical expressions are given for the area under the first maximum of the atomic radial distribution curve for certain versions of the "polystructural" model. Using the In-Ga-Ga and Bi-Cd-Sn eutectic melts as examples, the possibility of estimating the nature of atomic ordering in three-component melts by checking the models under consideration has been demonstrated.

  7. Characterization of KS-material by means of J-R-curves especially using the partial unloading technique

    International Nuclear Information System (INIS)

    Voss, B.; Blauel, J.G.; Schmitt, W.

    1983-01-01

    Essential components of nuclear reactor systems are fabricated from materials of high toughness to exclude brittle failure. With increasing load, a crack tip will blunt, a plastic zone will be formed, and voids may nucleate and coalesce, thus initiating stable crack extension when the crack driving parameter, e.g. J, exceeds the initiation value J_i. Further stable crack growth will occur with further increasing J prior to complete failure of the structure. The specific material resistance against crack extension is characterized by J resistance curves J_R = J(Δa). ASTM provides a standard to determine the initiation toughness J_Ic from a J_R-curve [1] and a tentative standard for determining the J_R-curve by a single specimen test [2]. To generate a J_R-curve, values for the crack driving parameter J and the corresponding stable crack growth Δa have to be measured. Besides the multiple specimen technique [1], the potential drop and especially the partial unloading compliance method [2] are used to measure stable crack growth. Some special problems and some results for pressure vessel steels are discussed in this paper. (orig./RW)

  8. Evaluation of convergence behavior of metamodeling techniques for bridging scales in multi-scale multimaterial simulation

    International Nuclear Information System (INIS)

    Sen, Oishik; Davis, Sean; Jacobs, Gustaaf; Udaykumar, H.S.

    2015-01-01

    The effectiveness of several metamodeling techniques, viz. the Polynomial Stochastic Collocation method, Adaptive Stochastic Collocation method, a Radial Basis Function Neural Network, a Kriging Method and a Dynamic Kriging Method is evaluated. This is done with the express purpose of using metamodels to bridge scales between micro- and macro-scale models in a multi-scale multimaterial simulation. The rate of convergence of the error when used to reconstruct hypersurfaces of known functions is studied. For sufficiently large number of training points, Stochastic Collocation methods generally converge faster than the other metamodeling techniques, while the DKG method converges faster when the number of input points is less than 100 in a two-dimensional parameter space. Because the input points correspond to computationally expensive micro/meso-scale computations, the DKG is favored for bridging scales in a multi-scale solver
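    As a concrete illustration of one of the metamodels compared (a radial basis function network), the sketch below fits Gaussian RBF weights by solving the interpolation system and evaluates the surrogate on new points; the test function, shape parameter and sample sizes are invented stand-ins for an expensive micro-scale model.

        import numpy as np

        def rbf_fit(X, y, eps=1.0):
            """Fit Gaussian RBF interpolation weights on training points X (n x d)."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            Phi = np.exp(-eps * d2)
            return np.linalg.solve(Phi, y)

        def rbf_predict(X_train, w, X_new, eps=1.0):
            """Evaluate the RBF surrogate at new points."""
            d2 = ((X_new[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
            return np.exp(-eps * d2) @ w

        # surrogate of an invented 'expensive' response in a 2D parameter space
        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, size=(60, 2))
        y = np.sin(3 * X[:, 0]) * np.cos(2 * X[:, 1])     # stand-in for a meso-scale code
        w = rbf_fit(X, y)
        X_test = rng.uniform(-1, 1, size=(200, 2))
        err = np.abs(rbf_predict(X, w, X_test) - np.sin(3 * X_test[:, 0]) * np.cos(2 * X_test[:, 1]))
        print(err.max())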

  9. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger; Gokhale, Maya; Amato, Nancy M.

    2013-01-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash

  10. Improving 3d Spatial Queries Search: Newfangled Technique of Space Filling Curves in 3d City Modeling

    Science.gov (United States)

    Uznir, U.; Anton, F.; Suhaibah, A.; Rahman, A. A.; Mioc, D.

    2013-09-01

    The advantages of three dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using web standards. However, these 3D city models consume much more storage compared to two dimensional (2D) spatial data. They involve extra geometrical and topological information together with semantic data. Without a proper spatial data clustering method and its corresponding spatial data access method, retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects. In this research, we propose an opponent data constellation technique of space-filling curves (3D Hilbert curves) for 3D city model data representation. Unlike previous methods, which try to project 3D or n-dimensional data down to 2D or 3D using Principal Component Analysis (PCA) or Hilbert mappings, in this research we extend the Hilbert space-filling curve to one higher dimension for 3D city model data implementations. The query performance was tested using a CityGML dataset of 1,000 building blocks and the results are presented in this paper. Implementing space-filling curves in 3D city modeling improves data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert's curve, preserves the Lebesgue measure and is Lipschitz continuous. Depending on the applications, several alternatives are possible in order to cluster spatial data together in the third dimension compared to its
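    To make the clustering idea concrete, the sketch below uses the simpler Morton (Z-order) curve rather than the 3D Hilbert mapping of the record above: interleaving the bits of quantised x, y, z coordinates yields a one-dimensional key that keeps nearby objects close in the index. The bit depth and coordinates are illustrative.

        def spread_bits(v, bits=10):
            """Insert two zero bits between each bit of v (v < 2**bits)."""
            out = 0
            for i in range(bits):
                out |= ((v >> i) & 1) << (3 * i)
            return out

        def morton3d(x, y, z, bits=10):
            """Z-order (Morton) key for quantised 3D coordinates - a simpler
            space-filling curve than the Hilbert mapping discussed above."""
            return spread_bits(x, bits) | (spread_bits(y, bits) << 1) | (spread_bits(z, bits) << 2)

        # sort illustrative building centroids (already quantised to a 1024^3 grid)
        buildings = [(512, 40, 3), (513, 41, 3), (10, 900, 20)]
        print(sorted(buildings, key=lambda p: morton3d(*p)))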

  11. Design techniques for large scale linear measurement systems

    International Nuclear Information System (INIS)

    Candy, J.V.

    1979-03-01

    Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented

  12. Introducer curving technique for the prevention of tilting of transfemoral Günther Tulip inferior vena cava filter.

    Science.gov (United States)

    Xiao, Liang; Huang, De-sheng; Shen, Jing; Tong, Jia-jie

    2012-01-01

    To determine whether the introducer curving technique is useful in decreasing the degree of tilting of transfemoral Tulip filters. The study sample group consisted of 108 patients with deep vein thrombosis who were enrolled and planned to undergo thrombolysis, and who accepted the transfemoral Tulip filter insertion procedure. The patients were randomly divided into Group C and Group T. The introducer curving technique was adopted in Group T. The post-implantation filter tilting angle (ACF) was measured in an anteroposterior projection. The retrieval hook adhering to the vascular wall was assessed via tangential cavogram during retrieval. The overall average ACF was 5.8 ± 4.14 degrees. In Group C, the average ACF was 7.1 ± 4.52 degrees. In Group T, the average ACF was 4.4 ± 3.20 degrees. The groups displayed a statistically significant difference (t = 3.573, p = 0.001) in ACF. Additionally, the difference in ACF between the left and right approaches turned out to be statistically significant (7.1 ± 4.59 vs. 5.1 ± 3.82, t = 2.301, p = 0.023). The proportion of severe tilt (ACF ≥ 10°) in Group T was significantly lower than that in Group C (9.3% vs. 24.1%, χ(2) = 4.267, p = 0.039). Between the groups, the difference in the rate of the retrieval hook adhering to the vascular wall was also statistically significant (2.9% vs. 24.2%, χ(2) = 5.030, p = 0.025). The introducer curving technique appears to minimize the incidence and extent of transfemoral Tulip filter tilting.

  13. A comparison of two centrifuge techniques for constructing vulnerability curves: insight into the 'open-vessel' artifact.

    Science.gov (United States)

    Yin, Pengxian; Meng, Feng; Liu, Qing; An, Rui; Cai, Jing; Du, Guangyuan

    2018-03-30

    A vulnerability curve (VC) describes the extent of xylem cavitation resistance. Centrifuges have been used to generate VCs for decades via static- and flow-centrifuge methods. Recently, the validity of the centrifuge techniques has been questioned. Researchers have hypothesized that the centrifuge techniques might yield unreliable VCs due to the open-vessel artifact. However, other researchers reject this hypothesis. The focus of the dispute is centred on whether exponential VCs are more reliable when the static-centrifuge method is used than with the flow-centrifuge method. To further test the reliability of the centrifuge technique, two centrifuges were manufactured to simulate the static- and flow-centrifuge methods. VCs of three species with open vessels of known lengths were constructed using the two centrifuges. The results showed that both centrifuge techniques produced invalid VCs for Robinia because the water flow through stems under mild tension in centrifuges led to an increasing loss of water conductivity. Additionally, the injection of water in the flow-centrifuge exacerbated the loss of water conductivity. However, both centrifuge techniques yielded reliable VCs for Prunus, regardless of the presence of open vessels in the tested samples. We conclude that centrifuge techniques can be used in species with open vessels only when the centrifuge produces a VC that matches the bench-dehydration VC. This article is protected by copyright. All rights reserved.

  14. Getting the most from your curves: Exploring and reporting data using informative graphical techniques

    Directory of Open Access Journals (Sweden)

    Masaki Matsunaga

    2009-09-01

    Most psychological research employs tables to report descriptive and inferential statistics. Unfortunately, those tables often misrepresent critical information on the shape and variability of the data’s distribution. In addition, certain information such as the modality and score probability density is hard to report succinctly in tables and, indeed, is typically not reported in published research. This paper discusses the importance of using graphical techniques not only to explore data but also to report it effectively. In so doing, the role of exploratory data analysis in detecting Type I and Type II errors is considered. A small data set resembling a Type II error is simulated to demonstrate this procedure, using a conventional parametric test. A potential analysis routine to explore data is also presented. The paper proposes that essential summary statistics and information about the shape and variability of data should be reported via graphical techniques.

  15. Effect of Non Submerged Vanes on Separation Zone at Strongly-curved Channel Bends, a Laboratory Scale Study

    Directory of Open Access Journals (Sweden)

    Ali Akbar Akhtari

    2010-03-01

    Bends along open channels always pose difficulties for water transfer systems. One undesirable effect of bends in such channels, i.e. separation of water from the inner banks, was studied. For the purposes of this study, the literature on the subject was first reviewed, and a strongly-curved open channel was designed and constructed on the laboratory scale. Several tests were performed to evaluate the accuracy of the lab model, data homogeneity, and systematic errors. The model was then calibrated and the influence of curvature on the flow pattern past the curve was investigated. Also, for the first time, the influence of separation walls on the flow pattern was investigated. Experimental results on three strongly-curved open channels with a curvature radius to channel width ratio of 1.5 and curvature angles of 30°, 60°, and 90° showed that, in all the cases studied, the effect of flow separation could be observed immediately after the curve. In addition, the greatest effect of flow separation was seen at a distance equal to the channel width from the bend end. In the presence of middle (separation) walls, the effect of water separation at the bend was reduced, especially for a curvature of 90°.

  16. Curve fitting and modeling with splines using statistical variable selection techniques

    Science.gov (United States)

    Smith, P. L.

    1982-01-01

    The successful application of statistical variable selection techniques to fit splines is demonstrated. Major emphasis is given to knot selection, but order determination is also discussed. Two FORTRAN backward elimination programs, using the B-spline basis, were developed. The program for knot elimination is compared in detail with two other spline-fitting methods and several statistical software packages. An example is also given for the two-variable case using a tensor product basis, with a theoretical discussion of the difficulties of their use.
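    In the same spirit as the knot-elimination program described (though not the cited FORTRAN implementation), the sketch below fits a least-squares B-spline with SciPy and removes interior knots one at a time, keeping the removal that increases the residual sum of squares least until a tolerance is exceeded; the data, tolerance and selection rule are illustrative.

        import numpy as np
        from scipy.interpolate import LSQUnivariateSpline

        def backward_knot_elimination(x, y, knots, k=3, tol=1.05):
            """Remove interior knots while the residual sum of squares grows by
            less than a factor 'tol' per removal (a simple stand-in for a
            statistical variable-selection criterion)."""
            knots = list(knots)
            sse = LSQUnivariateSpline(x, y, knots, k=k).get_residual()
            while len(knots) > 1:
                trials = [(LSQUnivariateSpline(x, y, knots[:i] + knots[i + 1:], k=k).get_residual(), i)
                          for i in range(len(knots))]
                best_sse, i = min(trials)
                if best_sse > tol * sse:
                    break
                sse, knots = best_sse, knots[:i] + knots[i + 1:]
            return knots, LSQUnivariateSpline(x, y, knots, k=k)

        x = np.linspace(0, 10, 200)
        y = np.sin(x) + 0.05 * np.random.default_rng(5).normal(size=x.size)
        knots, spline = backward_knot_elimination(x, y, knots=list(np.linspace(1, 9, 15)))
        print(len(knots))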

  17. Effectiveness of four different final irrigation activation techniques on smear layer removal in curved root canals : a scanning electron microscopy study.

    Directory of Open Access Journals (Sweden)

    Puneet Ahuja

    2014-02-01

    The aim of this study was to assess the efficacy of apical negative pressure (ANP), manual dynamic agitation (MDA), passive ultrasonic irrigation (PUI) and needle irrigation (NI) as final irrigation activation techniques for smear layer removal in curved root canals. Mesiobuccal root canals of 80 freshly extracted maxillary first molars with curvatures ranging between 25° and 35° were used. A glide path with #08-15 K files was established before cleaning and shaping with Mtwo rotary instruments (VDW, Munich, Germany) up to size 35/0.04 taper. During instrumentation, 1 ml of 2.5% NaOCl was used at each change of file. Samples were divided into 4 equal groups (n=20) according to the final irrigation activation technique: group 1, apical negative pressure (ANP, EndoVac); group 2, manual dynamic agitation (MDA); group 3, passive ultrasonic irrigation (PUI); and group 4, needle irrigation (NI). Root canals were split longitudinally and subjected to scanning electron microscopy. The presence of smear layer at the coronal, middle and apical levels was evaluated by superimposing a 300-μm square grid over the obtained photomicrographs, using a four-score scale at ×1,000 magnification. Amongst all the groups tested, ANP showed the overall best smear layer removal efficacy (p < 0.05). Removal of smear layer was least effective with the NI technique. The ANP (EndoVac) system can be used as the final irrigation activation technique for effective smear layer removal in curved root canals.

  18. Glycation and secondary conformational changes of human serum albumin: study of the FTIR spectroscopic curve-fitting technique

    Directory of Open Access Journals (Sweden)

    Yu-Ting Huang

    2016-05-01

    The aim of this study was to investigate both the glycation kinetics and the protein secondary conformational changes of human serum albumin (HSA) after the reaction with ribose. Browning and fluorescence determinations as well as Fourier transform infrared (FTIR) microspectroscopy with a curve-fitting technique were applied. Various concentrations of ribose were incubated over a 12-week period at 37 ± 0.5 °C under dark conditions. The results clearly show that glycation in the HSA-ribose reaction mixtures increased markedly with the amount of ribose used and the incubation time, leading to marked alterations of the protein conformation of HSA, as revealed by the FTIR determination. In addition, the reaction solutions became colored from light to deep brown, as determined by optical observation. The increase in fluorescence intensity from HSA–ribose mixtures seemed to occur more quickly than browning, suggesting that the fluorescence products were produced earlier in the process than the compounds causing browning. Moreover, the predominant α-helical composition of HSA decreased with an increase in ribose concentration and incubation time, whereas the total β-structure and random coil composition increased, as determined by curve-fitted FTIR microspectroscopy analysis. We also found that the peak intensity ratios at 1044 cm−1/1542 cm−1 markedly decreased prior to 4 weeks of incubation, then almost plateaued, implying that the consumption of ribose in the glycation reaction might have been accelerated over the first 4 weeks of incubation and gradually decreased thereafter. This study first evidences that two unique IR peaks at 1710 cm−1 [carbonyl groups of irreversible products produced by the reaction and deposition of advanced glycation end products (AGEs)] and 1621 cm−1 (aggregated HSA molecules) were clearly observed from the curve-fitted FTIR spectra of HSA-ribose mixtures over the course of the incubation time. This study

  19. Spatial reflection patterns of iridescent wings of male pierid butterflies : Curved scales reflect at a wider angle than flat scales

    NARCIS (Netherlands)

    Pirih, Primož; Wilts, Bodo D.; Stavenga, Doekele G.

    2011-01-01

    The males of many pierid butterflies have iridescent wings, which presumably function in intraspecific communication. The iridescence is due to nanostructured ridges of the cover scales. We have studied the iridescence in the males of a few members of Coliadinae, Gonepteryx aspasia, G. cleopatra, G.

  20. Scaling Techniques for Massive Scale-Free Graphs in Distributed (External) Memory

    KAUST Repository

    Pearce, Roger

    2013-05-01

    We present techniques to process large scale-free graphs in distributed memory. Our aim is to scale to trillions of edges, and our research is targeted at leadership class supercomputers and clusters with local non-volatile memory, e.g., NAND Flash. We apply an edge list partitioning technique, designed to accommodate high-degree vertices (hubs) that create scaling challenges when processing scale-free graphs. In addition to partitioning hubs, we use ghost vertices to represent the hubs to reduce communication hotspots. We present a scaling study with three important graph algorithms: Breadth-First Search (BFS), K-Core decomposition, and Triangle Counting. We also demonstrate scalability on BG/P Intrepid by comparing to best known Graph500 results. We show results on two clusters with local NVRAM storage that are capable of traversing trillion-edge scale-free graphs. By leveraging node-local NAND Flash, our approach can process thirty-two times larger datasets with only a 39% performance degradation in Traversed Edges Per Second (TEPS). © 2013 IEEE.

  1. Laparoscopic colorectal surgery in learning curve: Role of implementation of a standardized technique and recovery protocol. A cohort study

    Science.gov (United States)

    Luglio, Gaetano; De Palma, Giovanni Domenico; Tarquini, Rachele; Giglio, Mariano Cesare; Sollazzo, Viviana; Esposito, Emanuela; Spadarella, Emanuela; Peltrini, Roberto; Liccardo, Filomena; Bucci, Luigi

    2015-01-01

    Background Despite the proven benefits, laparoscopic colorectal surgery is still underutilized among surgeons. A steep learning curve is one of the causes of its limited adoption. The aim of the study is to determine the feasibility and morbidity rate after laparoscopic colorectal surgery in a single institution, “learning curve” experience, implementing a well standardized operative technique and recovery protocol. Methods The first 50 patients treated laparoscopically were included. All the procedures were performed by a trainee surgeon, supervised by a consultant surgeon, according to the principle of complete mesocolic excision with central vascular ligation or TME. Patients underwent a fast track recovery programme. Recovery parameters, short-term outcomes, morbidity and mortality have been assessed. Results Type of resections: 20 left side resections, 8 right side resections, 14 low anterior resection/TME, 5 total colectomy and IRA, 3 total panproctocolectomy and pouch. Mean operative time: 227 min; mean number of lymph-nodes: 18.7. Conversion rate: 8%. Mean time to flatus: 1.3 days; mean time to solid stool: 2.3 days. Mean length of hospital stay: 7.2 days. Overall morbidity: 24%; major morbidity (Dindo–Clavien III): 4%. No anastomotic leak, no mortality, no 30-day readmissions. Conclusion Proper laparoscopic colorectal surgery is safe and leads to excellent results in terms of recovery and short term outcomes, even in a learning curve setting. Key factors for better outcomes and shortening the learning curve seem to be the adoption of a standardized technique and training model along with the strict supervision of an expert colorectal surgeon. PMID:25859386

  2. 3D CT cerebral angiography technique using a 320-detector machine with a time–density curve and low contrast medium volume: Comparison with fixed time delay technique

    International Nuclear Information System (INIS)

    Das, K.; Biswas, S.; Roughley, S.; Bhojak, M.; Niven, S.

    2014-01-01

    Aim: To describe a cerebral computed tomography angiography (CTA) technique using a 320-detector CT machine and a small contrast medium volume (35 ml, 15 ml for test bolus). Also, to compare the quality of these images with that of the images acquired using a larger contrast medium volume (90 or 120 ml) and a fixed time delay (FTD) of 18 s using a 16-detector CT machine. Materials and methods: Cerebral CTA images were acquired using a 320-detector machine by synchronizing the scanning time with the time of peak enhancement as determined from the time–density curve (TDC) using a test bolus dose. The quality of CTA images acquired using this technique was compared with that obtained using a FTD of 18 s (by 16-detector CT), retrospectively. Average densities in four different intracranial arteries, overall opacification of arteries, and the degree of venous contamination were graded and compared. Results: Thirty-eight patients were scanned using the TDC technique and 40 patients using the FTD technique. The arterial densities achieved by the TDC technique were higher (significant for supraclinoid and basilar arteries, p < 0.05). The proportion of images deemed as having “good” arterial opacification was 95% for TDC and 90% for FTD. The degree of venous contamination was significantly higher in images produced by the FTD technique (p < 0.001%). Conclusion: Good diagnostic quality CTA images with significant reduction of venous contamination can be achieved with a low contrast medium dose using a 320-detector machine by coupling the time of data acquisition with the time of peak enhancement

  3. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D D; Bailey, G; Martin, J; Garton, D; Noorman, H; Stelcer, E; Johnson, P [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1994-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  4. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 µm particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25 mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  5. Residence time distribution measurements in a pilot-scale poison tank using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Goswami, Sunil; Samantray, J S; Sharma, V K; Maheshwari, N K

    2015-09-01

    Various types of systems are used to control the reactivity and shut down a nuclear reactor during emergency and routine shutdown operations. Injection of boron solution (borated water) into the core of a reactor is one of the commonly used methods during emergency operation. A pilot-scale poison tank was designed and fabricated to simulate injection of boron poison into the core of a reactor along with coolant water. In order to design a full-scale poison tank, it was desired to characterize the flow of liquid from the tank. Residence time distribution (RTD) measurement and analysis was adopted to characterize the flow dynamics. A radiotracer technique was applied to measure the RTD of the aqueous phase in the tank using Bromine-82 as the radiotracer. RTD measurements were carried out with two different modes of operation of the tank and at different flow rates. In Mode-1, the radiotracer was instantaneously injected at the inlet and monitored at the outlet, whereas in Mode-2, the tank was filled with radiotracer and its concentration was measured at the outlet. From the measured RTD curves, mean residence times (MRTs), the dead volume and the fraction of liquid pumped in with time were determined. The treated RTD curves were modeled using suitable mathematical models. An axial dispersion model with a high degree of backmixing was found suitable to describe the flow when operated in Mode-1, whereas a tanks-in-series model with backmixing was found suitable to describe the flow of the poison in the tank when operated in Mode-2. The results were utilized to scale up and design a full-scale poison tank for a nuclear reactor. Copyright © 2015 Elsevier Ltd. All rights reserved.
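    The quantities extracted from such RTD measurements follow directly from the tracer concentration curve: the mean residence time is its first moment, MRT = ∫tC dt / ∫C dt, and the dead-volume fraction compares the MRT with the nominal residence time V/Q. A minimal sketch with an invented response curve:

        import numpy as np

        def mean_residence_time(t, c):
            """First moment of the measured tracer response C(t)."""
            return np.trapz(t * c, t) / np.trapz(c, t)

        def dead_volume_fraction(mrt, flow_rate, tank_volume):
            """Fraction of the tank volume not participating in the flow."""
            return 1.0 - mrt * flow_rate / tank_volume

        # invented tracer response for a tank with V = 0.2 m^3 at Q = 0.4 m^3/h
        t = np.linspace(0, 3, 600)                       # h
        c = t * np.exp(-t / 0.2)                         # arbitrary units
        mrt = mean_residence_time(t, c)
        print(mrt, dead_volume_fraction(mrt, flow_rate=0.4, tank_volume=0.2))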

  6. Applying CFD in the Analysis of Heavy-Oil Transportation in Curved Pipes Using Core-Flow Technique

    Directory of Open Access Journals (Sweden)

    S Conceição

    2017-06-01

    Multiphase flow of oil, gas and water occurs in the petroleum industry from the reservoir to the processing units. The occurrence of heavy oils in the world is increasing significantly and points to the need for greater investment in reservoir exploitation and, consequently, for the development of new technologies for the production and transport of this oil. Therefore, it is of interest to improve techniques that ensure an increase in energy efficiency in the transport of this oil. The core-flow technique is one of the most advantageous methods of lifting and transporting oil. The core-flow technique does not alter the oil viscosity, but changes the flow pattern, thus reducing friction during heavy oil transportation. This flow pattern is characterized by a fine water film that forms close to the inner wall of the pipe, acting as a lubricant for the oil flowing in the core of the pipe. In this sense, the objective of this paper is to study the isothermal flow of heavy oil in curved pipelines employing the core-flow technique. A three-dimensional, transient and isothermal mathematical model that considers the mixture and k-ε turbulence models to address the gas-water-heavy oil three-phase flow in the pipe was applied for the analysis. Simulations with different flow patterns of the involved phases (oil-gas-water) have been performed in order to optimize the transport of heavy oils. Results of the pressure and volumetric fraction distributions of the involved phases are presented and analyzed. It was verified that the oil core lubricated by a fine water layer flowing in the pipe considerably decreases the pressure drop.

  7. Development of a Watershed-Scale Long-Term Hydrologic Impact Assessment Model with the Asymptotic Curve Number Regression Equation

    Directory of Open Access Journals (Sweden)

    Jichul Ryu

    2016-04-01

    Full Text Available In this study, 52 asymptotic Curve Number (CN) regression equations were developed for combinations of representative land covers and hydrologic soil groups. In addition, to overcome the limitations of the original Long-term Hydrologic Impact Assessment (L-THIA) model when it is applied to larger watersheds, a watershed-scale L-THIA Asymptotic CN (ACN) regression equation model (watershed-scale L-THIA ACN model) was developed by integrating the asymptotic CN regressions and various modules for direct runoff/baseflow/channel routing. The watershed-scale L-THIA ACN model was applied to four watersheds in South Korea to evaluate the accuracy of its streamflow prediction. The coefficient of determination (R2) and Nash–Sutcliffe Efficiency (NSE) values for observed versus simulated streamflows over intervals of eight days were greater than 0.6 for all four of the watersheds. The watershed-scale L-THIA ACN model, including the asymptotic CN regression equation method, can simulate long-term streamflow sufficiently well with the ten parameters that have been added for the characterization of streamflow.
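
    One commonly used functional form for asymptotic CN behaviour expresses the event curve number as a function of rainfall depth decaying toward a constant value; the sketch below fits such a curve to synthetic data with scipy. The exponential form, the parameter names and the data are illustrative assumptions and are not the 52 regression equations developed in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def asymptotic_cn(p_mm, cn_inf, k):
          """Standard-behaviour asymptotic model: CN(P) tends to cn_inf as P grows."""
          return cn_inf + (100.0 - cn_inf) * np.exp(-k * p_mm)

      # Synthetic events: rainfall depth (mm) vs. back-calculated curve number
      p = np.array([10, 20, 30, 50, 80, 120, 160], dtype=float)
      cn_obs = np.array([92, 85, 80, 74, 71, 70, 69], dtype=float)

      (cn_inf, k), _ = curve_fit(asymptotic_cn, p, cn_obs, p0=(70.0, 0.05))
      print(f"fitted CN_inf = {cn_inf:.1f}, k = {k:.3f} per mm")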

  8. Mapping Koch curves into scale-free small-world networks

    International Nuclear Information System (INIS)

    Zhang Zhongzhi; Gao Shuyang; Zhou Shuigeng; Chen Lichao; Zhang Hongjuan; Guan Jihong

    2010-01-01

    The class of Koch fractals is one of the most interesting families of fractals, and the study of complex networks is a central issue in the scientific community. In this paper, inspired by the famous Koch fractals, we propose a mapping technique converting Koch fractals into a family of deterministic networks called Koch networks. This novel class of networks incorporates some key properties characterizing a majority of real-life networked systems: a power-law distribution with exponent in the range between 2 and 3, a high clustering coefficient, a small diameter and average path length, and degree correlations. Besides, we enumerate the exact numbers of spanning trees, spanning forests and connected spanning subgraphs in the networks. All these features are obtained exactly according to the proposed generation algorithm of the networks considered. The network representation approach could be used to investigate the complexity of some real-world systems from the perspective of complex networks.
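
    The claimed power-law degree distribution with exponent between 2 and 3 can be checked numerically on any generated network. The sketch below applies the standard discrete maximum-likelihood estimator for a power-law exponent to a degree sequence; the sequence itself is a synthetic placeholder, not one produced by the actual Koch network construction.

      import numpy as np

      def powerlaw_exponent(degrees, k_min=1):
          """Maximum-likelihood estimate of gamma for P(k) ~ k^(-gamma), k >= k_min."""
          k = np.asarray([d for d in degrees if d >= k_min], dtype=float)
          return 1.0 + len(k) / np.sum(np.log(k / (k_min - 0.5)))

      # Placeholder heavy-tailed degree sequence (illustrative only)
      rng = np.random.default_rng(0)
      u = rng.random(10_000)
      degrees = np.floor((1.0 - u) ** (-1.0 / 1.5)).astype(int)   # Pareto-like sample
      print(f"estimated exponent: {powerlaw_exponent(degrees, k_min=2):.2f}")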

  9. Tools and Techniques for Basin-Scale Climate Change Assessment

    Science.gov (United States)

    Zagona, E.; Rajagopalan, B.; Oakley, W.; Wilson, N.; Weinstein, P.; Verdin, A.; Jerla, C.; Prairie, J. R.

    2012-12-01

    The Department of Interior's WaterSMART Program seeks to secure and stretch water supplies to benefit future generations and identify adaptive measures to address climate change. Under WaterSMART, Basin Studies are comprehensive water studies to explore options for meeting projected imbalances in water supply and demand in specific basins. Such studies could be most beneficial with application of recent scientific advances in climate projections, stochastic simulation, operational modeling and robust decision-making, as well as computational techniques to organize and analyze many alternatives. A new integrated set of tools and techniques to facilitate these studies includes the following components: Future supply scenarios are produced by the Hydrology Simulator, which uses non-parametric K-nearest neighbor resampling techniques to generate ensembles of hydrologic traces based on historical data, optionally conditioned on long paleo-reconstructed data using various Markov chain techniques. Resampling can also be conditioned on climate change projections, e.g. downscaled GCM projections, to capture increased variability; spatial and temporal disaggregation is also provided. The simulations produced are ensembles of hydrologic inputs to the RiverWare operations/infrastructure decision modeling software. Alternative demand scenarios can be produced with the Demand Input Tool (DIT), an Excel-based tool that allows modifying future demands by groups such as states; sectors, e.g., agriculture, municipal, energy; and hydrologic basins. The demands can be scaled at future dates or changes ramped over specified time periods. Resulting data are imported directly into the decision model. Different model files can represent infrastructure alternatives and different Policy Sets represent alternative operating policies, including options for noticing when conditions point to unacceptable vulnerabilities, which trigger dynamically executing changes in operations or other
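
    A minimal sketch of the K-nearest-neighbor resampling idea behind the Hydrology Simulator: for each simulated year, the K historical years closest to the current flow state are found and one is resampled with a weight that decays with neighbor rank, the successor year preserving serial correlation. The variable names and toy data are assumptions; the actual tool additionally conditions on paleo records and climate projections and disaggregates in space and time.

      import numpy as np

      def knn_resample(annual_flows, n_years, k=5, seed=0):
          """Generate a synthetic trace of annual flows by K-NN bootstrap."""
          rng = np.random.default_rng(seed)
          flows = np.asarray(annual_flows, dtype=float)
          # Kernel weights 1/1, 1/2, ..., 1/k favour closer neighbors
          w = 1.0 / np.arange(1, k + 1)
          w /= w.sum()

          trace = [rng.choice(flows)]                  # random initial year
          for _ in range(n_years - 1):
              dist = np.abs(flows - trace[-1])         # distance to current state
              neighbors = np.argsort(dist)[:k]         # k nearest historical years
              pick = rng.choice(neighbors, p=w)
              # the successor year of the picked neighbor keeps serial correlation
              trace.append(flows[min(pick + 1, len(flows) - 1)])
          return np.array(trace)

      historical = np.array([820, 640, 910, 700, 560, 1020, 880, 730, 660, 950.0])
      print(knn_resample(historical, n_years=12))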

  10. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    Energy Technology Data Exchange (ETDEWEB)

    Guignard, P.A.; Chan, W. (Royal Melbourne Hospital, Parkville (Australia). Dept. of Nuclear Medicine)

    1984-09-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned.
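
    The harmonic filtering evaluated here (keeping only the first few Fourier harmonics of a periodic time-activity curve) can be sketched with an FFT: coefficients above the chosen harmonic are zeroed and the curve is reconstructed with the inverse transform. The synthetic volume curve below is illustrative and is not data from the paper.

      import numpy as np

      def fourier_filter(curve, n_harmonics):
          """Keep the DC term and the first n_harmonics of a periodic curve."""
          coeffs = np.fft.rfft(curve)
          coeffs[n_harmonics + 1:] = 0.0          # discard higher harmonics
          return np.fft.irfft(coeffs, n=len(curve))

      # Synthetic left-ventricular volume curve over one beat (32 frames) plus noise
      frames = np.arange(32)
      phase = 2 * np.pi * frames / 32
      clean = 100 - 35 * np.sin(phase) - 10 * np.sin(2 * phase)
      noisy = clean + np.random.default_rng(1).normal(0, 5, size=32)

      smoothed = fourier_filter(noisy, n_harmonics=3)
      print(f"end-diastolic est.: {smoothed.max():.1f}, end-systolic est.: {smoothed.min():.1f}")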

  11. Effects of statistical quality, sampling rate and temporal filtering techniques on the extraction of functional parameters from the left ventricular time-activity curves

    International Nuclear Information System (INIS)

    Guignard, P.A.; Chan, W.

    1984-01-01

    Several techniques for the processing of a series of curves derived from two left ventricular time-activity curves acquired at rest and during exercise with a nuclear stethoscope were evaluated. They were: three- and five-point time smoothing; Fourier filtering preserving one to four harmonics (H); truncated curve Fourier filtering; and third-degree polynomial curve fitting. Each filter's ability to recover, with fidelity, systolic and diastolic function parameters was evaluated under increasingly 'noisy' conditions and at several sampling rates. Third-degree polynomial curve fitting and truncated Fourier filters exhibited very high sensitivity to noise. Three- and five-point time smoothing had moderate sensitivity to noise, but was highly affected by sampling rate. Fourier filtering preserving 2H or 3H produced the best compromise, with high resilience to noise and independence of sampling rate as far as the recovery of these functional parameters is concerned. (author)

  12. Study on high density multi-scale calculation technique

    International Nuclear Information System (INIS)

    Sekiguchi, S.; Tanaka, Y.; Nakada, H.; Nishikawa, T.; Yamamoto, N.; Yokokawa, M.

    2004-01-01

    To understand the degradation of nuclear materials under irradiation, it is essential to know as much as possible about each phenomenon from a multi-scale point of view: the micro scale at the atomic level, the macro scale at the structural level, and the intermediate scale between them. In this study, aimed at meso-scale materials (100 Å ∼ 2 μm), computer technology approaching the problem from both the micro and macro scales was developed, including modeling and computer applications based on computational science and technology methods. An environment of grid technology for multi-scale calculation was also prepared. The software and the MD (molecular dynamics) stencil for verifying the multi-scale calculation were improved and their operation was confirmed. (A. Hishinuma)

  13. Photogrammetric techniques for across-scale soil erosion assessment

    OpenAIRE

    Eltner, Anette

    2016-01-01

    Soil erosion is a complex geomorphological process with varying influences of different impacts at different spatio-temporal scales. To date, measurement of soil erosion is predominantly realisable at specific scales, thereby detecting separate processes, e.g. interrill erosion contrary to rill erosion. It is difficult to survey soil surface changes at larger areal coverage such as field scale with high spatial resolution. Either net changes at the system outlet or remaining traces after the ...

  14. Mathematical analysis of the dimensional scaling technique for the Schroedinger equation with power-law potentials

    International Nuclear Information System (INIS)

    Ding Zhonghai; Chen, Goong; Lin, Chang-Shou

    2010-01-01

    The dimensional scaling (D-scaling) technique is an innovative asymptotic expansion approach to study the multiparticle systems in molecular quantum mechanics. It enables the calculation of ground and excited state energies of quantum systems without having to solve the Schroedinger equation. In this paper, we present a mathematical analysis of the D-scaling technique for the Schroedinger equation with power-law potentials. By casting the D-scaling technique in an appropriate variational setting and studying the corresponding minimization problem, the D-scaling technique is justified rigorously. A new asymptotic dimensional expansion scheme is introduced to compute asymptotic expansions for ground state energies.
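
    As a concrete illustration of the D-scaling idea (a standard textbook example, not taken from the paper, and stated only for the Coulomb potential): the ground-state energy of a hydrogen-like atom in D dimensions, and its large-D limit obtained by minimizing an effective classical potential in suitably scaled units, are

      E_D \;=\; -\,\frac{2Z^2}{(D-1)^2}\ \text{hartree}, \qquad E_3 \;=\; -\frac{Z^2}{2},
      \qquad
      \lim_{D\to\infty}\tilde{E} \;=\; \min_{\rho>0}\left[\frac{1}{2\rho^{2}} - \frac{Z}{\rho}\right] \;=\; -\frac{Z^2}{2},

    where \tilde{E} is the energy in D-scaled units, \rho is the scaled radial coordinate, and the centrifugal-like term 1/(2\rho^{2}) arises from the Jacobian of the D-dimensional volume element.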

  15. Pusher curving technique for preventing tilt of femoral Geunther Tulip inferior vena cava filter: in vitro study

    International Nuclear Information System (INIS)

    Xiao Liang; Shen Jing; Huang Desheng; Xu Ke

    2011-01-01

    Objective: To determine whether adjustment of the pusher of the GTF was useful in decreasing the degree of tilting of the femoral Geunther Tulip filter (GTF) in an in vitro caval model. Methods: The caval model was constructed by placing one 25 mm × 100 mm and two 10 mm × 200 mm Dacron grafts inside a transparent bifurcate glass tube. The study consisted of two groups: a left straight group (GLS) (n = 100) and a left curved group (GLC) (n = 100). In the GLC, a 10° to 20° curve was formed on the introducer. The distance (DCH) between the caval right wall and the hook was measured. The degree of tilting (DT) was classified into 5 grades and recorded. Before and after the GTF was released, the angle (ACM1,2) between the axis of the IVC and the metal mount, the distance (DCM1) between the caval right wall and the metal mount, the angle (ACF) between the axis of the IVC and the axis of the filter, and the diameter of the IVC (DIVC) were measured. The data were analyzed with the Chi-square test, t test, rank-sum test and Pearson correlation test. Results: The degree of GTF tilting in each group revealed a divergent tendency. In group LC, the apex of the filter tended to be grade III compared with group LS (χ2 value 37.491). Differences between GLS and GLC were considered statistically significant (16.60° vs. 3.05°, 20.60° vs. 3.50°, -3.90° vs. -0.40°, 2.98 mm vs. 10.40 mm, -10.95° vs. -0.485°, 13.17 mm vs. 10.06 mm, -1.70° vs. 0.70°; t or Z values -12.187, -12.188, -8.545, -51.834, -11.395, 9.562, -3.596). Correlations were found between ACM1 and ACF, and between ACM1 - ACM2 and DCH1 - DCH2, in each group (r values 0.978, 0.344, 0.879, 0.627), and between DCH1 and ACF in each group and ACP and ACF in group LC (r values -0.974, -0.322, -0.702, P<0.01). Conclusion: The technique of adjusting the orientation of filter

  16. Establishment of Accurate Calibration Curve for National Verification at a Large Scale Input Accountability Tank in RRP - For Strengthening State System for Meeting Safeguards Obligation

    International Nuclear Information System (INIS)

    Goto, Y.; Kato, T.; Nidaira, K.

    2010-01-01

    Tanks are installed in a spent fuel reprocessing plant in order to account for solutions of nuclear material. Careful measurement of the volume in the tanks is crucial for accurate accounting of nuclear material. A calibration curve relating the volume and the level of solution needs to be constructed, where the level is determined from the differential pressure of dip tubes in the tanks. More than one calibration curve, depending on the height, is commonly applied for each tank, but it is not explicitly decided how many segments should be used, where the segment boundaries should be placed, or what order of polynomial should be fitted. Here we present a rational construction technique giving optimum calibration curves and their characteristics. The tank calibration work has been conducted in the course of a contract with the Japan Safeguards Office (JSGO) on safeguards information treatment. (author)
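
    A minimal sketch of building a segmented calibration curve (volume as a polynomial in liquid level, fitted separately per level segment) is given below. The segment boundary, polynomial order and the synthetic level/volume data are illustrative assumptions, not values from the RRP input accountability tank.

      import numpy as np

      def fit_segmented_calibration(level, volume, breakpoints, order=2):
          """Fit one polynomial per level segment; return list of (range, coeffs)."""
          edges = [level.min()] + list(breakpoints) + [level.max()]
          segments = []
          for lo, hi in zip(edges[:-1], edges[1:]):
              mask = (level >= lo) & (level <= hi)
              coeffs = np.polyfit(level[mask], volume[mask], order)
              segments.append(((lo, hi), coeffs))
          return segments

      def volume_from_level(h, segments):
          for (lo, hi), coeffs in segments:
              if lo <= h <= hi:
                  return np.polyval(coeffs, h)
          raise ValueError("level outside calibrated range")

      # Synthetic calibration run: level (mm of liquid head) vs. volume (L)
      level = np.linspace(0, 2000, 81)
      volume = 0.9 * level + 2e-5 * level**2 + np.random.default_rng(2).normal(0, 0.5, 81)

      segments = fit_segmented_calibration(level, volume, breakpoints=[800.0], order=2)
      print(f"volume at 1200 mm: {volume_from_level(1200.0, segments):.1f} L")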

  17. Multidimensional scaling technique for analysis of magnetic storms ...

    Indian Academy of Sciences (India)


    Multidimensional Scaling (MDS) comprises a set of models and associated methods for constructing a geometrical representation of proximity and dominance relationships between elements in one or more sets of entities. MDS can be applied to data that express two types of relationships: proximity relations and ...

  18. Development of a Scaling Technique for Sociometric Data.

    Science.gov (United States)

    Peper, John B.; Chansky, Norman M.

    This study explored the stability and interjudge agreements of a sociometric scaling device to which children could easily respond, which teachers could easily administer and score, and which provided scores that researchers could use in parametric statistical analyses. Each student was paired with every other member of his class. He voted on each…

  19. Techniques for Scaling Up Analyses Based on Pre-interpretations

    DEFF Research Database (Denmark)

    Gallagher, John Patrick; Henriksen, Kim Steen; Banda, Gourinath

    2005-01-01

    a variety of analyses, both generic (such as mode analysis) and program-specific (with respect to a type describing some particular property of interest). Previous work demonstrated the approach using pre-interpretations over small domains. In this paper we present techniques that allow the method...

  20. Reliability analysis of large scaled structures by optimization technique

    International Nuclear Information System (INIS)

    Ishikawa, N.; Mihara, T.; Iizuka, M.

    1987-01-01

    This paper presents a reliability analysis based on the optimization technique using PNET (Probabilistic Network Evaluation Technique) method for the highly redundant structures having a large number of collapse modes. This approach makes the best use of the merit of the optimization technique in which the idea of PNET method is used. The analytical process involves the minimization of safety index of the representative mode, subjected to satisfaction of the mechanism condition and of the positive external work. The procedure entails the sequential performance of a series of the NLP (Nonlinear Programming) problems, where the correlation condition as the idea of PNET method pertaining to the representative mode is taken as an additional constraint to the next analysis. Upon succeeding iterations, the final analysis is achieved when a collapse probability at the subsequent mode is extremely less than the value at the 1st mode. The approximate collapse probability of the structure is defined as the sum of the collapse probabilities of the representative modes classified by the extent of correlation. Then, in order to confirm the validity of the proposed method, the conventional Monte Carlo simulation is also revised by using the collapse load analysis. Finally, two fairly large structures were analyzed to illustrate the scope and application of the approach. (orig./HP)

  1. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  2. Optimized evaporation technique for leachate treatment: Small scale implementation.

    Science.gov (United States)

    Benyoucef, Fatima; Makan, Abdelhadi; El Ghmari, Abderrahman; Ouatmane, Aziz

    2016-04-01

    This paper introduces an optimized evaporation technique for leachate treatment. For this purpose and in order to study the feasibility and measure the effectiveness of the forced evaporation, three cuboidal steel tubs were designed and implemented. The first control-tub was installed at the ground level to monitor natural evaporation. Similarly, the second and the third tub, models under investigation, were installed respectively at the ground level (equipped-tub 1) and out of the ground level (equipped-tub 2), and provided with special equipment to accelerate the evaporation process. The obtained results showed that the evaporation rate at the equipped-tubs was much accelerated with respect to the control-tub. It was accelerated five times in the winter period, where the evaporation rate was increased from a value of 0.37 mm/day to reach a value of 1.50 mm/day. In the summer period, the evaporation rate was accelerated more than three times and it increased from a value of 3.06 mm/day to reach a value of 10.25 mm/day. Overall, the optimized evaporation technique can be applied effectively either under electric or solar energy supply, and will accelerate the evaporation rate from three to five times whatever the season temperature. Copyright © 2016. Published by Elsevier Ltd.

  3. Development of engineering scale HLLW vitrification technique at PNC

    International Nuclear Information System (INIS)

    Nagaki, H.; Oguino, N.; Tsunoda, N.; Segawa, T.

    1979-01-01

    Some processes have been investigated to develop the technology of solidification of the high-level radioactive liquid waste generated from the nuclear fuel reprocessing plant operated by the Power Reactor and Nuclear Fuel Development Corporation (PNC) at Tokai-mura. This report covers the present state of development of a Joule-heated ceramic melter and a direct megahertz induction-heated melter. Engineering-scale tests have been performed with both melters. The Joule-heated melter could produce 45 kg or 16 liters of glass per hour. The direct-induction furnace was able to melt 5 kg or 1.8 liters of glass per hour. Both melters were composed of electrofused cast refractory brick. Thus it was possible to melt the glass at above 1200 °C. Glass produced at higher melting temperatures is generally superior. 3 figures, 2 tables

  4. Heterodyne interferometric technique for displacement control at the nanometric scale

    Science.gov (United States)

    Topcu, Suat; Chassagne, Luc; Haddad, Darine; Alayli, Yasser; Juncar, Patrick

    2003-11-01

    We propose a method of displacement control that addresses the measurement requirements of the nanotechnology community and provides traceability to the definition of the metre at the nanometric scale. The method is based on the use of both a heterodyne Michelson interferometer and a homemade high-frequency electronic circuit. The system so established allows us to control the displacement of a translation stage with a known step of 4.945 nm. The intrinsic relative uncertainty on the step value is 1.6×10⁻⁹. Controlling the period of repetition of these steps with a high-stability quartz oscillator makes it possible to impose a uniform speed on the translation stage with the same accuracy. This property will be used for the watt balance project of the Bureau National de Métrologie of France.

  5. Deriving Snow-Cover Depletion Curves for Different Spatial Scales from Remote Sensing and Snow Telemetry Data

    Science.gov (United States)

    Fassnacht, Steven R.; Sexstone, Graham A.; Kashipazha, Amir H.; Lopez-Moreno, Juan Ignacio; Jasinski, Michael F.; Kampf, Stephanie K.; Von Thaden, Benjamin C.

    2015-01-01

    During the melting of a snowpack, snow water equivalent (SWE) can be correlated to snow-covered area (SCA) once snow-free areas appear, which is when SCA begins to decrease below 100%. This amount of SWE is called the threshold SWE. Daily SWE data from snow telemetry stations were related to SCA derived from moderate-resolution imaging spectroradiometer images to produce snow-cover depletion curves. The snow depletion curves were created for an 80,000 sq km domain across southern Wyoming and northern Colorado encompassing 54 snow telemetry stations. Eight yearly snow depletion curves were compared, and it is shown that the slope of each is a function of the amount of snow received. Snow-cover depletion curves were also derived for all the individual stations, for which the threshold SWE could be estimated from peak SWE and the topography around each station. A station's peak SWE was much more important than the main topographic variables that included location, elevation, slope, and modelled clear sky solar radiation. The threshold SWE mostly illustrated inter-annual consistency.
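
    The depletion-curve relation described above (SCA falling below 100% once SWE drops below the threshold SWE) can be sketched as a simple regression of SCA against SWE normalized by the threshold. The linear form and the synthetic series below are assumptions for illustration only; the study derives its curves from satellite SCA and snow telemetry SWE records.

      import numpy as np

      def depletion_curve(swe, sca, swe_threshold):
          """Fit SCA (%) vs. SWE/threshold for the melt period (SCA < 100%)."""
          x = np.asarray(swe, float) / swe_threshold
          y = np.asarray(sca, float)
          melt = y < 100.0                        # only points after snow-free areas appear
          slope, intercept = np.polyfit(x[melt], y[melt], 1)
          return slope, intercept

      # Synthetic melt-season series: station SWE (mm) and basin SCA (%)
      swe = np.array([250, 220, 180, 150, 120, 90, 60, 30, 10, 0.0])
      sca = np.array([100, 100, 100, 96, 88, 75, 58, 35, 14, 2.0])

      slope, intercept = depletion_curve(swe, sca, swe_threshold=160.0)
      print(f"SCA ~= {intercept:.1f} + {slope:.1f} * (SWE / SWE_threshold)")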

  6. Improving 3D spatial queries search: newfangled technique of space filling curves in 3D city modeling

    DEFF Research Database (Denmark)

    Uznir, U.; Anton, François; Suhaibah, A.

    2013-01-01

    The advantages of three-dimensional (3D) city models can be seen in various applications including photogrammetry, urban and regional planning, computer games, etc. They expand the visualization and analysis capabilities of Geographic Information Systems on cities, and they can be developed using ... retrieving portions of, and especially searching, these 3D city models will not be done optimally. Even though current developments are based on an open data model allotted by the Open Geospatial Consortium (OGC) called CityGML, its XML-based structure makes it challenging to cluster the 3D urban objects ... modeling will improve data retrieval time by means of optimized 3D adjacency, nearest neighbor information and 3D indexing. The Hilbert mapping, which maps a subinterval of the [0, 1] interval to the corresponding portion of the d-dimensional Hilbert’s curve, preserves the Lebesgue measure and is Lipschitz ...
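
    To make the indexing idea concrete: each 3D object is quantized to an integer grid and mapped to a one-dimensional key along a space-filling curve, so that sorting by the key keeps spatially nearby objects close together in storage. The sketch below uses a Morton (Z-order) encoding as a simpler stand-in for the Hilbert mapping discussed in the record; the grid resolution and the building coordinates are arbitrary.

      def part1by2(n: int) -> int:
          """Spread the bits of a 21-bit integer so they occupy every third bit."""
          n &= 0x1FFFFF
          n = (n | (n << 32)) & 0x1F00000000FFFF
          n = (n | (n << 16)) & 0x1F0000FF0000FF
          n = (n | (n << 8)) & 0x100F00F00F00F00F
          n = (n | (n << 4)) & 0x10C30C30C30C30C3
          n = (n | (n << 2)) & 0x1249249249249249
          return n

      def morton3d(x: int, y: int, z: int) -> int:
          """Interleave x, y, z bits into a single Z-order (Morton) key."""
          return part1by2(x) | (part1by2(y) << 1) | (part1by2(z) << 2)

      # Building centroids quantized to a 1024^3 grid (made-up coordinates)
      buildings = {"b1": (10, 10, 2), "b2": (11, 10, 2), "b3": (500, 900, 40)}
      keys = sorted((morton3d(*xyz), name) for name, xyz in buildings.items())
      print(keys)   # nearby b1 and b2 get adjacent keys; b3 is far away in key space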

  7. Effects of Pathologic Stage on the Learning Curve for Radical Prostatectomy: Evidence That Recurrence in Organ-Confined Cancer Is Largely Related to Inadequate Surgical Technique

    Science.gov (United States)

    Vickers, Andrew J.; Bianco, Fernando J.; Gonen, Mithat; Cronin, Angel M.; Eastham, James A.; Schrag, Deborah; Klein, Eric A.; Reuther, Alwyn M.; Kattan, Michael W.; Pontes, J. Edson; Scardino, Peter T.

    2008-01-01

    Objectives We previously demonstrated that there is a learning curve for open radical prostatectomy. We sought to determine whether the effects of the learning curve are modified by pathologic stage. Methods The study included 7765 eligible prostate cancer patients treated with open radical prostatectomy by one of 72 surgeons. Surgeon experience was coded as the total number of radical prostatectomies conducted by the surgeon prior to a patient’s surgery. Multivariable regression models of survival time were used to evaluate the association between surgeon experience and biochemical recurrence, with adjustment for PSA, stage, and grade. Analyses were conducted separately for patients with organ-confined and locally advanced disease. Results Five-year recurrence-free probability for patients with organ-confined disease approached 100% for the most experienced surgeons. Conversely, the learning curve for patients with locally advanced disease reached a plateau at approximately 70%, suggesting that about a third of these patients cannot be cured by surgery alone. Conclusions Excellent rates of cancer control for patients with organ-confined disease treated by the most experienced surgeons suggest that the primary reason such patients recur is inadequate surgical technique. PMID:18207316

  8. A reduced scale two loop PWR core designed with particle swarm optimization technique

    International Nuclear Information System (INIS)

    Lima Junior, Carlos A. Souza; Pereira, Claudio M.N.A; Lapa, Celso M.F.; Cunha, Joao J.; Alvim, Antonio C.M.

    2007-01-01

    Reduced scale experiments are often employed in engineering projects because they are much cheaper than real scale testing. Unfortunately, designing reduced scale thermal-hydraulic circuit or equipment, with the capability of reproducing, both accurately and simultaneously, all physical phenomena that occur in real scale and at operating conditions, is a difficult task. To solve this problem, advanced optimization techniques, such as Genetic Algorithms, have been applied. Following this research line, we have performed investigations, using the Particle Swarm Optimization (PSO) Technique, to design a reduced scale two loop Pressurized Water Reactor (PWR) core, considering 100% of nominal power and non accidental operating conditions. Obtained results show that the proposed methodology is a promising approach for forced flow reduced scale experiments. (author)
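
    A minimal particle swarm optimization loop of the kind used for the reduced-scale core design is sketched below on a stand-in objective (matching target similarity ratios). The actual study optimizes thermal-hydraulic similarity criteria of a two-loop PWR core, which are not reproduced here; the inertia and acceleration coefficients are typical textbook values, not those of the paper.

      import numpy as np

      def pso(objective, bounds, n_particles=20, n_iter=100, w=0.7, c1=1.5, c2=1.5, seed=0):
          """Basic global-best particle swarm optimizer (minimization)."""
          rng = np.random.default_rng(seed)
          lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
          dim = lo.size
          x = rng.uniform(lo, hi, size=(n_particles, dim))       # positions
          v = np.zeros_like(x)                                    # velocities
          pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
          g = pbest[np.argmin(pbest_f)].copy()                    # global best
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, lo, hi)
              f = np.apply_along_axis(objective, 1, x)
              improved = f < pbest_f
              pbest[improved], pbest_f[improved] = x[improved], f[improved]
              g = pbest[np.argmin(pbest_f)].copy()
          return g, pbest_f.min()

      # Stand-in design problem: choose (diameter scale, length scale, power scale)
      # so that two hypothetical dimensionless similarity ratios stay close to 1.
      def design_error(p):
          d, l, q = p
          ratio_velocity = q / (d ** 2)
          ratio_heat_flux = q / (d * l)
          return (ratio_velocity - 1.0) ** 2 + (ratio_heat_flux - 1.0) ** 2

      best, err = pso(design_error, bounds=([0.1, 0.1, 0.1], [1.0, 1.0, 1.0]))
      print(best, err)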

  9. Decomposing the trade-environment nexus for Malaysia: what do the technique, scale, composition, and comparative advantage effect indicate?

    Science.gov (United States)

    Ling, Chong Hui; Ahmed, Khalid; Binti Muhamad, Rusnah; Shahbaz, Muhammad

    2015-12-01

    This paper investigates the impact of trade openness on CO2 emissions using time series data over the period 1970QI-2011QIV for Malaysia. We decompose the trade effect into scale, technique, composition, and comparative advantage effects to examine the environmental consequences of trade at four different transition points. To achieve this purpose, we have employed augmented Dickey-Fuller (ADF) and Phillips-Perron (PP) unit root tests in order to examine the stationarity properties of the variables. Later, the long-run association among the variables is examined by applying the autoregressive distributed lag (ARDL) bounds testing approach to cointegration. Our results confirm the presence of cointegration. Further, we find that the scale effect has a positive and the technique effect a negative impact on CO2 emissions after a threshold income level, forming an inverted U-shaped relationship and hence validating the environmental Kuznets curve hypothesis. Energy consumption adds to CO2 emissions. Trade openness and the composition effect improve environmental quality by lowering CO2 emissions. The comparative advantage effect increases CO2 emissions and impairs environmental quality. The results provide an innovative approach for viewing the impact of trade openness through four sub-dimensions of trade liberalization and hence offer trade economists a more comprehensive policy tool for designing environmentally sustainable trade rules and agreements.

  10. Experimental Assessment on the Hysteretic Behavior of a Full-Scale Traditional Chinese Timber Structure Using a Synchronous Loading Technique

    Directory of Open Access Journals (Sweden)

    XiWang Shi

    2018-01-01

    Full Text Available In traditional Chinese timber structures, few tie beams were used between columns, and the column base was placed directly on a stone base. In order to study the hysteretic behavior of such structures, a full-scale model was established. The model size was determined according to the requirements of an eighth grade material system specified in the architectural treatise Ying-zao-fa-shi written during the Song Dynasty. In light of the vertical lift and drop of the test model during horizontal reciprocating motions, the horizontal low-cycle reciprocating loading experiments were conducted using a synchronous loading technique. By analyzing the load-displacement hysteresis curves, envelope curves, deformation capacity, energy dissipation, and change in stiffness under different vertical loads, it is found that the timber frame exhibits obvious signs of self-restoring and favorable plastic deformation capacity. As the horizontal displacement increases, the equivalent viscous damping coefficient generally declines first and then increases. At the same time, the stiffness degrades rapidly first and then decreases slowly. Increasing vertical loading will improve the deformation, energy-dissipation capacity, and stiffness of the timber frame.

  11. 3D Analysis of D-RaCe and Self-Adjusting File in Removing Filling Materials from Curved Root Canals Instrumented and Filled with Different Techniques

    Directory of Open Access Journals (Sweden)

    Neslihan Simsek

    2014-01-01

    Full Text Available The aim of this study was to compare the efficacy of D-RaCe files and a self-adjusting file (SAF) system in removing filling material from curved root canals instrumented and filled with different techniques by using microcomputed tomography (micro-CT). The mesial roots of 20 extracted mandibular first molars were used. Root canals (mesiobuccal and mesiolingual) were instrumented with SAF or Revo-S. The canals were then filled with gutta-percha and AH Plus sealer using cold lateral compaction or thermoplasticized injectable techniques. The root fillings were first removed with D-RaCe (Step 1), followed by Step 2, in which a SAF system was used to remove the residual fillings in all groups. Micro-CT scans were used to measure the volume of residual filling after root canal filling, reinstrumentation with D-RaCe (Step 1), and reinstrumentation with SAF (Step 2). Data were analyzed using Wilcoxon and Kruskal-Wallis tests. There were no statistically significant differences between filling techniques in the canals instrumented with SAF (P=0.292) and Revo-S (P=0.306). The amount of remaining filling material was similar in all groups (P=0.363); all of the instrumentation techniques left filling residue inside the canals. However, the additional use of SAF was more effective than using D-RaCe alone.

  12. Medium scale test study of chemical cleaning technique for secondary side of SG in PWR

    International Nuclear Information System (INIS)

    Zhang Mengqin; Zhang Shufeng; Yu Jinghua; Hou Shufeng

    1997-08-01

    The medium scale test study of a chemical cleaning technique for removing corrosion product (Fe3O4) from the secondary side of the SG in a PWR has been completed. The test was carried out in a medium scale test loop. It evaluated the effects of the chemical cleaning conditions (temperature, flow rate, cleaning time, cleaning process) and of the state of corrosion product deposition on magnetite (Fe3O4) solubility, as well as the safety of the SG materials during the cleaning process. The inhibitor component of the chemical cleaning agent was improved by the electrochemical linear polarization method, the effect of the inhibitor on the corrosion resistance of the materials was examined in the medium scale test loop, and the main components of the chemical cleaning agent were determined, with EDTA as the principal component. An electrochemical method for monitoring the corrosion of materials during the cleaning process was established in the laboratory. The medium scale test study yielded an optimized chemical cleaning procedure for removing corrosion product from the SG secondary side of a PWR. (9 refs., 4 figs., 11 tabs.)

  13. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
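
    A one-dimensional caricature of the transmissibility-based up-scaling idea: instead of averaging permeability itself, fine-cell transmissibilities between the coarse-block faces are combined (harmonically, for cells in series), which is what the flow solver actually needs. The fine-grid values below are synthetic; the paper's scheme works on full tensors in a finite-volume discretization, which this sketch does not attempt.

      import numpy as np

      def series_transmissibility(perm, dx, area=1.0):
          """Equivalent transmissibility of fine cells in series along a flow path.

          perm: fine-cell permeabilities (mD); dx: cell lengths (m).
          For cells in series, resistances add: 1/T_eq = sum(dx_i / (k_i * A)).
          """
          perm = np.asarray(perm, float)
          dx = np.asarray(dx, float)
          resistance = np.sum(dx / (perm * area))
          return 1.0 / resistance

      def coarse_block_permeability(perm, dx):
          """Equivalent permeability implied by the series transmissibility (A = 1)."""
          return series_transmissibility(perm, dx) * np.sum(dx)

      fine_perm = np.array([500.0, 50.0, 200.0, 5.0])   # strongly contrasted layers
      fine_dx = np.array([0.5, 0.5, 0.5, 0.5])
      print(f"equivalent permeability: {coarse_block_permeability(fine_perm, fine_dx):.1f} mD")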

  14. The use of production management techniques in the construction of large scale physics detectors

    CERN Document Server

    Bazan, A; Estrella, F; Kovács, Z; Le Flour, T; Le Goff, J M; Lieunard, S; McClatchey, R; Murray, S; Varga, L Z; Vialle, J P; Zsenei, M

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so- called Workflow Management software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector ...

  15. Identification of the Scale of Changes in Personnel Motivation Techniques at Mechanical-Engineering Enterprises

    Directory of Open Access Journals (Sweden)

    Melnyk Olga G.

    2016-02-01

    Full Text Available A method is proposed for identifying the scale of changes in personnel motivation techniques at mechanical-engineering enterprises, based on a structural and logical sequence of stages: identification of the mission, strategy and objectives of the enterprise; forecasting the development of the enterprise business environment; SWOT analysis of the current motivation techniques; deciding on the scale of changes in motivation techniques; choosing providers for changing personnel motivation techniques; choosing an alternative for changing motivation techniques; implementation of the changes; and control over the changes. It has been substantiated that the improved method provides a systematic and analytical justification for management decision-making in this field and for choosing the scale and variant of changes in motivation techniques best suited to the mechanical-engineering enterprise. The method takes past, current and prospective states into account: firstly, it considers the past state of the motivational sphere of the enterprise; secondly, it identifies the current state of personnel motivation techniques; thirdly, it accounts for the prospective state, manifested in the strategic vision of the enterprise's development as well as in forecasting the development of its business environment. An advantage of the proposed method is that its level of detail may vary depending on the goals set, resource constraints and necessity. Among other things, the method allows integrating various formalized and non-formalized causal relationships in the sphere of personnel motivation at machine-building enterprises and the management of the relevant processes. This creates preconditions for a

  16. Dynamical scaling in polymer solutions investigated by the neutron spin echo technique

    International Nuclear Information System (INIS)

    Richter, D.; Ewen, B.

    1979-01-01

    Chain dynamics in polymer solutions was investigated by means of the recently developed neutron spin echo spectroscopy. - By this technique, it was possible for the first time to verify unambiguously the scaling predictions of the Zimm model in the case of single chain behaviour and to observe the cross over to many chain behaviour. The segmental diffusion of single chains exhibits deviations from a simple exponential law, indicating the importance of memory effects. (orig.) [de
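
    For context, the scaling prediction verified by neutron spin echo is the standard Zimm result that the relaxation rate of single-chain density fluctuations grows with the cube of the scattering vector, in contrast to the free-draining Rouse behaviour (a textbook statement, not a quotation from the paper):

      \Gamma_{\mathrm{Zimm}}(q) \;\propto\; \frac{k_B T}{\eta_s}\, q^{3},
      \qquad
      \Gamma_{\mathrm{Rouse}}(q) \;\propto\; q^{4},

    so that in the Zimm case the intermediate scattering function collapses onto a single master curve when plotted against q^3 t.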

  17. The use of production management techniques in the construction of large scale physics detectors

    International Nuclear Information System (INIS)

    Bazan, A.; Chevenier, G.; Estrella, F.

    1999-01-01

    The construction process of detectors for the Large Hadron Collider (LHC) experiments is large scale, heavily constrained by resource availability and evolves with time. As a consequence, changes in detector component design need to be tracked and quickly reflected in the construction process. With similar problems in industry, engineers employ so-called Product Data Management (PDM) systems to control access to documented versions of designs and managers employ so-called Workflow Management Software (WfMS) to coordinate production work processes. However, PDM and WfMS software are not generally integrated in industry. The scale of LHC experiments, like CMS, demands that industrial production techniques be applied in detector construction. This paper outlines the major functions and applications of the CRISTAL system (Cooperating Repositories and an Information System for Tracking Assembly Lifecycles) in use in CMS which successfully integrates PDM and WfMS techniques in managing large scale physics detector construction. This is the first time industrial production techniques have been deployed to this extent in detector construction

  18. Regional scales of fire danger rating in the forest: improved technique

    Directory of Open Access Journals (Sweden)

    A. V. Volokitina

    2017-04-01

    Full Text Available Wildland fires are distributed unevenly in time and over area under the influence of weather and other factors. It is unfeasible to air-patrol the whole forest area daily during a fire season, or to keep all fire suppression forces constantly alert. The daily work and preparedness of forest fire protection services is therefore regulated by the level of fire danger according to weather conditions (Nesterov's index, the PV-1 index), fire hazard class (Melekhov's scale) and regional scales (earlier called local scales). Unfortunately, there is still no unified, comparable technique for making regional scales. As a result, it is difficult to maneuver forest fire protection resources, since the techniques currently used are not approved and not tested for their performance: they give fire danger ratings that are incomparable even for neighboring regions. The paper analyzes the state of the art in Russia and abroad. The irony is that, although the factors of fire danger are measured quantitatively, fire danger itself, as a function of them, has no quantitative expression. Thus, the selection of an absolute criterion is of high importance for improving daily fire danger rating. Using the example of the Chunsky forest ranger station (Krasnoyarsk Krai), an improved technique is suggested for making comparable local scales of forest fire danger rating based on an absolute criterion: the probable density of active fires per million ha. A method and an algorithm are described for the automated construction of local fire danger scales, which should facilitate the effective creation of similar scales for any forest ranger station or aviation regional office using a database on forest fires and weather conditions. The remote monitoring information system of the Federal Forestry Agency of Russia is analyzed for its applicability in making local scales. To supplement the existing weather station network, it is suggested that automatic compact weather stations or, if the latter is not possible, simple

  19. Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines

    Science.gov (United States)

    Khazdozian, Helena; Hadimani, Ravi; Jiles, David

    2015-03-01

    Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.

  20. Advanced techniques for energy-efficient industrial-scale continuous chromatography

    Energy Technology Data Exchange (ETDEWEB)

    DeCarli, J.P. II (Dow Chemical Co., Midland, MI (USA)); Carta, G. (Virginia Univ., Charlottesville, VA (USA). Dept. of Chemical Engineering); Byers, C.H. (Oak Ridge National Lab., TN (USA))

    1989-11-01

    Continuous annular chromatography (CAC) is a developing technology that allows truly continuous chromatographic separations. Previous work has demonstrated the utility of this technology for the separation of various materials by isocratic elution on a bench scale. Novel applications and improved operation of the process were studied in this work, demonstrating that CAC is a versatile apparatus which is capable of separations at high throughput. Three specific separation systems were investigated. Pilot-scale separations at high loadings were performed using an industrial sugar mixture as an example of scale-up for isocratic separations. Bench-scale experiments of a low concentration metal ion mixture were performed to demonstrate stepwise elution, a chromatographic technique which decreases dilution and increases sorbent capacity. Finally, the separation of mixtures of amino acids by ion exchange was investigated to demonstrate the use of displacement development on the CAC. This technique, which perhaps has the most potential, when applied to the CAC allowed simultaneous separation and concentration of multicomponent mixtures on a continuous basis. Mathematical models were developed to describe the CAC performance and optimize the operating conditions. For all the systems investigated, the continuous separation performance of the CAC was found to be very nearly the same as the batchwise performance of conventional chromatography. The technology thus appears to be very promising for industrial applications. 43 figs., 9 tabs.

  1. A study of residence time distribution using radiotracer technique in the large scale plant facility

    Science.gov (United States)

    Wetchagarun, S.; Tippayakul, C.; Petchrak, A.; Sukrod, K.; Khoonkamjorn, P.

    2017-06-01

    As the demand for troubleshooting of large industrial plants increases, radiotracer techniques, which have the capability to provide fast, online and effective detection of plant problems, have been continually developed. One of the good potential applications of radiotracers for troubleshooting in a process plant is the analysis of Residence Time Distribution (RTD). In this paper, a study of RTD in a large scale plant facility using the radiotracer technique is presented. The objective of this work is to gain experience with RTD analysis using the radiotracer technique in a “larger than laboratory” scale plant setup which is comparable to a real industrial application. The experiment was carried out at the sedimentation tank in the water treatment facility of the Thailand Institute of Nuclear Technology (Public Organization). Br-82 was selected for use in this work due to its chemical properties, its suitable half-life and its on-site availability. NH4Br in the form of an aqueous solution was injected into the system as the radiotracer. Six NaI detectors were placed along the pipelines and at the tank in order to determine the RTD of the system. The RTD and the Mean Residence Time (MRT) of the tank were analysed and calculated from the measured data. The experience and knowledge attained from this study are important for extending this technique to industrial facilities in the future.

  2. Industrial scale production of stable isotopes employing the technique of plasma separation

    International Nuclear Information System (INIS)

    Stevenson, N.R.; Bigelow, T.S.; Tarallo, F.J.

    2003-01-01

    Calutrons, centrifuges, diffusion and distillation processes are some of the devices and techniques that have been employed to produce substantial quantities of enriched stable isotopes. Nevertheless, the availability of enriched isotopes in sufficient quantities for industrial applications remains very restricted. Industries such as those involved with medicine, semiconductors, nuclear fuel, propulsion, and national defense have identified the potential need for various enriched isotopes in large quantities. Economically producing most enriched (non-gaseous) isotopes in sufficient quantities has so far eluded commercial producers. The plasma separation process is a commercial technique now available for producing large quantities of a wide range of enriched isotopes. Until recently, this technique has mainly been explored with small-scale ('proof-of-principle') devices that have been built and operated at research institutes. The new Theragenics™ facility at Oak Ridge, TN houses the only existing commercial-scale PSP system. This device, which operated successfully in the 1980s, has recently been re-commissioned and is planned to be used to produce a variety of isotopes. Progress and the capabilities of this device, and its potential for impacting the world's supply of stable isotopes in the future, are summarized. This technique now holds promise of opening the door to new and exciting applications of these isotopes in the future. (author)

  3. The integration of novel diagnostics techniques for multi-scale monitoring of large civil infrastructures

    Directory of Open Access Journals (Sweden)

    F. Soldovieri

    2008-11-01

    Full Text Available In recent years, structural monitoring of large infrastructures (buildings, dams, bridges or, more generally, man-made structures) has attracted increased attention due to the growing interest in safety and security issues and risk assessment through early detection. In this framework, the aim of the paper is to introduce a new integrated approach which combines two sensing techniques acting on different spatial and temporal scales. The first one is a distributed optic fiber sensor based on the Brillouin scattering phenomenon, which allows spatially and temporally continuous monitoring of the structure with a "low" spatial resolution (meter). The second technique is based on the use of Ground Penetrating Radar (GPR), which can provide detailed images of the inner status of the structure (with a spatial resolution of less than tens of centimetres), but does not allow temporally continuous monitoring. The paper describes the features of these two techniques and provides experimental results concerning preliminary test cases.

  4. Extension and application of a scaling technique for duplication of in-flight aerodynamic heat flux in ground test facilities

    NARCIS (Netherlands)

    Veraar, R.G.

    2009-01-01

    To enable direct experimental duplication of the inflight heat flux distribution on supersonic and hypersonic vehicles, an aerodynamic heating scaling technique has been developed. The scaling technique is based on the analytical equations for convective heat transfer for laminar and turbulent
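
    The truncated abstract refers to analytical relations for laminar and turbulent convective heat transfer. As an illustration of the kind of flat-plate correlations such a scaling can be built on (textbook assumptions, not necessarily the exact relations used in the report):

      \mathrm{Nu}_x^{\mathrm{lam}} \;\approx\; 0.332\,\mathrm{Re}_x^{1/2}\,\mathrm{Pr}^{1/3},
      \qquad
      \mathrm{Nu}_x^{\mathrm{turb}} \;\approx\; 0.0296\,\mathrm{Re}_x^{4/5}\,\mathrm{Pr}^{1/3},

    with wall heat flux q_w = (Nu_x k / x)(T_aw - T_w), so a sub-scale test must reproduce the flight Reynolds number (and the wall-to-recovery temperature ratio) to duplicate the in-flight heat-flux distribution.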

  5. Measurements of liquid phase residence time distributions in a pilot-scale continuous leaching reactor using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Sharma, V.K.; Shenoy, K.T.; Sreenivas, T.

    2015-01-01

    An alkaline based continuous leaching process is commonly used for extraction of uranium from uranium ore. The reactor in which the leaching process is carried out is called a continuous leaching reactor (CLR) and is expected to behave as a continuously stirred tank reactor (CSTR) for the liquid phase. A pilot-scale CLR used in a Technology Demonstration Pilot Plant (TDPP) was designed, installed and operated; and thus needed to be tested for its hydrodynamic behavior. A radiotracer investigation was carried out in the CLR for measurement of residence time distribution (RTD) of liquid phase with specific objectives to characterize the flow behavior of the reactor and validate its design. Bromine-82 as ammonium bromide was used as a radiotracer and about 40–60 MBq activity was used in each run. The measured RTD curves were treated and mean residence times were determined and simulated using a tanks-in-series model. The result of simulation indicated no flow abnormality and the reactor behaved as an ideal CSTR for the range of the operating conditions used in the investigation. - Highlights: • Radiotracer technique was applied for evaluation of design of a pilot-scale continuous leaching reactor. • Mean residence time and dead volume were estimated. Dead volume was found to be ranging from 4% to 15% at different operating conditions. • Tank-in-series model was used to simulate the measured RTD data and was found suitable to describe the flow in the reactor. • No flow abnormality was found and the reactor behaved as a well-mixed system. The design of the reactor was validated
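
    The tanks-in-series model used to simulate the measured RTD has a closed form, E(t) = (N/τ)(Nt/τ)^(N-1) e^(-Nt/τ)/Γ(N); the sketch below fits N and τ to a synthetic tracer curve with scipy. The data and starting values are illustrative, not the Br-82 measurements from the leaching reactor.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import gamma

      def tanks_in_series(t, n, tau):
          """RTD of n equal ideally mixed tanks with total mean residence time tau."""
          t = np.asarray(t, float)
          return (n / tau) * (n * t / tau) ** (n - 1) * np.exp(-n * t / tau) / gamma(n)

      # Synthetic normalized tracer response E(t), sampled every 10 s
      t = np.arange(1.0, 1200.0, 10.0)
      e_obs = tanks_in_series(t, 3.2, 400.0) + np.random.default_rng(3).normal(0, 2e-5, t.size)

      (n_fit, tau_fit), _ = curve_fit(tanks_in_series, t, e_obs, p0=(2.0, 300.0))
      print(f"N = {n_fit:.2f} tanks, mean residence time = {tau_fit:.0f} s")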

  6. Patterns and sources of adult personality development: growth curve analyses of the NEO PI-R scales in a longitudinal twin study.

    Science.gov (United States)

    Bleidorn, Wiebke; Kandler, Christian; Riemann, Rainer; Spinath, Frank M; Angleitner, Alois

    2009-07-01

    The present study examined the patterns and sources of 10-year stability and change of adult personality assessed by the 5 domains and 30 facets of the Revised NEO Personality Inventory. Phenotypic and biometric analyses were performed on data from 126 identical and 61 fraternal twins from the Bielefeld Longitudinal Study of Adult Twins (BiLSAT). Consistent with previous research, LGM analyses revealed significant mean-level changes in domains and facets suggesting maturation of personality. There were also substantial individual differences in the change trajectories of both domain and facet scales. Correlations between age and trait changes were modest and there were no significant associations between change and gender. Biometric extensions of growth curve models showed that 10-year stability and change of personality were influenced by both genetic as well as environmental factors. Regarding the etiology of change, the analyses uncovered a more complex picture than originally stated, as findings suggest noticeable differences between traits with respect to the magnitude of genetic and environmental effects. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  7. Complementary techniques for solid oxide cell characterisation on micro- and nano-scale

    International Nuclear Information System (INIS)

    Wiedenmann, D.; Hauch, A.; Grobety, B.; Mogensen, M.; Vogt, U.

    2009-01-01

    High temperature steam electrolysis by solid oxide electrolysis cells (SOEC) has great potential for transforming clean and renewable energy from non-fossil sources into synthetic fuels such as hydrogen, methane or dimethyl ether, which have been identified as promising alternative energy carriers. Also, as SOECs can operate in the reverse mode as solid oxide fuel cells (SOFC), hydrogen can, for example, be used during peak hours to reconvert chemically stored energy into electrical energy in a very efficient way. As solid oxide cells (SOC) operate at high temperatures (700-900 °C), material degradation and evaporation can occur, e.g. from the cell sealing material, leading to poisoning effects and ageing mechanisms that decrease cell efficiency and long-term durability. To investigate such degradation processes, thorough examination of SOCs often requires chemical and structural characterisation at the microscopic and nanoscopic levels. The combination of different microscopy techniques, such as conventional scanning electron microscopy (SEM), electron-probe microanalysis (EPMA) and the focused ion-beam (FIB) preparation technique for transmission electron microscopy (TEM), allows post mortem analysis of cells after testing on multiple scales. These complementary techniques can be used to characterize structural and chemical changes over a large and representative sample area (micro-scale) on the one hand, and at the nano-scale for selected sample details on the other. This article presents a methodical approach for the structural and chemical characterisation of changes in aged cathode-supported electrolysis cells produced at Riso DTU, Denmark. Results from the characterisation of impurities at the electrolyte/hydrogen interface caused by evaporation from sealing material are also discussed. (author)

  8. Preionization Techniques in a kJ-Scale Dense Plasma Focus

    Science.gov (United States)

    Povilus, Alexander; Shaw, Brian; Chapman, Steve; Podpaly, Yuri; Cooper, Christopher; Falabella, Steve; Prasad, Rahul; Schmidt, Andrea

    2016-10-01

    A dense plasma focus (DPF) is a type of z-pinch device that uses a high current, coaxial plasma gun with an implosion phase to generate dense plasmas. These devices can accelerate a beam of ions to MeV-scale energies through strong electric fields generated by instabilities during the implosion of the plasma sheath. The formation of these instabilities, however, relies strongly on the history of the plasma sheath in the device, including the evolution of the gas breakdown in the device. In an effort to reduce variability in the performance of the device, we attempt to control the initial gas breakdown in the device by seeding the system with free charges before the main power pulse arrives. We report on the effectiveness of two techniques developed for a kJ-scale DPF at LLNL, a miniature primer spark gap and pulsed, 255nm LED illumination. Prepared by LLNL under Contract DE-AC52-07NA27344.

  9. Sample preparation for large-scale bioanalytical studies based on liquid chromatographic techniques.

    Science.gov (United States)

    Medvedovici, Andrei; Bacalum, Elena; David, Victor

    2018-01-01

    The quality of the analytical data obtained in large-scale and long-term bioanalytical studies based on liquid chromatography depends on a number of experimental factors, including the choice of sample preparation method. This review discusses this tedious part of bioanalytical studies as applied to large-scale sample sets, with liquid chromatography coupled to different detector types as the core analytical technique. The main sample preparation methods covered are protein precipitation, liquid-liquid extraction, solid-phase extraction, derivatization and their variants. They are discussed in terms of analytical performance, fields of application, advantages and disadvantages. The cited literature mainly covers analytical achievements of the last decade, although several earlier papers that have become more valuable over time are also included. Copyright © 2017 John Wiley & Sons, Ltd.

  10. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model over a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
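
    As a purely illustrative sketch of the Newtonian-relaxation idea behind nudging (not the BOLCHEM implementation), the following Python fragment relaxes a coarse-model field toward a high-resolution reference inside a forcing area; the grid, time step and relaxation time are assumed values.

      import numpy as np

      def nudge_step(c_coarse, c_ref, mask, dt, tau):
          """One explicit step of Newtonian relaxation (nudging).

          c_coarse : 2-D array of the coarse-model concentration field
          c_ref    : high-resolution reference regridded onto the coarse grid
          mask     : boolean array, True inside the forcing (hot-spot) area
          dt       : model time step [s]
          tau      : relaxation time scale [s]; smaller tau = stronger forcing
          """
          tendency = np.where(mask, (c_ref - c_coarse) / tau, 0.0)
          return c_coarse + dt * tendency

      # toy example: 50x50 coarse grid, forcing applied only in a central box
      c = np.zeros((50, 50))
      ref = np.ones((50, 50))            # stand-in for the high-resolution analysis
      box = np.zeros((50, 50), bool)
      box[20:30, 20:30] = True
      for _ in range(100):
          c = nudge_step(c, ref, box, dt=600.0, tau=3600.0)
      print(c[25, 25], c[0, 0])          # relaxed toward 1 inside the box, unchanged outside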

  11. Elongation cutoff technique armed with quantum fast multipole method for linear scaling.

    Science.gov (United States)

    Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko

    2009-11-30

    A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, namely diagonalization, because it operates within a low-dimensional subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.

  12. Scaling Robotic Displays: Displays and Techniques for Dismounted Movement with Robots

    Science.gov (United States)

    2010-04-01

    Report by Elizabeth S. Redden and Rodger A. Pettitt. [The abstract is garbled in the source record; the surviving fragment contains soldier ratings for driving the robot while performing dismounted movements (low crawl, negotiating a hill, climbing stairs, walking) and a comment on viewing through the head-mounted display (HMD).]

  13. Gallium Nitride: A Nano scale Study using Electron Microscopy and Associated Techniques

    International Nuclear Information System (INIS)

    Mohammed Benaissa; Vennegues, Philippe

    2008-01-01

    A complete nano-scale study of Mg-doped GaN thin films is presented. The study was carried out using TEM and associated techniques such as HREM, CBED, EDX and EELS. It was found that the presence of triangular defects (a few nanometers in size) within the GaN:Mg films was at the origin of unexpected electrical and optical behaviors, such as a decrease in the free hole density at high Mg doping. It is shown that these defects are inversion domains bounded by inversion-domain boundaries. (author)

  14. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method and the millimetre resolution patient anatomy it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. The E-vector-field distribution for both a simple phantom and the complex partial patient geometry down-scaled using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlation 97, 96% and absolute averaged difference 6, 14% respectively). (author)
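
    For illustration only, the following Python sketch contrasts the two simplest down-scaling strategies mentioned above on a 2-D toy slice: 'winner-takes-all' keeps the most frequent tissue label per coarse block, while 'volumetric averaging' takes the mean permittivity per block. The factor-of-5 block size (2 mm to 1 cm) follows the study, but the tissue labels and permittivity values are assumptions.

      import numpy as np

      def block_view(a, f):
          """Reshape a 2-D array into (ny, nx, f, f) blocks of size f x f."""
          ny, nx = a.shape[0] // f, a.shape[1] // f
          return a[:ny * f, :nx * f].reshape(ny, f, nx, f).swapaxes(1, 2)

      def winner_takes_all(labels, f):
          blocks = block_view(labels, f)
          ny, nx = blocks.shape[:2]
          out = np.empty((ny, nx), labels.dtype)
          for i in range(ny):
              for j in range(nx):
                  vals, counts = np.unique(blocks[i, j], return_counts=True)
                  out[i, j] = vals[np.argmax(counts)]   # most frequent tissue label wins
          return out

      def volumetric_average(eps, f):
          return block_view(eps, f).mean(axis=(2, 3))   # mean permittivity per block

      # toy 2 mm -> 1 cm down-scaling (factor 5) of a segmented slice
      rng = np.random.default_rng(0)
      labels_2mm = rng.integers(0, 3, size=(100, 100))   # 0=air, 1=muscle, 2=fat (assumed)
      eps_table = np.array([1.0, 57.0, 5.5])             # illustrative permittivities
      eps_2mm = eps_table[labels_2mm]
      print(winner_takes_all(labels_2mm, 5).shape, volumetric_average(eps_2mm, 5).shape)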

  15. Heat techniques in the light of the innovation curve. New technology for gas appliances and solar water heaters

    Energy Technology Data Exchange (ETDEWEB)

    Vollebregt, R.

    2012-09-15

    The development, market introduction and deployment of new techniques often follow an S-shaped curve. The gas-fired boiler transformed from a conventional apparatus into the current high-efficiency boiler. Will the high-efficiency electricity boiler be the next breakthrough technique? The electrical and gas heat pumps are fully developed techniques, but this does not yet apply to their use in Dutch single-family dwellings.

  16. Large-scale chromosome folding versus genomic DNA sequences: A discrete double Fourier transform technique.

    Science.gov (United States)

    Chechetkin, V R; Lobzin, V V

    2017-08-07

    The use of state-of-the-art techniques combining imaging methods and high-throughput genomic mapping tools has led to significant progress in detailing the chromosome architecture of various organisms. However, a gap still remains between the rapidly growing structural data on chromosome folding and the large-scale genome organization. Could part of the information on chromosome folding be obtained directly from the underlying genomic DNA sequences abundantly stored in databanks? To answer this question, we developed an original discrete double Fourier transform (DDFT). DDFT serves for the detection of large-scale genome regularities associated with domains/units at the different levels of hierarchical chromosome folding. The method is versatile and can be applied both to genomic DNA sequences and to corresponding physico-chemical parameters such as base-pairing free energy. The latter characteristic is closely related to replication and transcription and can also be used for the assessment of temperature or supercoiling effects on chromosome folding. We tested the method on the genome of E. coli K-12 and found good correspondence with the annotated domains/units established experimentally. As a brief illustration of further abilities of DDFT, a study of the large-scale genome organization of bacteriophage PHIX174 and the bacterium Caulobacter crescentus is also included. The combined experimental, modeling, and bioinformatic DDFT analysis should yield more complete knowledge of chromosome architecture and genome organization. Copyright © 2017 Elsevier Ltd. All rights reserved.
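
    As a hedged illustration of the general double-Fourier idea (not the authors' published algorithm), the sketch below converts a sequence into a per-window GC-content profile as a crude stand-in for a physico-chemical parameter, Fourier transforms it, and transforms the magnitude spectrum a second time to expose regularly spaced large-scale periodicities; the window size and the synthetic 'domain' signal are assumptions.

      import numpy as np

      def gc_profile(seq, window=1000):
          """Per-window GC fraction as a crude proxy for base-pairing free energy."""
          s = np.frombuffer(seq.upper().encode(), dtype='S1')
          gc = np.isin(s, [b'G', b'C']).astype(float)
          n = len(gc) // window
          return gc[:n * window].reshape(n, window).mean(axis=1)

      def double_fft(profile):
          """First FFT exposes periodic components; a second FFT of the magnitude
          spectrum highlights equidistant harmonics produced by repeated domains."""
          spec1 = np.abs(np.fft.rfft(profile - profile.mean()))
          spec2 = np.abs(np.fft.rfft(spec1 - spec1.mean()))
          return spec1, spec2

      # synthetic genome with a repeated ~50-window domain structure
      rng = np.random.default_rng(1)
      bases = np.array(list('ACGT'))
      seq = ''.join(rng.choice(bases, size=500_000))
      profile = gc_profile(seq)
      profile += 0.05 * np.sin(2 * np.pi * np.arange(len(profile)) / 50)  # fake domain signal
      spec1, spec2 = double_fft(profile)
      print(np.argmax(spec1[1:]) + 1)   # dominant period index, expected near len(profile)/50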

  17. Validation of a large-scale audit technique for CT dose optimisation

    International Nuclear Information System (INIS)

    Wood, T. J.; Davis, A. W.; Moore, C. S.; Beavis, A. W.; Saunderson, J. R.

    2008-01-01

    The expansion and increasing availability of computed tomography (CT) imaging means that there is a greater need for the development of efficient optimisation strategies that are able to inform clinical practice without placing a significant burden on limited departmental resources. One of the most fundamental aspects of any optimisation programme is the collection of patient dose information, which can be compared with appropriate diagnostic reference levels. This study has investigated the implementation of a large-scale audit technique, which utilises data that already exist in the radiology information system, to determine typical doses for a range of examinations on four CT scanners. This method has been validated against what is considered the 'gold standard' technique for patient dose audits, and it has been demonstrated that results equivalent to the 'standard-sized patient' can be inferred from this much larger data set. This is particularly valuable where CT optimisation is concerned, as CT is considered a 'high dose' technique and hence close monitoring of patient dose is particularly important. (authors)

  18. Comparison of residual NAPL source removal techniques in 3D metric scale experiments

    Science.gov (United States)

    Atteia, O.; Jousse, F.; Cohen, G.; Höhener, P.

    2017-07-01

    the contaminant fluxes, which were different for each technique. This paper presents the first comparison of four remediation techniques at the scale of 1 m3 tanks including heterogeneities. Sparging, persulfate and surfactant only remove 50% of the mass, while thermal treatment removes more than 99%. In terms of flux removal, oxidant addition performs better when density effects are used.

  19. Summary receiver operating characteristic curves as a technique for meta-analysis of the diagnostic performance of duplex ultrasonography in peripheral arterial disease

    NARCIS (Netherlands)

    deVries, SO; Hunink, MGM; Polak, JF

    Rationale and Objectives. We summarized and compared the diagnostic performance of duplex and color-guided duplex ultrasonography in the evaluation of peripheral arterial disease. We present our research as an example of the use of summary receiver operating characteristic (ROC) curves in a

  20. Quantum fields in curved space

    International Nuclear Information System (INIS)

    Birrell, N.D.; Davies, P.C.W.

    1982-01-01

    The book presents a comprehensive review of the subject of gravitational effects in quantum field theory. Quantum field theory in Minkowski space, quantum field theory in curved spacetime, flat spacetime examples, curved spacetime examples, stress-tensor renormalization, applications of renormalization techniques, quantum black holes and interacting fields are all discussed in detail. (U.K.)

  1. A scaled underwater launch system accomplished by stress wave propagation technique

    International Nuclear Information System (INIS)

    Wei Yanpeng; Wang Yiwei; Huang Chenguang; Fang Xin; Duan Zhuping

    2011-01-01

    A scaled underwater launch system based on stress wave theory and the split Hopkinson pressure bar (SHPB) technique is developed to study the phenomenon of cavitation and other hydrodynamic features of high-speed submerged bodies. The present system can achieve a transient acceleration in the water instead of a long-time acceleration outside the water. The projectile can reach a maximum speed of 30 m/s in about 200 μs with the SHPB launcher. The cavitation characteristics in the acceleration and deceleration stages are captured by a high-speed camera. The processes of cavitation inception, development and collapse are also simulated with the commercial software FLUENT, and the results are in good agreement with experiment. There is about 20-30% energy loss during the launching process; the mechanism of the energy loss is also preliminarily investigated by measuring the energy of the incident bar and the projectile. (authors)

  2. Very large scale characterization of graphene mechanical devices using a colorimetry technique.

    Science.gov (United States)

    Cartamil-Bueno, Santiago Jose; Centeno, Alba; Zurutuza, Amaia; Steeneken, Peter Gerard; van der Zant, Herre Sjoerd Jan; Houri, Samer

    2017-06-08

    We use a scalable optical technique to characterize more than 21 000 circular nanomechanical devices made of suspended single- and double-layer graphene on cavities with different diameters (D) and depths (g). To maximize the contrast between suspended and broken membranes we used a model for selecting the optimal color filter. The method enables parallel and automatized image processing for yield statistics. We find the survival probability to be correlated with a structural mechanics scaling parameter given by D⁴/g³. Moreover, we extract a median adhesion energy of Γ = 0.9 J m⁻² between the membrane and the native SiO₂ at the bottom of the cavities.
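
    A minimal sketch of how such yield statistics can be related to the D⁴/g³ parameter, using synthetic, illustrative device data only (the real study classified 21 000 devices by automated image processing):

      import numpy as np

      # hypothetical inspection results per device: diameter D [m], cavity depth g [m],
      # and whether the drum survived (True) or broke (False); all values are invented
      rng = np.random.default_rng(2)
      D = rng.choice([2e-6, 4e-6, 6e-6, 8e-6], size=5000)
      g = rng.choice([200e-9, 400e-9, 800e-9], size=5000)
      x = D**4 / g**3                                    # structural-mechanics scaling parameter
      survived = rng.random(5000) > (x / (x + np.median(x)))   # fake outcome for the demo

      # bin devices by the scaling parameter and estimate survival probability per bin
      bins = np.logspace(np.log10(x.min()), np.log10(x.max()), 8)
      idx = np.digitize(x, bins)
      for k in np.unique(idx):
          sel = idx == k
          print(f"D^4/g^3 ~ {x[sel].mean():.2e} m : yield = {survived[sel].mean():.2f} "
                f"({sel.sum()} devices)")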

  3. [Adverse Effect Predictions Based on Computational Toxicology Techniques and Large-scale Databases].

    Science.gov (United States)

    Uesawa, Yoshihiro

    2018-01-01

     Understanding the features of chemical structures related to the adverse effects of drugs is useful for identifying potential adverse effects of new drugs. This can be based on the limited information available from post-marketing surveillance, assessment of the potential toxicities of metabolites and illegal drugs with unclear characteristics, screening of lead compounds at the drug discovery stage, and identification of leads for the discovery of new pharmacological mechanisms. The present paper describes techniques used in computational toxicology to investigate the content of large-scale spontaneous report databases of adverse effects, illustrated with examples. Furthermore, volcano plotting, a new visualization method for clarifying the relationships between drugs and adverse effects via comprehensive analyses, is introduced. These analyses may produce a large amount of data that can be applied to drug repositioning.
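
    To make the volcano-plot idea concrete, here is a hedged sketch that computes a reporting odds ratio and a Fisher exact p-value for a few hypothetical drug-event pairs and plots log2(ROR) against -log10(p); the counts are invented for illustration, and the choice of the ROR as the disproportionality measure is an assumption, not necessarily the statistic used in the paper.

      import numpy as np
      from scipy.stats import fisher_exact
      import matplotlib.pyplot as plt

      def ror_and_p(a, b, c, d):
          """Reporting odds ratio and Fisher p-value from a 2x2 contingency table:
          a = reports with drug and event, b = drug without event,
          c = other drugs with event, d = other drugs without event."""
          ror = (a * d) / (b * c)
          _, p = fisher_exact([[a, b], [c, d]])
          return ror, p

      # hypothetical counts for a handful of drug-event pairs (illustrative only)
      pairs = {"drugA/nausea": (120, 8000, 900, 90000),
               "drugA/rash":   (15, 8105, 600, 90300),
               "drugB/nausea": (40, 5000, 980, 93000)}
      rors, pvals, names = [], [], []
      for name, (a, b, c, d) in pairs.items():
          ror, p = ror_and_p(a, b, c, d)
          rors.append(ror); pvals.append(p); names.append(name)

      plt.scatter(np.log2(rors), -np.log10(pvals))
      for x, y, n in zip(np.log2(rors), -np.log10(pvals), names):
          plt.annotate(n, (x, y))
      plt.xlabel("log2 reporting odds ratio"); plt.ylabel("-log10 p-value")
      plt.title("Volcano plot of drug-adverse effect signals")
      plt.show()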

  4. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic state estimators for energy management in electric power systems. Various dynamic estimators have been developed, but have never been implemented, primarily because of the dimensionality problems posed by the conjunction of an extended Kalman filter with a large-scale power system. This paper focuses precisely on how to circumvent this high dimensionality, which is especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of high-voltage electric transmission systems.

  5. Application of the thermoluminescent (TL) and optically stimulated luminescence (OSL) dosimetry techniques to determinate the isodose curves in a cancer treatment planning simulation using Volumetric Modulated Arc Therapy - VMAT

    International Nuclear Information System (INIS)

    Bravim, Amanda

    2015-01-01

    Volumetric Modulated Arc Therapy (VMAT) is an advanced form of Intensity Modulated Radiation Therapy (IMRT); the advance lies in continuous gantry rotation combined with modulation of the radiation beam, which shortens the patient treatment time. This research aimed at verifying the isodose curves in a simulation of a vertebra treatment with spinal cord protection, using the thermoluminescent (TL) and optically stimulated luminescence (OSL) dosimetry techniques with LiF:Mg,Ti (TLD-100), CaSO₄:Dy and Al₂O₃:C dosimeters and LiF:Mg,Ti microdosimeters (TLD-100). The dosimeters were characterized using PMMA plates of 30 x 30 x 30 cm³ and different thicknesses. All irradiations were performed using the Truebeam STx linear accelerator of Hospital Israelita Albert Einstein, with a 6 MV photon beam. After characterization, the dosimeters were irradiated according to the specific planning simulation, using a PMMA phantom developed for VMAT measurements, in order to verify the isodose curves of the treatment simulation with the two dosimetry techniques. All types of dosimeters gave satisfactory results for determining the dose distribution, but considering the complexity and proximity of the isodose curves, the LiF:Mg,Ti microdosimeter proved the most appropriate owing to its small dimensions. Regarding the best technique, as both techniques gave satisfactory results, the TL technique is less complex to use because most radiotherapy departments already have a TL laboratory, whereas the OSL technique requires more care and greater investment by the hospital. (author)

  6. Towards improved hydrologic predictions using data assimilation techniques for water resource management at the continental scale

    Science.gov (United States)

    Naz, Bibi; Kurtz, Wolfgang; Kollet, Stefan; Hendricks Franssen, Harrie-Jan; Sharples, Wendy; Görgen, Klaus; Keune, Jessica; Kulkarni, Ketan

    2017-04-01

    More accurate and reliable hydrologic simulations are important for many applications such as water resource management, projections of future water availability and predictions of extreme events. However, simulation of the spatial and temporal variations in critical water budget components such as precipitation, snow, evaporation and runoff is highly uncertain, due to errors in e.g. model structure and inputs (hydrologic parameters and forcings). In this study, we use data assimilation techniques to improve the predictability of continental-scale water fluxes, using in-situ measurements along with remotely sensed information to improve hydrologic predictions for water resource systems. The Community Land Model, version 3.5 (CLM) integrated with the Parallel Data Assimilation Framework (PDAF) was implemented at a spatial resolution of 1/36 degree (3 km) over the European CORDEX domain. The modeling system was forced with the high-resolution reanalysis COSMO-REA6 from the Hans-Ertel Centre for Weather Research (HErZ) and with ERA-Interim datasets for the period 1994-2014. A series of data assimilation experiments was conducted to assess the efficiency of assimilating various observations, such as river discharge data, remotely sensed soil moisture, terrestrial water storage and snow measurements, into CLM-PDAF at regional to continental scales. This setup not only allows uncertainties to be quantified, but also improves streamflow predictions by simultaneously updating model states and parameters with observational information. Results from different regions, watershed sizes, spatial resolutions and timescales are compared and discussed in this study.

  7. Comparisons of Particle Tracking Techniques and Galerkin Finite Element Methods in Flow Simulations on Watershed Scales

    Science.gov (United States)

    Shih, D.; Yeh, G.

    2009-12-01

    This paper applies two numerical approximations, the particle tracking technique and the Galerkin finite element method, to solve the diffusive wave equation in both one-dimensional and two-dimensional flow simulations. The finite element method is one of the most common approaches to such numerical problems. It can obtain accurate solutions, but calculation times may be rather extensive. The particle tracking technique, using either single-velocity or average-velocity tracks to efficiently perform advective transport, can use larger time-step sizes than the finite element method and thereby save significant computational time. Comparisons of the alternative approximations are examined in this poster. We adapt the model WASH123D to carry out this work. WASH123D is an integrated multimedia, multi-process, physics-based computational model suitable for various spatial-temporal scales; it was first developed by Yeh et al. in 1998. The model has evolved in design capability and flexibility, and has been used for model calibrations and validations over the course of many years. In order to deliver a local hydrological model for Taiwan, the Taiwan Typhoon and Flood Research Institute (TTFRI) is working with Prof. Yeh to develop the next version of WASH123D. The work of our preliminary cooperation is also sketched in this poster.

  8. Techniques for extracting single-trial activity patterns from large-scale neural recordings

    Science.gov (United States)

    Churchland, Mark M; Yu, Byron M; Sahani, Maneesh; Shenoy, Krishna V

    2008-01-01

    Large, chronically implanted arrays of microelectrodes are an increasingly common tool for recording from primate cortex, and can provide extracellular recordings from many (on the order of 100) neurons. While the desire for cortically-based motor prostheses has helped drive their development, such arrays also offer great potential to advance basic neuroscience research. Here we discuss the utility of array recording for the study of neural dynamics. Neural activity often has dynamics beyond that driven directly by the stimulus. While governed by those dynamics, neural responses may nevertheless unfold differently for nominally identical trials, rendering many traditional analysis methods ineffective. We review recent studies – some employing simultaneous recording, some not – indicating that such variability is indeed present both during movement generation and during the preceding premotor computations. In such cases, large-scale simultaneous recordings have the potential to provide an unprecedented view of neural dynamics at the level of single trials. However, this enterprise will depend not only on techniques for simultaneous recording, but also on the use and further development of analysis techniques that can appropriately reduce the dimensionality of the data and allow visualization of single-trial neural behavior. PMID:18093826
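
    As one simple, hedged example of the dimensionality-reduction step discussed above (the review covers a broader family of methods), the sketch below projects synthetic single-trial population activity onto its leading principal components:

      import numpy as np
      from sklearn.decomposition import PCA

      # hypothetical data: trials x neurons x time bins of smoothed firing rates
      rng = np.random.default_rng(3)
      n_trials, n_neurons, n_bins = 20, 96, 50
      latent = np.cumsum(rng.normal(size=(n_trials, 3, n_bins)), axis=2)   # shared low-dimensional dynamics
      mixing = rng.normal(size=(n_neurons, 3))
      rates = np.einsum('nk,tkb->tnb', mixing, latent) \
              + rng.normal(scale=2.0, size=(n_trials, n_neurons, n_bins))  # per-neuron noise

      # fit PCA on all time points pooled across trials, then project each trial
      pca = PCA(n_components=3)
      pca.fit(rates.transpose(0, 2, 1).reshape(-1, n_neurons))
      trajectories = np.stack([pca.transform(trial.T) for trial in rates])  # trials x bins x 3
      print(trajectories.shape, pca.explained_variance_ratio_)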

  9. Application of parallel computing techniques to a large-scale reservoir simulation

    International Nuclear Information System (INIS)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris; Pruess, Karsten

    2001-01-01

    Even with the continual advances made in both computational algorithms and computer hardware used in reservoir modeling studies, large-scale simulation of fluid and heat flow in heterogeneous reservoirs remains a challenge. The problem commonly arises from intensive computational requirement for detailed modeling investigations of real-world reservoirs. This paper presents the application of a massive parallel-computing version of the TOUGH2 code developed for performing large-scale field simulations. As an application example, the parallelized TOUGH2 code is applied to develop a three-dimensional unsaturated-zone numerical model simulating flow of moisture, gas, and heat in the unsaturated zone of Yucca Mountain, Nevada, a potential repository for high-level radioactive waste. The modeling approach employs refined spatial discretization to represent the heterogeneous fractured tuffs of the system, using more than a million 3-D gridblocks. The problem of two-phase flow and heat transfer within the model domain leads to a total of 3,226,566 linear equations to be solved per Newton iteration. The simulation is conducted on a Cray T3E-900, a distributed-memory massively parallel computer. Simulation results indicate that the parallel computing technique, as implemented in the TOUGH2 code, is very efficient. The reliability and accuracy of the model results have been demonstrated by comparing them to those of small-scale (coarse-grid) models. These comparisons show that simulation results obtained with the refined grid provide more detailed predictions of the future flow conditions at the site, aiding in the assessment of proposed repository performance

  10. Planetary-Scale Geospatial Data Analysis Techniques in Google's Earth Engine Platform (Invited)

    Science.gov (United States)

    Hancher, M.

    2013-12-01

    Geoscientists have more and more access to new tools for large-scale computing. With any tool, some tasks are easy and other tasks hard. It is natural to look to new computing platforms to increase the scale and efficiency of existing techniques, but there is a more exciting opportunity to discover and develop a new vocabulary of fundamental analysis idioms that are made easy and effective by these new tools. Google's Earth Engine platform is a cloud computing environment for earth data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog includes a nearly complete archive of scenes from Landsat 4, 5, 7, and 8 that have been processed by the USGS, as well as a wide variety of other remotely sensed and ancillary data products. Earth Engine supports a just-in-time computation model that enables real-time preview during algorithm development and debugging, as well as during experimental data analysis and open-ended data exploration. Data processing operations are performed in parallel across many computers in Google's datacenters. The platform automatically handles many traditionally onerous data management tasks, such as data format conversion, reprojection, resampling, and associating image metadata with pixel data. Early applications of Earth Engine have included the development of Google's global cloud-free fifteen-meter base map and global multi-decadal time-lapse animations, as well as numerous large and small experimental analyses by scientists from a range of academic, government, and non-governmental institutions, working in a wide variety of application areas including forestry, agriculture, urban mapping, and species habitat modeling. Patterns in the successes and failures of these early efforts have begun to emerge, sketching the outlines of a new set of simple and effective approaches to geospatial data analysis.
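
    A short, hedged sketch in the Earth Engine Python API of the kind of lazy, server-side computation described above: building a cloud-filtered Landsat 8 median composite over a small region. The collection ID, band name, dates, region and cloud threshold are assumptions chosen for illustration.

      import ee

      ee.Initialize()  # assumes Earth Engine authentication has already been set up

      # region of interest and time window are illustrative assumptions
      roi = ee.Geometry.Rectangle([-122.6, 37.0, -121.8, 37.9])

      collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_TOA')   # asset ID assumed
                    .filterBounds(roi)
                    .filterDate('2013-05-01', '2013-10-01')
                    .filter(ee.Filter.lt('CLOUD_COVER', 20)))

      # a simple per-pixel median composite; the heavy lifting runs in Google's
      # datacenters, only the small request and answer travel over the network
      composite = collection.median().clip(roi)

      # nothing is computed until a result is actually requested (lazy evaluation)
      print(collection.size().getInfo())
      print(composite.select('B4').reduceRegion(ee.Reducer.mean(), roi, 300).getInfo())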

  11. Simplified field-in-field technique for a large-scale implementation in breast radiation treatment

    International Nuclear Information System (INIS)

    Fournier-Bidoz, Nathalie; Kirova, Youlia M.; Campana, Francois; Dendale, Rémi; Fourquet, Alain

    2012-01-01

    We wanted to evaluate a simplified “field-in-field” technique (SFF) that was implemented in our department of Radiation Oncology for breast treatment. This study evaluated 15 consecutive patients treated with the simplified field-in-field technique after breast-conserving surgery for early-stage breast cancer. Radiotherapy consisted of whole-breast irradiation to a total dose of 50 Gy in 25 fractions, and a boost of 16 Gy in 8 fractions to the tumor bed. We compared dosimetric outcomes of SFF to state-of-the-art electronic surface compensation (ESC) with dynamic leaves, and analyzed early skin toxicity in this population of 15 patients. The median volume receiving at least 95% of the prescribed dose was 763 mL (range, 347–1472) for SFF vs. 779 mL (range, 349–1494) for ESC. The median residual 107% isodose was 0.1 mL (range, 0–63) for SFF and 1.9 mL (range, 0–57) for ESC. Monitor units were on average 25% higher in ESC plans than in SFF plans. No patient treated with SFF had acute side effects above grade 1 on the NCI scale. SFF created homogeneous 3D dose distributions equivalent to electronic surface compensation with dynamic leaves. It allowed the integration of a forward-planned concomitant tumor bed boost as an additional multileaf collimator subfield of the tangential fields. Compared with electronic surface compensation with dynamic leaves, the shorter treatment times allowed better radiation protection of the patient. The low-grade acute toxicity, evaluated weekly during treatment and 2 months after treatment completion, justified the pursuit of this technique for all breast patients in our department.

  12. Spatial epidemiological techniques in cholera mapping and analysis towards a local scale predictive modelling

    Science.gov (United States)

    Rasam, A. R. A.; Ghazali, R.; Noor, A. M. M.; Mohd, W. M. N. W.; Hamid, J. R. A.; Bazlan, M. J.; Ahmad, N.

    2014-02-01

    Cholera spatial epidemiology is the study of the spread and control of the disease's spatial pattern and epidemics. Previous studies have shown that multi-factorial causation such as human behaviour, ecology and other infectious risk factors influences disease outbreaks. Thus, understanding the spatial pattern and the possible interrelated factors of the outbreaks is crucial and needs to be explored in an in-depth study. This study focuses on the integration of geographical information system (GIS) and epidemiological techniques in an exploratory analysis of the cholera spatial pattern and distribution in the selected district of Sabah. The Spatial Statistics and Pattern tools in ArcGIS and Microsoft Excel were utilized to map and analyze the reported cholera cases and other data used. Meanwhile, a cohort study, an epidemiological technique, was applied to investigate multiple outcomes of disease exposure. The general spatial pattern of cholera was highly clustered, showing that the disease spreads easily from one place or person to others, especially within 1500 meters of an infected person or location. Although the cholera outbreaks in the districts are not critical, the disease could become endemic in crowded areas, unhygienic environments, and places close to contaminated water. It was also strongly believed that the coastal water of the study areas has a possible relationship with cholera transmission and phytoplankton blooms, since these areas recorded higher case numbers. GIS proves to be a vital spatial epidemiological tool for determining the distribution pattern of the disease and for generating hypotheses about it. The next stage of this research will apply more advanced geo-analysis methods and other disease risk factors to produce a significant local-scale predictive risk model of the disease in Malaysia.

  14. RNA-TVcurve: a Web server for RNA secondary structure comparison based on a multi-scale similarity of its triple vector curve representation.

    Science.gov (United States)

    Li, Ying; Shi, Xiaohu; Liang, Yanchun; Xie, Juan; Zhang, Yu; Ma, Qin

    2017-01-21

    RNAs have been found to carry diverse functionalities in nature. Inferring the similarity between two given RNAs is a fundamental step towards understanding and interpreting their functional relationship. The majority of functional RNAs show conserved secondary structures rather than sequence conservation, so algorithms relying on sequence-based features alone usually have limited prediction performance; integrating RNA structure features is therefore critical for RNA analysis. Existing algorithms mainly fall into two categories, alignment-based and alignment-free, with alignment-free RNA comparison usually having lower time complexity than alignment-based comparison. An alignment-free RNA comparison algorithm is proposed here, in which a novel numerical representation, RNA-TVcurve (triple vector curve representation), of the RNA sequence and its corresponding secondary structure features is provided. A multi-scale similarity score of two given RNAs is then defined based on wavelet decomposition of their numerical representations. In support of RNA mutation and phylogenetic analysis, a web server (RNA-TVcurve) was built on this alignment-free comparison algorithm. It provides three functional modules: 1) visualization of the numerical representation of RNA secondary structure; 2) detection of single-point mutations based on secondary structure; and 3) comparison of pairwise and multiple RNA secondary structures. The web server requires RNA primary sequences as input, while the corresponding secondary structures are optional; when only primary sequences are given, the server computes the secondary structures with the free-energy minimization algorithm of the RNAfold tool from the Vienna RNA package. RNA-TVcurve is the first integrated web server, based on an alignment-free method, to deliver a suite of RNA analysis functions, including visualization, mutation analysis and multiple RNA structure comparison. The comparison results with two popular RNA

  15. Low-loss, compact, and fabrication-tolerant Si-wire 90° waveguide bend using clothoid and normal curves for large scale photonic integrated circuits.

    Science.gov (United States)

    Fujisawa, Takeshi; Makino, Shuntaro; Sato, Takanori; Saitoh, Kunimasa

    2017-04-17

    An ultimately low-loss 90° waveguide bend composed of clothoid and normal curves is proposed for dense optical interconnect photonic integrated circuits. By using clothoid curves at the input and output of the 90° bend, straight and bent waveguides are smoothly connected without increasing the footprint. We find that there is an optimum ratio of clothoid curves in the bend, for which the bending loss can be significantly reduced compared with a normal bend. A 90% reduction of the bending loss for a bending radius of 4 μm is experimentally demonstrated, with excellent agreement between theory and experiment. The performance is compared with a waveguide bend with offset, and the proposed bend is superior in terms of fabrication tolerance.
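
    To illustrate the geometry (not the optimized design of the paper), the sketch below builds the centre line of a 90° bend from an entry clothoid (curvature ramping linearly from zero), a circular arc, and an exit clothoid; the radius and the share of the turn assigned to the clothoids are arbitrary assumptions.

      import numpy as np

      def clothoid_arc_bend(R=4.0, turn=np.pi / 2, clothoid_fraction=0.3, n=2001):
          """Centre line of a 90-degree bend built from an entry clothoid, a circular
          arc of radius R, and an exit clothoid: curvature ramps linearly 0 -> 1/R,
          stays constant, then ramps back to 0.  Units are arbitrary (e.g. microns).
          clothoid_fraction is the share of the total turn handled by the two clothoids."""
          theta_cl = clothoid_fraction * turn / 2            # turn of each clothoid
          L_cl = 2 * R * theta_cl                            # clothoid length (theta = L / (2R))
          L_arc = R * (turn - 2 * theta_cl)                  # circular-arc length
          s = np.linspace(0.0, 2 * L_cl + L_arc, n)
          kappa = np.where(s < L_cl, s / (R * L_cl),         # linear ramp up (clothoid)
                   np.where(s < L_cl + L_arc, 1.0 / R,       # constant curvature (arc)
                            (s[-1] - s) / (R * L_cl)))       # linear ramp down (clothoid)
          # integrate heading and position along the arc length (trapezoidal rule)
          theta = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
          x = np.concatenate(([0.0], np.cumsum(0.5 * (np.cos(theta[1:]) + np.cos(theta[:-1])) * np.diff(s))))
          y = np.concatenate(([0.0], np.cumsum(0.5 * (np.sin(theta[1:]) + np.sin(theta[:-1])) * np.diff(s))))
          return x, y, theta

      x, y, theta = clothoid_arc_bend()
      print(np.degrees(theta[-1]))   # total turn, should be ~90 degrees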

  16. Computational aspects of algebraic curves

    CERN Document Server

    Shaska, Tanush

    2005-01-01

    The development of new computational techniques and better computing power has made it possible to attack some classical problems of algebraic geometry. The main goal of this book is to highlight such computational techniques related to algebraic curves. The area of research in algebraic curves is receiving more interest not only from the mathematics community, but also from engineers and computer scientists, because of the importance of algebraic curves in applications including cryptography, coding theory, error-correcting codes, digital imaging, computer vision, and many more.This book cove

  17. Spherical nanoindentation of proton irradiated 304 stainless steel: A comparison of small scale mechanical test techniques for measuring irradiation hardening

    Science.gov (United States)

    Weaver, Jordan S.; Pathak, Siddhartha; Reichardt, Ashley; Vo, Hi T.; Maloy, Stuart A.; Hosemann, Peter; Mara, Nathan A.

    2017-09-01

    Experimentally quantifying the mechanical effects of radiation damage in reactor materials is necessary for the development and qualification of new materials for improved performance and safety. This can be achieved in a high-throughput fashion through a combination of ion beam irradiation and small scale mechanical testing in contrast to the high cost and laborious nature of bulk testing of reactor irradiated samples. The current work focuses on using spherical nanoindentation stress-strain curves on unirradiated and proton irradiated (10 dpa at 360 °C) 304 stainless steel to quantify the mechanical effects of radiation damage. Spherical nanoindentation stress-strain measurements show a radiation-induced increase in indentation yield strength from 1.36 GPa to 2.72 GPa and a radiation-induced increase in indentation work hardening rate of 10 GPa-30 GPa. These measurements are critically compared against Berkovich nanohardness, micropillar compression, and micro-tension measurements on the same material and similar grain orientations. The ratio of irradiated to unirradiated yield strength increases by a similar factor of 2 when measured via spherical nanoindentation or Berkovich nanohardness testing. A comparison of spherical indentation stress-strain curves to uniaxial (micropillar and micro-tension) stress-strain curves was achieved using a simple scaling relationship which shows good agreement for the unirradiated condition and poor agreement in post-yield behavior for the irradiated condition. The disagreement between spherical nanoindentation and uniaxial stress-strain curves is likely due to the plastic instability that occurs during uniaxial tests but is absent during spherical nanoindentation tests.

  18. Experimental investigations of micro-scale flow and heat transfer phenomena by using molecular tagging techniques

    International Nuclear Information System (INIS)

    Hu, Hui; Jin, Zheyan; Lum, Chee; Nocera, Daniel; Koochesfahani, Manoochehr

    2010-01-01

    Recent progress in the development of novel molecule-based flow diagnostic techniques, including molecular tagging velocimetry (MTV) and lifetime-based molecular tagging thermometry (MTT), to achieve simultaneous measurements of multiple important flow variables for micro-flow and micro-scale heat transfer studies is reported. The focus of the work described here is the particular class of molecular tagging tracers that relies on phosphorescence. Instead of tiny particles, specially designed phosphorescent molecules, which can be turned into long-lasting glowing marks upon excitation by photons of appropriate wavelength, are used as tracers for both flow velocity and temperature measurements. A pulsed laser is used to 'tag' the tracer molecules in the regions of interest, and the tagged molecules are imaged at two successive times within the photoluminescence lifetime of the tracer molecules. The measured Lagrangian displacement of the tagged molecules provides the estimate of the fluid velocity. The simultaneous temperature measurement is achieved by taking advantage of the temperature dependence of the phosphorescence lifetime, which is estimated from the intensity ratio of the tagged molecules in the two acquired phosphorescence images. The implementation and application of the molecular tagging approach for micro-scale thermal flow studies are demonstrated with two examples. The first is simultaneous flow velocity and temperature measurement inside a microchannel to quantify the transient behavior of electroosmotic flow (EOF) and elucidate the underlying physics of Joule heating effects on electrokinetically driven flows. The second is the examination of the time evolution of the unsteady heat transfer and phase-change process inside micro-sized, icing water droplets, which is pertinent to the ice formation and accretion processes as water droplets impinge onto cold wind turbine blades
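
    A hedged numerical sketch of the lifetime-based thermometry step described above: for a single-exponential phosphorescence decay and two gates of equal width separated by a delay, the intensity ratio gives the lifetime, which a calibration curve maps to temperature. All decay and calibration numbers below are invented for illustration.

      import numpy as np

      def lifetime_from_ratio(img1, img2, delay):
          """Phosphorescence lifetime per pixel from two gated intensity images.
          Assumes a single-exponential decay and equal gate widths, so that
          I1 / I2 = exp(delay / tau)  ->  tau = delay / ln(I1 / I2)."""
          return delay / np.log(img1 / img2)

      def temperature_from_lifetime(tau, calib_tau, calib_T):
          """Map lifetime to temperature with a monotone calibration curve
          (values here are purely illustrative)."""
          return np.interp(tau, calib_tau[::-1], calib_T[::-1])  # lifetime falls as T rises

      # synthetic example: a droplet cooling from ~20 C at one edge to ~2 C at the other
      calib_T = np.array([0.0, 10.0, 20.0, 30.0])              # deg C
      calib_tau = np.array([5.2e-3, 4.1e-3, 3.2e-3, 2.4e-3])   # s, decreasing with T
      true_tau = np.linspace(3.2e-3, 5.0e-3, 100)[:, None] * np.ones((100, 100))
      delay = 2e-3
      img1, img2 = np.exp(-0.0 / true_tau), np.exp(-delay / true_tau)  # gate intensities
      tau = lifetime_from_ratio(img1, img2, delay)
      T = temperature_from_lifetime(tau, calib_tau, calib_T)
      print(T[0, 0], T[-1, -1])   # ~20 C at the warm edge, ~2 C at the cold edge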

  19. Spraying Techniques for Large Scale Manufacturing of PEM-FC Electrodes

    Science.gov (United States)

    Hoffman, Casey J.

    Fuel cells are highly efficient energy conversion devices that represent one part of the solution to the world's current energy crisis in the midst of global climate change. When supplied with the necessary reactant gases, fuel cells produce only electricity, heat, and water. The fuel used, namely hydrogen, is available from many sources including natural gas and the electrolysis of water. If the electricity for electrolysis is generated by renewable energy (e.g., solar and wind power), fuel cells represent a completely 'green' method of producing electricity. The prospect of producing electricity to power homes, vehicles, and other portable or stationary equipment with essentially zero environmentally harmful emissions has been driving academic and industrial fuel cell research and development, with the goal of successfully commercializing this technology. Unfortunately, fuel cells cannot achieve any appreciable market penetration at their current costs. The author's hypothesis is that the development of automated, non-contact deposition methods for electrode manufacturing will improve performance and process flexibility, thereby helping to accelerate the commercialization of PEMFC technology. The overarching motivation for this research was to lower the cost of manufacturing fuel cell electrodes and bring the technology one step closer to commercial viability. The author has proven this hypothesis through a detailed study of two non-contact spraying methods. These scalable deposition systems were incorporated into an automated electrode manufacturing system that was designed and built by the author for this research. The electrode manufacturing techniques developed by the author have been shown to produce electrodes that outperform a common lab-scale contact method studied as a baseline, as well as several commercially available electrodes. In addition, these scalable, large scale electrode manufacturing processes developed by the author are

  20. Nuclear analytical techniques applied to the large scale measurements of atmospheric aerosols in the amazon region

    International Nuclear Information System (INIS)

    Gerab, Fabio

    1996-03-01

    This work presents the characterization of atmospheric aerosol collected at different places in the Amazon Basin. We studied both the biogenic emission from the forest and the particulate material emitted to the atmosphere by the large-scale man-made burning during the dry season. The samples were collected over a three-year period at two different locations in the Amazon, namely the Alta Floresta (MT) and Serra do Navio (AP) regions, using stacked filter units. These regions represent two different atmospheric compositions: the aerosol is dominated by the natural biogenic emission of the forest at Serra do Navio, while at Alta Floresta it has an important contribution from man-made burning during the dry season. At Alta Floresta we also took samples in gold shops in order to characterize the mercury emission to the atmosphere related to gold prospecting activity in the Amazon. Airplanes were used for aerosol sampling during the 1992 and 1993 dry seasons to characterize the atmospheric aerosol content from man-made burning over large Amazonian areas. The samples were analyzed using several nuclear analytical techniques: Particle Induced X-ray Emission for the quantitative analysis of trace elements with atomic number above 11; Particle Induced Gamma-ray Emission for the quantitative analysis of Na; and a Proton Microprobe for the characterization of individual aerosol particles. A reflectance technique was used for black carbon quantification, gravimetric analysis to determine the total atmospheric aerosol concentration, and Cold Vapor Atomic Absorption Spectroscopy for the quantitative analysis of mercury in the particulate matter from the Alta Floresta gold shops. Ionic chromatography was used to quantify the ionic content of the fine-mode particulate samples from Serra do Navio. Multivariate statistical analysis was used to identify and characterize the sources of the atmospheric aerosol present in the sampled regions. (author)

  1. Systematic study of the effects of scaling techniques in numerical simulations with application to enhanced geothermal systems

    Science.gov (United States)

    Heinze, Thomas; Jansen, Gunnar; Galvan, Boris; Miller, Stephen A.

    2016-04-01

    Numerical modeling is a well established tool in rock mechanics studies investigating a wide range of problems. Especially for estimating the seismic risk of geothermal energy plants, a realistic rock-mechanical model is needed. To simulate a time-evolving system, two different approaches need to be distinguished: implicit methods for solving the linear equations are unconditionally stable, while explicit methods are limited by the time step. However, explicit methods are often preferred because of their limited memory demand, their scalability in parallel computing, and the simple implementation of complex boundary conditions. In explicit modeling of elastoplastic dynamics the time step is limited by the rock density. Mass scaling techniques, which artificially increase the rock density by several orders of magnitude, can be used to overcome this limit and significantly reduce computation time. In the context of geothermal energy this is of great interest because, in a coupled hydro-mechanical model, the time step of the mechanical part is significantly smaller than that of the fluid flow. Mass scaling can also be combined with time scaling, which increases the rate of physical processes, assuming that the processes are rate independent. While often used, the effect of mass and time scaling and how it may influence the numerical results is rarely mentioned in publications, and choosing the right scaling technique is typically done by trial and error. Scaling techniques are also often built into commercial software packages, hidden from the untrained user. To our knowledge, no systematic studies have addressed how mass scaling might affect the numerical results. In this work, we present results from an extensive and systematic study of the influence of mass and time scaling on the behavior of a variety of rock-mechanical models. We employ a finite difference scheme to model uniaxial and biaxial compression experiments using different mass and time scaling factors, and with physical models
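
    As a back-of-the-envelope illustration of why mass scaling works (a 1-D estimate, not the scheme of the study), the stable explicit time step scales as dx·sqrt(rho/E), so multiplying the density by a factor f stretches the step by sqrt(f); the material constants below are only indicative.

      import numpy as np

      def critical_time_step(dx, rho, E):
          """1-D estimate of the stable explicit time step: dt <= dx / c_p,
          with the P-wave speed c_p = sqrt(E / rho)."""
          return dx / np.sqrt(E / rho)

      def mass_scaling_factor(dt_target, dx, rho, E):
          """Factor by which the density must be multiplied so that the critical
          time step reaches dt_target (dt scales with sqrt(rho))."""
          return (dt_target / critical_time_step(dx, rho, E)) ** 2

      # granite-like rock on a 1 cm grid (illustrative values)
      rho, E, dx = 2700.0, 50e9, 0.01
      dt0 = critical_time_step(dx, rho, E)
      factor = mass_scaling_factor(1e-4, dx, rho, E)
      print(f"unscaled dt_crit = {dt0:.2e} s, density scaling for dt = 1e-4 s: x{factor:.0f}")
      print(f"check: {critical_time_step(dx, rho * factor, E):.2e} s")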

  2. Scaling model for prediction of radionuclide activity in cooling water using a regression triplet technique

    International Nuclear Information System (INIS)

    Silvia Dulanska; Lubomir Matel; Milan Meloun

    2010-01-01

    The decommissioning of the nuclear power plant (NPP) A1 at Jaslovske Bohunice (Slovakia) is a complicated set of problems that is highly demanding both technically and financially. The basic goal of the decommissioning process is the total elimination of radioactive materials from the nuclear power plant area, and radwaste treatment to a form suitable for safe disposal. The initial conditions of decommissioning also include elimination of operational events, preparation and transport of the fuel from the plant territory, and radiochemical and physico-chemical characterization of the radioactive wastes. One of the problems was, and still is, the processing of liquid radioactive wastes; one such medium is the cooling water of the long-term spent fuel storage. A suitable scaling model for predicting the activity of the hard-to-detect radionuclides 239,240Pu and 90Sr and of the summary beta activity in cooling water has been built using regression triplet analysis and regression diagnostics. (author)

  3. Large scale distribution monitoring of FRP-OF based on BOTDR technique for infrastructures

    Science.gov (United States)

    Zhou, Zhi; He, Jianping; Yan, Kai; Ou, Jinping

    2007-04-01

    The BOTDA(R) sensing technique is considered one of the most practical instrumentation solutions for large-sized structures. However, there is still a big obstacle to applying BOTDA(R) over large-scale areas, due to the high cost and the reliability problems of the sensing head associated with sensor installation and survival. In this paper, we report a novel low-cost and highly reliable BOTDA(R) sensing head using an FRP (Fiber Reinforced Polymer)-bare optical fiber rebar, named BOTDA(R)-FRP-OF. We investigated the surface bonding and its mechanical strength by SEM and intensity experiments. Considering that the strain difference between the optical fiber and the host matrix may result in measurement error, the strain transfer from host to fiber has been studied theoretically. Furthermore, the strain and temperature sensing properties of the GFRP-OFs at different gauge lengths were tested under different spatial and readout resolutions using a commercial BOTDA instrument. A dual FRP-OF temperature compensation method has also been proposed and analyzed. Finally, BOTDA(R)-OFs have been applied to a civil structure on Tiyu West Road in Guangzhou and to the Daqing Highway. This novel FRP-OF rebar shows both high strength and good sensing properties, and can be used for long-term SHM of civil infrastructures.

  4. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived performance of the engine and components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full-scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high-speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, data collection, hot-fire test data collection and post-test analysis and results are presented in this paper.
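
    For readers unfamiliar with digital image correlation, the following hedged, numpy-only sketch shows its core operation: tracking a speckle subset between a reference and a deformed image by maximizing normalized cross-correlation over a search window (integer-pixel only; production DIC systems such as the one described add calibration, stereo triangulation and sub-pixel interpolation).

      import numpy as np

      def ncc(a, b):
          """Zero-normalized cross-correlation of two equally sized patches."""
          a = a - a.mean(); b = b - b.mean()
          return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

      def track_subset(ref, cur, y, x, half=10, search=5):
          """Find the integer-pixel displacement of the subset centred at (y, x)
          in the reference image by maximizing NCC over a +/- search window."""
          tpl = ref[y - half:y + half + 1, x - half:x + half + 1]
          best, best_dy, best_dx = -2.0, 0, 0
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  win = cur[y + dy - half:y + dy + half + 1, x + dx - half:x + dx + half + 1]
                  score = ncc(tpl, win)
                  if score > best:
                      best, best_dy, best_dx = score, dy, dx
          return best_dy, best_dx, best

      # synthetic test: shift a random speckle pattern by (2, -3) pixels
      rng = np.random.default_rng(4)
      ref = rng.random((200, 200))
      cur = np.roll(np.roll(ref, 2, axis=0), -3, axis=1)
      print(track_subset(ref, cur, 100, 100))   # expected (2, -3, ~1.0)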

  5. Codes and curves

    CERN Document Server

    Walker, Judy L

    2000-01-01

    When information is transmitted, errors are likely to occur. Coding theory examines efficient ways of packaging data so that these errors can be detected, or even corrected. The traditional tools of coding theory have come from combinatorics and group theory. Lately, however, coding theorists have added techniques from algebraic geometry to their toolboxes. In particular, by re-interpreting the Reed-Solomon codes, one can see how to define new codes based on divisors on algebraic curves. For instance, using modular curves over finite fields, Tsfasman, Vladut, and Zink showed that one can define a sequence of codes with asymptotically better parameters than any previously known codes. This monograph is based on a series of lectures the author gave as part of the IAS/PCMI program on arithmetic algebraic geometry. Here, the reader is introduced to the exciting field of algebraic geometric coding theory. Presenting the material in the same conversational tone of the lectures, the author covers linear codes, inclu...

  6. Experiments with conjugate gradient algorithms for homotopy curve tracking

    Science.gov (United States)

    Irani, Kashmira M.; Ribbens, Calvin J.; Watson, Layne T.; Kamat, Manohar P.; Walker, Homer F.

    1991-01-01

    There are algorithms for finding zeros or fixed points of nonlinear systems of equations that are globally convergent for almost all starting points, i.e., with probability one. The essence of all such algorithms is the construction of an appropriate homotopy map and then tracking some smooth curve in the zero set of this homotopy map. HOMPACK is a mathematical software package implementing globally convergent homotopy algorithms with three different techniques for tracking a homotopy zero curve, and has separate routines for dense and sparse Jacobian matrices. The HOMPACK algorithms for sparse Jacobian matrices use a preconditioned conjugate gradient algorithm for the computation of the kernel of the homotopy Jacobian matrix, a required linear algebra step for homotopy curve tracking. Here, variants of the conjugate gradient algorithm are implemented in the context of homotopy curve tracking and compared with Craig's preconditioned conjugate gradient method used in HOMPACK. The test problems used include actual large scale, sparse structural mechanics problems.
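
    For context, a textbook Jacobi-preconditioned conjugate gradient solver of the kind such curve-tracking codes call repeatedly is sketched below for a sparse symmetric positive definite system; it is a generic illustration, not HOMPACK's Craig variant or any of the paper's test problems.

      import numpy as np
      from scipy.sparse import diags

      def pcg(A, b, M_inv, tol=1e-10, maxit=1000):
          """Preconditioned conjugate gradients for a symmetric positive definite
          system A x = b.  M_inv applies the inverse preconditioner to a vector
          (here simply division by diag(A), i.e. Jacobi preconditioning)."""
          x = np.zeros_like(b)
          r = b - A @ x
          z = M_inv(r)
          p = z.copy()
          rz = r @ z
          for _ in range(maxit):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol * np.linalg.norm(b):
                  break
              z = M_inv(r)
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      # sparse SPD test problem: 1-D Laplacian
      n = 1000
      A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csr')
      b = np.ones(n)
      d = A.diagonal()
      x = pcg(A, b, lambda r: r / d)
      print(np.linalg.norm(A @ x - b))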

  7. Innovative Techniques for Large-Scale Collection, Processing, and Storage of Eelgrass (Zostera marina) Seeds

    National Research Council Canada - National Science Library

    Orth, Robert J; Marion, Scott R

    2007-01-01

    .... Although methods for hand-collecting, processing and storing eelgrass seeds have advanced to match the scale of collections, the number of seeds collected has limited the scale of restoration efforts...

  8. Vortex depinning as a nonequilibrium phase transition phenomenon: Scaling of current-voltage curves near the low and the high critical-current states in 2 H -Nb S2 single crystals

    Science.gov (United States)

    Bag, Biplab; Sivananda, Dibya J.; Mandal, Pabitra; Banerjee, S. S.; Sood, A. K.; Grover, A. K.

    2018-04-01

    The vortex depinning phenomenon in single crystals of the 2H-NbS₂ superconductor is used as a prototype for investigating properties of the nonequilibrium (NEQ) depinning phase transition. 2H-NbS₂ is a unique system as it exhibits two distinct depinning thresholds, viz., a lower critical current Icl and a higher one Ich. While Icl is related to depinning of a conventional, static (pinned) vortex state, the state with Ich is achieved via a negative differential resistance (NDR) transition where the velocity abruptly drops. Using a generalized finite-temperature scaling ansatz, we study the scaling of current (I)-voltage (V) curves measured across Icl and Ich. Our analysis shows that for I > Icl, the moving vortex state exhibits Arrhenius-like thermally activated flow behavior. This feature persists up to a current value where an inflexion in the IV curves is encountered. While past measurements have often reported a similar inflexion, our analysis shows that the inflexion is a signature of a NEQ phase transformation from a thermally activated moving vortex phase to a free flowing phase. Beyond this inflexion in the IV curves, a large vortex velocity flow regime is encountered in the 2H-NbS₂ system, wherein the Bardeen-Stephen flux flow limit is crossed. In this regime the NDR transition is encountered, leading to the high-Ich state. We show that the IV curves above Ich do not obey the generalized finite-temperature scaling ansatz (as obeyed near Icl). Instead, they scale according to Fisher's scaling form [Fisher, Phys. Rev. B 31, 1396 (1985), 10.1103/PhysRevB.31.1396], where we show that thermal fluctuations do not affect the vortex flow, unlike what is found for depinning near Icl.

  9. Determination of J-integral R-curves for the pressure vessel material A 533 B1 using the potential drop technique and the multi-specimen method

    International Nuclear Information System (INIS)

    Krompholz, K.; Ullrich, G.

    1985-01-01

    J-integral experiments at room temperature were performed on three point bend type specimens of the nuclear pressure vessel material A 533 B1 with a/w-ratios of 0.3 and 0.5. Following the ASTM proposal for the multi-specimen technique, a value is obtained close to the value obtained in the HSST round robin test. On the other hand, the measurement of the J(IC)-value by means of the potential drop technique indicates that a lower value of J(IC) is correct. This is in agreement with the multi-specimen technique using linear regression lines without excluding 'invalid' points, which is reasonable if fractographic investigations give clear indications that stable crack growth has occurred, as is the case in this work. (Auth.)

  10. Lagrangian Curves on Spectral Curves of Monopoles

    International Nuclear Information System (INIS)

    Guilfoyle, Brendan; Khalid, Madeeha; Ramon Mari, Jose J.

    2010-01-01

    We study Lagrangian points on smooth holomorphic curves in TP^1 equipped with a natural neutral Kaehler structure, and prove that they must form real curves. By virtue of the identification of TP^1 with the space L(E^3) of oriented affine lines in Euclidean 3-space, these Lagrangian curves give rise to ruled surfaces in E^3, which we prove have zero Gauss curvature. Each ruled surface is shown to be the tangent lines to a curve in E^3, called the edge of regression of the ruled surface. We give an alternative characterization of these curves as the points in E^3 where the number of oriented lines in the complex curve Σ that pass through the point is less than the degree of Σ. We then apply these results to the spectral curves of certain monopoles and construct the ruled surfaces and edges of regression generated by the Lagrangian curves.

  11. Column Store for GWAC: A High-cadence, High-density, Large-scale Astronomical Light Curve Pipeline and Distributed Shared-nothing Database

    Science.gov (United States)

    Wan, Meng; Wu, Chao; Wang, Jing; Qiu, Yulei; Xin, Liping; Mullender, Sjoerd; Mühleisen, Hannes; Scheers, Bart; Zhang, Ying; Nes, Niels; Kersten, Martin; Huang, Yongpan; Deng, Jinsong; Wei, Jianyan

    2016-11-01

    The ground-based wide-angle camera array (GWAC), a part of the SVOM space mission, will search for various types of optical transients by continuously imaging a field of view (FOV) of 5000 square degrees every 15 s. Each exposure consists of 36 × 4k × 4k pixels, typically resulting in 36 × ~175,600 extracted sources. For a modern time-domain astronomy project like GWAC, which produces massive amounts of data with a high cadence, it is challenging to search for short timescale transients in both real-time and archived data, and to build long-term light curves for variable sources. Here, we develop a high-cadence, high-density light curve pipeline (HCHDLP) to process the GWAC data in real-time, and design a distributed shared-nothing database to manage the massive amount of archived data which will be used to generate a source catalog with more than 100 billion records during 10 years of operation. First, we develop HCHDLP based on the column-store DBMS of MonetDB, taking advantage of MonetDB’s high performance when applied to massive data processing. To realize the real-time functionality of HCHDLP, we optimize the pipeline in its source association function, including both time and space complexity from outside the database (SQL semantic) and inside (RANGE-JOIN implementation), as well as in its strategy of building complex light curves. The optimized source association function is accelerated by three orders of magnitude. Second, we build a distributed database using a two-level time partitioning strategy via the MERGE TABLE and REMOTE TABLE technology of MonetDB. Intensive tests validate that our database architecture is able to achieve both linear scalability in response time and concurrent access by multiple users. In summary, our studies provide guidance for a solution to GWAC in real-time data processing and management of massive data.
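
    The source-association step described above is, in essence, a positional range join: each newly extracted source is matched to the catalog source that falls within a small angular tolerance. A hypothetical, simplified sketch of that operation outside the database is shown below (the actual HCHDLP uses an optimized RANGE-JOIN inside MonetDB; the tolerance value and flat-sky approximation here are illustrative):

```python
# Hypothetical, simplified positional cross-match (flat-sky, no cos(dec) factor).
import numpy as np

def associate(new_ra, new_dec, cat_ra, cat_dec, tol_deg=2.0 / 3600.0):
    """For each new source return the index of the nearest catalog source
    within tol_deg, or -1 if none matches."""
    order = np.argsort(cat_ra)                         # sort the catalog once
    cat_ra_s, cat_dec_s = cat_ra[order], cat_dec[order]
    out = np.full(len(new_ra), -1, dtype=int)
    for i, (ra, dec) in enumerate(zip(new_ra, new_dec)):
        lo = np.searchsorted(cat_ra_s, ra - tol_deg)   # RA range scan
        hi = np.searchsorted(cat_ra_s, ra + tol_deg)
        if lo == hi:
            continue
        d2 = (cat_ra_s[lo:hi] - ra) ** 2 + (cat_dec_s[lo:hi] - dec) ** 2
        j = int(np.argmin(d2))
        if d2[j] <= tol_deg ** 2:
            out[i] = order[lo + j]
    return out
```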

  12. Analysis test of understanding of vectors with the three-parameter logistic model of item response theory and item response curves technique

    Directory of Open Access Journals (Sweden)

    Suttida Rakkapao

    2016-10-01

    Full Text Available This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC) that represent simplified IRT. Data were gathered on 2392 science and engineering freshmen, from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information, and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test’s distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by the classical analysis methods of tests. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.
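
    For readers unfamiliar with the model, the three-parameter logistic (3PL) item response function gives the probability of a correct answer as a function of ability θ and the item's discrimination a, difficulty b, and guessing c. A minimal sketch follows (item parameter values are illustrative, not TUV estimates):

```python
# Three-parameter logistic item response function (item parameters illustrative).
import numpy as np

def p_correct(theta, a, b, c):
    """P(correct | theta) = c + (1 - c) / (1 + exp(-a * (theta - b)));
    some conventions insert an extra scaling constant D ~ 1.7 in the exponent."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-4.0, 4.0, 9)
print(p_correct(theta, a=1.2, b=0.3, c=0.22))
```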

  13. Analysis test of understanding of vectors with the three-parameter logistic model of item response theory and item response curves technique

    Science.gov (United States)

    Rakkapao, Suttida; Prasitpong, Singha; Arayathanitkul, Kwan

    2016-12-01

    This study investigated the multiple-choice test of understanding of vectors (TUV), by applying item response theory (IRT). The difficulty, discriminatory, and guessing parameters of the TUV items were fit with the three-parameter logistic model of IRT, using the parscale program. The TUV ability is an ability parameter, here estimated assuming unidimensionality and local independence. Moreover, all distractors of the TUV were analyzed from item response curves (IRC) that represent simplified IRT. Data were gathered on 2392 science and engineering freshmen, from three universities in Thailand. The results revealed IRT analysis to be useful in assessing the test since its item parameters are independent of the ability parameters. The IRT framework reveals item-level information, and indicates appropriate ability ranges for the test. Moreover, the IRC analysis can be used to assess the effectiveness of the test's distractors. Both IRT and IRC approaches reveal test characteristics beyond those revealed by the classical analysis methods of tests. Test developers can apply these methods to diagnose and evaluate the features of items at various ability levels of test takers.

  14. In vitro Evaluation of the Colistin-Carbapenem Combination in Clinical Isolates of A. baumannii Using the Checkerboard, Etest, and Time-Kill Curve Techniques.

    Science.gov (United States)

    Soudeiha, Micheline A H; Dahdouh, Elias A; Azar, Eid; Sarkis, Dolla K; Daoud, Ziad

    2017-01-01

    The worldwide increase in the emergence of carbapenem resistant Acinetobacter baumannii (CRAB) calls for the investigation into alternative approaches for treatment. This study aims to evaluate colistin-carbapenem combinations against Acinetobacter spp., in order to potentially reduce the need for high concentrations of antibiotics in therapy. This study was conducted on 100 non-duplicate Acinetobacter isolates that were collected from different patients admitted at Saint George Hospital-University Medical Center in Beirut. The isolates were identified using API 20NE strips, which contain the necessary agents to cover a panel of biochemical tests, and confirmed by PCR amplification of blaOXA-51-like. Activities of colistin, meropenem and imipenem against Acinetobacter isolates were determined by ETEST and microdilution methods, and interpreted according to the guidelines of the Clinical and Laboratory Standards Institute. In addition, PCR amplifications of the most common beta lactamases contributing to carbapenem resistance were performed. Tri locus PCR-typing was also performed to determine the international clonality of the isolates. Checkerboard, ETEST and time kill curves were then performed to determine the effect of the colistin-carbapenem combinations. The synergistic potential of the combination was then determined by calculating the Fractional Inhibitory Concentration Index (FICI), which is an index that indicates additivity, synergism, or antagonism between the antimicrobial agents. In this study, 84% of the isolates were resistant to meropenem, 78% to imipenem, and only one strain was resistant to colistin. 79% of the isolates harbored blaOXA-23-like and pertained to the International Clone II. An additive effect for the colistin-carbapenem combination was observed using all three methods. The combination of colistin-meropenem showed better effects as compared to colistin-imipenem (p < 0.05). The combination of colistin with carbapenems could be a promising antimicrobial strategy in...
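
    The FICI referred to above is computed from the checkerboard MICs as FIC(A) + FIC(B), where FIC(A) is the MIC of drug A in combination divided by the MIC of drug A alone (and likewise for B). A minimal sketch with commonly used interpretive cut-offs (exact thresholds vary slightly between studies, and the MIC values shown are illustrative):

```python
# Fractional Inhibitory Concentration Index from checkerboard MICs
# (MIC values below are illustrative; cut-offs vary slightly between studies).
def fici(mic_a_alone, mic_b_alone, mic_a_combo, mic_b_combo):
    return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

def interpret(value):
    if value <= 0.5:
        return "synergy"
    if value <= 1.0:
        return "additivity"
    if value <= 4.0:
        return "indifference"
    return "antagonism"

print(interpret(fici(mic_a_alone=8, mic_b_alone=2, mic_a_combo=2, mic_b_combo=1)))
# -> "additivity" (FICI = 0.25 + 0.5 = 0.75)
```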

  15. In vitro Evaluation of the Colistin-Carbapenem Combination in Clinical Isolates of A. baumannii Using the Checkerboard, Etest, and Time-Kill Curve Techniques

    Directory of Open Access Journals (Sweden)

    Micheline A. H. Soudeiha

    2017-05-01

    Full Text Available The worldwide increase in the emergence of carbapenem resistant Acinetobacter baumannii (CRAB) calls for the investigation into alternative approaches for treatment. This study aims to evaluate colistin-carbapenem combinations against Acinetobacter spp., in order to potentially reduce the need for high concentrations of antibiotics in therapy. This study was conducted on 100 non-duplicate Acinetobacter isolates that were collected from different patients admitted at Saint George Hospital-University Medical Center in Beirut. The isolates were identified using API 20NE strips, which contain the necessary agents to cover a panel of biochemical tests, and confirmed by PCR amplification of blaOXA−51−like. Activities of colistin, meropenem and imipenem against Acinetobacter isolates were determined by ETEST and microdilution methods, and interpreted according to the guidelines of the Clinical and Laboratory Standards Institute. In addition, PCR amplifications of the most common beta lactamases contributing to carbapenem resistance were performed. Tri locus PCR–typing was also performed to determine the international clonality of the isolates. Checkerboard, ETEST and time kill curves were then performed to determine the effect of the colistin-carbapenem combinations. The synergistic potential of the combination was then determined by calculating the Fractional Inhibitory Concentration Index (FICI), which is an index that indicates additivity, synergism, or antagonism between the antimicrobial agents. In this study, 84% of the isolates were resistant to meropenem, 78% to imipenem, and only one strain was resistant to colistin. 79% of the isolates harbored blaOXA−23−like and pertained to the International Clone II. An additive effect for the colistin-carbapenem combination was observed using all three methods. The combination of colistin-meropenem showed better effects as compared to colistin-imipenem (p < 0.05). The colistin-meropenem and

  16. Sensitivity Analysis of Electromagnetic Induction Technique to Determine Soil Salinity in Large –Scale

    Directory of Open Access Journals (Sweden)

    Yousef Hasheminejhad

    2017-02-01

    Full Text Available Introduction: Monitoring and management of saline soils depend on accurate and updatable measurements of soil electrical conductivity. Large-scale direct measurements are not only expensive but also time consuming. The application of near-surface sensors can therefore be considered an acceptable time- and cost-saving approach with high accuracy in soil salinity detection. One of these relatively innovative methods is the electromagnetic induction technique. Apparent soil electrical conductivity measured by electromagnetic induction is affected by several key soil properties, including soil moisture and clay content. Materials and Methods: Soil salinity and apparent soil electrical conductivity data from two years over a 50,000 ha area in the Sabzevar-Davarzan plain were used to evaluate the sensitivity of electromagnetic induction to soil moisture and clay content. Locations of the sampling points were determined by a Latin Hypercube Sampling strategy; 100 sampling points were selected for the first year and 25 sampling points for the second year. Owing to difficulties in locating and sampling the points, 97 of the first-year points were found in the area, of which 82 were sampled down to 90 cm depth in 30 cm intervals, and all of them were measured with an electromagnetic induction device in horizontal orientation. The first-year data were used for training the model and included the 82 measurements of bulk conductivity together with laboratory determination of the electrical conductivity of the saturated extract, soil texture, and moisture content of the soil samples. The second-year data, used for testing the model, comprised 25 sampling points with 9 bulk conductivity measurements around each point. For the second-year samples, the electrical conductivity of the saturated extract was the only parameter measured in the laboratory. Results and Discussion: Results of the first year showed a
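
    The Latin Hypercube Sampling mentioned above stratifies each variable's range so that every stratum is sampled exactly once; spatial soil-survey applications typically use a conditioned variant (cLHS) driven by ancillary covariates, which is not reproduced here. A basic, generic sketch on the unit hypercube:

```python
# Basic Latin Hypercube Sampling on the unit hypercube (generic sketch).
import numpy as np

def latin_hypercube(n, k, seed=0):
    """n samples in k dimensions; each column takes exactly one value per stratum."""
    rng = np.random.default_rng(seed)
    u = (rng.random((n, k)) + np.arange(n)[:, None]) / n   # one draw per stratum
    for j in range(k):
        u[:, j] = rng.permutation(u[:, j])                  # shuffle strata per column
    return u

print(latin_hypercube(5, 2))
```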

  17. Chromatographic techniques used in the laboratory scale fractionation and purification of plasma

    International Nuclear Information System (INIS)

    Siti Najila Mohd Janib; Wan Hamirul Bahrin Wan Kamal; Shaharuddin Mohd

    2004-01-01

    Chromatography is a powerful technique used in the separation as well as purification of proteins for use as biopharmaceuticals or medicines. Scientists use many different chromatographic techniques in biotechnology as they bring a molecule from its initial identification stage to the stage of it becoming a marketed product. The most commonly used of these techniques is liquid chromatography (LC). This technique can be used to separate the target molecule from undesired contaminants, as well as to analyse the final product for the requisite purity as established by governmental regulatory groups such as the FDA. Some examples of LC techniques include: ion exchange (IEC), hydrophobic interaction (HIC), gel filtration (GF), affinity (AC) and reverse phase (RPC) chromatography. These techniques are very versatile and can be used at any stage of the purification process i.e. capture, intermediate purification phase and polishing. The choice of a particular technique is dependent upon the nature of the target protein as well as its intended final use. This paper describes the preliminary work done on the chromatographic purification of factor VIII (FVIII), factor IX (FIX), albumin and IgG from plasma. Results, in particular, in the isolation of albumin and IgG using IEC, have been promising. Preparation and production of cryoprecipitate to yield FVIII and FIX have also been successful. (Author)

  18. Trace contaminant determination in fish scale by laser-ablation technique

    International Nuclear Information System (INIS)

    Lee, I.; Coutant, C.C.; Arakawa, E.T.

    1993-01-01

    Laser ablation on rings of fish scale has been used to analyze the historical accumulation of polychlorinated biphenyls (PCB) in striped bass in the Watts Bar Reservoir. Rings on a fish scale grow in a pattern that forms a record of the fish's chemical intake. In conjunction with the migration patterns of fish monitored by ecologists, relative PCB concentrations in the seasonal rings of fish scale can be used to study the PCB distribution in the reservoir. In this study, a tightly-focused laser beam from a XeCl excimer laser was used to ablate and ionize a small portion of a fish scale placed in a vacuum chamber. The ions were identified and quantified by a time-of-flight mass spectrometer. Studies of this type can provide valuable information for the Department of Energy (DOE) off-site clean-up efforts as well as identifying the impacts of other sources to local aquatic populations

  19. Cross-section library and processing techniques within the SCALE system

    International Nuclear Information System (INIS)

    Westfall, R.M.

    1986-01-01

    A summary of each of the SCALE system features involved in problem-dependent cross section processing is presented. These features include criticality libraries, shielding libraries, the Standard Composition Library, the SCALE functional modules: BONAMI-S, NITAWL-S, XSDRNPM-S, ICE-S, and the Material Information Processor. The automated procedure for cross-section processing is described with examples. 15 refs

  20. The Visual Analogue Scale for Rating, Ranking and Paired-Comparison (VAS-RRP): A new technique for psychological measurement.

    Science.gov (United States)

    Sung, Yao-Ting; Wu, Jeng-Shin

    2018-04-17

    Traditionally, the visual analogue scale (VAS) has been proposed to overcome the limitations of ordinal measures from Likert-type scales. However, the function of VASs to overcome the limitations of response styles to Likert-type scales has not yet been addressed. Previous research using ranking and paired comparisons to compensate for the response styles of Likert-type scales has suffered from limitations, such as that the total score of ipsative measures is a constant that cannot be analyzed by means of many common statistical techniques. In this study we propose a new scale, called the Visual Analogue Scale for Rating, Ranking, and Paired-Comparison (VAS-RRP), which can be used to collect rating, ranking, and paired-comparison data simultaneously, while avoiding the limitations of each of these data collection methods. The characteristics, use, and analytic method of VAS-RRPs, as well as how they overcome the disadvantages of Likert-type scales, ranking, and VASs, are discussed. On the basis of analyses of simulated and empirical data, this study showed that VAS-RRPs improved reliability, response style bias, and parameter recovery. Finally, we have also designed a VAS-RRP Generator for researchers' construction and administration of their own VAS-RRPs.

  1. Volume changes at macro- and nano-scale in epoxy resins studied by PALS and PVT experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Somoza, A. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina) and CICPBA, Pinto 399, B7000GHG Tandil (Argentina)]. E-mail: asomoza@exa.unicen.edu.ar; Salgueiro, W. [IFIMAT-UNCentro, Pinto 399, B7000GHG Tandil (Argentina); Goyanes, S. [LPMPyMC, Depto. de Fisica, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Ciudad Universitaria, Pabellon I, 1428 Buenos Aires (Argentina); Ramos, J. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain); Mondragon, I. [Materials and Technology Group, Departamento de Ingenieria Quimica y M. Ambiente, Escuela University Politecnica, Universidad Pais Vasco/Euskal Herriko Unibertsitatea, Pz. Europa 1, 20018 Donostia/San Sebastian (Spain)

    2007-02-15

    A systematic study on changes in the volumes at macro- and nano-scale in epoxy systems cured with selected aminic hardeners at different pre-cure temperatures is presented. Free- and macroscopic specific-volumes were measured by PALS and pressure-volume-temperature techniques, respectively. An analysis of the relation existing between macro- and nano-scales of the thermosetting networks developed by the different chemical structures is shown. The result obtained indicates that the structure of the hardeners governs the packing of the molecular chains of the epoxy network.

  2. Theoretical foundations for environmental Kuznets curve analysis

    Science.gov (United States)

    Lantz, Van

    This thesis provides a dynamic theory for analyzing the paths of aggregate output and pollution in a country over time. An infinite horizon, competitive growth-pollution model is explored in order to determine the role that economic scale, production techniques, and pollution regulations play in explaining the inverted U-shaped relationship between output and some forms of pollution (otherwise known as the Environmental Kuznets Curve, or EKC). Results indicate that the output-pollution relationship may follow a strictly increasing, strictly decreasing (but bounded), inverted U-shaped, or some combination of curves. While the 'scale' effect may cause output and pollution to exhibit a monotonic relationship, 'technique' and 'regulation' effects may ultimately cause a de-linking of these two variables. Pollution-minimizing energy regulation policies are also investigated within this framework. It is found that the EKC may be 'flattened' or even eliminated moving from a poorly-regulated economy to one that minimizes pollution. The model is calibrated to the US economy for output (gross national product, GNP) and two pollutants (sulfur dioxide, SO2, and carbon dioxide, CO2) over the period 1900 to 1990. Results indicate that the model replicates the observations quite well. The predominance of 'scale' effects cause aggregate SO2 and CO2 levels to increase with GNP in the early stages of development. Then, in the case of SO2, 'technique' and 'regulation' effects may be the cause of falling SO2 levels with continued economic growth (establishing the EKC). CO2 continues to monotonically increase as output levels increase over time. The positive relationship may be due to the lack of regulations on this pollutant. If stricter regulation policies were instituted in the two case studies, an improved allocation of resources may result. While GNP may be 2.5% to 20% lower than what has been realized in the US economy (depending on the pollution variable analyzed), individual

  3. 51Cr - erythrocyte survival curves

    International Nuclear Information System (INIS)

    Paiva Costa, J. de.

    1982-07-01

    Sixteen subjects were studied: fifteen patients in a hemolytic state and one normal individual as a control. The aim was to obtain better techniques for the analysis of erythrocyte survival curves, according to the recommendations of the International Committee of Hematology. The radiochromium (51Cr) method was used as the tracer. A review of the international literature on aspects relevant to this work was first carried out, making it possible to establish comparisons and to clarify phenomena observed in our investigation. Several parameters were considered in this study, concerning both the exponential and the linear curves. Analysis of the erythrocyte survival curves in the studied group revealed that the elution factor did not give a quantitatively homogeneous response for all subjects, although the results of the analysis of these curves were established through programs run on an electronic calculator. (Author) [pt]

  4. Scales

    Science.gov (United States)

    Scales are a visible peeling or flaking of outer skin layers. These layers are called the stratum ... Scales may be caused by dry skin, certain inflammatory skin conditions, or infections. Examples of disorders that ...

  5. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    Energy Technology Data Exchange (ETDEWEB)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-15

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment. It is a priority to develop methods for scale inhibition. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase. Here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method where the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios as low as SR = 1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg of CaCO3 in a 1 cm section of the tube packed with silica sand (SiO2). A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)
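
    The gamma transmission method above is, at its core, an application of the Beer-Lambert law: the deposited mass per unit area follows from the drop in transmitted intensity once a mass attenuation coefficient is assumed. A hypothetical sketch (count rates and the attenuation coefficient are illustrative values only, not the paper's calibration):

```python
# Beer-Lambert estimate of deposited areal mass (all numbers illustrative).
import numpy as np

def deposited_areal_mass(I, I0, mu_rho):
    """Areal mass in g/cm^2 from I = I0 * exp(-mu_rho * m), mu_rho in cm^2/g."""
    return np.log(I0 / I) / mu_rho

# A 1% drop in transmitted count rate with an assumed mu_rho of 0.10 cm^2/g
# corresponds to roughly 0.1 g/cm^2 of deposit.
print(deposited_areal_mass(I=9.9e3, I0=1.0e4, mu_rho=0.10))
```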

  6. Applicability and sensitivity of gamma transmission and radiotracer techniques for mineral scaling studies

    International Nuclear Information System (INIS)

    Bjoernstad, Tor; Stamatakis, Emanuel

    2006-05-01

    Mineral scaling in petroleum and geothermal production systems creates a substantial problem of flow impairment. It is a priority to develop methods for scale inhibition. To study scaling rates and mechanisms in laboratory flow experiments under simulated reservoir conditions, two nuclear methods have been introduced and tested. The first applies the principle of gamma transmission to measure mass increase. Here, we use a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. We have used the study of CaCO3 precipitation as an example of the applicability of the method, where the main tracer used is 47Ca2+. While the first method must be regarded as an indirect method, the latter is a direct method where the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios as low as SR = 1.5 or lower. A lower limit of detection for the transmission method in sand-packed columns with otherwise reasonable experimental parameters is less than 1 mg of CaCO3 in a 1 cm section of the tube packed with silica sand (SiO2). A lower limit of detection for the tracer method with reasonable experimental parameters is less than 1 microgram in the same tube section. (author) (tk)

  7. ECM using Edwards curves

    DEFF Research Database (Denmark)

    Bernstein, Daniel J.; Birkner, Peter; Lange, Tanja

    2013-01-01

    -arithmetic level are as follows: (1) use Edwards curves instead of Montgomery curves; (2) use extended Edwards coordinates; (3) use signed-sliding-window addition-subtraction chains; (4) batch primes to increase the window size; (5) choose curves with small parameters and base points; (6) choose curves with large...
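
    For context, affine Edwards-curve arithmetic over Z/n (as used conceptually in ECM stage 1) looks as sketched below; the paper's actual speedups come from extended coordinates, signed sliding windows, and parameter/base-point choices, none of which are reproduced here. When a modular inverse fails, the gcd with n may already reveal a factor.

```python
# Minimal affine Edwards-curve arithmetic modulo n (requires Python >= 3.8 for
# pow(x, -1, n)). Curve: x^2 + y^2 = 1 + d*x^2*y^2, neutral element (0, 1).
from math import gcd

def edwards_add(P, Q, d, n):
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % n
    g = gcd((1 + t) * (1 - t) % n, n)
    if 1 < g < n:
        # A non-invertible denominator is the "success" event in ECM.
        raise ArithmeticError(f"nontrivial factor of n found: {g}")
    x3 = (x1 * y2 + y1 * x2) * pow(1 + t, -1, n) % n
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, n) % n
    return x3, y3

def scalar_mul(k, P, d, n):
    R = (0, 1)                           # neutral element
    while k:
        if k & 1:
            R = edwards_add(R, P, d, n)
        P = edwards_add(P, P, d, n)
        k >>= 1
    return R
```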

  8. Development of a novel once-through flow visualization technique for kinetic study of bulk and surface scaling

    Science.gov (United States)

    Sanni, O.; Bukuaghangin, O.; Huggan, M.; Kapur, N.; Charpentier, T.; Neville, A.

    2017-10-01

    There is considerable interest in investigating surface crystallization in order to gain a full mechanistic understanding of how layers of sparingly soluble salts (scale) build up on component surfaces. Despite much recent attention, a suitable methodology to improve the understanding of precipitation/deposition systems, and so enable the construction of an accurate surface deposition kinetic model, is still needed. In this work, an experimental flow rig and an associated methodology to study mineral scale deposition are developed. The once-through flow rig allows us to follow mineral scale precipitation and surface deposition in situ and in real time. The rig enables us to assess the effects of various parameters such as brine chemistry and scaling indices, temperature, flow rates, and scale inhibitor concentrations on scaling kinetics. Calcium carbonate (CaCO3) scaling at different values of the saturation ratio (SR) is evaluated using image analysis procedures that enable the assessment of surface coverage, nucleation, and growth of the particles with time. The turbidity measured in the flow cell is zero for all the SR values considered. The residence time from the mixing point to the sample is shorter than the induction time for bulk precipitation; therefore, there are no crystals in the bulk solution as the flow passes through the sample. The study shows that surface scaling is not always a result of pre-precipitated crystals in the bulk solution. The technique enables precipitation and surface deposition of scale to be decoupled and the surface deposition process to be studied in real time and assessed under constant conditions.
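
    The image-analysis quantity central to the kinetics here is the fraction of the viewed surface covered by deposited crystals as a function of time. A deliberately simple, hypothetical sketch using a fixed intensity threshold (the paper's actual image-processing procedure is not reproduced):

```python
# Illustrative surface-coverage estimate from grayscale frames.
import numpy as np

def surface_coverage(frame, threshold):
    """Fraction of pixels classified as scale (intensity above threshold)."""
    return float(np.mean(np.asarray(frame) > threshold))

def coverage_series(frames, threshold):
    return [surface_coverage(f, threshold) for f in frames]
```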

  9. Review of ultimate pressure capacity test of containment structure and scale model design techniques

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Jeong Moon; Choi, In Kil [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-03-01

    This study was performed to obtain basic knowledge of scaled model testing through a review of experimental studies conducted in foreign countries. The results of this study will be used for the wall segment test planned for next year. It was concluded from the previous studies that the larger the model, the greater the trust of the community in the obtained results. A scale of 1/4 to 1/6 is recommended as suitable considering the characteristics of the concrete, reinforcement, liner and tendons. Such large scale model tests require large amounts of time and budget. For these reasons, it is concluded that containment wall segment tests combined with analytical studies are an efficient way to verify the ultimate pressure capacity of containment structures. 57 refs., 46 figs., 11 tabs. (Author)

  10. Distributed and hierarchical control techniques for large-scale power plant systems

    International Nuclear Information System (INIS)

    Raju, G.V.S.; Kisner, R.A.

    1985-08-01

    In large-scale systems, integrated and coordinated control functions are required to maximize plant availability, to allow maneuverability through various power levels, and to meet externally imposed regulatory limitations. Nuclear power plants are large-scale systems. Prime subsystems are those that contribute directly to the behavior of the plant's ultimate output. The prime subsystems in a nuclear power plant include reactor, primary and intermediate heat transport, steam generator, turbine generator, and feedwater system. This paper describes and discusses the continuous-variable control system developed to supervise prime plant subsystems for optimal control and coordination

  11. Photographic and video techniques used in the 1/5-scale Mark I boiling water reactor pressure suppression experiment

    International Nuclear Information System (INIS)

    Dixon, D.; Lord, D.

    1978-01-01

    The report provides a description of the techniques and equipment used for the photographic and video recordings of the air test series conducted on the 1/5 scale Mark I boiling water reactor (BWR) pressure suppression experimental facility at Lawrence Livermore Laboratory (LLL) between March 4, 1977, and May 12, 1977. Lighting and water filtering are discussed in the photographic system section and are also applicable to the video system. The appendices contain information from the photographic and video camera logs

  12. The Use of System Codes in Scaling Studies: Relevant Techniques for Qualifying NPP Nodalizations for Particular Scenarios

    Directory of Open Access Journals (Sweden)

    V. Martinez-Quiroga

    2014-01-01

    Full Text Available System codes along with necessary nodalizations are valuable tools for thermal hydraulic safety analysis. Qualifying both codes and nodalizations is an essential step prior to their use in any significant study involving code calculations. Since most existing experimental data come from tests performed on the small scale, any qualification process must therefore address scale considerations. This paper describes the methodology developed at the Technical University of Catalonia in order to contribute to the qualification of Nuclear Power Plant nodalizations by means of scale disquisitions. The techniques that are presented include the so-called Kv-scaled calculation approach as well as the use of “hybrid nodalizations” and “scaled-up nodalizations.” These methods have revealed themselves to be very helpful in producing the required qualification and in promoting further improvements in nodalization. The paper explains both the concepts and the general guidelines of the method, while an accompanying paper will complete the presentation of the methodology as well as showing the results of the analysis of scaling discrepancies that appeared during the posttest simulations of PKL-LSTF counterpart tests performed on the PKL-III and ROSA-2 OECD/NEA Projects. Both articles together produce the complete description of the methodology that has been developed in the framework of the use of NPP nodalizations in the support to plant operation and control.

  13. Fractal scaling behavior of heart rate variability in response to meditation techniques

    International Nuclear Information System (INIS)

    Alvarez-Ramirez, J.; Rodríguez, E.; Echeverría, J.C.

    2017-01-01

    Highlights: • The scaling properties of heart rate variability in premeditation and meditation states were studied. • Mindfulness meditation induces a decrement of the HRV long-range scaling correlations. • Mindfulness meditation can be regarded as a type of induced deep sleep-like dynamics. - Abstract: The rescaled range (R/S) analysis was used for analyzing the fractal scaling properties of heart rate variability (HRV) of subjects undergoing premeditation and meditation states. Eight novice subjects and four advanced practitioners were considered. The corresponding pre-meditation and meditation HRV data were obtained from the Physionet database. The results showed that mindfulness meditation induces a decrement of the HRV long-range scaling correlations as quantified with the time-variant Hurst exponent. The Hurst exponent for advanced meditation practitioners decreases to values of about 0.5, reflecting uncorrelated (e.g., white noise-like) HRV dynamics. Some parallelisms between mindfulness meditation and deep sleep (Stage 4) are discussed, suggesting that the former can be regarded as a type of induced deep sleep-like dynamics.
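
    For reference, the rescaled range (R/S) estimate of the Hurst exponent used in the study works roughly as follows: divide the series into windows, compute the range of the mean-adjusted cumulative sum divided by the standard deviation in each window, and fit the slope of log(R/S) against log(window size). A basic sketch (the paper's time-variant estimator and preprocessing of the RR-interval series are not reproduced):

```python
# Basic rescaled range (R/S) estimate of the Hurst exponent.
import numpy as np

def rescaled_range(x):
    y = np.cumsum(x - np.mean(x))        # mean-adjusted cumulative sum
    s = np.std(x)
    return (y.max() - y.min()) / s if s > 0 else np.nan

def hurst_rs(x, window_sizes=(16, 32, 64, 128, 256)):
    """Slope of log(mean R/S) versus log(window size)."""
    x = np.asarray(x, dtype=float)
    logw, logrs = [], []
    for w in window_sizes:
        rs = [rescaled_range(x[i:i + w]) for i in range(0, len(x) - w + 1, w)]
        rs = [v for v in rs if np.isfinite(v)]
        if rs:
            logw.append(np.log(w))
            logrs.append(np.log(np.mean(rs)))
    slope, _ = np.polyfit(logw, logrs, 1)
    return slope

print(hurst_rs(np.random.default_rng(0).standard_normal(4096)))  # close to 0.5
```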

  14. Validity limits in J-resistance curve determination: A computational approach to ductile crack growth under large-scale yielding conditions. Volume 2

    International Nuclear Information System (INIS)

    Shih, C.F.; Xia, L.; Hutchinson, J.W.

    1995-02-01

    In this report, Volume 2, Mode I crack initiation and growth under plane strain conditions in tough metals are computed using an elastic/plastic continuum model which accounts for void growth and coalescence ahead of the crack tip. The material parameters include the stress-strain properties, along with the parameters characterizing the spacing and volume fraction of voids in material elements lying in the plane of the crack. For a given set of these parameters and a specific specimen, or component, subject to a specific loading, relationships among load, load-line displacement and crack advance can be computed with no restrictions on the extent of plastic deformation. Similarly, there is no limit on crack advance, except that it must take place on the symmetry plane ahead of the initial crack. Suitably defined measures of crack tip loading intensity, such as those based on the J-integral, can also be computed, thereby directly generating crack growth resistance curves. In this report, the model is applied to five specimen geometries which are known to give rise to significantly different crack tip constraints and crack growth resistance behaviors. Computed results are compared with sets of experimental data for two tough steels for four of the specimen types. Details of the load, displacement and crack growth histories are accurately reproduced, even when extensive crack growth takes place under conditions of fully plastic yielding. A description of material resistance to crack initiation and subsequent growth is essential for assessing structural integrity such as nuclear pressure vessels and piping

  15. Mapping patient safety : A large-scale literature review using bibliometric visualisation techniques

    NARCIS (Netherlands)

    Rodrigues, S.P.; Van Eck, N.J.; Waltman, L.; Jansen, F.W.

    2014-01-01

    Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview

  16. Vis-A-Plan /visualize a plan/ management technique provides performance-time scale

    Science.gov (United States)

    Ranck, N. H.

    1967-01-01

    Vis-A-Plan is a bar-charting technique for representing and evaluating project activities on a performance-time basis. This rectilinear method presents the logic diagram of a project as a series of horizontal time bars. It may be used supplementary to PERT or independently.

  17. Bridging the scales in atmospheric composition simulations using a nudging technique

    Science.gov (United States)

    D'Isidoro, Massimo; Maurizi, Alberto; Russo, Felicita; Tampieri, Francesco

    2010-05-01

    Studying the interaction between climate and anthropogenic activities, specifically those concentrated in megacities/hot spots, requires the description of processes over a very wide range of scales, from the local scale, where anthropogenic emissions are concentrated, to the global scale, where we are interested in studying the impact of these sources. Describing all the processes at all scales within the same numerical implementation is not feasible because of limited computer resources. Therefore, different phenomena are studied by means of different numerical models that cover different ranges of scales. The exchange of information from small to large scales is highly non-trivial, though of high interest. In fact, uncertainties in large scale simulations are expected to receive a large contribution from the most polluted areas, where the highly inhomogeneous distribution of sources, combined with the intrinsic non-linearity of the processes involved, can generate non-negligible departures between coarse and fine scale simulations. In this work a new method is proposed and investigated in a case study (August 2009) using the BOLCHEM model. Monthly simulations at coarse (0.5°, European domain, run A) and fine (0.1°, Central Mediterranean domain, run B) horizontal resolution are performed, using the coarse resolution as the boundary condition for the fine one. Then another coarse resolution run (run C) is performed, in which the high resolution fields remapped onto the coarse grid are used to nudge the concentrations over the Po Valley area. The nudging is applied to all gas and aerosol species of BOLCHEM. Averaged concentrations and variances over the Po Valley and other selected areas for O3 and PM are computed. It is observed that although the variance of run B is markedly larger than that of run A, the variance of run C is smaller, because the remapping procedure removes a large portion of the variance from the run B fields. Mean concentrations show some differences depending on species: in general mean
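
    The nudging used in run C is Newtonian relaxation: inside the chosen sub-domain, the coarse-grid concentrations are pulled toward the remapped high-resolution fields with a prescribed relaxation time. A minimal, hypothetical sketch (field names, mask, and the relaxation time are illustrative, not BOLCHEM's implementation):

```python
# Newtonian relaxation ("nudging") of a coarse-grid field toward a target field.
import numpy as np

def nudge(c_coarse, c_target, mask, dt, tau):
    """One time step of relaxation, applied only where mask is True."""
    c = c_coarse.copy()
    c[mask] += (dt / tau) * (c_target[mask] - c[mask])
    return c

# Example: relax a 2-D tracer field over a rectangular sub-domain.
c = np.ones((10, 10))
target = 2.0 * np.ones((10, 10))
mask = np.zeros((10, 10), dtype=bool)
mask[3:6, 2:7] = True
c = nudge(c, target, mask, dt=600.0, tau=3600.0)
```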

  18. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two component oil/water or three component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils and specially designed electronics were used. The medium was composed by air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame to this technique. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the

  19. Central composite design with the help of multivariate curve resolution in loadability optimization of RP-HPLC to scale-up a binary mixture.

    Science.gov (United States)

    Taheri, Mohammadreza; Moazeni-Pourasil, Roudabeh Sadat; Sheikh-Olia-Lavasani, Majid; Karami, Ahmad; Ghassempour, Alireza

    2016-03-01

    Chromatographic method development for preparative targets is a time-consuming and subjective process. This can be particularly problematic because of the use of valuable samples for isolation and the large consumption of solvents in preparative scale. These processes could be improved by using statistical computations to save time, solvent and experimental efforts. Thus, contributed by ESI-MS, after applying DryLab software to gain an overview of the most effective parameters in separation of synthesized celecoxib and its co-eluted compounds, design of experiment software that relies on multivariate modeling as a chemometric approach was used to predict the optimized touching-band overloading conditions by objective functions according to the relationship between selectivity and stationary phase properties. The loadability of the method was investigated on the analytical and semi-preparative scales, and the performance of this chemometric approach was approved by peak shapes beside recovery and purity of products. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Fabrication Of Atomic-scale Gold Junctions By Electrochemical Plating Technique Using A Common Medical Disinfectant

    Science.gov (United States)

    Umeno, Akinori; Hirakawa, Kazuhiko

    2005-06-01

    Iodine tincture, a medical liquid familiar as a disinfectant, was introduced as an etching/deposition electrolyte for the fabrication of nanometer-separated gold electrodes. In the iodine tincture with dissolved gold, the gold electrodes were grown or eroded slowly on the atomic scale, enough to form quantum point contacts. The resistance evolution during the electrochemical deposition showed plateaus at integer multiples of the resistance quantum, (2e^2/h)^(-1), at room temperature. The iodine tincture is a commercially available common material, which makes the fabrication process simple and cost effective. Moreover, in contrast to conventional electrochemical approaches, this method is free from highly toxic cyanide compounds and extraordinarily strong acids. We expect this method to be a useful interface between single-molecular-scale structures and macroscopic opto-electronic devices.
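
    The resistance quantum referred to above is (2e^2/h)^(-1) ≈ 12.9 kΩ; an n-channel ballistic contact has conductance n·G0 with G0 = 2e^2/h and hence resistance (n·G0)^(-1). A quick numerical check of those values:

```python
# Quantized conductance/resistance values for an n-channel ballistic gold contact.
e = 1.602176634e-19    # elementary charge, C
h = 6.62607015e-34     # Planck constant, J s
G0 = 2.0 * e**2 / h    # conductance quantum, ~7.748e-5 S
for n in range(1, 5):
    print(n, 1.0 / (n * G0))   # ~12.9 kOhm, 6.45 kOhm, 4.30 kOhm, 3.23 kOhm
```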

  1. Normal-Mode Analysis of Circular DNA at the Base-Pair Level. 2. Large-Scale Configurational Transformation of a Naturally Curved Molecule.

    Science.gov (United States)

    Matsumoto, Atsushi; Tobias, Irwin; Olson, Wilma K

    2005-01-01

    Fine structural and energetic details embedded in the DNA base sequence, such as intrinsic curvature, are important to the packaging and processing of the genetic material. Here we investigate the internal dynamics of a 200 bp closed circular molecule with natural curvature using a newly developed normal-mode treatment of DNA in terms of neighboring base-pair "step" parameters. The intrinsic curvature of the DNA is described by a 10 bp repeating pattern of bending distortions at successive base-pair steps. We vary the degree of intrinsic curvature and the superhelical stress on the molecule and consider the normal-mode fluctuations of both the circle and the stable figure-8 configuration under conditions where the energies of the two states are similar. To extract the properties due solely to curvature, we ignore other important features of the double helix, such as the extensibility of the chain, the anisotropy of local bending, and the coupling of step parameters. We compare the computed normal modes of the curved DNA model with the corresponding dynamical features of a covalently closed duplex of the same chain length constructed from naturally straight DNA and with the theoretically predicted dynamical properties of a naturally circular, inextensible elastic rod, i.e., an O-ring. The cyclic molecules with intrinsic curvature are found to be more deformable under superhelical stress than rings formed from naturally straight DNA. As superhelical stress is accumulated in the DNA, the frequency, i.e., energy, of the dominant bending mode decreases in value, and if the imposed stress is sufficiently large, a global configurational rearrangement of the circle to the figure-8 form takes place. We combine energy minimization with normal-mode calculations of the two states to decipher the configurational pathway between the two states. We also describe and make use of a general analytical treatment of the thermal fluctuations of an elastic rod to characterize the

  2. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    Energy Technology Data Exchange (ETDEWEB)

    Maekawa, Y., E-mail: maekawa.yasunari@jaea.go.jp [Japan Atomic Energy Agency (JAEA), Quantum Beam Science Directorate, High Performance Polymer Group, 1233 Watanuki-Machi, Takasaki, Gunma-ken 370-1292 (Japan)

    2010-07-01

    The introduction of functional regions on the nanometer scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to building nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the resulting polymers with nano structures can be applied to high performance polymer membranes; 2) fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer film. Hydrophilic grafting polymers are introduced into hydrophobic fluorinated polymers, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which is known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analyzed by a small angle neutron scattering (SANS) experiment. From these analyses, different structures and different formation of graft domains were observed in the fluorinated and hydrocarbon polymer substrates. The grafted domains in the cPTFE film, working as ion channels, grew so as to cover the crystallites, and the size of the domains seems to be similar to that of the crystallites. On the other hand, the PEEK-based PEM has a smaller domain size, which seems to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the energy distribution in the radial direction, perpendicular to the ion trajectory, is the main concern. For the penumbra, we re-estimated the effective radius in which radiation-induced grafting took place for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers. The channel sizes were in quite good agreement with the effective penumbra region receiving absorbed doses of more than 1 kGy. (author)

  3. Introduction of Functional Structures in Nano-Scales into Engineering Polymer Films Using Radiation Technique

    International Nuclear Information System (INIS)

    Maekawa, Y.

    2010-01-01

    The introduction of functional regions on the nanometer scale into polymeric films using γ-rays, EB, and ion beams is proposed. Two approaches to building nano-scale functional domains in polymer substrates are proposed: 1) radiation-induced grafting to transfer nano-scale polymer crystalline structures (morphology), acting as a nano-template, to nano-scale graft polymer regions; the resulting polymers with nano structures can be applied to high performance polymer membranes; 2) fabrication of nanopores and functional domains in engineering plastic films using ion beams, which deposit their energy in a very narrow region of the polymer film. Hydrophilic grafting polymers are introduced into hydrophobic fluorinated polymers, cross-linked PTFE (cPTFE), and an aromatic hydrocarbon polymer, poly(ether ether ketone) (PEEK), which is known to have lamellae and crystallites in the polymer films. The hierarchical structures of the graft domains are then analyzed by a small angle neutron scattering (SANS) experiment. From these analyses, different structures and different formation of graft domains were observed in the fluorinated and hydrocarbon polymer substrates. The grafted domains in the cPTFE film, working as ion channels, grew so as to cover the crystallites, and the size of the domains seems to be similar to that of the crystallites. On the other hand, the PEEK-based PEM has a smaller domain size, which seems to grow independently of the crystallites of the PEEK substrate. For nano-fabrication of polymer films using heavy ion beams, the energy distribution in the radial direction, perpendicular to the ion trajectory, is the main concern. For the penumbra, we re-estimated the effective radius in which radiation-induced grafting took place for several different ion beams. We observed different diameters of the ion channels consisting of graft polymers. The channel sizes were in quite good agreement with the effective penumbra region receiving absorbed doses of more than 1 kGy. (author)

  4. Synthesis of fish scales gelatin-chitosan crosslinked films by gamma irradiation techniques

    International Nuclear Information System (INIS)

    Erizal; Perkasa, D.P.; Abbas, B.; Sulistioso, G.S.

    2013-01-01

    Gelatin is an important component of fish scales. Nowadays, attention concerning the application of gelatin has increased. The aim of this research was to improve the mechanical properties of gelatin produced from fish scales, which could concurrently increase the usefulness of fish scales. Gelatin (G) is prone to degrade or dissolve in water at room temperature; therefore, to enhance its lifetime, it has to be modified with another compound such as chitosan. Chitosan (Cs) is a biodegradable polymer with biocompatibility and antibacterial properties. In this study, gelatin solution was mixed with chitosan solution in various ratios (G/Cs: 100/0, 75/25, 50/50, 25/75, 0/100) and cast at room temperature to make composite films, which were then used to test the effectiveness of various gamma irradiation doses (10-40 kGy) for crosslinking the two polymers. Chemical changes of the films were measured by FT-IR, gel fractions were determined by gravimetry, and mechanical properties were determined as tensile strength and elongation at break using a universal testing machine. At the optimum conditions (30 kGy and 75% Cs), the gel fraction, tensile strength, and elongation at break were higher, leading to stronger composite films compared with the gelatin film. FTIR spectral analysis showed that gelatin and chitosan formed a crosslinked network. It was concluded that G-Cs films prepared by gamma irradiation have better mechanical properties than gelatin itself. (author)

  5. Mechanisms of mineral scaling in oil and geothermal wells studied in laboratory experiments by nuclear techniques

    International Nuclear Information System (INIS)

    Bjoernstad, T.; Stamatakis, E.

    2006-01-01

    Two independent nuclear methods have been developed and tested for studies of mineral scaling mechanisms and kinetics related to the oil and geothermal industry. The first is a gamma transmission method to measure mass increase with a 30 MBq source of 133Ba. The other method applies radioactive tracers of one or more of the scaling components. CaCO3 precipitation has been used as an example here, where the main tracer has been 47Ca2+. While the transmission method is an indirect method, the latter is a direct method where the reactions of specific components may be studied. Both methods are on-line, continuous and non-destructive, and capable of studying scaling from liquids with saturation ratios approaching the solubility product. A lower limit for detection of CaCO3 with the transmission method in sand-packed columns with otherwise reasonable experimental parameters is estimated to be < 1 mg in a 1 cm section of the tube packed with silica sand, while the lower limit of detection for the tracer method with reasonable experimental parameters is estimated to be < 1 μg in the same tube section. (author)

  6. Microarray Data Processing Techniques for Genome-Scale Network Inference from Large Public Repositories.

    Science.gov (United States)

    Chockalingam, Sriram; Aluru, Maneesha; Aluru, Srinivas

    2016-09-19

    Pre-processing of microarray data is a well-studied problem. Furthermore, all popular platforms come with their own recommended best practices for differential analysis of genes. However, for genome-scale network inference using microarray data collected from large public repositories, these methods filter out a considerable number of genes. This is primarily due to the effects of aggregating a diverse array of experiments with different technical and biological scenarios. Here we introduce a pre-processing pipeline suitable for inferring genome-scale gene networks from large microarray datasets. We show that partitioning of the available microarray datasets according to biological relevance into tissue- and process-specific categories significantly extends the limits of downstream network construction. We demonstrate the effectiveness of our pre-processing pipeline by inferring genome-scale networks for the model plant Arabidopsis thaliana using two different construction methods and a collection of 11,760 Affymetrix ATH1 microarray chips. Our pre-processing pipeline and the datasets used in this paper are made available at http://alurulab.cc.gatech.edu/microarray-pp.

  7. Bench Scale Treatability Studies of Contaminated Soil Using Soil Washing Technique

    OpenAIRE

    Gupta, M. K.; Srivastava, R. K.; Singh, A. K.

    2010-01-01

    Soil contamination is one of the most widespread and serious environmental problems confronting both the industrialized as well as developing nations like India. Different contaminants have different physicochemical properties, which influence the geochemical reactions induced in the soils and may bring about changes in their engineering and environmental behaviour. Several technologies exist for the remediation of contaminated soil and water. In the present study soil washing technique using...

  8. Coarse-grain bandwidth estimation techniques for large-scale network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, E.

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.

  9. Structures and Techniques For Implementing and Packaging Complex, Large Scale Microelectromechanical Systems Using Foundry Fabrication Processes.

    Science.gov (United States)

    1996-06-01

    (No abstract available; the record excerpt consists of fragments of the report's list of figures, e.g. Figure 5-27, mechanical interference between 'Pull Spring' devices, and Figure 5-28, array of LIGA mechanical relay switches, together with fragments of its list of abbreviations: DMD, Digital Micromirror Device; EDP, ethylene diamine pyrocatechol; MOSIS, MOS Implementation Service; PGA, pin grid array; PZT, lead-zirconate-titanate; LIGA, Lithographie.)

  10. Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network

    Science.gov (United States)

    Cheung, Kar-Ming; Jennings, Esther

    2013-01-01

    In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-andforward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
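
    The sketch below shows one way a top-down, coarse-grain bandwidth sizing of the kind described above could look: for each data type, the required link rate is driven by the data volume per pass and its latency requirement. All mission numbers, data types and the margin factor are invented for illustration and are not taken from the SCaN analysis.

```python
# Minimal sketch, assuming per-pass volumes (megabits) and latency requirements (seconds).
data_types = [
    ("telemetry",      2_000.0,    300.0),
    ("science_bulk", 800_000.0, 86_400.0),
    ("voice_video",   40_000.0,    600.0),
]

margin = 2.0   # design margin factor (assumption)
required = {name: margin * volume / latency for name, volume, latency in data_types}

total_mbps = sum(required.values())
for name, rate in required.items():
    print(f"{name}: {rate:.2f} Mbps")
print(f"sized WAN ground-link bandwidth: {total_mbps:.2f} Mbps")
```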

  11. Ex vivo activity quantification in micrometastases at the cellular scale using the α-camera technique

    DEFF Research Database (Denmark)

    Chouin, Nicolas; Lindegren, Sture; Frost, Sofia H L

    2013-01-01

    Targeted α-therapy (TAT) appears to be an ideal therapeutic technique for eliminating malignant circulating, minimal residual, or micrometastatic cells. These types of malignancies are typically infraclinical, complicating the evaluation of potential treatments. This study presents a method of ex vivo activity quantification with an α-camera device, allowing measurement of the activity taken up by tumor cells in biologic structures a few tens of microns in size.

  12. POC-scale testing of an advanced fine coal dewatering equipment/technique

    Energy Technology Data Exchange (ETDEWEB)

    Groppo, J.G.; Parekh, B.K. [Univ. of Kentucky, Lexington, KY (United States); Rawls, P. [Department of Energy, Pittsburgh, PA (United States)

    1995-11-01

    Froth flotation is an effective and efficient process for recovering ultra-fine (minus 74 μm) clean coal. Economical dewatering of an ultra-fine clean coal product to a moisture level of 20 percent will be an important step in successful implementation of the advanced cleaning processes. This project is a step in the Department of Energy's program to show that ultra-clean coal can be effectively dewatered to 20 percent or lower moisture using either conventional or advanced dewatering techniques. As the contract title suggests, the main focus of the program is on proof-of-concept testing of a dewatering technique for a fine clean coal product. The coal industry is reluctant to use the advanced fine coal recovery technology due to the non-availability of an economical dewatering process. In fact, in a recent survey conducted by U.S. DOE and Battelle, dewatering of fine clean coal was identified as the number one priority for the coal industry. This project will attempt to demonstrate an efficient and economic fine clean coal slurry dewatering process.

  13. Evaluation of different downscaling techniques for hydrological climate-change impact studies at the catchment scale

    Energy Technology Data Exchange (ETDEWEB)

    Teutschbein, Claudia [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Wetterhall, Fredrik [King' s College London, Department of Geography, Strand, London (United Kingdom); Swedish Meteorological and Hydrological Institute, Norrkoeping (Sweden); Seibert, Jan [Stockholm University, Department of Physical Geography and Quaternary Geology, Stockholm (Sweden); Uppsala University, Department of Earth Sciences, Uppsala (Sweden); University of Zurich, Department of Geography, Zurich (Switzerland)

    2011-11-15

    Hydrological modeling for climate-change impact assessment implies using meteorological variables simulated by global climate models (GCMs). Due to mismatching scales, coarse-resolution GCM output cannot be used directly for hydrological impact studies but rather needs to be downscaled. In this study, we investigated the variability of seasonal streamflow and flood-peak projections caused by the use of three statistical approaches to downscale precipitation from two GCMs for a meso-scale catchment in southeastern Sweden: (1) an analog method (AM), (2) a multi-objective fuzzy-rule-based classification (MOFRBC) and (3) the Statistical DownScaling Model (SDSM). The obtained higher-resolution precipitation values were then used to simulate daily streamflow for a control period (1961-1990) and for two future emission scenarios (2071-2100) with the precipitation-streamflow model HBV. The choice of downscaled precipitation time series had a major impact on the streamflow simulations, which was directly related to the ability of the downscaling approaches to reproduce observed precipitation. Although SDSM was considered to be most suitable for downscaling precipitation in the studied river basin, we highlighted the importance of an ensemble approach. The climate and streamflow change signals indicated that the current flow regime with a snowmelt-driven spring flood in April will likely change to a flow regime that is rather dominated by large winter streamflows. Spring flood events are expected to decrease considerably and occur earlier, whereas autumn flood peaks are projected to increase slightly. The simulations demonstrated that projections of future streamflow regimes are highly variable and can even partly point towards different directions. (orig.)

  14. Chemically intuited, large-scale screening of MOFs by machine learning techniques

    Science.gov (United States)

    Borboudakis, Giorgos; Stergiannakos, Taxiarchis; Frysali, Maria; Klontzas, Emmanuel; Tsamardinos, Ioannis; Froudakis, George E.

    2017-10-01

    A novel computational methodology for large-scale screening of MOFs is applied to gas storage with the use of machine learning technologies. This approach is a promising trade-off between the accuracy of ab initio methods and the speed of classical approaches, strategically combined with chemical intuition. The results demonstrate that the chemical properties of MOFs are indeed predictable (stochastically, not deterministically) using machine learning methods and automated analysis protocols, with the accuracy of predictions increasing with sample size. Our initial results indicate that this methodology is promising to apply not only to gas storage in MOFs but in many other material science projects.

  15. Towards large-scale FAME-based bacterial species identification using machine learning techniques.

    Science.gov (United States)

    Slabbinck, Bram; De Baets, Bernard; Dawyndt, Peter; De Vos, Paul

    2009-05-01

    In the last decade, bacterial taxonomy witnessed a huge expansion. The swift pace of bacterial species (re-)definitions has a serious impact on the accuracy and completeness of first-line identification methods. Consequently, back-end identification libraries need to be synchronized with the List of Prokaryotic names with Standing in Nomenclature. In this study, we focus on bacterial fatty acid methyl ester (FAME) profiling as a broadly used first-line identification method. From the BAME@LMG database, we have selected FAME profiles of individual strains belonging to the genera Bacillus, Paenibacillus and Pseudomonas. Only those profiles resulting from standard growth conditions have been retained. The corresponding data set covers 74, 44 and 95 validly published bacterial species, respectively, represented by 961, 378 and 1673 standard FAME profiles. Through the application of machine learning techniques in a supervised strategy, different computational models have been built for genus and species identification. Three techniques have been considered: artificial neural networks, random forests and support vector machines. Nearly perfect identification has been achieved at genus level. Notwithstanding the known limited discriminative power of FAME analysis for species identification, the computational models have resulted in good species identification results for the three genera. For Bacillus, Paenibacillus and Pseudomonas, random forests have resulted in sensitivity values of 0.847, 0.901 and 0.708, respectively. The random forest models outperform those of the other machine learning techniques. Moreover, our machine learning approach also outperformed the Sherlock MIS (MIDI Inc., Newark, DE, USA). These results show that machine learning proves very useful for FAME-based bacterial species identification. Besides good bacterial identification at species level, speed and ease of taxonomic synchronization are major advantages of this computational approach.
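
    The sketch below shows a minimal supervised FAME-profile classifier analogous in spirit to the random-forest models described above. The synthetic data, feature count and species labels are placeholders, not the BAME@LMG data used in the study.

```python
# Minimal sketch, assuming a table of FAME peak percentages and species labels (invented).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n_profiles, n_fatty_acids, n_species = 300, 25, 5   # hypothetical sizes
X = rng.random((n_profiles, n_fatty_acids))          # stand-in for FAME peak percentages
y = rng.integers(0, n_species, n_profiles)           # stand-in for species labels

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# "Sensitivity" per species is the per-class recall; macro-average it for an overall figure.
print(recall_score(y_test, clf.predict(X_test), average="macro"))
```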

  16. Ranking provinces based on development scale in agriculture sector using taxonomy technique

    Directory of Open Access Journals (Sweden)

    Shahram Rostampour

    2012-08-01

    Full Text Available The purpose of this paper is to determine a comparative ranking of agricultural development in different provinces of Iran using the taxonomy technique. The independent variables are annual rainfall, the number of permanent rivers, the extent of pastures and forests, the cultivated area of agricultural and garden crops, the number of beehives, the number of fish farming ranches, the number of tractors and combines, the number of cooperative production societies, and the number of industrial cattle breeding and aviculture units. The results indicate that the maximum development coefficient value is associated with Razavi Khorasan province, followed by Mazandaran and East Azarbayjan, while the minimum ranking value belongs to Bushehr province.
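
    The sketch below illustrates the general numerical-taxonomy ranking idea (a Hellwig-style development measure) referred to above, under the assumption that all indicators are stimulants. The province names and indicator values are invented for illustration and are not the paper's data.

```python
# Minimal sketch, assuming standardized indicators and a distance-to-ideal development measure.
import numpy as np

provinces = ["Razavi Khorasan", "Mazandaran", "Bushehr"]            # hypothetical subset
X = np.array([[320.0, 12, 8.5],                                      # rainfall, rivers, tractors (made up)
              [900.0,  9, 6.0],
              [210.0,  2, 1.5]])

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each indicator
ideal = Z.max(axis=0)                           # "ideal" development pattern
d = np.linalg.norm(Z - ideal, axis=1)           # distance of each province from the ideal
d0 = d.mean() + 2 * d.std()                     # normalizing constant
coeff = 1 - d / d0                              # development coefficient in (0, 1)

for name, c in sorted(zip(provinces, coeff), key=lambda t: -t[1]):
    print(f"{name}: {c:.3f}")
```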

  17. Using artificial soil sediment mixtures for calibrating fingerprinting techniques at catchment scale

    International Nuclear Information System (INIS)

    Torres Astorga, Romina; Martin, Osvaldo A.; Velasco, Ricardo Hugo; Santos-Villalobos, Sergio de los; Mabit, Lionel; Dercon, Gerd

    2016-01-01

    Soil erosion and the related sediment transport and deposition are key environmental problems in Central Argentina. Certain land use practices, such as intensive grazing, are considered particularly harmful in causing erosion and sediment mobilization. In the studied catchment, the Estancia Grande sub-catchment (630 hectares), 23 km northeast of San Luis and characterized by erosive loess soils, we tested sediment source fingerprinting techniques to identify critical hot spots of land degradation, based on the concentration of 43 elements determined by Energy Dispersive X-Ray Fluorescence (EDXRF).

  18. Plasmonic nanoparticle lithography: Fast resist-free laser technique for large-scale sub-50 nm hole array fabrication

    Science.gov (United States)

    Pan, Zhenying; Yu, Ye Feng; Valuckas, Vytautas; Yap, Sherry L. K.; Vienne, Guillaume G.; Kuznetsov, Arseniy I.

    2018-05-01

    Cheap large-scale fabrication of ordered nanostructures is important for multiple applications in photonics and biomedicine including optical filters, solar cells, plasmonic biosensors, and DNA sequencing. Existing methods are either expensive or have strict limitations on the feature size and fabrication complexity. Here, we present a laser-based technique, plasmonic nanoparticle lithography, which is capable of rapid fabrication of large-scale arrays of sub-50 nm holes on various substrates. It is based on near-field enhancement and melting induced under ordered arrays of plasmonic nanoparticles, which are brought into contact or in close proximity to a desired material and acting as optical near-field lenses. The nanoparticles are arranged in ordered patterns on a flexible substrate and can be attached and removed from the patterned sample surface. At optimized laser fluence, the nanohole patterning process does not create any observable changes to the nanoparticles and they have been applied multiple times as reusable near-field masks. This resist-free nanolithography technique provides a simple and cheap solution for large-scale nanofabrication.

  19. Novel GIMS technique for deposition of colored Ti/TiO₂ coatings on industrial scale

    Directory of Open Access Journals (Sweden)

    Zdunek Krzysztof

    2016-03-01

    Full Text Available The aim of the present paper has been to verify the effectiveness and usefulness of a novel deposition process named GIMS (Gas Injection Magnetron Sputtering), used for the first time for deposition of Ti/TiO₂ coatings on large-area glass substrates coated under the conditions of industrial-scale production. The Ti/TiO₂ coatings were deposited in an industrial system utilizing a set of linear magnetrons, each 2400 mm long, for coating 2000 × 3000 mm glasses. Taking into account the specific course of the GIMS (multipoint gas injection along the magnetron length) and the scale of the industrial facility, the optical coating uniformity was the most important goal to check. The experiments on Ti/TiO₂ coatings deposited by GIMS were conducted on substrates in the form of glass plates located at the key points along the magnetrons and intentionally non-heated during any stage of the process. Measurements of the coating properties showed that the thickness and optical uniformity of the 150 nm thick coatings deposited by GIMS in the industrial facility (the thickness differences on the large plates of 2000 mm width did not exceed 20 nm) are fully acceptable from the point of view of expected applications, e.g. architectural glazing.

  20. All-automatic swimmer tracking system based on an optimized scaled composite JTC technique

    Science.gov (United States)

    Benarab, D.; Napoléon, T.; Alfalou, A.; Verney, A.; Hellard, P.

    2016-04-01

    In this paper, an all-automatic optimized JTC-based swimmer tracking system is proposed and evaluated on a real video database from national and international swimming competitions (French National Championship, Limoges 2015; FINA World Championships, Barcelona 2013 and Kazan 2015). First, we propose to calibrate the swimming pool using the DLT algorithm (Direct Linear Transformation). DLT calculates the homography matrix given a sufficient set of correspondence points between pixel and metric coordinates, i.e. it takes into account the dimensions of the swimming pool and the type of the swim. Once the swimming pool is calibrated, we extract the lane. Then we apply a motion detection approach to detect the swimmer globally in this lane. Next, we apply our optimized Scaled Composite JTC, which consists of creating an adapted input plane that contains the predicted region and the head reference image. The latter is generated using a composite filter of fin images chosen from the database. The dimension of this reference is scaled according to the ratio between the head's dimension and the width of the swimming lane. Finally, applying the proposed approach improves the performance of our previous tracking method by adding a detection module, achieving an all-automatic swimmer tracking system.
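
    The sketch below shows the generic DLT step mentioned above: estimating a homography from pixel-to-metric point correspondences with a plain SVD solution. The four correspondence pairs (e.g. lane corner points) are invented for illustration and are not taken from the paper.

```python
# Minimal sketch, assuming four hypothetical pixel/metric correspondences on the pool plane.
import numpy as np

def dlt_homography(src, dst):
    """Estimate H such that dst ~ H @ src (homogeneous), from >= 4 correspondences."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Hypothetical pixel corners of a lane and their metric positions (metres) in the pool.
pixels = [(102, 55), (1180, 60), (1175, 640), (98, 635)]
metres = [(0.0, 0.0), (50.0, 0.0), (50.0, 2.5), (0.0, 2.5)]
H = dlt_homography(pixels, metres)

p = np.array([640, 350, 1.0])           # some pixel on the swimmer's head
m = H @ p
print(m[:2] / m[2])                     # its metric position in the pool
```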

  1. Improved technique that allows the performance of large-scale SNP genotyping on DNA immobilized by FTA technology.

    Science.gov (United States)

    He, Hongbin; Argiro, Laurent; Dessein, Helia; Chevillard, Christophe

    2007-01-01

    FTA technology is a novel method designed to simplify the collection, shipment, archiving and purification of nucleic acids from a wide variety of biological sources. However, the number of punches that can normally be obtained from a single specimen card is often insufficient for testing the large numbers of loci required to identify genetic factors that control human susceptibility or resistance to multifactorial diseases. In this study, we propose an improved technique to perform large-scale SNP genotyping. We applied a whole genome amplification method to amplify DNA from buccal cell samples stabilized using FTA technology. The results show that using the improved technique it is possible to perform up to 15,000 genotypes from one buccal cell sample. Furthermore, the procedure is simple. We consider this improved technique to be a promising method for performing large-scale SNP genotyping because the FTA technology simplifies the collection, shipment, archiving and purification of DNA, while whole genome amplification of FTA card-bound DNA produces sufficient material for the determination of thousands of SNP genotypes.

  2. Large-scale nanofabrication of periodic nanostructures using nanosphere-related techniques for green technology applications (Conference Presentation)

    Science.gov (United States)

    Yen, Chen-Chung; Wu, Jyun-De; Chien, Yi-Hsin; Wang, Chang-Han; Liu, Chi-Ching; Ku, Chen-Ta; Chen, Yen-Jon; Chou, Meng-Cheng; Chang, Yun-Chorng

    2016-09-01

    Nanotechnology has been developed for decades and many interesting optical properties have been demonstrated. However, the major hurdle for the further development of nanotechnology is finding economical ways to fabricate such nanostructures at large scale. Here, we demonstrate how to achieve low-cost fabrication using nanosphere-related techniques, such as Nanosphere Lithography (NSL) and Nanospherical-Lens Lithography (NLL). NSL is a low-cost nano-fabrication technique that has the ability to fabricate nano-triangle arrays that cover a very large area. NLL is a very similar technique that uses polystyrene nanospheres to focus the incoming ultraviolet light and expose the underlying photoresist (PR) layer. PR hole arrays form after developing. Metal nanodisk arrays can be fabricated following metal evaporation and lift-off processes. Nanodisk or nano-ellipse arrays with various sizes and aspect ratios are routinely fabricated in our research group. We also demonstrate that we can fabricate more complicated nanostructures, such as nanodisk oligomers, by combining several other key technologies such as angled exposure and deposition; modifying these methods yields various metallic nanostructures. The metallic structures are of high fidelity and in large scale. The metallic nanostructures can be transformed into semiconductor nanostructures and be used in several green technology applications.

  3. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.

  4. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    International Nuclear Information System (INIS)

    Jeffs, S.P.; Lancaster, R.J.; Garcia, T.E.

    2015-01-01

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results

  5. Contractibility of curves

    Directory of Open Access Journals (Sweden)

    Janusz Charatonik

    1991-11-01

    Full Text Available Results concerning contractibility of curves (equivalently: of dendroids) are collected and discussed in the paper. Interrelations between various conditions which are either sufficient or necessary for a curve to be contractible are studied.

  6. A Procedure to Map Subsidence at the Regional Scale Using the Persistent Scatterer Interferometry (PSI Technique

    Directory of Open Access Journals (Sweden)

    Ascanio Rosi

    2014-10-01

    Full Text Available In this paper, we present a procedure to map subsidence at the regional scale by means of persistent scatterer interferometry (PSI). Subsidence analysis is usually restricted to plain areas and to areas where the presence of this phenomenon is already known. The proposed procedure allows a fast identification of subsidence in large and hilly-mountainous areas. The test area is the Tuscany region, in Central Italy, where several areas are affected by natural and anthropogenic subsidence and where PSI data acquired by the Envisat satellite are available in both ascending and descending orbits. The procedure consists first of the definition of the vertical and horizontal components of the deformation measured by the satellite, and then of the calculation of the "real" displacement direction, so that mainly vertical deformations can be identified and mapped.
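
    The sketch below illustrates the first step described above in generic form: recovering vertical and east-west components of displacement from ascending and descending line-of-sight (LOS) PSI velocities. The incidence and heading angles are illustrative Envisat-like values, the sign conventions are one common choice, and the north component is neglected; none of the numbers are taken from the paper.

```python
# Minimal sketch, assuming two LOS velocities and an (east, up) projection of the LOS vector.
import numpy as np

def decompose_los(v_asc, v_desc, inc_asc, inc_desc, head_asc, head_desc):
    """Solve a 2x2 system for (v_east, v_up) from two LOS measurements (angles in degrees)."""
    def los_row(inc, head):
        inc, head = np.radians(inc), np.radians(head)
        # LOS unit-vector projection onto (east, up); north contribution neglected here.
        return [-np.sin(inc) * np.cos(head), np.cos(inc)]
    A = np.array([los_row(inc_asc, head_asc), los_row(inc_desc, head_desc)])
    return np.linalg.solve(A, np.array([v_asc, v_desc]))

# Example: -5 mm/yr ascending LOS, -8 mm/yr descending LOS (hypothetical values).
v_east, v_up = decompose_los(-5.0, -8.0, inc_asc=23.0, inc_desc=23.0,
                             head_asc=-12.0, head_desc=192.0)
print(f"east: {v_east:.2f} mm/yr, up: {v_up:.2f} mm/yr")
```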

  7. A case study of life cycle impacts of small-scale fishing techniques in Thailand

    DEFF Research Database (Denmark)

    Verones, Francesca; Bolowich, Alya F.; Ebata, Keigo

    2017-01-01

    Fish provides an important source of protein, especially in developing countries, and the amounts of fish consumed are increasing worldwide (mostly from aquaculture). More than half of all marine fish are caught by small-scale fishery operations. However, no life cycle assessment (LCA) of small...... inventories for three different seasons (northeast monsoon, southwest monsoon and pre-monsoon), since the time spent on the water and catch varied significantly between the seasons. Our results showed the largest impacts from artisanal fishing operations affect climate change, human toxicity, and fossil...... and metal depletion. Our results are, in terms of global warming potential, comparable with other artisanal fisheries. Between different fishing operations, impacts vary between a factor of 2 (for land transformation impacts) and up to a factor of more than 20 (fossil fuel depletion and marine...

  8. A Robust Decision-Making Technique for Water Management under Decadal Scale Climate Variability

    Science.gov (United States)

    Callihan, L.; Zagona, E. A.; Rajagopalan, B.

    2013-12-01

    Robust decision making, a flexible and dynamic approach to managing water resources in light of deep uncertainties associated with climate variability at inter-annual to decadal time scales, is an analytical framework that detects when a system is in or approaching a vulnerable state. It provides decision makers the opportunity to implement strategies that both address the vulnerabilities and perform well over a wide range of plausible future scenarios. A strategy that performs acceptably over a wide range of possible future states is not likely to be optimal with respect to the actual future state. The degree of success--the ability to avoid vulnerable states and operate efficiently--thus depends on the skill in projecting future states and the ability to select the most efficient strategies to address vulnerabilities. This research develops a robust decision making framework that incorporates new methods of decadal scale projections with selection of efficient strategies. Previous approaches to water resources planning under inter-annual climate variability combining skillful seasonal flow forecasts with climatology for subsequent years are not skillful for medium term (i.e. decadal scale) projections as decision makers are not able to plan adequately to avoid vulnerabilities. We address this need by integrating skillful decadal scale streamflow projections into the robust decision making framework and making the probability distribution of this projection available to the decision making logic. The range of possible future hydrologic scenarios can be defined using a variety of nonparametric methods. Once defined, an ensemble projection of decadal flow scenarios are generated from a wavelet-based spectral K-nearest-neighbor resampling approach using historical and paleo-reconstructed data. This method has been shown to generate skillful medium term projections with a rich variety of natural variability. The current state of the system in combination with the

  9. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    Energy Technology Data Exchange (ETDEWEB)

    Sabharwall, Piyush [Idaho National Laboratory (INL), Idaho Falls, ID (United States); O' Brien, James E. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); McKellar, Michael G. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Housley, Gregory K. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial

  10. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques.

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-03-13

    The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). On the basis of title words and citation relations, publications in the period 2000-2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.

  11. Mapping patient safety: a large-scale literature review using bibliometric visualisation techniques

    Science.gov (United States)

    Rodrigues, S P; van Eck, N J; Waltman, L; Jansen, F W

    2014-01-01

    Background The amount of scientific literature available is often overwhelming, making it difficult for researchers to have a good overview of the literature and to see relations between different developments. Visualisation techniques based on bibliometric data are helpful in obtaining an overview of the literature on complex research topics, and have been applied here to the topic of patient safety (PS). Methods On the basis of title words and citation relations, publications in the period 2000–2010 related to PS were identified in the Scopus bibliographic database. A visualisation of the most frequently cited PS publications was produced based on direct and indirect citation relations between publications. Terms were extracted from titles and abstracts of the publications, and a visualisation of the most important terms was created. The main PS-related topics studied in the literature were identified using a technique for clustering publications and terms. Results A total of 8480 publications were identified, of which the 1462 most frequently cited ones were included in the visualisation. The publications were clustered into 19 clusters, which were grouped into three categories: (1) magnitude of PS problems (42% of all included publications); (2) PS risk factors (31%) and (3) implementation of solutions (19%). In the visualisation of PS-related terms, five clusters were identified: (1) medication; (2) measuring harm; (3) PS culture; (4) physician; (5) training, education and communication. Both analysis at publication and term level indicate an increasing focus on risk factors. Conclusions A bibliometric visualisation approach makes it possible to analyse large amounts of literature. This approach is very useful for improving one's understanding of a complex research topic such as PS and for suggesting new research directions or alternative research priorities. For PS research, the approach suggests that more research on implementing PS improvement initiatives might be needed.

  12. Different scale land subsidence and ground fissure monitoring with multiple InSAR techniques over Fenwei basin, China

    Directory of Open Access Journals (Sweden)

    C. Zhao

    2015-11-01

    Full Text Available Fenwei basin, China, composed by several sub-basins, has been suffering severe geo-hazards in last 60 years, including large scale land subsidence and small scale ground fissure, which caused serious infrastructure damages and property losses. In this paper, we apply different InSAR techniques with different SAR data to monitor these hazards. Firstly, combined small baseline subset (SBAS InSAR method and persistent scatterers (PS InSAR method is used to multi-track Envisat ASAR data to retrieve the large scale land subsidence covering entire Fenwei basin, from which different land subsidence magnitudes are analyzed of different sub-basins. Secondly, PS-InSAR method is used to monitor the small scale ground fissure deformation in Yuncheng basin, where different spatial deformation gradient can be clearly discovered. Lastly, different track SAR data are contributed to retrieve two-dimensional deformation in both land subsidence and ground fissure region, Xi'an, China, which can be benefitial to explain the occurrence of ground fissure and the correlation between land subsidence and ground fissure.

  13. Technique for large-scale structural mapping at uranium deposits in non-metamorphosed sedimentary cover rocks

    International Nuclear Information System (INIS)

    Kochkin, B.T.

    1985-01-01

    The technique of large-scale structural mapping (1:1000 - 1:10000), reflecting small-amplitude fracture and plicate structures, is given for uranium deposits in non-metamorphosed sedimentary cover rocks. Structural drill log sections, as well as a set of maps with the results of areal analysis of hidden disturbances, structural analysis of isopachous lines and facies of platform mantle horizons, serve as source materials for structural map plotting. The steps of structural map construction are considered: 1) construction of the structural carcass; 2) reconstruction of the structure contours; 3) determination of the time of structure initiation; 4) plotting of the additional geologic load

  14. SEGMENTATION OF MITOCHONDRIA IN ELECTRON MICROSCOPY IMAGES USING ALGEBRAIC CURVES.

    Science.gov (United States)

    Seyedhosseini, Mojtaba; Ellisman, Mark H; Tasdizen, Tolga

    2013-01-01

    High-resolution microscopy techniques have been used to generate large volumes of data with enough details for understanding the complex structure of the nervous system. However, automatic techniques are required to segment cells and intracellular structures in these multi-terabyte datasets and make anatomical analysis possible on a large scale. We propose a fully automated method that exploits both shape information and regional statistics to segment irregularly shaped intracellular structures such as mitochondria in electron microscopy (EM) images. The main idea is to use algebraic curves to extract shape features together with texture features from image patches. Then, these powerful features are used to learn a random forest classifier, which can predict mitochondria locations precisely. Finally, the algebraic curves together with regional information are used to segment the mitochondria at the predicted locations. We demonstrate that our method outperforms the state-of-the-art algorithms in segmentation of mitochondria in EM images.

  15. Calibration curves for biological dosimetry

    International Nuclear Information System (INIS)

    Guerrero C, C.; Brena V, M. E-mail: cgc@nuclear.inin.mx

    2004-01-01

    The information generated by investigations in different laboratories around the world, including the ININ, which establishes that the frequency of certain classes of chromosomal aberrations increases as a function of dose and radiation type, has resulted in calibration curves that are applied in the technique known as biological dosimetry. This work presents a summary of the work carried out in the laboratory, which includes the calibration curves for 60 Cobalt gamma radiation and 250 kVp X-rays, examples of presumed exposure to ionizing radiation resolved by means of aberration analysis and the corresponding dose estimates obtained through the equations of the respective curves, and finally a comparison between the dose calculations for the people affected by the Ciudad Juarez accident, carried out by the Oak Ridge group, USA, and those obtained in this laboratory. (Author)
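
    The sketch below shows how a cytogenetic calibration curve of the kind described above is typically used: fit a linear-quadratic dose response Y = c + aD + bD² to aberration yields and invert it to estimate dose. The yields are invented illustration data, not the laboratory's calibration measurements.

```python
# Minimal sketch, assuming hypothetical dicentric yields observed at known doses.
import numpy as np
from numpy.polynomial import polynomial as P

doses = np.array([0.0, 0.25, 0.5, 1.0, 2.0, 3.0, 4.0])          # Gy (hypothetical)
yields = np.array([0.001, 0.01, 0.03, 0.09, 0.29, 0.58, 1.0])   # dicentrics per cell (hypothetical)

c, a, b = P.polyfit(doses, yields, 2)      # coefficients of c + a*D + b*D^2
print(f"Y = {c:.4f} + {a:.4f} D + {b:.4f} D^2")

# Dose estimate for an observed yield: solve b*D^2 + a*D + (c - y_obs) = 0.
y_obs = 0.2
D_est = (-a + np.sqrt(a**2 - 4 * b * (c - y_obs))) / (2 * b)
print(f"estimated dose for Y={y_obs}: {D_est:.2f} Gy")
```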

  16. Curve collection, extension of databases

    International Nuclear Information System (INIS)

    Gillemot, F.

    1992-01-01

    Full text: Databases generally contain calculated data only, while the original measurements are diagrams, so information is lost between them. The underlying research (e.g. irradiation, ageing, creep) is expensive, so the original curves should be stored for reanalysis. The format of the stored curves is: (a) the data, as numbers only, in ASCII files; (b) other information, as strings, in a second file with the same name but a different extension. The extension shows the type of the test and the type of the file; for example, TEN is tensile information, TED is tensile data, CHN is Charpy information and CHD is Charpy data. Storing techniques include digitised measurements and digitising old curves stored on paper. Uses include making catalogues, reanalysis and comparison with new data. Tools are mathematical software packages such as Quattro, Genplot, Excel, MathCAD, QBasic, Pascal, Fortran, Matlab, Grapher, etc. (author)
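
    The sketch below illustrates the paired-file convention described above: a numeric ASCII data file (e.g. SAMPLE1.TED) alongside a companion information file (e.g. SAMPLE1.TEN). The file names and column layout are assumptions for illustration; the abstract does not specify the exact format.

```python
# Minimal sketch, assuming two-column ASCII curve data plus a free-text info file.
from pathlib import Path

def load_curve(stem, data_ext=".TED", info_ext=".TEN"):
    """Return (list of (x, y) points, list of free-text info lines) for one stored curve."""
    data_path, info_path = Path(stem + data_ext), Path(stem + info_ext)
    points = []
    for line in data_path.read_text().splitlines():
        if line.strip():                          # skip blank lines, keep numbers only
            x, y = map(float, line.split()[:2])
            points.append((x, y))
    info = [l.rstrip() for l in info_path.read_text().splitlines() if l.strip()]
    return points, info

# Usage (assuming SAMPLE1.TED / SAMPLE1.TEN exist alongside the script):
# curve, metadata = load_curve("SAMPLE1")
```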

  17. Application of scaling technique for estimation of radionuclide inventory in radioactive waste

    International Nuclear Information System (INIS)

    Hertelendi, E.; Szuecs, Z.; Gulyas, J.; Svingor, E.; Csongor, J.; Ormai, P.; Fritz, A.; Solymosi, J.; Gresits, I.; Vajda, N.; Molnar, Zs.

    1996-01-01

    Safety studies related to the disposal of low- and intermediate-level waste indicate that the long term risk is determined by the presence of long-lived nuclides such as 14 C, 59 Ni, 63 Ni, 99 Tc, 129 I and the transuranium elements. As most of these nuclides are difficult to measure, the correlation between these critical nuclides and some other easily measurable key nuclides such as 60 Co and 137 Cs has been investigated for typical waste streams of Paks Nuclear Power Plant (Hungary) and scaling factors have been proposed. An automated gamma-scanning monitor has been purchased and calibrated to determine the gamma-emitting radionuclides. Radiochemical methods have been developed to determine significant difficult-to-measure radionuclides. The radionuclides of interest have been 3 H, 14 C, 90 Sr, 55 Fe, 59 Ni, 99 Tc, 129 I and TRUs. The measurements taken so far have revealed new information and data on the radiological composition of waste from WWER-type reactors. The reliability of the radioanalytical methods was checked by an international intercomparison test. For all radionuclides the Hungarian results were in the average range of the total data set. (author)
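
    The sketch below illustrates the general scaling-factor idea referred to above: estimate the activity of a difficult-to-measure (DTM) nuclide from an easy-to-measure key nuclide using a factor derived from paired, radiochemically characterized samples. The activities and the geometric-mean choice are illustrative assumptions, not values or procedures taken from the Paks study.

```python
# Minimal sketch, assuming paired activities (Bq/g) of Co-60 (key) and Ni-63 (DTM) — all invented.
import numpy as np

co60 = np.array([1.2e3, 4.5e2, 8.9e3, 2.1e3])   # hypothetical key-nuclide activities
ni63 = np.array([6.0e3, 2.0e3, 5.1e4, 9.8e3])   # hypothetical DTM activities

# A common choice is the geometric mean of the sample ratios.
sf = np.exp(np.mean(np.log(ni63 / co60)))
print(f"scaling factor Ni-63/Co-60 = {sf:.2f}")

# Inventory estimate for a new drum where only Co-60 was measured by gamma scanning:
co60_measured = 3.3e3                            # Bq/g, hypothetical
print(f"estimated Ni-63 = {sf * co60_measured:.3g} Bq/g")
```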

  18. An industry-scale mass marking technique for tracing farmed fish escapees.

    Directory of Open Access Journals (Sweden)

    Fletcher Warren-Myers

    Full Text Available Farmed fish escape and enter the environment with subsequent effects on wild populations. Reducing escapes requires the ability to trace individuals back to the point of escape, so that escape causes can be identified and technical standards improved. Here, we tested if stable isotope otolith fingerprint marks delivered during routine vaccination could be an accurate, feasible and cost effective marking method, suitable for industrial-scale application. We tested seven stable isotopes, 134 Ba, 135 Ba, 136 Ba, 137 Ba, 86 Sr, 87 Sr and 26 Mg, on farmed Atlantic salmon reared in freshwater, in experimental conditions designed to reflect commercial practice. Marking was 100% successful with individual Ba isotopes at concentrations as low as 0.001 µg g -1 fish and for Sr isotopes at 1 µg g -1 fish. Our results suggest that 63 unique fingerprint marks can be made at low cost using Ba (0.0002 - 0.02 $US per mark) and Sr (0.46 - 0.82 $US per mark) isotopes. Stable isotope fingerprinting during vaccination is feasible for commercial application if applied at a company level within the world's largest salmon producing nations. Introducing a mass marking scheme would enable tracing of escapees back to point of origin, which could drive greater compliance, better farm design and improved management practices to reduce escapes.

  19. Subchains: A Technique to Scale Bitcoin and Improve the User Experience

    Directory of Open Access Journals (Sweden)

    Peter R. Rizun

    2016-12-01

    Full Text Available Orphan risk for large blocks limits Bitcoin's transactional capacity while the lack of secure instant transactions restricts its usability. Progress on either front would help spur adoption. This paper considers a technique for using fractional-difficulty blocks (weak blocks) to build subchains bridging adjacent pairs of real blocks. Subchains reduce orphan risk by propagating blocks layer-by-layer over the entire block interval, rather than all at once when the proof-of-work is solved. Each new layer of transactions helps to secure the transactions included in lower layers, even though none of the transactions have been confirmed in a real block. Miners are incentivized to cooperate building subchains in order to process more transactions per second (thereby claiming more fee revenue) without incurring additional orphan risk. The use of subchains also diverts fee revenue towards network hash power rather than dripping it out of the system to pay for orphaned blocks. By nesting subchains, weak block verification times approaching the theoretical limits imposed by speed-of-light constraints would become possible with future technology improvements. As subchains are built on top of the existing Bitcoin protocol, their implementation does not require any changes to Bitcoin's consensus rules.

  20. Bench Scale Treatability Studies of Contaminated Soil Using Soil Washing Technique

    Directory of Open Access Journals (Sweden)

    M. K. Gupta

    2010-01-01

    Full Text Available Soil contamination is one of the most widespread and serious environmental problems confronting both the industrialized as well as developing nations like India. Different contaminants have different physicochemical properties, which influence the geochemical reactions induced in the soils and may bring about changes in their engineering and environmental behaviour. Several technologies exist for the remediation of contaminated soil and water. In the present study, a soil washing technique using plain water with surfactants as an enhancer was used to study the remediation of soil contaminated with (i) an organic contaminant (engine lubricant oil) and (ii) an inorganic contaminant (heavy metal). The lubricant engine oil was used at different percentages (by dry weight of the soil) to artificially contaminate the soil. It was found that the geotechnical properties of the soil underwent large modifications on account of mixing with the lubricant oil. The sorption experiments were conducted with cadmium metal in aqueous medium at different initial concentrations of the metal and at varying pH values of the sorbing medium. For the remediation of contaminated soil matrices, a nonionic surfactant was used for the restoration of geotechnical properties of lubricant oil contaminated soil samples, whereas an anionic surfactant was employed to desorb cadmium from the contaminated soil matrix. The surfactant in the case of soil contaminated with the lubricant oil was able to restore properties to an extent of 98% vis-à-vis the virgin soil, while up to 54% cadmium was desorbed from the contaminated soil matrix in surfactant-aided desorption experiments.

  1. Development of small scale mechanical testing techniques on ion beam irradiated 304 SS

    International Nuclear Information System (INIS)

    Reichardt, A.; Abad, M.D.; Hosemann, P.; Lupinacci, A.; Kacher, J.; Minor, A.; Jiao, Z; Chou, P.

    2015-01-01

    Austenitic stainless steels are widely used for structural components in light water reactors; however, uncertainty in their susceptibility to irradiation assisted stress corrosion cracking (IASCC) has made long term performance predictions difficult. In addition, the testing of reactor irradiated materials has proven challenging due to the long irradiation times required, limited sample availability, and unwanted activation. To address these problems, we apply recently developed techniques in nano-indentation and micro-compression testing to small volume samples of 10 dpa proton-beam irradiated 304 stainless steel. Cross sectional nano-indentation was performed on both proton beam irradiated and non-irradiated samples at temperatures ranging from 22 to 300 °C to determine the effects of irradiation and operating temperature on hardening. Micro-compression tests using 2 μm x 2 μm x 5 μm focused-ion beam milled pillars were then performed in situ in an electron microscope to allow for a more accurate look at stress-strain behavior along with real-time observations of localized mechanical deformation. Large sudden slip events and a significant increase in yield strength are observed in irradiated micro-compression samples at room temperature. Elevated temperature nano-indentation results reveal the possibility of thermally-activated changes in deformation mechanism for irradiated specimens. Since the deformation mechanism information provided by micro-compression testing can provide valuable information about IASCC susceptibility, future work will involve ex situ micro-compression tests at reactor operating temperature

  2. ADVANCING THE FUNDAMENTAL UNDERSTANDING AND SCALE-UP OF TRISO FUEL COATERS VIA ADVANCED MEASUREMENT AND COMPUTATIONAL TECHNIQUES

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Pratim; Al-Dahhan, Muthanna

    2012-11-01

    The objectives of this work are to advance the fundamental understanding of the hydrodynamics by systematically investigating the effect of design and operating variables, to evaluate the reported dimensionless groups as scaling factors, and to establish a reliable scale-up methodology for the TRISO fuel particle spouted bed coaters based on hydrodynamic similarity via advanced measurement and computational techniques. An additional objective is to develop an on-line non-invasive measurement technique based on gamma ray densitometry (i.e. Nuclear Gauge Densitometry) that can be installed and used for coater process monitoring to ensure proper performance and operation and to facilitate the developed scale-up methodology. To achieve the objectives set for the project, the work will use optical probes and gamma ray computed tomography (CT) (for the measurements of solids/voidage holdup cross-sectional distribution and radial profiles along the bed height, spouted diameter, and fountain height) and radioactive particle tracking (RPT) (for the measurements of the 3D solids flow field, velocity, turbulent parameters, circulation time, solids Lagrangian trajectories, and many other spouted-bed-related hydrodynamic parameters). In addition, gas dynamic measurement techniques and pressure transducers will be utilized to complement the obtained information. The measurements obtained by these techniques will be used as benchmark data to evaluate and validate the computational fluid dynamic (CFD) models (two fluid model or discrete particle model) and their closures. The validated CFD models and closures will be used to facilitate the developed methodology for scale-up, design and hydrodynamic similarity. Successful execution of this work and the proposed tasks will advance the fundamental understanding of the coater flow field and quantify it for proper and safe design, scale-up, and performance. Such achievements will overcome the barriers to AGR applications and will help assure that the US maintains

  3. Investigation of flow dynamics of liquid phase in a pilot-scale trickle bed reactor using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K

    2016-10-01

    A radiotracer investigation was carried out to measure residence time distribution (RTD) of liquid phase in a trickle bed reactor (TBR). The main objectives of the investigation were to investigate radial and axial mixing of the liquid phase, and evaluate performance of the liquid distributor/redistributor at different operating conditions. Mean residence times (MRTs), holdups (H) and fraction of flow flowing along different quadrants were estimated. The analysis of the measured RTD curves indicated radial non-uniform distribution of liquid phase across the beds. The overall RTD of the liquid phase, measured at the exit of the reactor was simulated using a multi-parameter axial dispersion with exchange model (ADEM), and model parameters were obtained. The results of model simulations indicated that the TBR behaved as a plug flow reactor at most of the operating conditions used in the investigation. The results of the investigation helped to improve the existing design as well as to design a full-scale industrial TBR for petroleum refining applications. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Investigation of flow dynamics of liquid phase in a pilot-scale trickle bed reactor using radiotracer technique

    International Nuclear Information System (INIS)

    Pant, H.J.; Sharma, V.K.

    2016-01-01

    A radiotracer investigation was carried out to measure residence time distribution (RTD) of liquid phase in a trickle bed reactor (TBR). The main objectives of the investigation were to investigate radial and axial mixing of the liquid phase, and evaluate performance of the liquid distributor/redistributor at different operating conditions. Mean residence times (MRTs), holdups (H) and fraction of flow flowing along different quadrants were estimated. The analysis of the measured RTD curves indicated radial non-uniform distribution of liquid phase across the beds. The overall RTD of the liquid phase, measured at the exit of the reactor was simulated using a multi-parameter axial dispersion with exchange model (ADEM), and model parameters were obtained. The results of model simulations indicated that the TBR behaved as a plug flow reactor at most of the operating conditions used in the investigation. The results of the investigation helped to improve the existing design as well as to design a full-scale industrial TBR for petroleum refining applications. - Highlights: • Residence time distributions of liquid phase were measured in a trickle bed reactor. • Bromine-82 as ammonium bromide was used as a radiotracer. • Mean residence times, holdups and radial distribution of liquid phase were quantified. • Axial dispersion with exchange model was used to simulate the measured data. • The trickle bed reactor behaved as a plug flow reactor.
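
    The sketch below illustrates the basic RTD treatment underlying studies such as the one above: compute the mean residence time (MRT) from a measured tracer response curve and convert it to a liquid holdup. The tracer response, sampling times and flow rate are invented illustration values, not the reactor measurements.

```python
# Minimal sketch, assuming a hypothetical detector response C(t) and liquid flow rate.
import numpy as np

t = np.linspace(0, 600, 61)                      # s, sampling times (hypothetical)
c = np.exp(-(t - 180.0) ** 2 / (2 * 60.0 ** 2))  # detector response (hypothetical)

E = c / np.trapz(c, t)                           # normalized RTD, E(t)
mrt = np.trapz(t * E, t)                         # first moment = mean residence time

Q = 2.5e-4                                       # m^3/s, liquid flow rate (hypothetical)
holdup = Q * mrt                                 # liquid holdup volume, V = Q * MRT
print(f"MRT = {mrt:.1f} s, holdup = {holdup:.4f} m^3")
```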

  5. Measurements of liquid phase residence time distributions in a pilot-scale continuous leaching reactor using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K; Shenoy, K T; Sreenivas, T

    2015-03-01

    An alkaline based continuous leaching process is commonly used for extraction of uranium from uranium ore. The reactor in which the leaching process is carried out is called a continuous leaching reactor (CLR) and is expected to behave as a continuously stirred tank reactor (CSTR) for the liquid phase. A pilot-scale CLR used in a Technology Demonstration Pilot Plant (TDPP) was designed, installed and operated; and thus needed to be tested for its hydrodynamic behavior. A radiotracer investigation was carried out in the CLR for measurement of residence time distribution (RTD) of liquid phase with specific objectives to characterize the flow behavior of the reactor and validate its design. Bromine-82 as ammonium bromide was used as a radiotracer and about 40-60MBq activity was used in each run. The measured RTD curves were treated and mean residence times were determined and simulated using a tanks-in-series model. The result of simulation indicated no flow abnormality and the reactor behaved as an ideal CSTR for the range of the operating conditions used in the investigation. Copyright © 2014 Elsevier Ltd. All rights reserved.
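
    The sketch below gives the standard tanks-in-series RTD model used in analyses like the one above to check CSTR behaviour: E(θ) = N(Nθ)^(N-1) e^(-Nθ)/(N-1)!, with θ = t/MRT; for N = 1 it reduces to the exponential RTD of an ideal CSTR. The parameter values are illustrative only.

```python
# Minimal sketch of the tanks-in-series RTD model (dimensionless form).
import numpy as np
from math import factorial

def tanks_in_series(theta, N):
    """Dimensionless RTD for N equal stirred tanks in series."""
    return N * (N * theta) ** (N - 1) * np.exp(-N * theta) / factorial(N - 1)

theta = np.linspace(0, 3, 7)
for N in (1, 2, 5):                    # N = 1 corresponds to an ideal CSTR
    print(N, np.round(tanks_in_series(theta, N), 3))
```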

  6. Considerations for reference pump curves

    International Nuclear Information System (INIS)

    Stockton, N.B.

    1992-01-01

    This paper examines problems associated with inservice testing (IST) of pumps to assess their hydraulic performance using reference pump curves to establish acceptance criteria. Safety-related pumps at nuclear power plants are tested under the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (the Code), Section XI. The Code requires testing pumps at specific reference points of differential pressure or flow rate that can be readily duplicated during subsequent tests. There are many cases where test conditions cannot be duplicated. For some pumps, such as service water or component cooling pumps, the flow rate at any time depends on plant conditions and the arrangement of multiple independent and constantly changing loads. System conditions cannot be controlled to duplicate a specific reference value. In these cases, utilities frequently request to use pump curves for comparison of test data for acceptance. There is no prescribed method for developing a pump reference curve. The methods vary and may yield substantially different results. Some results are conservative when compared to the Code requirements; some are not. The errors associated with different curve testing techniques should be understood and controlled within reasonable bounds. Manufacturer's pump curves, in general, are not sufficiently accurate to use as reference pump curves for IST. Testing using reference curves generated with polynomial least squares fits over limited ranges of pump operation, cubic spline interpolation, or cubic spline least squares fits can provide a measure of pump hydraulic performance that is at least as accurate as the Code required method. Regardless of the test method, error can be reduced by using more accurate instruments, by correcting for systematic errors, by increasing the number of data points, and by taking repetitive measurements at each data point
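
    The sketch below shows one of the curve-generation options mentioned above: a least-squares polynomial fit of head versus flow rate over a limited operating range, used as a reference curve against which a later test point is compared. The test points, units and acceptance logic are invented for illustration and are not the paper's data or the Code's acceptance criteria.

```python
# Minimal sketch, assuming hypothetical baseline pump test data (flow in gpm, head in ft).
import numpy as np

flow = np.array([100., 150., 200., 250., 300., 350.])   # gpm (hypothetical)
head = np.array([310., 302., 290., 272., 249., 220.])   # ft (hypothetical)

coeffs = np.polyfit(flow, head, 2)        # quadratic least-squares reference curve
ref_curve = np.poly1d(coeffs)

# Comparison for a later IST measurement at whatever flow the system happens to allow:
q_test, h_test = 225.0, 270.0
expected = ref_curve(q_test)
deviation = (h_test - expected) / expected
print(f"expected head {expected:.1f} ft, deviation {deviation:+.1%}")
```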

  7. Development of spatial scaling technique of forest health sample point information

    Science.gov (United States)

    Lee, J.; Ryu, J.; Choi, Y. Y.; Chung, H. I.; Kim, S. H.; Jeon, S. W.

    2017-12-01

    Most forest health assessments are limited to monitoring at sampling sites. In Britain, forest health monitoring was carried out mainly on five species (Norway spruce, Sitka spruce, Scots pine, oak, beech), with the data compiled in an Oracle database. The Forest Health Assessment in Great Bay in the United States was conducted to identify the characteristics of the ecosystem populations of each area based on evaluations of forest health by tree species, diameter at breast height, crown and density in summer and fall of 200. In the case of Korea, the first evaluation report on forest health vitality placed 1,000 sample points in forests using a systematic arrangement at 4 km × 4 km regular intervals, and assessed 29 items in four categories: tree health, vegetation, soil, and atmosphere. As mentioned above, existing research has relied on monitoring at survey sample points, and it is difficult to collect information to support customized policies for regional survey sites. In the case of special forests such as urban forests and major forests, policy and management appropriate to the forest characteristics are needed, so the survey network for customized diagnosis and evaluation of forest health must be expanded. For this reason, we constructed a spatial scaling method based on spatial interpolation according to the characteristics of each of the 29 indices in the diagnosis and evaluation report of the first forest health vitality report; PCA and correlation analysis are conducted to select significant indicators, weights are then assigned to each index, and forest health is evaluated through statistical grading.
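    The spatial scaling step described above rests on spatial interpolation of sample-point indicator values. The abstract does not specify the interpolator, so the sketch below uses simple inverse distance weighting with hypothetical coordinates and scores, purely to show the mechanics.

```python
# Sketch: inverse distance weighting (IDW) of a forest-health indicator
# from sample points onto a regular grid. All values are hypothetical.
import numpy as np

def idw(xy_samples, values, xy_targets, power=2.0, eps=1e-12):
    """Interpolate values at xy_targets from scattered xy_samples."""
    d = np.linalg.norm(xy_targets[:, None, :] - xy_samples[None, :, :], axis=2)
    w = 1.0 / (d ** power + eps)
    return (w @ values) / w.sum(axis=1)

# 4 km x 4 km systematic sample points with an indicator score (0-100)
samples = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [8, 4]], dtype=float)
scores = np.array([72., 65., 80., 58., 61.])

# Target grid at 1 km resolution
gx, gy = np.meshgrid(np.arange(0, 9), np.arange(0, 5))
targets = np.column_stack([gx.ravel(), gy.ravel()]).astype(float)

grid_scores = idw(samples, scores, targets).reshape(gy.shape)
print(grid_scores.round(1))
```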

  8. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result, standard time series methods regularly used for financial and similar datasets are of little help, and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to d...

  9. Large-scale User Facility Imaging and Scattering Techniques to Facilitate Basic Medical Research

    International Nuclear Information System (INIS)

    Miller, Stephen D.; Bilheux, Jean-Christophe; Gleason, Shaun Scott; Nichols, Trent L.; Bingham, Philip R.; Green, Mark L.

    2011-01-01

    Conceptually, modern medical imaging can be traced back to the late 1960s and early 1970s with the advent of computed tomography. This pioneering work was done by the 1979 Nobel Prize winners Godfrey Hounsfield and Allan McLeod Cormack, and it evolved into the first prototype Computed Tomography (CT) scanner in 1971, which became commercially available in 1972. Unique to the CT scanner was the ability to utilize X-ray projections taken at regular angular increments, from which reconstructed three-dimensional (3D) images could be produced. It is interesting to note that the mathematics needed to realize tomographic images was developed in 1917 by the Austrian mathematician Johann Radon, who produced the mathematical relationships to derive 3D images from projections - known today as the Radon Transform. The confluence of newly advancing technologies, particularly in the areas of detectors, X-ray tubes, and computers, combined with the earlier derived mathematical concepts, ushered in a new era in diagnostic medicine via medical imaging (Beckmann, 2006). Occurring separately but at a similar time as the development of the CT scanner were efforts at the national level within the United States to produce user facilities to support scientific discovery based upon experimentation. Basic Energy Sciences within the United States Department of Energy currently supports 9 major user facilities along with 5 nanoscale science research centers dedicated to measurement sciences and experimental techniques supporting a very broad range of scientific disciplines. Tracing back the active user facilities, the Stanford Synchrotron Radiation Lightsource (SSRL) at SLAC National Accelerator Laboratory was built in 1974, and it was realized that its intense X-ray beam could be used to study protein molecular structure. The National Synchrotron Light Source (NSLS) at Brookhaven National Laboratory was commissioned in 1982 and currently has 60 x-ray beamlines optimized for a number of different
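    The projection-and-reconstruction idea at the heart of CT (the Radon transform and its inversion) can be demonstrated in a few lines with scikit-image. This is a generic illustration on a standard test phantom, not the algorithm of any particular scanner.

```python
# Sketch: tomographic reconstruction from projections (Radon transform
# followed by filtered back projection), the mathematical core of CT.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)            # simple test object
theta = np.linspace(0.0, 180.0, 180, endpoint=False)   # projection angles (deg)

sinogram = radon(image, theta=theta)                    # forward projections
reconstruction = iradon(sinogram, theta=theta)          # filtered back projection

error = np.sqrt(np.mean((reconstruction - image) ** 2))
print(f"RMS reconstruction error: {error:.4f}")
```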

  10. Soil Water Retention Curve

    Science.gov (United States)

    Johnson, L. E.; Kim, J.; Cifelli, R.; Chandra, C. V.

    2016-12-01

    Potential water retention, S, is one of the parameters commonly used in hydrologic modeling for soil moisture accounting. Physically, S indicates the total amount of water that can be stored in the soil and is expressed in units of depth. S can be represented as a change in soil moisture content and in this context is commonly used to estimate direct runoff, especially in the Soil Conservation Service (SCS) curve number (CN) method. Generally, both lumped and distributed hydrologic models can easily use the SCS-CN method to estimate direct runoff. Changes in potential water retention have been used in previous SCS-CN studies; however, these studies have focused on long-term hydrologic simulations where S is allowed to vary at the daily time scale. While useful for hydrologic events that span multiple days, this resolution is too coarse for short-term applications such as flash flood events, where S may not recover its full potential. In this study, a new method for estimating a time-variable potential water retention at hourly time scales is presented. The methodology is applied to the Napa River basin, California. The streamflow gage at St Helena, located in the upper reaches of the basin, is used as the control gage site to evaluate model performance, as it has minimal influence from reservoirs and diversions. Rainfall events from 2011 to 2012 are used to estimate the event-based SCS CN, which is then converted to S. As a result, we have derived the potential water retention curve and classified it into three sections depending on the relative change in S. The first is a negative slope section arising from the difference in the rate of water movement through the soil column, the second is a zero change section representing the initial recovery of the potential water retention, and the third is a positive change section representing the full recovery of the potential water retention. Also, we found that soil water movement has a traffic jam within 24 hours after finished first
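    The SCS-CN relations referred to above are standard: S = 1000/CN - 10 (inches) and, for rainfall P exceeding the initial abstraction Ia = 0.2S, direct runoff Q = (P - Ia)^2/(P - Ia + S). A minimal sketch with a time-varying S at hourly steps (all numbers hypothetical):

```python
# Sketch: direct runoff from the SCS curve number method with a
# time-varying potential retention S. Rainfall values are hypothetical.
def retention_from_cn(cn):
    """Potential retention S (inches) from a curve number."""
    return 1000.0 / cn - 10.0

def scs_runoff(p, s, ia_ratio=0.2):
    """SCS-CN direct runoff (inches) for event rainfall p and retention s."""
    ia = ia_ratio * s                       # initial abstraction
    return 0.0 if p <= ia else (p - ia) ** 2 / (p - ia + s)

# Hourly S values (inches) mimicking recovery of potential retention
s_hourly = [1.8, 1.6, 1.5, 1.7, 2.0, 2.4]
p_hourly = [0.4, 0.9, 1.2, 0.3, 0.1, 0.0]   # hourly rainfall (inches)

for t, (p, s) in enumerate(zip(p_hourly, s_hourly)):
    print(f"hour {t}: Q = {scs_runoff(p, s):.3f} in")
```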

  11. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, where simulation results show the potential and advantages of the new approach.

  12. Effective combination of DIC, AE, and UPV nondestructive techniques on a scaled model of the Belgian nuclear waste container

    Science.gov (United States)

    Iliopoulos, Sokratis N.; Areias, Lou; Pyl, Lincy; Vantomme, John; Van Marcke, Philippe; Coppens, Erik; Aggelis, Dimitrios G.

    2015-03-01

    Protecting the environment and future generations against the potential hazards arising from high-level and heat emitting radioactive waste is a worldwide concern. Following this direction, the Belgian Agency for Radioactive Waste and Enriched Fissile Materials has come up with the reference design, which considers the geological disposal of the waste in purely indurated clay. In this design the wastes are first post-conditioned in massive concrete structures called Supercontainers before being transported to the underground repositories. The Supercontainers are cylindrical structures which consist of four engineering barriers that, from the inner to the outer surface, are namely: the overpack, the filler, the concrete buffer and possibly the envelope. The overpack, which is made of carbon steel, is the place where the vitrified wastes and spent fuel are stored. The buffer, which is made of concrete, creates a highly alkaline environment ensuring slow and uniform overpack corrosion as well as radiological shielding. In order to evaluate the feasibility of constructing such Supercontainers, two scaled models have so far been designed and tested. The first scaled model indicated crack formation on the surface of the concrete buffer, but the absence of a crack detection and monitoring system precluded defining the exact time of crack initiation, as well as the origin, the penetration depth, the crack path and the propagation history. For this reason, the second scaled model test was performed to obtain further insight by answering the aforementioned questions using the Digital Image Correlation, Acoustic Emission and Ultrasonic Pulse Velocity nondestructive testing techniques.

  13. Inter-subject FDG PET Brain Networks Exhibit Multi-scale Community Structure with Different Normalization Techniques.

    Science.gov (United States)

    Sperry, Megan M; Kartha, Sonia; Granquist, Eric J; Winkelstein, Beth A

    2018-07-01

    Inter-subject networks are used to model correlations between brain regions and are particularly useful for metabolic imaging techniques, like 2-deoxy-2-(18F)fluoro-D-glucose (FDG) positron emission tomography (PET). Since FDG PET typically produces a single image, correlations cannot be calculated over time. Little focus has been placed on the basic properties of inter-subject networks and whether they are affected by group size and image normalization. FDG PET images were acquired from rats (n = 18), normalized by whole brain, visual cortex, or cerebellar FDG uptake, and used to construct correlation matrices. Group size effects on network stability were investigated by systematically adding rats and evaluating local network connectivity (node strength and clustering coefficient). Modularity and community structure were also evaluated in the differently normalized networks to assess meso-scale network relationships. Local network properties are stable regardless of normalization region for groups of at least 10. Whole brain-normalized networks are more modular than visual cortex- or cerebellum-normalized networks, with modularity differing most between brain and randomized networks at particular network resolutions. Hierarchical analysis reveals consistent modules at different scales and clustering of spatially-proximate brain regions. Findings suggest inter-subject FDG PET networks are stable for reasonable group sizes and exhibit multi-scale modularity.
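    The local network properties mentioned above (node strength and weighted clustering coefficient) can be computed from a correlation matrix with networkx. The sketch below uses a random matrix purely to show the calls; it is not FDG PET data and not the authors' pipeline.

```python
# Sketch: node strength and clustering coefficient from an inter-subject
# correlation matrix. The matrix here is random, not FDG PET data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
x = rng.normal(size=(18, 30))            # 18 subjects x 30 brain regions
corr = np.corrcoef(x, rowvar=False)      # region-by-region correlations
np.fill_diagonal(corr, 0.0)
corr[corr < 0] = 0.0                     # keep positive weights only

G = nx.from_numpy_array(corr)
strength = dict(G.degree(weight="weight"))        # node strength
clustering = nx.clustering(G, weight="weight")    # weighted clustering

print("mean strength  :", round(np.mean(list(strength.values())), 3))
print("mean clustering:", round(np.mean(list(clustering.values())), 3))
```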

  14. Controlling for Response Bias in Self-Ratings of Personality: A Comparison of Impression Management Scales and the Overclaiming Technique.

    Science.gov (United States)

    Müller, Sascha; Moshagen, Morten

    2018-04-12

    Self-serving response distortions pose a threat to the validity of personality scales. A common approach to deal with this issue is to rely on impression management (IM) scales. More recently, the overclaiming technique (OCT) has been proposed as an alternative and arguably superior measure of such biases. In this study (N = 162), we tested these approaches in the context of self- and other-ratings using the HEXACO personality inventory. To the extent that the OCT and IM scales can be considered valid measures of response distortions, they are expected to account for inflated self-ratings in particular for those personality dimensions that are prone to socially desirable responding. However, the results show that neither the OCT nor IM account for overly favorable self-ratings. The validity of IM as a measure of response biases was further scrutinized by a substantial correlation with other-rated honesty-humility. As such, this study questions the use of both the OCT and IM to assess self-serving response distortions.

  15. Determination of formation heterogeneity at a range of scales using novel multi-electrode resistivity scanning techniques

    International Nuclear Information System (INIS)

    Williams, G.M.; Jackson, P.D.; Ward, R.S.; Sen, M.A.; Meldrum, P.; Lovell, M.

    1991-01-01

    The traditional method of measuring ground resistivity involves passing a current through two outer electrodes, measuring the potential developed across two electrodes in between, and applying Ohm's Law. In the RESCAN system developed by the British Geological Survey, each electrode can be electronically selected and controlled by software to either pass current or measure potential. Thousands of electrodes can be attached to the system either in 2-D surface arrays or along special plastic covered probes driven vertically into the ground or emplaced in boreholes. Under computer control, the resistivity distribution within the emplaced array can be determined automatically with unprecedented detail and speed, and may be displayed as an image. So far, the RESCAN system has been applied at the meso-scale in monitoring the radial migration of an electrolyte introduced into a recharge well in an unconsolidated aquifer; and CORSCAN at the micro-scale on drill cores to evaluate spatial variability in physical properties. The RESCAN technique has considerable potential for determining formation heterogeneity at different scales and provides a basis for developing stochastic models of groundwater and solute flow in heterogeneous systems. 13 figs.; 1 tab.; 12 refs
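    The four-electrode principle described above (drive current through the outer pair, measure the potential across the inner pair, apply Ohm's law) reduces, for an equally spaced Wenner array, to the textbook relation rho_a = 2*pi*a*dV/I. A minimal sketch with illustrative numbers, not RESCAN data:

```python
# Sketch: apparent resistivity from a four-electrode (Wenner) measurement.
# Numbers are illustrative only.
import math

def wenner_apparent_resistivity(spacing_m, delta_v, current_a):
    """rho_a = 2 * pi * a * (dV / I) for an equally spaced Wenner array."""
    return 2.0 * math.pi * spacing_m * delta_v / current_a

rho = wenner_apparent_resistivity(spacing_m=0.5, delta_v=0.12, current_a=0.01)
print(f"apparent resistivity = {rho:.1f} ohm-m")
```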

  16. Investigating sensitivity, specificity, and area under the curve of the Clinical COPD Questionnaire, COPD Assessment Test, and Modified Medical Research Council scale according to GOLD using St George's Respiratory Questionnaire cutoff 25 (and 20) as reference

    Directory of Open Access Journals (Sweden)

    Tsiligianni IG

    2016-05-01

    Full Text Available Ioanna G Tsiligianni,1,2 Harma J Alma,1,2 Corina de Jong,1,2 Danijel Jelusic,3 Michael Wittmann,3 Michael Schuler,4 Konrad Schultz,3 Boudewijn J Kollen,1 Thys van der Molen,1,2 Janwillem WH Kocks1,2 1Department of General Practice, 2GRIAC Research Institute, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands; 3Klinik Bad Reichenhall, Center for Rehabilitation, Pulmonology and Orthopedics, Bad Reichenhall, 4Department of Medical Psychology, Psychotherapy and Rehabilitation Sciences, University of Würzburg, Würzburg, Germany Background: In the GOLD (Global initiative for chronic Obstructive Lung Disease) strategy document, the Clinical COPD Questionnaire (CCQ), COPD Assessment Test (CAT), or modified Medical Research Council (mMRC) scale are recommended for the assessment of symptoms using the cutoff points of CCQ ≥1, CAT ≥10, and mMRC scale ≥2 to indicate symptomatic patients. The current study investigates the criterion validity of the CCQ, CAT and mMRC scale based on a reference cutoff point of St George’s Respiratory Questionnaire (SGRQ) ≥25, as suggested by GOLD, following sensitivity and specificity analysis. In addition, areas under the curve (AUCs) of the CCQ, CAT, and mMRC scale were compared using two SGRQ cutoff points (≥25 and ≥20). Materials and methods: Two data sets were used: study A, 238 patients from a pulmonary rehabilitation program; and study B, 101 patients from primary care. Receiver-operating characteristic (ROC) curves were used to assess the correspondence between the recommended cutoff points of the questionnaires. Results: Sensitivity, specificity, and AUC scores for cutoff point SGRQ ≥25 were: study A, 0.99, 0.43, and 0.96 for CCQ ≥1, 0.92, 0.48, and 0.89 for CAT ≥10, and 0.68, 0.91, and 0.91 for mMRC ≥2; study B, 0.87, 0.77, and 0.9 for CCQ ≥1, 0.76, 0.73, and 0.82 for CAT ≥10, and 0.21, 1, and 0.81 for mMRC ≥2. Sensitivity, specificity, and AUC scores for

  17. Investigating sensitivity, specificity, and area under the curve of the Clinical COPD Questionnaire, COPD Assessment Test, and Modified Medical Research Council scale according to GOLD using St George's Respiratory Questionnaire cutoff 25 (and 20) as reference.

    Science.gov (United States)

    Tsiligianni, Ioanna G; Alma, Harma J; de Jong, Corina; Jelusic, Danijel; Wittmann, Michael; Schuler, Michael; Schultz, Konrad; Kollen, Boudewijn J; van der Molen, Thys; Kocks, Janwillem Wh

    2016-01-01

    In the GOLD (Global initiative for chronic Obstructive Lung Disease) strategy document, the Clinical COPD Questionnaire (CCQ), COPD Assessment Test (CAT), or modified Medical Research Council (mMRC) scale are recommended for the assessment of symptoms using the cutoff points of CCQ ≥1, CAT ≥10, and mMRC scale ≥2 to indicate symptomatic patients. The current study investigates the criterion validity of the CCQ, CAT and mMRC scale based on a reference cutoff point of St George's Respiratory Questionnaire (SGRQ) ≥25, as suggested by GOLD, following sensitivity and specificity analysis. In addition, areas under the curve (AUCs) of the CCQ, CAT, and mMRC scale were compared using two SGRQ cutoff points (≥25 and ≥20). Two data sets were used: study A, 238 patients from a pulmonary rehabilitation program; and study B, 101 patients from primary care. Receiver-operating characteristic (ROC) curves were used to assess the correspondence between the recommended cutoff points of the questionnaires. Sensitivity, specificity, and AUC scores for cutoff point SGRQ ≥25 were: study A, 0.99, 0.43, and 0.96 for CCQ ≥1, 0.92, 0.48, and 0.89 for CAT ≥10, and 0.68, 0.91, and 0.91 for mMRC ≥2; study B, 0.87, 0.77, and 0.9 for CCQ ≥1, 0.76, 0.73, and 0.82 for CAT ≥10, and 0.21, 1, and 0.81 for mMRC ≥2. Sensitivity, specificity, and AUC scores for cutoff point SGRQ ≥20 were: study A, 0.99, 0.73, and 0.99 for CCQ ≥1, 0.91, 0.73, and 0.94 for CAT ≥10, and 0.66, 0.95, and 0.94 for mMRC ≥2; study B, 0.8, 0.89, and 0.89 for CCQ ≥1, 0.69, 0.78, and 0.8 for CAT ≥10, and 0.18, 1, and 0.81 for mMRC ≥2. Based on data from these two different samples, this study showed that the suggested cutoff point for the SGRQ (≥25) did not seem to correspond well with the established cutoff points of the CCQ or CAT scales, resulting in low specificity levels. The correspondence with the mMRC scale seemed satisfactory, though not optimal. The SGRQ threshold of ≥20
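    The sensitivity, specificity and AUC figures quoted in these two records come from standard ROC analysis. A generic sketch of that analysis with synthetic questionnaire scores (not the study data) is:

```python
# Sketch: ROC analysis of a questionnaire score against a reference
# classification (e.g. SGRQ >= 25). Data are synthetic.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
reference = rng.integers(0, 2, size=200)                  # 1 = symptomatic by SGRQ
score = reference * 1.2 + rng.normal(1.0, 0.8, size=200)  # e.g. a CCQ-like score

fpr, tpr, thresholds = roc_curve(reference, score)
auc = roc_auc_score(reference, score)

# Sensitivity/specificity at a fixed cutoff, e.g. score >= 1
cutoff = 1.0
pred = (score >= cutoff).astype(int)
sens = np.mean(pred[reference == 1])
spec = np.mean(1 - pred[reference == 0])
print(f"AUC = {auc:.2f}, sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```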

  18. String Sigma Models on Curved Supermanifolds

    Directory of Open Access Journals (Sweden)

    Roberto Catenacci

    2018-04-01

    Full Text Available We use the techniques of integral forms to analyze the easiest example of two-dimensional sigma models on a supermanifold. We write the action as an integral of a top integral form over a D = 2 supermanifold, and we show how to interpolate between different superspace actions. Then, we consider curved supermanifolds, and we show that the definitions used for flat supermanifolds can also be used for curved supermanifolds. We prove it by first considering the case of a curved rigid supermanifold and then the case of a generic curved supermanifold described by a single superfield E.

  19. A fusion of top-down and bottom-up modeling techniques to constrain regional scale carbon budgets

    Science.gov (United States)

    Goeckede, M.; Turner, D. P.; Michalak, A. M.; Vickers, D.; Law, B. E.

    2009-12-01

    The effort to constrain regional scale carbon budgets benefits from assimilating as many high quality data sources as possible in order to reduce uncertainties. Two of the most common approaches used in this field, bottom-up and top-down techniques, both have their strengths and weaknesses, and partly build on very different sources of information to train, drive, and validate the models. Within the context of the ORCA2 project, we follow both bottom-up and top-down modeling strategies with the ultimate objective of reconciling their surface flux estimates. The ORCA2 top-down component builds on a coupled WRF-STILT transport module that resolves the footprint function of a CO2 concentration measurement in high temporal and spatial resolution. Datasets involved in the current setup comprise GDAS meteorology, remote sensing products, VULCAN fossil fuel inventories, boundary conditions from CarbonTracker, and high-accuracy time series of atmospheric CO2 concentrations. Surface fluxes of CO2 are normally provided through a simple diagnostic model which is optimized against atmospheric observations. For the present study, we replaced the simple model with fluxes generated by an advanced bottom-up process model, Biome-BGC, which uses state-of-the-art algorithms to resolve plant-physiological processes, and 'grow' a biosphere based on biogeochemical conditions and climate history. This approach provides a more realistic description of biomass and nutrient pools than is the case for the simple model. The process model ingests various remote sensing data sources as well as high-resolution reanalysis meteorology, and can be trained against biometric inventories and eddy-covariance data. Linking the bottom-up flux fields to the atmospheric CO2 concentrations through the transport module allows evaluating the spatial representativeness of the BGC flux fields, and in that way assimilates more of the available information than either of the individual modeling techniques alone

  20. JUMPING THE CURVE

    Directory of Open Access Journals (Sweden)

    René Pellissier

    2012-01-01

    Full Text Available This paper explores the notion of jumping the curve, following from Handy's S-curve onto a new curve with new rules, policies and procedures. It claims that the curve does not generally lie in wait but has to be invented by leadership. The focus of this paper is the identification (mathematically and inferentially) of that point in time, known as the cusp in catastrophe theory, when it is time to change - pro-actively, pre-actively or reactively. These three scenarios are addressed separately and discussed in terms of the relevance of each.

  1. Construction of calibration curve for accountancy tank

    International Nuclear Information System (INIS)

    Kato, Takayuki; Goto, Yoshiki; Nidaira, Kazuo

    2009-01-01

    Tanks are installed in a reprocessing plant for accounting of nuclear material solutions. Careful measurement of the volume in these tanks is very important for rigorous accounting of nuclear material. A calibration curve relating the volume and the level of solution needs to be constructed, where the level is determined from the differential pressure of dip tubes. Several calibration curve segments are usually employed, but it is not explicitly decided how many segments should be used, where the segment boundaries should be placed, or what the degree of the polynomial curve should be. These parameters, i.e., the segmentation and the degree of the polynomial curve, are mutually interrelated and together determine the performance of the calibration curve. Here we present a construction technique that gives optimum calibration curves, and we describe their characteristics. (author)
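    A minimal sketch of the segmented-polynomial idea discussed above: fit one low-degree polynomial per level segment and evaluate the piecewise curve. The level/volume data, segment boundaries and degree are all illustrative choices, not the optimum construction the paper derives.

```python
# Sketch: piecewise polynomial calibration curve (volume vs. level) for a
# tank, with one polynomial per segment. Calibration data are synthetic.
import numpy as np

level = np.linspace(0, 200, 41)                       # cm of solution
volume = 0.8 * level + 0.002 * level**2 + np.random.normal(0, 0.3, level.size)

segments = [(0, 100), (100, 200)]                     # chosen segment bounds (cm)
degree = 2                                            # chosen polynomial degree
fits = []
for lo, hi in segments:
    m = (level >= lo) & (level <= hi)
    fits.append(((lo, hi), np.poly1d(np.polyfit(level[m], volume[m], degree))))

def calibrated_volume(h):
    for (lo, hi), p in fits:
        if lo <= h <= hi:
            return p(h)
    raise ValueError("level outside calibrated range")

print(round(calibrated_volume(87.5), 2), "(illustrative volume units)")
```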

  2. Large scale applicability of a Fully Adaptive Non-Intrusive Spectral Projection technique: Sensitivity and uncertainty analysis of a transient

    International Nuclear Information System (INIS)

    Perkó, Zoltán; Lathouwers, Danny; Kloosterman, Jan Leen; Hagen, Tim van der

    2014-01-01

    Highlights: • Grid and basis adaptive Polynomial Chaos techniques are presented for S and U analysis. • Dimensionality reduction and incremental polynomial order reduce computational costs. • An unprotected loss of flow transient is investigated in a Gas Cooled Fast Reactor. • S and U analysis is performed with MC and adaptive PC methods, for 42 input parameters. • PC accurately estimates means, variances, PDFs, sensitivities and uncertainties. - Abstract: Since the early years of reactor physics the most prominent sensitivity and uncertainty (S and U) analysis methods in the nuclear community have been adjoint based techniques. While these are very effective for pure neutronics problems due to the linearity of the transport equation, they become complicated when coupled non-linear systems are involved. With the continuous increase in computational power such complicated multi-physics problems are becoming progressively tractable, hence affordable and easily applicable S and U analysis tools also have to be developed in parallel. For reactor physics problems for which adjoint methods are prohibitive Polynomial Chaos (PC) techniques offer an attractive alternative to traditional random sampling based approaches. At TU Delft such PC methods have been studied for a number of years and this paper presents a large scale application of our Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm for performing the sensitivity and uncertainty analysis of a Gas Cooled Fast Reactor (GFR) Unprotected Loss Of Flow (ULOF) transient. The transient was simulated using the Cathare 2 code system and a fully detailed model of the GFR2400 reactor design that was investigated in the European FP7 GoFastR project. Several sources of uncertainty were taken into account amounting to an unusually high number of stochastic input parameters (42) and numerous output quantities were investigated. The results show consistently good performance of the applied adaptive PC

  3. Computational Techniques for Model Predictive Control of Large-Scale Systems with Continuous-Valued and Discrete-Valued Inputs

    Directory of Open Access Journals (Sweden)

    Koichi Kobayashi

    2013-01-01

    Full Text Available We propose computational techniques for model predictive control of large-scale systems with both continuous-valued control inputs and discrete-valued control inputs, which are a class of hybrid systems. In the proposed method, we introduce the notion of virtual control inputs, which are obtained by relaxing discrete-valued control inputs to continuous variables. In online computation, first, we find continuous-valued control inputs and virtual control inputs minimizing a cost function. Next, using the obtained virtual control inputs, only discrete-valued control inputs at the current time are computed in each subsystem. In addition, we also discuss the effect of quantization errors. Finally, the effectiveness of the proposed method is shown by a numerical example. The proposed method enables us to reduce and decentralize the computation load.

  4. Application of the Particle Swarm Optimization (PSO) technique to the thermal-hydraulics project of a PWR reactor core in reduced scale

    International Nuclear Information System (INIS)

    Lima Junior, Carlos Alberto de Souza

    2008-09-01

    Reduced scale model design has been employed by engineers from several different industry fields, such as the offshore, space, oil extraction and nuclear industries, among others. Reduced scale models are used in experiments because they are economically more attractive than their prototypes (real scale): in many cases they are cheaper than a real scale system, and most of the time they are also easier to build, providing a way to guide the real scale design and allowing indirect investigation and analysis of the real scale system (prototype). A reduced scale model (or experiment) must be able to represent all physical phenomena that occur, and will occur, in the real scale system under operational conditions; in this case the reduced scale model is called similar. There are several methods to design a reduced scale model, of which two are basic: the empirical method, based on the expert's skill in determining which physical quantities are relevant to the desired model; and the differential equation method, which is based on a mathematical description of the prototype (real scale system) to be modeled. Applying a mathematical technique to the differential equations that describe the prototype highlights the relevant physical quantities, so the reduced scale model design problem may be treated as an optimization problem. Many optimization techniques, such as the Genetic Algorithm (GA), have been developed to solve this class of problems and have also been applied to the reduced scale model design problem. In this work, the Particle Swarm Optimization (PSO) technique is investigated as an alternative optimization tool for this problem. In this investigation a computational approach, based on the particle swarm optimization (PSO) technique, is used to design a reduced scale two-loop Pressurized Water Reactor (PWR) core, considering 100% nominal power operation with forced cooling circulation and non-accidental operating conditions. A performance comparison
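    For orientation, the sketch below shows the bare mechanics of a PSO loop on a toy objective; the actual thermal-hydraulic similarity criteria optimized in the thesis are not reproduced here.

```python
# Sketch: a bare-bones particle swarm optimization (PSO) loop.
# The objective is a toy function standing in for design similarity criteria.
import numpy as np

def objective(x):
    return np.sum((x - 0.7) ** 2, axis=1)      # minimum at x = 0.7

rng = np.random.default_rng(42)
n_particles, n_dims, iters = 30, 4, 100
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration factors

pos = rng.uniform(0, 1, (n_particles, n_dims))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), objective(pos)
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    val = objective(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best parameters:", gbest.round(3))
```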

  5. Tornado-Shaped Curves

    Science.gov (United States)

    Martínez, Sol Sáez; de la Rosa, Félix Martínez; Rojas, Sergio

    2017-01-01

    In Advanced Calculus, our students wonder if it is possible to graphically represent a tornado by means of a three-dimensional curve. In this paper, we show it is possible by providing the parametric equations of such tornado-shaped curves.
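    One simple family of tornado-shaped space curves is a conical helix whose radius grows with height. The parametrization below is only an illustration of the idea, not necessarily the equations given in the paper.

```python
# Sketch: a tornado-shaped space curve as a conical helix with radius
# growing with height (illustrative parametrization only).
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 20 * np.pi, 4000)
r = 0.05 + 0.08 * t          # radius widens along the curve
x = r * np.cos(t)
y = r * np.sin(t)
z = 0.25 * t                 # height increases along the curve

ax = plt.figure().add_subplot(projection="3d")
ax.plot(x, y, z, linewidth=0.6)
ax.set_title("Tornado-shaped parametric curve")
plt.savefig("tornado_curve.png", dpi=150)
```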

  6. Simulating Supernova Light Curves

    International Nuclear Information System (INIS)

    Even, Wesley Paul; Dolence, Joshua C.

    2016-01-01

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth's atmosphere.

  7. Simulating Supernova Light Curves

    Energy Technology Data Exchange (ETDEWEB)

    Even, Wesley Paul [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dolence, Joshua C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-05

    This report discusses supernova light simulations. A brief review of supernovae, basics of supernova light curves, simulation tools used at LANL, and supernova results are included. Further, it happens that many of the same methods used to generate simulated supernova light curves can also be used to model the emission from fireballs generated by explosions in the earth’s atmosphere.

  8. Tempo curves considered harmful

    NARCIS (Netherlands)

    Desain, P.; Honing, H.

    1993-01-01

    In the literature of musicology, computer music research and the psychology of music, timing or tempo measurements are mostly presented in the form of continuous curves. The notion of these tempo curves is dangerous, despite its widespread use, because it lulls its users into the false impression

  9. Investigation of flow behaviour of coal particles in a pilot-scale fluidized bed gasifier (FBG) using radiotracer technique.

    Science.gov (United States)

    Pant, H J; Sharma, V K; Kamudu, M Vidya; Prakash, S G; Krishanamoorthy, S; Anandam, G; Rao, P Seshubabu; Ramani, N V S; Singh, Gursharan; Sonde, R R

    2009-09-01

    Knowledge of the residence time distribution (RTD), mean residence time (MRT) and degree of axial mixing of the solid phase is required for efficient operation of a coal gasification process. A radiotracer technique was used to measure the RTD of coal particles in a pilot-scale fluidized bed gasifier (FBG). Two different radiotracers, i.e. lanthanum-140 and gold-198 labeled coal particles (100 g), were used independently. The radiotracer was instantaneously injected into the coal feed line and monitored at the ash extraction line at the bottom and the gas outlet at the top of the gasifier using collimated scintillation detectors. The measured RTD data were treated and the MRTs of coal/ash particles were determined. The treated data were simulated using a tanks-in-series model. The simulation of the RTD data indicated a good degree of mixing, with a small fraction of the feed material bypassing/short-circuiting from the bottom of the gasifier. The results of the investigation were found useful for optimizing the design and operation of the FBG and for scale-up of the gasification process.
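    The data-treatment step described above typically computes the mean residence time as the first moment of the normalized tracer curve, MRT = ∫ t E(t) dt, with holdup following from MRT and the volumetric feed rate. A sketch with synthetic detector data and an assumed feed rate:

```python
# Sketch: mean residence time (MRT) and holdup from a measured tracer
# response curve. The concentration data and feed rate are synthetic.
import numpy as np

t = np.linspace(0, 600, 301)                       # s
c = (t / 120.0) * np.exp(-t / 120.0)               # synthetic detector response

e = c / np.trapz(c, t)                             # normalized RTD, E(t)
mrt = np.trapz(t * e, t)                           # mean residence time, s

feed_rate = 2.5e-4                                 # m^3/s (assumed)
holdup = mrt * feed_rate                           # material holdup, m^3
print(f"MRT = {mrt:.0f} s, holdup = {holdup:.3f} m^3")
```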

  10. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    Science.gov (United States)

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  11. Research and realization of ten-print data quality control techniques for imperial scale automated fingerprint identification system

    Directory of Open Access Journals (Sweden)

    Qian Wang

    2017-01-01

    Full Text Available As the first individualization-information processing equipment put into practical service worldwide, the Automated Fingerprint Identification System (AFIS) has always been regarded as the first choice for the individualization of criminal suspects or of those who died in mass disasters. By integrating data from existing regional large-scale AFIS databases, many countries are constructing ultra-large, state-of-the-art AFIS (or imperial scale AFIS) systems. It is therefore very important to develop a series of ten-print data quality control processes for a system of this type, to ensure substantial matching efficiency as data pour into an imperial scale system. As the image quality of ten-print data is closely related to AFIS matching proficiency, many police departments have allocated huge amounts of human and financial resources to this issue by carrying out manual verification work for years. Unfortunately, the quality control method above has proved inadequate because of the astronomical workload involved, and it has always been problematic and prone to potential errors. Moreover, quality control performed in this way introduces a supplementary-acquisition effect caused by the delay of feedback instructions sent from the human verification teams. In this article, a series of fingerprint image quality supervision techniques is put forward, which makes it possible for computer programs to supervise ten-print image quality in a real-time and more accurate manner as a substitute for traditional manual verification. Besides its prominent advantages in human and financial expenditure, the approach has also been shown to clearly improve the image quality of the AFIS ten-print database, which leads to a dramatic improvement in AFIS matching accuracy.

  12. The curve shortening problem

    CERN Document Server

    Chou, Kai-Seng

    2001-01-01

    Although research in curve shortening flow has been very active for nearly 20 years, the results of those efforts have remained scattered throughout the literature. For the first time, The Curve Shortening Problem collects and illuminates those results in a comprehensive, rigorous, and self-contained account of the fundamental results. The authors present a complete treatment of the Gage-Hamilton theorem, a clear, detailed exposition of Grayson's convexity theorem, a systematic discussion of invariant solutions, applications to the existence of simple closed geodesics on a surface, and a new, almost convexity theorem for the generalized curve shortening problem. Many questions regarding curve shortening remain outstanding. With its careful exposition and complete guide to the literature, The Curve Shortening Problem provides not only an outstanding starting point for graduate students and new investigations, but a superb reference that presents intriguing new results for those already active in the field.

  13. Analysis of Grassland Ecosystem Physiology at Multiple Scales Using Eddy Covariance, Stable Isotope and Remote Sensing Techniques

    Science.gov (United States)

    Flanagan, L. B.; Geske, N.; Emrick, C.; Johnson, B. G.

    2006-12-01

    Grassland ecosystems typically exhibit very large annual fluctuations in above-ground biomass production and net ecosystem productivity (NEP). Eddy covariance flux measurements, plant stable isotope analyses, and canopy spectral reflectance techniques have been applied to study environmental constraints on grassland ecosystem productivity and the acclimation responses of the ecosystem at a site near Lethbridge, Alberta, Canada. We have observed substantial interannual variation in grassland productivity during 1999-2005. In addition, there was a strong correlation between peak above-ground biomass production and NEP calculated from eddy covariance measurements. Interannual variation in NEP was strongly controlled by the total amount of precipitation received during the growing season (April-August). We also observed significant positive correlations between a multivariate ENSO index and total growing season precipitation, and between the ENSO index and annual NEP values. This suggested that a significant fraction of the annual variability in grassland productivity was associated with ENSO during 1999-2005. Grassland productivity varies asymmetrically in response to changes in precipitation with increases in productivity during wet years being much more pronounced than reductions during dry years. Strong increases in plant water-use efficiency, based on carbon and oxygen stable isotope analyses, contribute to the resilience of productivity during times of drought. Within a growing season increased stomatal limitation of photosynthesis, associated with improved water-use efficiency, resulted in apparent shifts in leaf xanthophyll cycle pigments and changes to the Photochemical Reflectance Index (PRI) calculated from hyper-spectral reflectance measurements conducted at the canopy-scale. These shifts in PRI were apparent before seasonal drought caused significant reductions in leaf area index (LAI) and changes to canopy-scale "greenness" based on NDVI values. With
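    The two reflectance indices referred to above have standard definitions: PRI = (R531 - R570)/(R531 + R570) and NDVI = (R_NIR - R_red)/(R_NIR + R_red). A trivial sketch with illustrative reflectance values:

```python
# Sketch: Photochemical Reflectance Index (PRI) and NDVI from canopy
# reflectance. Reflectance values are illustrative only.
def pri(r531, r570):
    return (r531 - r570) / (r531 + r570)

def ndvi(r_nir, r_red):
    return (r_nir - r_red) / (r_nir + r_red)

print("PRI :", round(pri(0.041, 0.047), 3))    # shifts with xanthophyll cycle state
print("NDVI:", round(ndvi(0.45, 0.06), 3))     # tracks canopy greenness / LAI
```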

  14. Learning Curve? Which One?

    Directory of Open Access Journals (Sweden)

    Paulo Prochno

    2004-07-01

    Full Text Available Learning curves have been studied for a long time. These studies provided strong support to the hypothesis that, as organizations produce more of a product, unit costs of production decrease at a decreasing rate (see Argote, 1999 for a comprehensive review of learning curve studies. But the organizational mechanisms that lead to these results are still underexplored. We know some drivers of learning curves (ADLER; CLARK, 1991; LAPRE et al., 2000, but we still lack a more detailed view of the organizational processes behind those curves. Through an ethnographic study, I bring a comprehensive account of the first year of operations of a new automotive plant, describing what was taking place in the assembly area during the most relevant shifts of the learning curve. The emphasis is then on how learning occurs in that setting. My analysis suggests that the overall learning curve is in fact the result of an integration process that puts together several individual ongoing learning curves in different areas throughout the organization. In the end, I propose a model to understand the evolution of these learning processes and their supporting organizational mechanisms.

  15. Curved canals: Ancestral files revisited

    Directory of Open Access Journals (Sweden)

    Jain Nidhi

    2008-01-01

    Full Text Available The aim of this article is to provide an insight into different techniques of cleaning and shaping of curved root canals with hand instruments. Although a plethora of root canal instruments like ProFile, ProTaper, LightSpeed ® etc dominate the current scenario, the inexpensive conventional root canal hand files such as K-files and flexible files can be used to get optimum results when handled meticulously. Special emphasis has been put on the modifications in biomechanical canal preparation in a variety of curved canal cases. This article compiles a series of clinical cases of root canals with curvatures in the middle and apical third and with S-shaped curvatures that were successfully completed by employing only conventional root canal hand instruments.

  16. The crime kuznets curve

    OpenAIRE

    Buonanno, Paolo; Fergusson, Leopoldo; Vargas, Juan Fernando

    2014-01-01

    We document the existence of a Crime Kuznets Curve in US states since the 1970s. As income levels have risen, crime has followed an inverted U-shaped pattern, first increasing and then dropping. The Crime Kuznets Curve is not explained by income inequality. In fact, we show that during the sample period inequality has risen monotonically with income, ruling out the traditional Kuznets Curve. Our finding is robust to adding a large set of controls that are used in the literature to explain the...

  17. Fermions in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Lippoldt, Stefan

    2016-01-21

    In this thesis we study a formulation of Dirac fermions in curved spacetime that respects general coordinate invariance as well as invariance under local spin base transformations. We emphasize the advantages of the spin base invariant formalism both from a conceptual as well as from a practical viewpoint. This suggests that local spin base invariance should be added to the list of (effective) properties of (quantum) gravity theories. We find support for this viewpoint by the explicit construction of a global realization of the Clifford algebra on a 2-sphere which is impossible in the spin-base non-invariant vielbein formalism. The natural variables for this formulation are spacetime-dependent Dirac matrices subject to the Clifford-algebra constraint. In particular, a coframe, i.e. vielbein field is not required. We disclose the hidden spin base invariance of the vielbein formalism. Explicit formulas for the spin connection as a function of the Dirac matrices are found. This connection consists of a canonical part that is completely fixed in terms of the Dirac matrices and a free part that can be interpreted as spin torsion. The common Lorentz symmetric gauge for the vielbein is constructed for the Dirac matrices, even for metrics which are not linearly connected. Under certain criteria, it constitutes the simplest possible gauge, demonstrating why this gauge is so useful. Using the spin base formulation for building a field theory of quantized gravity and matter fields, we show that it suffices to quantize the metric and the matter fields. This observation is of particular relevance for field theory approaches to quantum gravity, as it can serve for a purely metric-based quantization scheme for gravity even in the presence of fermions. Hence, in the second part of this thesis we critically examine the gauge, and the field-parametrization dependence of renormalization group flows in the vicinity of non-Gaussian fixed points in quantum gravity. While physical

  18. Bond yield curve construction

    Directory of Open Access Journals (Sweden)

    Kožul Nataša

    2014-01-01

    Full Text Available In the broadest sense, the yield curve indicates the market's view of the evolution of interest rates over time. However, given that the cost of borrowing is closely linked to creditworthiness (ability to repay), different yield curves will apply to different currencies, market sectors, or even individual issuers. As government borrowing is indicative of the interest rate levels available to other market players in a particular country, and considering that bond issuance still remains the dominant form of sovereign debt, this paper describes yield curve construction using bonds. The relationship between zero-coupon yield, par yield and yield to maturity is given, and their usage in determining curve discount factors is described. Their usage in deriving forward rates and pricing related derivative instruments is also discussed.
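    For annual-pay bonds priced at par, discount factors can be bootstrapped from par yields as DF_n = (1 - c_n * sum_{i<n} DF_i)/(1 + c_n), with zero rates and forward rates following from the discount factors. The sketch below uses illustrative par yields and annual compounding; it is a stylized example, not the paper's full construction.

```python
# Sketch: bootstrapping discount factors, zero rates and one-year forward
# rates from annual par yields (illustrative numbers, annual compounding).
par_yields = [0.020, 0.024, 0.028, 0.031]      # 1y..4y par rates

discount_factors = []
for c in par_yields:
    df = (1.0 - c * sum(discount_factors)) / (1.0 + c)
    discount_factors.append(df)

zero_rates = [df ** (-1.0 / (i + 1)) - 1.0 for i, df in enumerate(discount_factors)]
forwards = [discount_factors[i] / discount_factors[i + 1] - 1.0
            for i in range(len(discount_factors) - 1)]

print("discount factors:", [round(d, 4) for d in discount_factors])
print("zero rates      :", [round(z, 4) for z in zero_rates])
print("1y forward rates:", [round(f, 4) for f in forwards])
```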

  19. SRHA calibration curve

    Data.gov (United States)

    U.S. Environmental Protection Agency — a UV calibration curve for SRHA quantitation. This dataset is associated with the following publication: Chang, X., and D. Bouchard. Surfactant-Wrapped Multiwalled...

  20. Bragg Curve Spectroscopy

    International Nuclear Information System (INIS)

    Gruhn, C.R.

    1981-05-01

    An alternative utilization is presented for the gaseous ionization chamber in the detection of energetic heavy ions, which is called Bragg Curve Spectroscopy (BCS). Conceptually, BCS involves using the maximum data available from the Bragg curve of the stopping heavy ion (HI) for purposes of identifying the particle and measuring its energy. A detector has been designed that measures the Bragg curve with high precision. From the Bragg curve the range from the length of the track, the total energy from the integral of the specific ionization over the track, the dE/dx from the specific ionization at the beginning of the track, and the Bragg peak from the maximum of the specific ionization of the HI are determined. This last signal measures the atomic number, Z, of the HI unambiguously

  1. ROBUST DECLINE CURVE ANALYSIS

    Directory of Open Access Journals (Sweden)

    Sutawanir Darwis

    2012-05-01

    Full Text Available Empirical decline curve analysis of oil production data gives reasonable answers in hyperbolic type curve situations; however, the methodology has limitations in fitting real historical production data in the presence of unusual observations due to the effect of treatments applied to the well in order to increase production capacity. The development of robust least squares offers new possibilities for better fitting production data using decline curve analysis by down-weighting the unusual observations. This paper proposes a robust least squares fitting lmRobMM approach to estimate the decline rate of daily production data and compares the results with reservoir simulation results. For the case study, we use the oil production data at TBA Field, West Java. The results demonstrated that the approach is suitable for decline curve fitting and offers new insight into decline curve analysis in the presence of unusual observations.
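    The underlying model is the hyperbolic (Arps) decline curve q(t) = qi/(1 + b*Di*t)^(1/b). The sketch below performs a robust fit by down-weighting outliers with scipy's soft-L1 loss; the paper itself uses an lmRobMM estimator, so this is an analogous technique rather than a reproduction, and the production data are synthetic.

```python
# Sketch: robust fit of a hyperbolic (Arps) decline curve, down-weighting
# unusual observations with a soft-L1 loss. Production data are synthetic.
import numpy as np
from scipy.optimize import least_squares

def arps(t, qi, di, b):
    return qi / (1.0 + b * di * t) ** (1.0 / b)

t = np.arange(0, 720, dtype=float)                       # days
q = arps(t, 1000.0, 0.004, 0.8) + np.random.normal(0, 10, t.size)
q[200:210] += 300.0                                      # "unusual" observations

def residuals(p):
    qi, di, b = p
    return arps(t, qi, di, b) - q

fit = least_squares(residuals, x0=[800.0, 0.01, 0.5],
                    loss="soft_l1", f_scale=20.0,
                    bounds=([1.0, 1e-5, 0.01], [1e5, 1.0, 2.0]))
qi, di, b = fit.x
print(f"qi = {qi:.0f}, Di = {di:.4f} 1/day, b = {b:.2f}")
```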

  2. 3-D optical profilometry at micron scale with multi-frequency fringe projection using modified fibre optic Lloyd's mirror technique

    Science.gov (United States)

    Inanç, Arda; Kösoğlu, Gülşen; Yüksel, Heba; Naci Inci, Mehmet

    2018-06-01

    A new fibre optic Lloyd's mirror method is developed for extracting 3-D height distribution of various objects at the micron scale with a resolution of 4 μm. The fibre optic assembly is elegantly integrated to an optical microscope and a CCD camera. It is demonstrated that the proposed technique is quite suitable and practical to produce an interference pattern with an adjustable frequency. By increasing the distance between the fibre and the mirror with a micrometre stage in the Lloyd's mirror assembly, the separation between the two bright fringes is lowered down to the micron scale without using any additional elements as part of the optical projection unit. A fibre optic cable, whose polymer jacket is partially stripped, and a microfluidic channel are used as test objects to extract their surface topographies. Point by point sensitivity of the method is found to be around 8 μm, changing a couple of microns depending on the fringe frequency and the measured height. A straightforward calibration procedure for the phase to height conversion is also introduced by making use of the vertical moving stage of the optical microscope. The phase analysis of the acquired image is carried out by One Dimensional Continuous Wavelet Transform for which the chosen wavelet is the Morlet wavelet and the carrier removal of the projected fringe patterns is achieved by reference subtraction. Furthermore, flexible multi-frequency property of the proposed method allows measuring discontinuous heights where there are phase ambiguities like 2π by lowering the fringe frequency and eliminating the phase ambiguity.

  3. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Georgieva Yankova, Ginka; Federici, Paolo

    This report describes power curve measurements carried out on a given turbine in a chosen period. The measurements are carried out in accordance with IEC 61400-12-1 Ed. 1 and FGW Teil 2.

  4. Curves and Abelian varieties

    CERN Document Server

    Alexeev, Valery; Clemens, C Herbert; Beauville, Arnaud

    2008-01-01

    This book is devoted to recent progress in the study of curves and abelian varieties. It discusses both classical aspects of this deep and beautiful subject as well as two important new developments, tropical geometry and the theory of log schemes. In addition to original research articles, this book contains three surveys devoted to singularities of theta divisors, of compactified Jacobians of singular curves, and of "strange duality" among moduli spaces of vector bundles on algebraic varieties.

  5. Mentorship, learning curves, and balance.

    Science.gov (United States)

    Cohen, Meryl S; Jacobs, Jeffrey P; Quintessenza, James A; Chai, Paul J; Lindberg, Harald L; Dickey, Jamie; Ungerleider, Ross M

    2007-09-01

    Professionals working in the arena of health care face a variety of challenges as their careers evolve and develop. In this review, we analyze the role of mentorship, learning curves, and balance in overcoming challenges that all such professionals are likely to encounter. These challenges can exist both in professional and personal life. As any professional involved in health care matures, complex professional skills must be mastered, and new professional skills must be acquired. These skills are both technical and judgmental. In most circumstances, these skills must be learned. In 2007, despite the continued need for obtaining new knowledge and learning new skills, the professional and public tolerance for a "learning curve" is much less than in previous decades. Mentorship is the key to success in these endeavours. The success of mentorship is two-sided, with responsibilities for both the mentor and the mentee. The benefits of this relationship must be bidirectional. It is the responsibility of both the student and the mentor to assure this bidirectional exchange of benefit. This relationship requires time, patience, dedication, and to some degree selflessness. This mentorship will ultimately be the best tool for mastering complex professional skills and maturing through various learning curves. Professional mentorship also requires that mentors identify and explicitly teach their mentees the relational skills and abilities inherent in learning the management of the triad of self, relationships with others, and professional responsibilities.Up to two decades ago, a learning curve was tolerated, and even expected, while professionals involved in healthcare developed the techniques that allowed for the treatment of previously untreatable diseases. Outcomes have now improved to the point that this type of learning curve is no longer acceptable to the public. Still, professionals must learn to perform and develop independence and confidence. The responsibility to

  6. Microscopy and Chemical Inversing Techniques to Determine the Photonic Crystal Structure of Iridescent Beetle Scales in the Cerambycidae Family

    Science.gov (United States)

    Richey, Lauren; Gardner, John; Standing, Michael; Jorgensen, Matthew; Bartl, Michael

    2010-10-01

    Photonic crystals (PCs) are periodic structures that manipulate electromagnetic waves by defining allowed and forbidden frequency bands known as photonic band gaps. Despite production of PC structures operating at infrared wavelengths, visible counterparts are difficult to fabricate because periodicities must satisfy the diffraction criteria. As part of an ongoing search for naturally occurring PCs [1], a three-dimensional array of nanoscopic spheres in the iridescent scales of the Cerambycidae insects A. elegans and G. celestis has been found. Such arrays are similar to opal gemstones and self-assembled colloidal spheres which can be chemically inverted to create a lattice-like PC. Through a chemical replication process [2], scanning electron microscopy analysis, sequential focused ion beam slicing and three-dimensional modeling, we analyzed the structural arrangement of the nanoscopic spheres. The study of naturally occurring structures and their inversing techniques into PCs allows for diversity in optical PC fabrication. [1] J.W. Galusha et al., Phys. Rev. E 77 (2008) 050904. [2] J.W. Galusha et al., J. Mater. Chem. 20 (2010) 1277.

  7. Approximation by planar elastic curves

    DEFF Research Database (Denmark)

    Brander, David; Gravesen, Jens; Nørbjerg, Toke Bjerge

    2016-01-01

    We give an algorithm for approximating a given plane curve segment by a planar elastic curve. The method depends on an analytic representation of the space of elastic curve segments, together with a geometric method for obtaining a good initial guess for the approximating curve. A gradient-driven optimization is then used to find the approximating elastic curve.

  8. Power Curve Measurements REWS

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Vesth, Allan

    This report describes the power curve measurements carried out on a given wind turbine in a chosen period. The measurements were carried out following the measurement procedure in the draft of IEC 61400-12-1 Ed.2 [1], with some deviations mostly regarding uncertainty calculation. Here, the reference wind speed used in the power curve is the equivalent wind speed obtained from lidar measurements at several heights between lower and upper blade tip, in combination with a hub height meteorological mast. The measurements have been performed using DTU’s measurement equipment, the analysis
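    One common way to form a rotor equivalent wind speed from multi-height lidar data is to weight the cube of the speed at each height by the fraction of rotor area that height represents, REWS = (sum_i v_i^3 * A_i/A)^(1/3). The speeds and area weights below are illustrative placeholders, not the measured profile or the exact segment geometry prescribed by the standard.

```python
# Sketch: rotor equivalent wind speed (REWS) from lidar measurements at
# several heights, weighted by the rotor-area segment each height represents.
import numpy as np

speeds = np.array([6.8, 7.4, 7.9, 8.3, 8.6])              # m/s at 5 heights
area_weights = np.array([0.12, 0.22, 0.32, 0.22, 0.12])   # A_i / A, sums to 1

rews = (np.sum(area_weights * speeds ** 3)) ** (1.0 / 3.0)
print(f"rotor equivalent wind speed = {rews:.2f} m/s")
```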

  9. Detecting corner points from digital curves

    International Nuclear Information System (INIS)

    Sarfraz, M.

    2011-01-01

    Corners in digital images give important clues for shape representation, recognition, and analysis. Since dominant information regarding shape is usually available at the corners, they provide important features for various real-life applications in disciplines like computer vision, pattern recognition, and computer graphics. Corners are robust features in the sense that they provide important information regarding objects under translation, rotation and scale change. They are also important from the viewpoint of understanding human perception of objects. They play a crucial role in decomposing or describing digital curves. They are also used in scale space theory, image representation, stereo vision, motion tracking, image matching, building mosaics and font designing systems. If the corner points are identified properly, a shape can be represented in an efficient and compact way with sufficient accuracy. Corner detection schemes, based on their applications, can be broadly divided into two categories: binary (suitable for binary images) and gray level (suitable for gray level images). Corner detection approaches for binary images usually involve segmenting the image into regions and extracting boundaries from those regions that contain them. The techniques for gray level images can be categorized into two classes: (a) template based and (b) gradient based. The template based techniques utilize correlation between a sub-image and a template of a given angle. A corner point is selected by finding the maximum of the correlation output. Gradient based techniques require computing the curvature of an edge that passes through a neighborhood in a gray level image. Many corner detection algorithms have been proposed in the literature, and they can be broadly divided into two groups: one detects corner points from grayscale images, and the other relates to boundary-based corner detection. This contribution mainly deals with techniques adopted for the latter approach
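    A minimal boundary-based sketch in the spirit of the curvature approaches described above: estimate curvature along a digital closed curve with finite differences and keep local curvature maxima above a threshold. The test curve and threshold are illustrative only.

```python
# Sketch: curvature-based corner detection on a digital closed curve.
# The test curve is a square boundary; the threshold is illustrative.
import numpy as np

# Build a closed test curve (a square traversed with many points)
s = np.linspace(0, 4, 400, endpoint=False)
x = np.interp(s, [0, 1, 2, 3, 4], [0, 1, 1, 0, 0])
y = np.interp(s, [0, 1, 2, 3, 4], [0, 0, 1, 1, 0])

def curvature(x, y):
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5

k = curvature(x, y)

# Local curvature maxima above a threshold are corner candidates
# (the start/end point of the open parameterization is ignored for simplicity)
thresh = 5.0
corners = [i for i in range(1, len(k) - 1)
           if k[i] > thresh and k[i] >= k[i - 1] and k[i] >= k[i + 1]]
print("corner points at:", [(round(x[i], 2), round(y[i], 2)) for i in corners])
```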

  10. Curved electromagnetic missiles

    International Nuclear Information System (INIS)

    Myers, J.M.; Shen, H.M.; Wu, T.T.

    1989-01-01

    Transient electromagnetic fields can exhibit interesting behavior in the limit of great distances from their sources. In situations of finite total radiated energy, the energy reaching a distant receiver can decrease with distance much more slowly than the usual r - 2 . Cases of such slow decrease have been referred to as electromagnetic missiles. All of the wide variety of known missiles propagate in essentially straight lines. A sketch is presented here of a missile that can follow a path that is strongly curved. An example of a curved electromagnetic missile is explicitly constructed and some of its properties are discussed. References to details available elsewhere are given

  11. Algebraic curves and cryptography

    CERN Document Server

    Murty, V Kumar

    2010-01-01

    It is by now a well-known paradigm that public-key cryptosystems can be built using finite Abelian groups and that algebraic geometry provides a supply of such groups through Abelian varieties over finite fields. Of special interest are the Abelian varieties that are Jacobians of algebraic curves. All of the articles in this volume are centered on the theme of point counting and explicit arithmetic on the Jacobians of curves over finite fields. The topics covered include Schoof's \\ell-adic point counting algorithm, the p-adic algorithms of Kedlaya and Denef-Vercauteren, explicit arithmetic on

  12. IGMtransmission: Transmission curve computation

    Science.gov (United States)

    Harrison, Christopher M.; Meiksin, Avery; Stock, David

    2015-04-01

    IGMtransmission is a Java graphical user interface that implements Monte Carlo simulations to compute the corrections to colors of high-redshift galaxies due to intergalactic attenuation based on current models of the Intergalactic Medium. The effects of absorption due to neutral hydrogen are considered, with particular attention to the stochastic effects of Lyman Limit Systems. Attenuation curves are produced, as well as colors for a wide range of filter responses and model galaxy spectra. Photometric filters are included for the Hubble Space Telescope, the Keck telescope, the Mt. Palomar 200-inch, the SUBARU telescope and UKIRT; alternative filter response curves and spectra may be readily uploaded.

  13. New Techniques Used in Modeling the 2017 Total Solar Eclipse: Energizing and Heating the Large-Scale Corona

    Science.gov (United States)

    Downs, Cooper; Mikic, Zoran; Linker, Jon A.; Caplan, Ronald M.; Lionello, Roberto; Torok, Tibor; Titov, Viacheslav; Riley, Pete; Mackay, Duncan; Upton, Lisa

    2017-08-01

    Over the past two decades, our group has used a magnetohydrodynamic (MHD) model of the corona to predict the appearance of total solar eclipses. In this presentation we detail recent innovations and new techniques applied to our prediction model for the August 21, 2017 total solar eclipse. First, we have developed a method for capturing the large-scale energized fields typical of the corona, namely the sheared/twisted fields built up through long-term processes of differential rotation and flux-emergence/cancellation. Using inferences of the location and chirality of filament channels (deduced from a magnetofrictional model driven by the evolving photospheric field produced by the Advective Flux Transport model), we tailor a customized boundary electric field profile that will emerge shear along the desired portions of polarity inversion lines (PILs) and cancel flux to create long twisted flux systems low in the corona. This method has the potential to improve the morphological shape of streamers in the low solar corona. Second, we apply, for the first time in our eclipse prediction simulations, a new wave-turbulence-dissipation (WTD) based model for coronal heating. This model has substantially fewer free parameters than previous empirical heating models, but is inherently sensitive to the 3D geometry and connectivity of the coronal field---a key property for modeling/predicting the thermal-magnetic structure of the solar corona. Overall, we will examine the effect of these considerations on white-light and EUV observables from the simulations, and present them in the context of our final 2017 eclipse prediction model.Research supported by NASA's Heliophysics Supporting Research and Living With a Star Programs.

  14. Cost curves for implantation of small scale hydroelectric power plant project in function of the average annual energy production; Curvas de custo de implantacao de pequenos projetos hidreletricos em funcao da producao media anual de energia

    Energy Technology Data Exchange (ETDEWEB)

    Veja, Fausto Alfredo Canales; Mendes, Carlos Andre Bulhoes; Beluco, Alexandre

    2008-10-15

    Because of its maturity, small hydropower generation is one of the main energy sources to be considered for electrification of areas far from the national grid. Once a site with hydropower potential is identified, technical and economic studies must be carried out to assess its feasibility. Cost curves are helpful tools in the appraisal of the economic feasibility of this type of project. This paper presents a method to determine initial cost curves as a function of the average energy production of the hydropower plant, by using a set of parametric cost curves and the flow duration curve at the analyzed location. The method is illustrated using information related to 18 pre-feasibility studies made in 2002 in the Central-Atlantic rural region of Nicaragua. (author)
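
    The core of such a method is reading the average annual energy off the flow duration curve and then entering a cost-versus-energy curve with that value. The Python sketch below shows only the first step under simplifying assumptions (constant net head and efficiency, flow capped at the design discharge, no residual-flow constraint); the site numbers are invented.

    import numpy as np

    G = 9.81      # m/s^2
    RHO = 1000.0  # kg/m^3

    def mean_annual_energy(exceedance, flows, design_flow, net_head, efficiency=0.8):
        """Average annual energy (kWh/yr) of a small run-of-river plant, read
        from its flow duration curve with the usable flow capped at the design
        discharge; head and efficiency are assumed constant."""
        p = np.asarray(exceedance, dtype=float)     # 0..1, increasing
        q = np.minimum(np.asarray(flows, dtype=float), design_flow)
        mean_flow = np.sum(0.5 * (q[1:] + q[:-1]) * np.diff(p))   # trapezoid rule
        mean_power_kw = efficiency * RHO * G * net_head * mean_flow / 1000.0
        return mean_power_kw * 8760.0

    # Hypothetical flow duration curve for one candidate site
    p = np.linspace(0.0, 1.0, 11)
    q = [4.0, 2.8, 2.1, 1.7, 1.4, 1.2, 1.0, 0.8, 0.6, 0.4, 0.2]
    energy = mean_annual_energy(p, q, design_flow=1.5, net_head=45.0)
    print(f"average annual energy ~ {energy / 1e6:.2f} GWh/yr")
    # The initial cost would then be read from a parametric cost curve
    # expressed as a function of this energy figure.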

  15. Phonon dispersion curves for CsCN

    International Nuclear Information System (INIS)

    Gaur, N.K.; Singh, Preeti; Rini, E.G.; Galgale, Jyostna; Singh, R.K.

    2004-01-01

    The motivation for the present work comes from the recent publication of phonon dispersion curves (PDCs) of CsCN obtained with the neutron scattering technique. We have applied the extended three-body force shell model (ETSM), incorporating the effect of coupling between the translational modes and the orientation of the cyanide molecules, to the description of the phonon dispersion curves of CsCN between the temperatures 195 and 295 K. Our results on PDCs along the symmetry directions are in good agreement with the experimental data measured with the inelastic neutron scattering technique. (author)

  16. Learning from uncertain curves

    DEFF Research Database (Denmark)

    Mallasto, Anton; Feragen, Aasa

    2017-01-01

    We introduce a novel framework for statistical analysis of populations of nondegenerate Gaussian processes (GPs), which are natural representations of uncertain curves. This allows inherent variation or uncertainty in function-valued data to be properly incorporated in the population analysis. Us...

  17. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance to the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  18. Power Curve Measurements, FGW

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  19. Power Curve Measurements

    DEFF Research Database (Denmark)

    Federici, Paolo; Vesth, Allan

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  20. Power Curve Measurements

    DEFF Research Database (Denmark)

    Villanueva, Héctor; Gómez Arranz, Paula

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  1. Carbon Lorenz Curves

    NARCIS (Netherlands)

    Groot, L.F.M.|info:eu-repo/dai/nl/073642398

    2008-01-01

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across

  2. The Axial Curve Rotator.

    Science.gov (United States)

    Hunter, Walter M.

    This document contains detailed directions for constructing a device that mechanically produces the three-dimensional shape resulting from the rotation of any algebraic line or curve around either axis on the coordinate plane. The device was developed in response to student difficulty in visualizing, and thus grasping, the mathematical principles…

  3. Nacelle lidar power curve

    DEFF Research Database (Denmark)

    Gómez Arranz, Paula; Wagner, Rozenn

    This report describes the power curve measurements performed with a nacelle LIDAR on a given wind turbine in a wind farm and during a chosen measurement period. The measurements and analysis are carried out in accordance to the guidelines in the procedure “DTU Wind Energy-E-0019” [1]. The reporting...

  4. Power curve report

    DEFF Research Database (Denmark)

    Vesth, Allan; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present...

  5. Textbook Factor Demand Curves.

    Science.gov (United States)

    Davis, Joe C.

    1994-01-01

    Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)

  6. ECM using Edwards curves

    NARCIS (Netherlands)

    Bernstein, D.J.; Birkner, P.; Lange, T.; Peters, C.P.

    2013-01-01

    This paper introduces EECM-MPFQ, a fast implementation of the elliptic-curve method of factoring integers. EECM-MPFQ uses fewer modular multiplications than the well-known GMP-ECM software, takes less time than GMP-ECM, and finds more primes than GMP-ECM. The main improvements above the

  7. Power Curve Measurements FGW

    DEFF Research Database (Denmark)

    Federici, Paolo; Kock, Carsten Weber

    The report describes power curve measurements carried out on a given wind turbine. The measurements are carried out in accordance to Ref. [1]. A site calibration has been carried out; see Ref. [2], and the measured flow correction factors for different wind directions are used in the present analysis of the power performance of the turbine.

  8. Comparing effects of land reclamation techniques on water pollution and fishery loss for a large-scale offshore airport island in Jinzhou Bay, Bohai Sea, China.

    Science.gov (United States)

    Yan, Hua-Kun; Wang, Nuo; Yu, Tiao-Lan; Fu, Qiang; Liang, Chen

    2013-06-15

    Plans are being made to construct Dalian Offshore Airport in Jinzhou Bay with a reclamation area of 21 km². The large-scale reclamation can be expected to have negative effects on the marine environment, and these effects vary depending on the reclamation techniques used. Water quality mathematical models were developed and biology resource investigations were conducted to compare effects of an underwater explosion sediment removal and rock dumping technique and a silt dredging and rock dumping technique on water pollution and fishery loss. The findings show that creation of the artificial island with the underwater explosion sediment removal technique would greatly impact the marine environment. However, the impact for the silt dredging technique would be less. The conclusions from this study provide an important foundation for the planning of Dalian Offshore Airport and can be used as a reference for similar coastal reclamation and marine environment protection. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  9. Utilization of curve offsets in additive manufacturing

    Science.gov (United States)

    Haseltalab, Vahid; Yaman, Ulas; Dolen, Melik

    2018-05-01

    Curve offsets are utilized in different fields of engineering and science. Additive manufacturing, which has lately become an explicit requirement in the manufacturing industry, makes wide use of curve offsets. One of the needs for offsetting is scaling, which is required if there is shrinkage after fabrication or if the surface quality of the resulting part is unacceptable; in those cases some post-processing is indispensable. But the major application of curve offsets in additive manufacturing processes is generating head trajectories. In a point-wise AM process, a correct tool-path in each layer can reduce costs considerably and increase the surface quality of the fabricated parts. In this study, different curve offset generation algorithms are analyzed to show their capabilities and disadvantages through some test cases, and improvements on their drawbacks are suggested.

  10. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    Science.gov (United States)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established for the seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely the interannual component with periods shorter than 8 years, the interdecadal component with periods from 8 to 30 years, and a third component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
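
    A minimal Python sketch of the general idea (not the authors' actual filters or predictors, which are not given in the abstract): split the annual FPR series into the three period bands with simple running means, then regress each component on its own predictors and sum the three component forecasts.

    import numpy as np

    def running_mean(x, window):
        """Centred running mean with edge padding, a simple stand-in for the
        band-splitting filter implied by the time-scale decomposition."""
        pad = window // 2
        xp = np.pad(np.asarray(x, dtype=float), pad, mode="edge")
        return np.convolve(xp, np.ones(window) / window, mode="valid")

    def decompose(series):
        """Split an annual series into interannual (<8 yr), interdecadal
        (8-30 yr) and slower (>30 yr) components using odd-length windows."""
        slow = running_mean(series, 31)
        decadal = running_mean(series, 9) - slow
        interannual = series - running_mean(series, 9)
        return interannual, decadal, slow

    def fit_component(component, predictors):
        """Ordinary least-squares fit of one component on its own predictors;
        the full forecast is the sum of the three component predictions."""
        X = np.column_stack([np.ones(len(component)), predictors])
        beta, *_ = np.linalg.lstsq(X, component, rcond=None)
        return beta

    # Synthetic 60-year series standing in for observed FPR (mm)
    years = np.arange(1955, 2015)
    rng = np.random.default_rng(1)
    fpr = (500 + 30 * np.sin(2 * np.pi * years / 3.5)
           + 40 * np.sin(2 * np.pi * years / 15) + rng.normal(0, 10, years.size))
    ia, dec, slow = decompose(fpr)
    print(f"std per band: {ia.std():.1f}, {dec.std():.1f}, {slow.std():.1f} mm")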

  11. The genus curve of the Abell clusters

    Science.gov (United States)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43, −0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with Ωh = 0.36 (+0.46, −0.17).

  12. The genus curve of the Abell clusters

    Science.gov (United States)

    Rhoads, James E.; Gott, J. Richard, III; Postman, Marc

    1994-01-01

    We study the topology of large-scale structure through a genus curve measurement of the recent Abell catalog redshift survey of Postman, Huchra, and Geller (1992). The structure is found to be spongelike near median density and to exhibit isolated superclusters and voids at high and low densities, respectively. The genus curve shows a slight shift toward 'meatball' topology, but remains consistent with the hypothesis of Gaussian random phase initial conditions. The amplitude of the genus curve corresponds to a power-law spectrum with index n = 0.21 (+0.43, −0.47) on scales of 48/h Mpc or to a cold dark matter power spectrum with Ωh = 0.36 (+0.46, −0.17).

  13. Carbon Lorenz Curves

    Energy Technology Data Exchange (ETDEWEB)

    Groot, L. [Utrecht University, Utrecht School of Economics, Janskerkhof 12, 3512 BL Utrecht (Netherlands)

    2008-11-15

    The purpose of this paper is twofold. First, it exhibits that standard tools in the measurement of income inequality, such as the Lorenz curve and the Gini-index, can successfully be applied to the issues of inequality measurement of carbon emissions and the equity of abatement policies across countries. These tools allow policy-makers and the general public to grasp at a single glance the impact of conventional distribution rules such as equal caps or grandfathering, or more sophisticated ones, on the distribution of greenhouse gas emissions. Second, using the Samuelson rule for the optimal provision of a public good, the Pareto-optimal distribution of carbon emissions is compared with the distribution that follows if countries follow Nash-Cournot abatement strategies. It is shown that the Pareto-optimal distribution under the Samuelson rule can be approximated by the equal cap division, represented by the diagonal in the Lorenz curve diagram.
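
    A small Python sketch of the basic computation behind such figures, assuming countries are ranked by per-capita emissions and weighted by population; the four-country numbers are purely illustrative.

    import numpy as np

    def carbon_lorenz(populations, emissions):
        """Population-weighted Lorenz curve and Gini index for per-capita
        carbon emissions across countries."""
        pop = np.asarray(populations, dtype=float)
        emi = np.asarray(emissions, dtype=float)
        order = np.argsort(emi / pop)            # lowest per-capita emitters first
        cum_pop = np.concatenate([[0.0], np.cumsum(pop[order]) / pop.sum()])
        cum_emi = np.concatenate([[0.0], np.cumsum(emi[order]) / emi.sum()])
        # Gini = 1 - 2 * area under the Lorenz curve (trapezoid rule)
        gini = 1.0 - np.sum((cum_pop[1:] - cum_pop[:-1]) * (cum_emi[1:] + cum_emi[:-1]))
        return cum_pop, cum_emi, gini

    # Purely illustrative four-country example (population in millions, MtCO2)
    pop = [1400, 330, 83, 211]
    emi = [10000, 5000, 700, 450]
    _, _, gini = carbon_lorenz(pop, emi)
    print(f"Gini index of emissions: {gini:.2f}")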

  14. Dynamics of curved fronts

    CERN Document Server

    Pelce, Pierre

    1989-01-01

    In recent years, much progress has been made in the understanding of interface dynamics of various systems: hydrodynamics, crystal growth, chemical reactions, and combustion. Dynamics of Curved Fronts is an important contribution to this field and will be an indispensable reference work for researchers and graduate students in physics, applied mathematics, and chemical engineering. The book consists of a 100-page introduction by the editor and 33 seminal articles from various disciplines.

  15. International Wage Curves

    OpenAIRE

    David G. Blanchflower; Andrew J. Oswald

    1992-01-01

    The paper provides evidence for the existence of a negatively sloped locus linking the level of pay to the rate of regional (or industry) unemployment. This "wage curve" is estimated using microeconomic data for Britain, the US, Canada, Korea, Austria, Italy, Holland, Switzerland, Norway, and Germany. The average unemployment elasticity of pay is approximately -0.1. The paper sets out a multi-region efficiency wage model and argues that its predictions are consistent with the data.

  16. Anatomical curve identification

    Science.gov (United States)

    Bowman, Adrian W.; Katina, Stanislav; Smith, Joanna; Brown, Denise

    2015-01-01

    Methods for capturing images in three dimensions are now widely available, with stereo-photogrammetry and laser scanning being two common approaches. In anatomical studies, a number of landmarks are usually identified manually from each of these images and these form the basis of subsequent statistical analysis. However, landmarks express only a very small proportion of the information available from the images. Anatomically defined curves have the advantage of providing a much richer expression of shape. This is explored in the context of identifying the boundary of breasts from an image of the female torso and the boundary of the lips from a facial image. The curves of interest are characterised by ridges or valleys. Key issues in estimation are the ability to navigate across the anatomical surface in three-dimensions, the ability to recognise the relevant boundary and the need to assess the evidence for the presence of the surface feature of interest. The first issue is addressed by the use of principal curves, as an extension of principal components, the second by suitable assessment of curvature and the third by change-point detection. P-spline smoothing is used as an integral part of the methods but adaptations are made to the specific anatomical features of interest. After estimation of the boundary curves, the intermediate surfaces of the anatomical feature of interest can be characterised by surface interpolation. This allows shape variation to be explored using standard methods such as principal components. These tools are applied to a collection of images of women where one breast has been reconstructed after mastectomy and where interest lies in shape differences between the reconstructed and unreconstructed breasts. They are also applied to a collection of lip images where possible differences in shape between males and females are of interest. PMID:26041943

  17. Estimating Corporate Yield Curves

    OpenAIRE

    Antionio Diaz; Frank Skinner

    2001-01-01

    This paper represents the first study of retail deposit spreads of UK financial institutions using stochastic interest rate modelling and the market comparable approach. By replicating quoted fixed deposit rates using the Black Derman and Toy (1990) stochastic interest rate model, we find that the spread between fixed and variable rates of interest can be modeled (and priced) using an interest rate swap analogy. We also find that we can estimate an individual bank deposit yield curve as a spr...

  18. LCC: Light Curves Classifier

    Science.gov (United States)

    Vo, Martin

    2017-08-01

    Light Curves Classifier uses data mining and machine learning to obtain and classify desired objects. This task can be accomplished by attributes of light curves or any time series, including shapes, histograms, or variograms, or by other available information about the inspected objects, such as color indices, temperatures, and abundances. After specifying features which describe the objects to be searched, the software trains on a given training sample, and can then be used for unsupervised clustering for visualizing the natural separation of the sample. The package can be also used for automatic tuning parameters of used methods (for example, number of hidden neurons or binning ratio). Trained classifiers can be used for filtering outputs from astronomical databases or data stored locally. The Light Curve Classifier can also be used for simple downloading of light curves and all available information of queried stars. It natively can connect to OgleII, OgleIII, ASAS, CoRoT, Kepler, Catalina and MACHO, and new connectors or descriptors can be implemented. In addition to direct usage of the package and command line UI, the program can be used through a web interface. Users can create jobs for ”training” methods on given objects, querying databases and filtering outputs by trained filters. Preimplemented descriptors, classifier and connectors can be picked by simple clicks and their parameters can be tuned by giving ranges of these values. All combinations are then calculated and the best one is used for creating the filter. Natural separation of the data can be visualized by unsupervised clustering.

  19. Deep-learnt classification of light curves

    DEFF Research Database (Denmark)

    Mahabal, Ashish; Gieseke, Fabian; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    Astronomy light curves are sparse, gappy, and heteroscedastic. As a result standard time series methods regularly used for financial and similar datasets are of little help and astronomers are usually left to their own instruments and techniques to classify light curves. A common approach is to derive statistical features from the time series and to use machine learning methods, generally supervised, to separate objects into a few of the standard classes. In this work, we transform the time series to two-dimensional light curve representations in order to classify them using modern deep learning techniques. In particular, we show that convolutional neural networks based classifiers work well for broad characterization and classification. We use labeled datasets of periodic variables from CRTS survey and show how this opens doors for a quick classification of diverse classes with several...
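
    The abstract does not give the network architecture or the exact two-dimensional representation, so the following PyTorch sketch is only a minimal stand-in: a two-block convolutional classifier applied to, say, 32x32 binned magnitude-difference/time-difference images, with a hypothetical class count and batch size.

    import torch
    import torch.nn as nn

    class LightCurveCNN(nn.Module):
        """Minimal convolutional classifier for 2-D light-curve representations
        (e.g. binned magnitude-difference vs. time-difference images)."""
        def __init__(self, n_classes, image_size=32):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * (image_size // 4) ** 2, n_classes)

        def forward(self, x):
            x = self.features(x)
            return self.classifier(torch.flatten(x, 1))

    # Shape check with a dummy batch of 8 single-channel 32x32 images
    model = LightCurveCNN(n_classes=5)
    logits = model(torch.randn(8, 1, 32, 32))
    print(logits.shape)  # torch.Size([8, 5])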

  20. Flow characteristics of curved ducts

    Directory of Open Access Journals (Sweden)

    Rudolf P.

    2007-10-01

    Full Text Available Curved channels are very often present in real hydraulic systems, e.g. curved diffusers of hydraulic turbines, S-shaped bulb turbines, fittings, etc. Curvature brings a change of velocity profile, generation of vortices and production of hydraulic losses. Flow simulations using CFD techniques were performed to understand these phenomena. Cases ranging from a single elbow to coupled elbows in U, S and spatial right-angle configurations with circular cross-section were modeled for Re = 60000. Spatial development of the flow was studied and consequently it was deduced that minor losses are connected with the transformation of pressure energy into kinetic energy and vice versa. This transformation is a dissipative process and is reflected in the amount of energy irreversibly lost. The smallest loss coefficient is associated with flow in U-shaped elbows, the largest with flow in S-shaped elbows. Finally, the extent of the flow domain influenced by the presence of curvature was examined. This is important for proper placement of mano- and flowmeters during experimental tests. Simulations were verified against experimental results presented in the literature.

  1. Sound concentration caused by curved surfaces

    NARCIS (Netherlands)

    Vercammen, M.L.S.

    2012-01-01

    In room acoustics the focusing effect of reflections from concave surfaces is a well-known problem. Although curved surfaces are found throughout the history of architecture, the occurrence of concave surfaces has tended to increase in modern architecture, due to new techniques in design, materials

  2. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al

    International Nuclear Information System (INIS)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-01-01

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials

  3. Shock melting method to determine melting curve by molecular dynamics: Cu, Pd, and Al.

    Science.gov (United States)

    Liu, Zhong-Li; Zhang, Xiu-Lu; Cai, Ling-Cang

    2015-09-21

    A melting simulation method, the shock melting (SM) method, is proposed and proved to be able to determine the melting curves of materials accurately and efficiently. The SM method, which is based on the multi-scale shock technique, determines melting curves by preheating and/or prepressurizing materials before shock. This strategy was extensively verified using both classical and ab initio molecular dynamics (MD). First, the SM method yielded the same satisfactory melting curve of Cu with only 360 atoms using classical MD, compared to the results from the Z-method and the two-phase coexistence method. Then, it also produced a satisfactory melting curve of Pd with only 756 atoms. Finally, the SM method combined with ab initio MD cheaply achieved a good melting curve of Al with only 180 atoms, which agrees well with the experimental data and the calculated results from other methods. It turned out that the SM method is an alternative efficient method for calculating the melting curves of materials.

  4. Nonparametric estimation of age-specific reference percentile curves with radial smoothing.

    Science.gov (United States)

    Wan, Xiaohai; Qu, Yongming; Huang, Yao; Zhang, Xiao; Song, Hanping; Jiang, Honghua

    2012-01-01

    Reference percentile curves represent the covariate-dependent distribution of a quantitative measurement and are often used to summarize and monitor dynamic processes such as human growth. We propose a new nonparametric method based on a radial smoothing (RS) technique to estimate age-specific reference percentile curves assuming the underlying distribution is relatively close to normal. We compared the RS method with both the LMS and the generalized additive models for location, scale and shape (GAMLSS) methods using simulated data and found that our method has smaller estimation error than the two existing methods. We also applied the new method to analyze height growth data from children being followed in a clinical observational study of growth hormone treatment, and compared the growth curves between those with growth disorders and the general population. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. Using in-field and remote sensing techniques for the monitoring of small-scale permafrost decline in Northern Quebec

    Science.gov (United States)

    May, Inga; Kim, Jun Su; Spannraft, Kati; Ludwig, Ralf; Hajnsek, Irena; Bernier, Monique; Allard, Michel

    2010-05-01

    Permafrost-affected soils represent about 45% of Canadian arctic and subarctic regions. Under the recently recorded changed climate conditions, the areas located in the discontinuous permafrost zones are likely to be among the most impacted environments. Degradation of palsas and lithalsas, the most distinct permafrost landforms, as well as an extension of wetlands, has been observed during the past decades by several research teams all over the northern Arctic. These alterations, caused by longer and warmer thawing periods, are expected to become more and more frequent in the future. The effects on human beings and on the surrounding sensitive ecosystems are presumed to be momentous and of high relevance. Hence, there is a high demand for new techniques that are able to detect, and possibly even predict, the behavior of the permafrost within a changing environment. The presented study is part of an international research collaboration between LMU, INRS and UL within the framework of ArcticNet. The project intends to develop a monitoring system strongly based on remote sensing imagery and GIS-based data analysis, using a test site located in northern Quebec (Umiujaq, 56°33' N, 76°33' W). It shall be investigated to which extent the interpretation of satellite imagery is feasible to partially substitute costly and difficult geophysical point measurements, and to provide spatial knowledge about the major factors that control permafrost dynamics and ecosystem change. In a first step, these factors, mainly expected to be determined from changes in topography, vegetation cover and snow cover, are identified and validated by means of several consecutive ground truthing initiatives supporting the analysis of multi-sensoral time series of remotely sensed information. Both sources are used to generate and feed different concepts for modeling permafrost dynamics by ways of parameter retrieval and data assimilation. On this poster, the outcomes of the first project

  6. Use of acoustic emission technique to study the spalling behaviour of oxide scales on Ni-10Cr-8Al containing sulphur and/or yttrium impurity

    International Nuclear Information System (INIS)

    Khanna, A.S.; Quadakkers, W.J.; Jonas, H.

    1989-01-01

    It is now well established that the presence of small amounts of sulphur impurity in a NiCrAl-based alloy causes a deleterious effect on their high temperature oxidation behaviour. It is, however, not clear whether the adverse effect is due to a decrease in the spalling resistance of the oxide scale or due to an enhanced scale growth. In order to confirm which of the factors is dominating, two independent experimental techniques were used in the investigation of the oxidation behaviour of Ni-10Cr-8Al containing sulphur- and/or yttrium additions: conventional thermogravimetry, to study the scale growth rates and acoustic emission analysis to study the scale adherence. The results indicated that the dominant factor responsible for the deleterious effect of sulphur impurity on the oxidation of a Ni-10Cr-8Al alloy, was a significant change in the growth rate and the composition of the scale. Addition of yttrium improved the oxidation behaviour, not only by increasing the scale adherence, but also by reducing the scale growth due to gettering of sulphur. (orig.) [de

  7. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    International Nuclear Information System (INIS)

    Hola, Marketa; Kalvoda, Jiri; Novakova, Hana; Skoda, Radek; Kanicky, Viktor

    2011-01-01

    LA-ICP-MS and solution based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution based ICP-MS and EMP analyses. The fact that the results of depth profiling are in a good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  8. Possibilities of LA-ICP-MS technique for the spatial elemental analysis of the recent fish scales: Line scan vs. depth profiling

    Energy Technology Data Exchange (ETDEWEB)

    Hola, Marketa [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Kalvoda, Jiri, E-mail: jkalvoda@centrum.cz [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Novakova, Hana [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic); Skoda, Radek [Department of Geological Sciences, Masaryk University of Brno, Kotlarska 2, 611 37 Brno (Czech Republic); Kanicky, Viktor [Department of Chemistry, Masaryk University of Brno, Kamenice 5, 625 00 Brno (Czech Republic)

    2011-01-01

    LA-ICP-MS and solution based ICP-MS in combination with electron microprobe are presented as a method for the determination of the elemental spatial distribution in fish scales which represent an example of a heterogeneous layered bone structure. Two different LA-ICP-MS techniques were tested on recent common carp (Cyprinus carpio) scales: (a) A line scan through the whole fish scale perpendicular to the growth rings. The ablation crater of 55 μm width and 50 μm depth allowed analysis of the elemental distribution in the external layer. Suitable ablation conditions providing a deeper ablation crater gave average values from the external HAP layer and the collagen basal plate. (b) Depth profiling using spot analysis was tested in fish scales for the first time. Spot analysis allows information to be obtained about the depth profile of the elements at the selected position on the sample. The combination of all mentioned laser ablation techniques provides complete information about the elemental distribution in the fish scale samples. The results were compared with the solution based ICP-MS and EMP analyses. The fact that the results of depth profiling are in a good agreement both with EMP and PIXE results and with the assumed ways of incorporation of the studied elements in the HAP structure suggests a very good potential for this method.

  9. Construction of a Scale-Questionnaire on the Attitude of the Teaching Staff as Opposed to the Educative Innovation by Means of Techniques of Cooperative Work (CAPIC

    Directory of Open Access Journals (Sweden)

    Joan Andrés Traver Martí

    2007-05-01

    Full Text Available In the present work the construction process of a scale-questionnaire to measure the attitude of the teaching staff towards educational innovation by means of cooperative work techniques (CAPIC) is described. Its design and elaboration required, on the one hand, a model for the analysis of attitudes and, on the other, an instrument for measuring them capable of guiding its practical dynamics. The Theory of Reasoned Action of Fishbein and Ajzen (1975, 1980) and summative (Likert) scales have fulfilled this role in both cases.

  10. Uniformization of elliptic curves

    OpenAIRE

    Ülkem, Özge; Ulkem, Ozge

    2015-01-01

    Every elliptic curve E defined over C is analytically isomorphic to C*/q^Z for some q ∊ C*. Similarly, Tate has shown that if E is defined over a p-adic field K, then E is analytically isomorphic to K*/q^Z for some q ∊ K*. Further, the isomorphism E(K̄) ≅ K̄*/q^Z respects the action of the Galois group G(K̄/K), where K̄ is the algebraic closure of K. I will explain the construction of this isomorphism.

  11. Extraction of bioactives from Orthosiphon stamineus using microwave and ultrasound-assisted techniques: Process optimization and scale up.

    Science.gov (United States)

    Chan, Chung-Hung; See, Tiam-You; Yusoff, Rozita; Ngoh, Gek-Cheng; Kow, Kien-Woh

    2017-04-15

    This work demonstrated the optimization and scale up of microwave-assisted extraction (MAE) and ultrasonic-assisted extraction (UAE) of bioactive compounds from Orthosiphon stamineus using energy-based parameters such as absorbed power density and absorbed energy density (APD-AED) and response surface methodology (RSM). The intensive optimum conditions of MAE, obtained at 80% EtOH, 50 mL/g, APD of 0.35 W/mL, AED of 250 J/mL, can be used to determine the optimum conditions of the scale-dependent parameters, i.e. microwave power and treatment time, at various extraction scales (100-300 mL solvent loading). The yields of the up scaled conditions were consistent with less than 8% discrepancy and they were about 91-98% of the Soxhlet extraction yield. By adapting the APD-AED method in the case of UAE, the intensive optimum conditions of the extraction, i.e. 70% EtOH, 30 mL/g, APD of 0.22 W/mL, AED of 450 J/mL, are able to achieve similar scale up results. Copyright © 2016 Elsevier Ltd. All rights reserved.
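
    The practical appeal of the APD-AED description is that the optimum is intensive, so the scale-dependent settings fall out directly. A hedged Python sketch, assuming absorbed power is simply APD times solvent volume and using a hypothetical absorption efficiency to relate absorbed to nominal magnetron power:

    def scale_up_settings(apd_opt, aed_opt, solvent_volume_ml, absorption_efficiency=1.0):
        """Microwave power (W) and treatment time (s) that reproduce the
        intensive optimum (APD, AED) at a new solvent loading."""
        absorbed_power = apd_opt * solvent_volume_ml            # W absorbed by the solvent
        nominal_power = absorbed_power / absorption_efficiency  # magnetron setting
        treatment_time = aed_opt / apd_opt                      # s, scale-independent
        return nominal_power, treatment_time

    # MAE optimum from the abstract: APD = 0.35 W/mL, AED = 250 J/mL
    for volume in (100, 200, 300):
        power, time = scale_up_settings(0.35, 250.0, volume, absorption_efficiency=0.8)
        print(f"{volume} mL: set ~{power:.0f} W for ~{time:.0f} s")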

  12. Roc curves for continuous data

    CERN Document Server

    Krzanowski, Wojtek J

    2009-01-01

    Since ROC curves have become ubiquitous in many application areas, the various advances have been scattered across disparate articles and texts. ROC Curves for Continuous Data is the first book solely devoted to the subject, bringing together all the relevant material to provide a clear understanding of how to analyze ROC curves. The fundamental theory of ROC curves: the book first discusses the relationship between the ROC curve and numerous performance measures and then extends the theory into practice by describing how ROC curves are estimated. Further building on the theory, the authors prese
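
    For readers who want the basic construction, a minimal Python sketch of an empirical ROC curve and its area from continuous scores (ties between scores are not handled specially here):

    import numpy as np

    def roc_curve(scores, labels):
        """Empirical ROC curve (FPR, TPR) and AUC from continuous scores;
        higher scores are taken to indicate the positive class."""
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        order = np.argsort(-scores)                    # descending score
        tps = np.cumsum(labels[order] == 1)
        fps = np.cumsum(labels[order] == 0)
        tpr = np.concatenate([[0.0], tps / max(tps[-1], 1)])
        fpr = np.concatenate([[0.0], fps / max(fps[-1], 1)])
        auc = np.sum(0.5 * (tpr[1:] + tpr[:-1]) * np.diff(fpr))   # trapezoid rule
        return fpr, tpr, auc

    # Toy example: eight cases with known labels
    scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
    labels = [1,   1,   0,   1,   0,    0,   1,   0]
    _, _, auc = roc_curve(scores, labels)
    print(f"AUC = {auc:.2f}")   # 0.75 for this toy data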

  13. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1998-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  14. The composing technique of fast and large scale nuclear data acquisition and control system with single chip microcomputers and PC computers

    International Nuclear Information System (INIS)

    Xu Zurun; Wu Shiying; Liu Haitao; Yao Yangsen; Wang Yingguan; Yang Chaowen

    1997-01-01

    The technique of employing single-chip microcomputers and PC computers to compose a fast and large scale nuclear data acquisition and control system was discussed in detail. The optimum composition mode of this kind of system, the acquisition and control circuit unit based on single-chip microcomputers, the real-time communication methods and the software composition under the Windows 3.2 were also described. One, two and three dimensional spectra measured by this system were demonstrated

  15. From Curve Fitting to Machine Learning

    CERN Document Server

    Zielesny, Achim

    2011-01-01

    The analysis of experimental data has been at the heart of science from its beginnings. But it was the advent of digital computers that allowed the execution of highly non-linear and increasingly complex data analysis procedures - methods that were completely unfeasible before. Non-linear curve fitting, clustering and machine learning belong to these modern techniques, which are a further step towards computational intelligence. The goal of this book is to provide an interactive and illustrative guide to these topics. It concentrates on the road from two dimensional curve fitting to multidimensional clus

  16. Bound states in curved quantum waveguides

    International Nuclear Information System (INIS)

    Exner, P.; Seba, P.

    1987-01-01

    We study a free quantum particle living on a curved planar strip Ω of a fixed width d with Dirichlet boundary conditions. It can serve as a model for electrons in thin films on a cylindrical-type substrate, or in a curved quantum wire. Assuming that the boundary of Ω is infinitely smooth and its curvature decays fast enough at infinity, we prove that a bound state with energy below the first transversal mode exists for all sufficiently small d. A lower bound on the critical width is obtained using the Birman-Schwinger technique. (orig.)

  17. Variability of the Wind Turbine Power Curve

    Directory of Open Access Journals (Sweden)

    Mahesh M. Bandi

    2016-09-01

    Full Text Available Wind turbine power curves are calibrated by turbine manufacturers under requirements stipulated by the International Electrotechnical Commission to provide a functional mapping between the mean wind speed v̄ and the mean turbine power output P̄. Wind plant operators employ these power curves to estimate or forecast wind power generation under given wind conditions. However, it is general knowledge that wide variability exists in these mean calibration values. We first analyse how the standard deviation in wind speed σ_v affects the mean P̄ and the standard deviation σ_P of wind power. We find that the magnitude of wind power fluctuations scales as the square of the mean wind speed. Using data from three planetary locations, we find that the wind speed standard deviation σ_v systematically varies with mean wind speed v̄, and in some instances follows a scaling of the form σ_v = C × v̄^α; C being a constant and α a fractional power. We show that, when applicable, this scaling form provides a minimal parameter description of the power curve in terms of v̄ alone. Wind data from different locations establishes that (in instances when this scaling exists) the exponent α varies with location, owing to the influence of local environmental conditions on wind speed variability. Since manufacturer-calibrated power curves cannot account for variability influenced by local conditions, this variability translates to forecast uncertainty in power generation. We close with a proposal for operators to perform post-installation recalibration of their turbine power curves to account for the influence of local environmental factors on wind speed variability in order to reduce the uncertainty of wind power forecasts. Understanding the relationship between wind's speed and its variability is likely to lead to lower costs for the integration of wind power into the electric grid.
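
    A minimal sketch of how the σ_v = C × v̄^α form can be fitted from site data, using ordinary least squares in log-log space; the numbers below are invented and the binning/averaging the authors apply is omitted.

    import numpy as np

    def fit_sigma_scaling(mean_speeds, sigma_speeds):
        """Fit sigma_v = C * vbar**alpha by least squares in log-log space.
        Returns (C, alpha)."""
        v = np.log(np.asarray(mean_speeds, dtype=float))
        s = np.log(np.asarray(sigma_speeds, dtype=float))
        alpha, logC = np.polyfit(v, s, 1)
        return np.exp(logC), alpha

    # Hypothetical ten-minute statistics (mean speed, standard deviation) in m/s
    vbar  = [4.0, 6.0, 8.0, 10.0, 12.0]
    sigma = [0.55, 0.72, 0.86, 0.98, 1.10]
    C, alpha = fit_sigma_scaling(vbar, sigma)
    print(f"sigma_v ~ {C:.2f} * vbar^{alpha:.2f}")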

  18. Satellite altimetry based rating curves throughout the entire Amazon basin

    Science.gov (United States)

    Paris, A.; Calmant, S.; Paiva, R. C.; Collischonn, W.; Silva, J. S.; Bonnet, M.; Seyler, F.

    2013-05-01

    The Amazonian basin is the largest hydrological basin in the world. In recent years, the basin has experienced an unusual succession of extreme droughts and floods, whose origin is still a matter of debate. Yet the amount of data available is poor, both over time and space scales, due to factors like the basin's size, difficulty of access and so on. One of the major obstacles is to obtain discharge series distributed over the entire basin. Satellite altimetry can be used to improve our knowledge of the hydrological stream flow conditions in the basin through rating curves. Rating curves are mathematical relationships between stage and discharge at a given place. The common way to determine the parameters of the relationship is to compute a non-linear regression between the discharge and stage series. In this study, the discharge data were obtained by simulation through the entire basin using the MGB-IPH model with TRMM Merge input rainfall data and assimilation of gage data, run from 1998 to 2010. The stage dataset is made of ~800 altimetry series at ENVISAT and JASON-2 virtual stations. Altimetry series span between 2002 and 2010. In the present work we present the benefits of using stochastic methods instead of probabilistic ones to determine a dataset of rating curve parameters which are consistent throughout the entire Amazon basin. The rating curve parameters have been computed using a parameter optimization technique based on a Markov Chain Monte Carlo sampler and Bayesian inference scheme. This technique provides an estimate of the best parameters for the rating curve, but also their posterior probability distribution, allowing the determination of a credibility interval for the rating curve. Also included in the rating curve determination is the error on discharge estimates from the MGB-IPH model. These MGB-IPH errors come from either errors in the discharge derived from the gage readings or errors in the satellite rainfall estimates. The present
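
    The standard stage-discharge form behind such rating curves is Q = a·(h − h0)^b, and the abstract describes fitting it with an MCMC sampler under a Bayesian scheme. The sketch below is a bare-bones random-walk Metropolis version with a Gaussian likelihood and flat priors; the virtual-station data, error level and proposal widths are invented, and the study's treatment of model-discharge errors is not reproduced.

    import numpy as np

    def log_posterior(params, stage, discharge, sigma):
        """Log-posterior of Q = a*(h - h0)**b with Gaussian errors and flat
        priors restricted to physically admissible parameter values."""
        a, b, h0 = params
        if a <= 0.0 or b <= 0.0 or np.any(stage - h0 <= 0.0):
            return -np.inf
        pred = a * (stage - h0) ** b
        return -0.5 * np.sum(((discharge - pred) / sigma) ** 2)

    def metropolis(stage, discharge, sigma, start, steps=20000,
                   proposal=(10.0, 0.05, 0.05)):
        """Random-walk Metropolis sampler for the rating-curve parameters."""
        rng = np.random.default_rng(0)
        current = np.array(start, dtype=float)
        current_lp = log_posterior(current, stage, discharge, sigma)
        samples = np.empty((steps, 3))
        for i in range(steps):
            candidate = current + rng.normal(0.0, proposal)
            candidate_lp = log_posterior(candidate, stage, discharge, sigma)
            if np.log(rng.random()) < candidate_lp - current_lp:
                current, current_lp = candidate, candidate_lp
            samples[i] = current
        return samples

    # Invented altimetric stages (m) and modelled discharges (m^3/s)
    h = np.array([10.2, 11.0, 12.1, 13.4, 14.0, 15.2])
    q = np.array([2100.0, 2900.0, 4200.0, 6400.0, 7600.0, 10100.0])
    chain = metropolis(h, q, sigma=500.0, start=(300.0, 1.8, 8.0))
    a, b, h0 = chain[len(chain) // 2:].mean(axis=0)   # posterior means after burn-in
    print(f"Q ~ {a:.0f} * (h - {h0:.2f})^{b:.2f}")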

  19. Scale-up from microtiter plate to laboratory fermenter: evaluation by online monitoring techniques of growth and protein expression in Escherichia coli and Hansenula polymorpha fermentations

    Directory of Open Access Journals (Sweden)

    Engelbrecht Christoph

    2009-12-01

    Full Text Available Abstract Background In the past decade, an enormous number of new bioprocesses have evolved in the biotechnology industry. These bioprocesses have to be developed fast and at a maximum productivity. Up to now, only a few microbioreactors have been developed to fulfill these demands and to facilitate sample processing. One predominant reaction platform is the shaken microtiter plate (MTP), which provides high-throughput at minimal expenses in time, money and work effort. By taking advantage of this simple and efficient microbioreactor array, a new online monitoring technique for biomass and fluorescence, called BioLector, has been recently developed. The combination of high-throughput and high information content makes the BioLector a very powerful tool in bioprocess development. Nevertheless, the scalability of results from the micro-scale to laboratory or even larger scales is very important for short development times. Therefore, engineering parameters regarding the reactor design and its operation conditions play an important role even on a micro-scale. In order to evaluate the scale-up from a microtiter plate scale (200 μL) to a stirred tank fermenter scale (1.4 L), two standard microbial expression systems, Escherichia coli and Hansenula polymorpha, were fermented in parallel at both scales and compared with regard to biomass and protein formation. Results Volumetric mass transfer coefficients (kLa) ranging from 100 to 350 1/h were obtained in 96-well microtiter plates. Even with a suboptimal mass transfer condition in the microtiter plate compared to the stirred tank fermenter (kLa = 370-600 1/h), identical growth and protein expression kinetics were attained in bacteria and yeast fermentations. The bioprocess kinetics were evaluated by optical online measurements of biomass and protein concentrations, exhibiting the same fermentation times and maximum signal deviations below 10% between the scales. In the experiments, the widely applied green

  20. Calculating Soil Wetness, Evapotranspiration and Carbon Cycle Processes Over Large Grid Areas Using a New Scaling Technique

    Science.gov (United States)

    Sellers, Piers

    2012-01-01

    Soil wetness typically shows great spatial variability over the length scales of general circulation model (GCM) grid areas (approx. 100 km), and the functions relating evapotranspiration and photosynthetic rate to local-scale (approx. 1 m) soil wetness are highly non-linear. Soil respiration is also highly dependent on very small-scale variations in soil wetness. We therefore expect significant inaccuracies whenever we insert a single grid area-average soil wetness value into a function to calculate any of these rates for the grid area. For the particular case of evapotranspiration, this method - use of a grid-averaged soil wetness value - can also provoke severe oscillations in the evapotranspiration rate and soil wetness under some conditions. A method is presented whereby the probability distribution function (pdf) for soil wetness within a grid area is represented by binning, and numerical integration of the binned pdf is performed to provide a spatially-integrated wetness stress term for the whole grid area, which then permits calculation of grid area fluxes in a single operation. The method is very accurate when 10 or more bins are used, can deal realistically with spatially variable precipitation, conserves moisture exactly and allows for precise modification of the soil wetness pdf after every time step. The method could also be applied to other ecological problems where small-scale processes must be area-integrated, or upscaled, to estimate fluxes over large areas, for example in treatments of the terrestrial carbon budget or trace gas generation.
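
    A toy numerical illustration of why the binned-pdf integration matters; the stress function and the wetness distribution below are invented, and the point is only that a nonlinear function evaluated at the mean differs from the mean of the function.

    import numpy as np

    def stress_factor(wetness):
        """Hypothetical nonlinear soil-moisture stress function (0 = fully
        stressed, 1 = unstressed): transpiration shuts down sharply below
        a wetness of about 0.4."""
        return np.clip((wetness - 0.2) / (0.4 - 0.2), 0.0, 1.0) ** 2

    def grid_average_stress(bin_centres, bin_fractions):
        """Area-integrated stress term from a binned soil-wetness pdf."""
        return np.sum(np.asarray(bin_fractions) * stress_factor(np.asarray(bin_centres)))

    # Ten wetness bins covering the grid cell, with most of the area fairly dry
    centres   = np.linspace(0.05, 0.95, 10)
    fractions = np.array([0.05, 0.15, 0.20, 0.20, 0.15, 0.10, 0.06, 0.04, 0.03, 0.02])
    mean_wetness = np.sum(centres * fractions)

    print(f"stress from binned pdf : {grid_average_stress(centres, fractions):.2f}")
    print(f"stress from mean value : {stress_factor(mean_wetness):.2f}")
    # The two differ noticeably because the stress function is nonlinear in wetness.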

  1. Scaled experiments using the helium technique to study the vehicular blockage effect on longitudinal ventilation control in tunnels

    DEFF Research Database (Denmark)

    Alva, Wilson Ulises Rojas; Jomaas, Grunde; Dederichs, Anne

    2015-01-01

    A model tunnel (1:30 compared to a standard tunnel section) with a helium-air smoke mixture was used to study the vehicular blockage effect on longitudinal ventilation smoke control. The experimental results showed excellent agreement with full-scale data and confirmed that the critical velocity...

  2. Repeatability of riparian vegetation sampling methods: how useful are these techniques for broad-scale, long-term monitoring?

    Science.gov (United States)

    Marc C. Coles-Ritchie; Richard C. Henderson; Eric K. Archer; Caroline Kennedy; Jeffrey L. Kershner

    2004-01-01

    Tests were conducted to evaluate variability among observers for riparian vegetation data collection methods and data reduction techniques. The methods are used as part of a large-scale monitoring program designed to detect changes in riparian resource conditions on Federal lands. Methods were evaluated using agreement matrices, the Bray-Curtis dissimilarity metric, the...

  3. The use of a resource-based relative value scale (RBRVS) to determine practice expense costs: a novel technique of practice management for the vascular surgeon.

    Science.gov (United States)

    Mabry, C D

    2001-03-01

    Vascular surgeons have had to contend with rising costs while their reimbursements have undergone steady reductions. The use of newer accounting techniques can help vascular surgeons better manage their practices, plan for future expansion, and control costs. This article reviews traditional accounting methods, together with activity-based costing (ABC) principles that have been used in the past for practice expense analysis. The main focus is on a new technique, resource-based costing (RBC), which uses the widely available Resource-Based Relative Value Scale (RBRVS) as its basis. The RBC technique promises easier implementation as well as more flexibility in determining true costs of performing various procedures, as opposed to more traditional accounting methods. It is hoped that RBC will assist vascular surgeons in coping with decreasing reimbursement. Copyright 2001 by W.B. Saunders Company

  4. Curved Josephson junction

    International Nuclear Information System (INIS)

    Dobrowolski, Tomasz

    2012-01-01

    A Josephson junction of constant curvature, in one and quasi-one dimensions, is considered. On the basis of the Maxwell equations, a sine–Gordon equation describing the influence of curvature on kink motion was obtained. It is shown that the method of geometrical reduction of the sine–Gordon model from a three-dimensional to a lower-dimensional manifold leads to an identical form of the sine–Gordon equation. - Highlights: ► The research on dynamics of the phase in a curved Josephson junction is performed. ► The geometrical reduction is applied to the sine–Gordon model. ► The results of geometrical reduction and the fundamental research are compared.

  5. A simple transformation for converting CW-OSL curves to LM-OSL curves

    DEFF Research Database (Denmark)

    Bulur, E.

    2000-01-01

    A simple mathematical transformation is introduced to convert from OSL decay curves obtained in the conventional way to those obtained using a linear modulation technique based on a linear increase of the stimulation light intensity during OSL measurement. The validity of the transformation was tested by the IR-stimulated luminescence curves from feldspars, recorded using both the conventional and the linear modulation techniques. The transformation was further applied to green-light-stimulated OSL from K and Na feldspars. (C) 2000 Elsevier Science Ltd. All rights reserved.
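
    The transformation in question is usually written with a transformed time axis u = sqrt(2tP) and the intensity rescaled by u/P, P being the total stimulation time of the pseudo-LM curve. The Python sketch below applies that published form to a synthetic first-order CW-OSL decay; it is an illustration of the idea, not a substitute for the paper's own definitions.

    import numpy as np

    def cw_to_pseudo_lm(t, i_cw, total_time=None):
        """Convert a CW-OSL decay curve into a pseudo-LM-OSL curve:
        u = sqrt(2*t*P) and I_LM(u) = I_CW(t) * u / P."""
        t = np.asarray(t, dtype=float)
        i_cw = np.asarray(i_cw, dtype=float)
        P = total_time if total_time is not None else t[-1]
        u = np.sqrt(2.0 * t * P)
        return u, i_cw * u / P

    # Synthetic first-order CW-OSL decay with detrapping rate b = 0.5 1/s
    t = np.linspace(0.01, 100.0, 1000)
    i_cw = np.exp(-0.5 * t)
    u, i_lm = cw_to_pseudo_lm(t, i_cw)
    # i_lm rises to a peak and falls off, as a measured LM-OSL curve would.
    print(f"pseudo-LM peak at u ~ {u[np.argmax(i_lm)]:.1f} s")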

  6. Curved-Duct

    Directory of Open Access Journals (Sweden)

    Je Hyun Baekt

    2000-01-01

    Full Text Available A numerical study is conducted on the fully-developed laminar flow of an incompressible viscous fluid in a square duct rotating about an axis perpendicular to the axial direction of the duct. In the straight duct, the rotation produces vortices due to the Coriolis force. Generally two vortex cells are formed and the axial velocity distribution is distorted by the effect of this Coriolis force. When the convective force is weak, two counter-rotating vortices are shown with a quasi-parabolic axial velocity profile for weak rotation rates. As the rotation rate increases, the axial velocity on the vertical centreline of the duct begins to flatten and the location of the vorticity centre moves nearer to the wall under the effect of the Coriolis force. When the convective inertia force is strong, a double-vortex secondary flow appears in the transverse planes of the duct for weak rotation rates, but as the speed of rotation increases the secondary flow is shown to split into an asymmetric configuration of four counter-rotating vortices. If the rotation rate is increased further, the secondary flow restabilizes to a slightly asymmetric double-vortex configuration. A numerical study is also conducted on the laminar flow of an incompressible viscous fluid in a 90°-bend square duct that rotates about an axis parallel to the axial direction of the inlet. In the 90°-bend square duct, the flow is shaped by both the Coriolis and the centrifugal forces: a secondary flow is driven by the centrifugal force in the curved region and by the Coriolis force in the downstream region, since these forces are dominant in the respective regions.

  7. Elliptic curves for applications (Tutorial)

    NARCIS (Netherlands)

    Lange, T.; Bernstein, D.J.; Chatterjee, S.

    2011-01-01

    More than 25 years ago, elliptic curves over finite fields were suggested as a group in which the Discrete Logarithm Problem (DLP) can be hard. Since then many researchers have scrutinized the security of the DLP on elliptic curves with the result that for suitably chosen curves only exponential

  8. Titration Curves: Fact and Fiction.

    Science.gov (United States)

    Chamberlain, John

    1997-01-01

    Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…

  9. An interactive editor for curve-skeletons: SkeletonLab

    OpenAIRE

    Barbieri, Simone; Meloni, P.; Usai, F.; Spano, L.D.; Scateni, R.

    2016-01-01

    Curve-skeletons are powerful shape descriptors able to provide higher level information on topology, structure and semantics of a given digital object. Their range of application is wide and encompasses computer animation, shape matching, modelling and remeshing. While a universally accepted definition of curve-skeleton is still lacking, there are currently many algorithms for the curve-skeleton computation (or skeletonization) as well as different techniques for building a mesh around a given...

  10. Techniques for Large-Scale Bacterial Genome Manipulation and Characterization of the Mutants with Respect to In Silico Metabolic Reconstructions.

    Science.gov (United States)

    diCenzo, George C; Finan, Turlough M

    2018-01-01

    The rate at which all genes within a bacterial genome can be identified far exceeds the ability to characterize these genes. To assist in associating genes with cellular functions, a large-scale bacterial genome deletion approach can be employed to rapidly screen tens to thousands of genes for desired phenotypes. Here, we provide a detailed protocol for the generation of deletions of large segments of bacterial genomes that relies on the activity of a site-specific recombinase. In this procedure, two recombinase recognition target sequences are introduced into known positions of a bacterial genome through single cross-over plasmid integration. Subsequent expression of the site-specific recombinase mediates recombination between the two target sequences, resulting in the excision of the intervening region and its loss from the genome. We further illustrate how this deletion system can be readily adapted to function as a large-scale in vivo cloning procedure, in which the region excised from the genome is captured as a replicative plasmid. We next provide a procedure for the metabolic analysis of bacterial large-scale genome deletion mutants using the Biolog Phenotype MicroArray™ system. Finally, a pipeline is described, and a sample Matlab script is provided, for the integration of the obtained data with a draft metabolic reconstruction for the refinement of the reactions and gene-protein-reaction relationships in a metabolic reconstruction.

  11. Pyrolysis as a technique for separating heavy metals from hyperaccumulators. Part II: Lab-scale pyrolysis of synthetic hyperaccumulator biomass

    International Nuclear Information System (INIS)

    Koppolu, Lakshmi; Agblevor, F.A.; Clements, L.D.

    2003-01-01

    Synthetic hyperaccumulator biomass (SHB) impregnated with Ni, Zn, Cu, Co or Cr was used to conduct 11 experiments in a lab-scale fluidized bed reactor. Two runs with blank corn stover, with no metal added, were also conducted. The reactor was operated in an entrained mode in an oxygen-free (N₂) environment at 873 K and 1 atm. The apparent gas residence time through the lab-scale reactor was 0.6 s at 873 K. The material balance for the lab-scale experiments on an N₂-free basis varied between 81% and 98%. The presence of a heavy metal in the SHB decreased the char yield and increased the tar yield, compared to the blank. The char and gas yields appeared to depend on the form of the metal salt used to prepare the SHB. However, the metal distribution in the product streams did not seem to be influenced by the chemical form of the metal salt used to prepare the SHB. Greater than 98.5% of the metal in the product stream was concentrated in the char formed by pyrolyzing and gasifying the SHB in the reactor. The metal concentration in the char varied between 0.7 and 15.3% depending on the type of metal in the SHB. However, the metal concentration was increased 4 to 6 times in the char compared to the feed

  12. A review of the processes and lab-scale techniques for the treatment of spent rechargeable NiMH batteries

    Science.gov (United States)

    Innocenzi, Valentina; Ippolito, Nicolò Maria; De Michelis, Ida; Prisciandaro, Marina; Medici, Franco; Vegliò, Francesco

    2017-09-01

    The purpose of this work is to describe and review the current status of the recycling technologies for spent NiMH batteries. In the first part of the work, the structure and characterization of NiMH accumulators are introduced, followed by a description of the main scientific studies and industrial processes. Various recycling routes, including physical, pyrometallurgical and hydrometallurgical ones, are discussed. The hydrometallurgical methods for the recovery of base metals and rare earths have mainly been developed at the laboratory and pilot scale. The operating industrial methods are pyrometallurgical and are efficient only for the recovery of certain components of spent batteries. In particular, fractions rich in nickel and other materials are recovered, whereas the rare earths are lost in the slag and must be further refined by a hydrometallurgical process to recover them. Considering the current legislation on the disposal of spent batteries and the issue of preserving raw materials, laboratory-scale implementations and plant optimization studies should be conducted in order to overcome the industrial problems of scaling up the hydrometallurgical processes.

  13. A Journey Between Two Curves

    Directory of Open Access Journals (Sweden)

    Sergey A. Cherkis

    2007-03-01

    Full Text Available A typical solution of an integrable system is described in terms of a holomorphic curve and a line bundle over it. The curve provides the action variables while the time evolution is a linear flow on the curve's Jacobian. Even though the system of Nahm equations is closely related to the Hitchin system, the curves appearing in these two cases have very different nature. The former can be described in terms of some classical scattering problem while the latter provides a solution to some Seiberg-Witten gauge theory. This note identifies the setup in which one can formulate the question of relating the two curves.

  14. Identification of variables for site calibration and power curve assessment in complex terrain. Task 8, a literature survey on theory and practice of parameter identification, specification and estimation (ISE) techniques

    Energy Technology Data Exchange (ETDEWEB)

    Verhoef, J.P.; Leendertse, G.P. [ECN Wind, Petten (Netherlands)

    2001-04-01

    This document presents the results of a literature survey on Identification, Specification and Estimation (ISE) techniques for variables within the SiteParIden project. Besides an overview of the different general techniques, an overview is also given of EU-funded wind energy projects in which some of these techniques have been applied. The main problem in applications like power performance assessment and site calibration is to establish an appropriate model for predicting the considered dependent variable with the aid of measured independent (explanatory) variables. In these applications, detailed knowledge of which variables are relevant and of how precisely they should appear in the model is typically missing. Therefore, the identification (of variables) and the specification (of the model relation) are important steps in the model building phase. For the determination of the parameters in the model, a reliable estimation technique is required. In EU-funded wind energy projects the linear regression technique is the most commonly applied tool for the estimation step. The linear regression technique may fail to find reliable parameter estimates when the model variables are strongly correlated, either due to the experimental set-up or because of their particular appearance in the model. This situation of multicollinearity sometimes results in unrealistic parameter values, e.g. with the wrong algebraic sign. It is concluded that different approaches, like multi-binning, can provide a better way of identifying the relevant variables. However, further research in these applications is needed and it is recommended that alternative methods (neural networks, singular value decomposition etc.) should also be tested on their usefulness in a succeeding project. Increased interest in complex terrains, as feasible locations for wind farms, has also emphasised the need for adequate models. A common standard procedure to prescribe the statistical
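    The central caution above is multicollinearity among explanatory variables. Purely as an illustration of one standard screening step (not a method prescribed by the SiteParIden project), the sketch below computes variance inflation factors for a set of candidate regressors before a linear regression; the variable names and synthetic data are assumptions.

```python
import numpy as np

def variance_inflation_factors(X):
    """VIF for each column of X: 1/(1-R^2) of that column regressed on the others."""
    n, p = X.shape
    vifs = []
    for j in range(p):
        y = X[:, j]
        others = np.delete(X, j, axis=1)
        A = np.column_stack([np.ones(n), others])        # intercept + remaining regressors
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        r2 = 1.0 - resid.var() / y.var()
        vifs.append(1.0 / max(1.0 - r2, 1e-12))
    return np.array(vifs)

rng = np.random.default_rng(0)
wind_speed = rng.uniform(4, 16, 200)                      # m/s (synthetic)
turbulence = 0.1 * wind_speed + rng.normal(0, 0.05, 200)  # strongly correlated with wind speed
shear = rng.normal(0.2, 0.05, 200)                        # roughly independent regressor

X = np.column_stack([wind_speed, turbulence, shear])
print("VIFs:", variance_inflation_factors(X))             # large VIFs flag collinear variables
```

    Large VIFs for the first two columns would flag exactly the kind of collinear pair that can produce parameter estimates with the wrong algebraic sign.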

  15. A neural network for the Bragg synthetic curves recognition

    International Nuclear Information System (INIS)

    Reynoso V, M.R.; Vega C, J.J.; Fernandez A, J.; Belmont M, E.; Policroniades R, R.; Moreno B, E.

    1996-01-01

    An ionization chamber technique named Bragg curve spectroscopy was employed. The Bragg peak amplitude is a monotonically increasing function of Z, which allows elements to be identified through its measurement. A better approach to this measurement is the use of neural networks for the identification of the Bragg curve. (Author)
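    The record gives no details of the network itself. A toy sketch of the general idea, using scikit-learn's MLPClassifier on synthetic curves whose Bragg-peak amplitude grows with Z, is shown below; the feature construction, Z values and network size are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

def synthetic_bragg_curve(z, n_channels=64):
    """Very rough stand-in for a digitized Bragg curve whose peak grows with Z."""
    x = np.linspace(0, 1, n_channels)
    peak_pos = 0.7 + 0.02 * rng.normal()
    curve = z * np.exp(-((x - peak_pos) ** 2) / 0.005) + 0.5 * z * x
    return curve + rng.normal(0, 0.3, n_channels)          # add detector noise

elements = [6, 7, 8, 10]                                    # example Z values (assumed)
X = np.array([synthetic_bragg_curve(z) for z in elements for _ in range(200)])
y = np.repeat(elements, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```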

  16. Graph Theory-Based Technique for Isolating Corrupted Boundary Conditions in Continental-Scale River Network Hydrodynamic Simulation

    Science.gov (United States)

    Yu, C. W.; Hodges, B. R.; Liu, F.

    2017-12-01

    Development of continental-scale river network models creates challenges where the massive amount of boundary condition data encounters the sensitivity of a dynamic numerical model. The topographic data sets used to define the river channel characteristics may include either corrupt data or complex configurations that cause instabilities in a numerical solution of the Saint-Venant equations. For local-scale river models (e.g. HEC-RAS), modelers typically rely on past experience to make ad hoc boundary condition adjustments that ensure a stable solution - the proof of the adjustment is merely the stability of the solution. To date, there do not exist any formal methodologies or automated procedures for a priori detecting/fixing boundary conditions that cause instabilities in a dynamic model. Formal methodologies for data screening and adjustment are a critical need for simulations with a large number of river reaches that draw their boundary condition data from a wide variety of sources. At the continental scale, we simply cannot assume that we will have access to river-channel cross-section data that has been adequately analyzed and processed. Herein, we argue that problematic boundary condition data for unsteady dynamic modeling can be identified through numerical modeling with the steady-state Saint-Venant equations. The fragility of numerical stability increases with the complexity of branching in the river network system, and instabilities (even in an unsteady solution) are typically triggered by the nonlinear advection term in the Saint-Venant equations. It follows that the behavior of the simpler steady-state equations (which retain the nonlinear term) can be used to screen the boundary condition data for problematic regions. In this research, we propose a graph-theory based method to isolate the location of corrupted boundary condition data in a continental-scale river network and demonstrate its utility with a network of O(10^4) elements.
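    The abstract describes the strategy but not the algorithm. Purely as an illustration of the graph-theoretic part of the idea, the sketch below represents a river network as a directed graph and isolates the sub-network upstream of any reach flagged by a (stubbed) steady-state screen; the library choice, toy network and screening rule are assumptions.

```python
import networkx as nx

# Toy river network: edges point downstream (reach -> next reach)
G = nx.DiGraph()
G.add_edges_from([("r1", "r3"), ("r2", "r3"), ("r3", "r5"),
                  ("r4", "r5"), ("r5", "r6")])

def steady_state_screen(reach):
    """Stub for a steady-state Saint-Venant screen; here it simply flags one reach."""
    return reach == "r3"          # assumed corrupted boundary-condition data

flagged = {r for r in G.nodes if steady_state_screen(r)}

# Isolate every reach whose boundary data could be implicated: the flagged
# reaches plus everything upstream of them (their ancestors in the graph).
suspect = set(flagged)
for r in flagged:
    suspect |= nx.ancestors(G, r)

print("reaches to re-examine:", sorted(suspect))
```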

  17. Some Examples of Residence-Time Distribution Studies in Large-Scale Chemical Processes by Using Radiotracer Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bullock, R. M.; Johnson, P.; Whiston, J. [Imperial Chemical Industries Ltd., Billingham, Co., Durham (United Kingdom)

    1967-06-15

    The application of radiotracers to determine flow patterns in chemical processes is discussed with particular reference to the derivation of design data from model reactors for translation to large-scale units, the study of operating efficiency and design attainment in established plant, and the rapid identification of various types of process malfunction. The requirements governing the selection of tracers for various types of media are considered and an example is given of the testing of the behaviour of a typical tracer before use in a particular large-scale process operating at 250 atm and 200 °C. Information which may be derived from flow patterns is discussed, including the determination of mixing parameters, gas hold-up in gas/liquid reactions and the detection of channelling and stagnant regions. Practical results and their interpretation are given in relation to an olefin hydroformylation reaction system, a process for the conversion of propylene to isopropanol, a moving bed catalyst system for the isomerization of xylenes and a three-stage gas-liquid reaction system. The use of mean residence-time data for the detection of leakage between reaction vessels and a heat interchanger system is given as an example of the identification of process malfunction. (author)

  18. A radiochemical technique for the establishment of a solvent-independent scale of ion activities in amphiprotic solvents

    International Nuclear Information System (INIS)

    Kim, J.I.; Duschner, H.; Born, H.J.

    1975-01-01

    The radiochemical determination of the solubilities of sparingly soluble silver compounds (Ph₄BAg, AgCl) by means of Ag-110m in amphiprotic solutions is used for setting up a solvent-independent scale of ion activities based on the concept of the medium effect. The medium effects of the salts are calculated from the solubility data of the Ag compounds in question. The splitting into the medium effects of single ions is carried out with the extrathermodynamic assumption of equal medium effects for large ions such as Ph₄B⁻ = Ph₄As⁻. A standardized ion activity scale, in connection with the activity coefficients for the solvent in question, can be established with water as the reference state of the chemical potential. As the sum of the medium effects of the single ions gives the medium effect of the salt concerned, which is easily obtained from experimentally accessible data (solubility, vapour pressure, ion exchange etc.), this method leads to single ion activities of a large number of ions in a multitude of solvents. (orig./LH) [de

  19. Pyrolysis as a technique for separating heavy metals from hyperaccumulators. Part III: pilot-scale pyrolysis of synthetic hyperaccumulator biomass

    International Nuclear Information System (INIS)

    Koppolu, Lakshmi; Prasad, Ramakrishna; Davis Clements, L.

    2004-01-01

    Synthetic hyperaccumulator biomass (SHB) feed impregnated with Ni, Zn or Cu was used to conduct six experiments in a pilot-scale, spouted bed gasifier. Two runs each using corn stover with no metal added (blank runs) were also conducted. The reactor was operated in an entrained mode in an oxygen-free (N₂) environment at 873 K and 1 atm. The apparent gas residence time in the heated zone of the pilot-scale reactor was 1.4 s at 873 K. The material balance closure for the eight experiments on an N₂-free basis varied between 79% and 92%. Nearly 99% of the metal recovered in the product stream was concentrated in the char formed by pyrolyzing the SHB in the reactor. The metal concentration in the char varied between 6.6% and 16.6%, depending on the type of metal and whether the char was collected in the cyclone or ashbox. The metal component was concentrated by 3.2-6 times in the char, compared to the feed

  20. Elimination of chromatographic and mass spectrometric problems in GC-MS analysis of Lavender essential oil by multivariate curve resolution techniques: Improving the peak purity assessment by variable size moving window-evolving factor analysis.

    Science.gov (United States)

    Jalali-Heravi, Mehdi; Moazeni-Pourasil, Roudabeh Sadat; Sereshti, Hassan

    2015-03-01

    In the analysis of complex natural matrices by gas chromatography-mass spectrometry (GC-MS), many disturbing factors such as baseline drift, spectral background, homoscedastic and heteroscedastic noise, peak shape deformation (non-Gaussian peaks), low S/N ratio and co-elution (overlapped and/or embedded peaks) force researchers to handle them in order to save time, money and experimental effort. This study aimed to improve the GC-MS analysis of complex natural matrices utilizing multivariate curve resolution (MCR) methods. In addition, to assess the peak purity of the two-dimensional data, a method called variable size moving window-evolving factor analysis (VSMW-EFA) is introduced and examined. The proposed methodology was applied to the GC-MS analysis of Iranian Lavender essential oil, which resulted in extending the number of identified constituents from 56 to 143 components. It was found that the most abundant constituents of the Iranian Lavender essential oil are α-pinene (16.51%), camphor (10.20%), 1,8-cineole (9.50%), bornyl acetate (8.11%) and camphene (6.50%). This indicates that the Iranian type Lavender contains a relatively high percentage of α-pinene. Comparison of different types of Lavender essential oils showed the composition similarity between Iranian and Italian (Sardinia Island) Lavenders. Published by Elsevier B.V.

  1. Studying the method of linearization of exponential calibration curves

    International Nuclear Information System (INIS)

    Bunzh, Z.A.

    1989-01-01

    The results of a study of the method for linearization of exponential calibration curves are given. The calibration technique and a comparison of the proposed method with piecewise-linear approximation and power series expansion are also given
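    The specific linearization studied in the paper is not given in this record. As a generic reminder of the underlying idea, an exponential calibration curve y = a·exp(bx) becomes a straight line in ln y, so a linear fit recovers both parameters; the sketch below uses synthetic data and assumed parameter values.

```python
import numpy as np

# Synthetic exponential calibration data: y = a * exp(b * x) with multiplicative noise
a_true, b_true = 2.5, 0.8
x = np.linspace(0.0, 5.0, 20)
y = a_true * np.exp(b_true * x) * np.exp(np.random.default_rng(0).normal(0, 0.02, x.size))

# Linearize: ln(y) = ln(a) + b*x, then fit a straight line
b_fit, ln_a_fit = np.polyfit(x, np.log(y), 1)
print("a ≈", np.exp(ln_a_fit), " b ≈", b_fit)
```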

  2. A Note on Comparing the Elasticities of Demand Curves.

    Science.gov (United States)

    Nieswiadomy, Michael

    1986-01-01

    Demonstrates a simple and useful way to compare the elasticity of demand at each price (or quantity) for different demand curves. The technique is particularly useful for the intermediate microeconomic course. (Author)
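    The note itself is not reproduced here. As a generic illustration of the quantity being compared, point elasticity is E = (dQ/dP)(P/Q); the sketch below evaluates it at a common price for two linear demand curves with assumed, illustrative coefficients.

```python
# Point elasticity E = (dQ/dP) * (P / Q) for two linear demand curves Q = a - b*P
def elasticity(a, b, price):
    quantity = a - b * price
    return -b * price / quantity        # dQ/dP = -b for a linear demand curve

# Two illustrative demand curves evaluated at the same price
for a, b in [(100.0, 2.0), (80.0, 1.0)]:
    print(f"Q = {a} - {b}P: elasticity at P=20 is {elasticity(a, b, 20.0):.2f}")
```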

  3. Conditional sampling technique to test the applicability of the Taylor hypothesis for the large-scale coherent structures

    Science.gov (United States)

    Hussain, A. K. M. F.

    1980-01-01

    Comparisons of the distributions of large scale structures in turbulent flow with distributions based on time dependent signals from stationary probes and the Taylor hypothesis are presented. The study investigated a region in the near field of a 7.62 cm circular air jet at a Re of 32,000, in which coherent structures were produced through small-amplitude controlled excitation and stable vortex pairing in the jet column mode. Hot-wire and X-wire anemometry were employed to establish phase averaged spatial distributions of longitudinal and lateral velocities, coherent Reynolds stress and vorticity, background turbulent intensities, streamlines and pseudo-stream functions. The Taylor hypothesis was used to calculate spatial distributions of the phase-averaged properties, with results indicating that the use of the local time-average velocity or streamwise velocity produces large distortions.

  4. Using value stream mapping technique through the lean production transformation process: An implementation in a large-scaled tractor company

    Directory of Open Access Journals (Sweden)

    Mehmet Rıza Adalı

    2017-04-01

    Full Text Available In today's world, manufacturing industries have to maintain their development and continuity in an increasingly competitive environment by decreasing their costs. The first step in the lean production transformation process is to analyze the value-adding and non-value-adding activities. This study aims at applying the concepts of Value Stream Mapping (VSM) in a large-scaled tractor company in Sakarya. Waste and process time are identified by mapping the current state of the platform production line. A future state is suggested with improvements for the elimination of waste and reduction of lead time, which went from 13.08 to 4.35 days. Analyses are made using the current and future states to support the suggested improvements, and the cycle time of the platform production line is improved by 8%. Results showed that VSM is a good alternative in decision-making for change in the production process.

  5. Measurement of residence time distribution of liquid phase in an industrial-scale continuous pulp digester using radiotracer technique.

    Science.gov (United States)

    Sheoran, Meenakshi; Goswami, Sunil; Pant, Harish J; Biswal, Jayashree; Sharma, Vijay K; Chandra, Avinash; Bhunia, Haripada; Bajpai, Pramod K; Rao, S Madhukar; Dash, A

    2016-05-01

    A series of radiotracer experiments was carried out to measure residence time distribution (RTD) of liquid phase (alkali) in an industrial-scale continuous pulp digester in a paper industry in India. Bromine-82 as ammonium bromide was used as a radiotracer. Experiments were carried out at different biomass and white liquor flow rates. The measured RTD data were treated and mean residence times in individual digester tubes as well in the whole digester were determined. The RTD was also analyzed to identify flow abnormalities and investigate flow dynamics of the liquid phase in the pulp digester. Flow channeling was observed in the first section (tube 1) of the digester. Both axial dispersion and tanks-in-series with backmixing models preceded with a plug flow component were used to simulate the measured RTD and quantify the degree of axial mixing. Based on the study, optimum conditions for operating the digester were proposed. Copyright © 2016 Elsevier Ltd. All rights reserved.
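    The abstract names the RTD models but not their formulas. The sketch below shows the standard bookkeeping only: the mean residence time as the first moment of a normalized tracer curve, and the classical tanks-in-series E-curve E(θ) = N(Nθ)^(N−1)e^(−Nθ)/(N−1)!. The synthetic tracer response is an assumption, not the digester measurements.

```python
import numpy as np
from math import factorial

# Synthetic tracer response C(t) (counts vs. time); stands in for the Br-82 signal
t = np.linspace(0.0, 300.0, 601)                       # min
C = (t / 50.0) * np.exp(-t / 50.0)                     # arbitrary illustrative shape

# Normalized RTD and its first moment (mean residence time)
E = C / np.trapz(C, t)
t_mean = np.trapz(t * E, t)
print("mean residence time ≈", round(t_mean, 1), "min")

# Classical tanks-in-series model, E(theta) = N (N*theta)^(N-1) exp(-N*theta) / (N-1)!
def tanks_in_series(theta, N):
    return N * (N * theta) ** (N - 1) * np.exp(-N * theta) / factorial(N - 1)

theta = t / t_mean
model = tanks_in_series(theta, N=2) / t_mean           # back to E(t) units
print("tanks-in-series (N=2) peak at t ≈", round(t[np.argmax(model)], 1), "min")
```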

  6. Comparison of three different scales techniques for the dynamic mechanical characterization of two polymers (PDMS and SU8)

    Science.gov (United States)

    Le Rouzic, J.; Delobelle, P.; Vairac, P.; Cretin, B.

    2009-10-01

    In this article the dynamic mechanical characterization of PDMS and SU8 resin using dynamic mechanical analysis, nanoindentation and the scanning microdeformation microscope is presented. The methods are explained, extended to viscoelastic behaviours, and their compatibility underlined. The storage and loss moduli of these polymers over a wide range of frequencies (from 0.01 Hz to some kHz) have been measured. These techniques are shown to agree fairly well, and the two different viscoelastic behaviours of the two polymers are exhibited. Indeed, PDMS shows moduli which are still increasing at 5 kHz, whereas those of SU8 decrease much sooner. From a material point of view, the Havriliak and Negami model has been identified to estimate the instantaneous and relaxed moduli and the time constant of these materials.
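    The Havriliak–Negami model referred to above is commonly written, for a complex modulus, as M*(ω) = M∞ + (M0 − M∞)/(1 + (iωτ)^α)^β, with M0 the relaxed and M∞ the instantaneous modulus. The sketch below evaluates this form with assumed parameter values, not the fitted PDMS or SU8 results.

```python
import numpy as np

def havriliak_negami(omega, m_inf, m_0, tau, alpha, beta):
    """Complex Havriliak-Negami modulus M*(w) = M_inf + (M_0 - M_inf)/(1 + (i w tau)^alpha)^beta."""
    return m_inf + (m_0 - m_inf) / (1.0 + (1j * omega * tau) ** alpha) ** beta

# Illustrative (assumed) parameters, not the fitted values from the paper
omega = 2 * np.pi * np.logspace(-2, 4, 200)       # rad/s, roughly 0.01 Hz to 10 kHz
m = havriliak_negami(omega, m_inf=5.0e6, m_0=1.0e6, tau=1.0e-3, alpha=0.7, beta=0.5)

storage, loss = m.real, m.imag                    # storage and loss moduli
print("max loss modulus ≈ %.2e Pa" % loss.max())
```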

  7. Quantum field theory in curved spacetime

    International Nuclear Information System (INIS)

    Gibbons, G.W.

    1978-04-01

    The purpose of this article is to outline what the extension of such a treatment to curved space entails and to discuss what essentially new features arise when one takes into account the quantum mechanical nature of gravitating systems. I shall throughout assume a classical, unquantized gravitational field and confine the discussion to matter fields although similar techniques and ideas may be applied to 'gravitons' - that is linearized perturbations of the metric propagating on some fixed, unperturbed, background. (orig./WL) [de

  8. Reference curves of raw scores on the Stanford-Binet Intelligence Scale for children and adolescents

    Directory of Open Access Journals (Sweden)

    Márcia Regina Fumagalli Marteleto

    2012-12-01

    Full Text Available The objective of this study was to construct reference curves of raw scores for the Areas and the Total of the Stanford-Binet in children and adolescents from the city of São Paulo. A total of 257 children and adolescents, with a mean age of 5 years and 10 months, were evaluated individually; 130 (50.58%) were female and 127 (49.42%) male, all attending public preschools and elementary schools in different regions of the city of São Paulo. The test was administered individually at the children's own school, always starting from the first item, regardless of the child's age. Participants were grouped by age, and descriptive measures were calculated for each age group of this population. Reference curves for the Areas and the Total of the Stanford-Binet were constructed from the raw scores obtained. The raw scores were distributed according to the normal curve.

  9. Detection of different-time-scale signals in the length of day variation based on EEMD analysis technique

    Directory of Open Access Journals (Sweden)

    Wenbin Shen

    2016-05-01

    Full Text Available Scientists pay great attention to signals on different time scales in the length-of-day (LOD) variations ΔLOD, which provide signatures of the Earth's interior structure, couplings among different layers, and potential excitations of the ocean and atmosphere. In this study, based on the ensemble empirical mode decomposition (EEMD), we analyzed the latest time series of ΔLOD data spanning from January 1962 to March 2015. We observed signals with periods and amplitudes of about 0.5 month and 0.19 ms, 1.0 month and 0.19 ms, 0.5 yr and 0.22 ms, 1.0 yr and 0.18 ms, 2.28 yr and 0.03 ms, and 5.48 yr and 0.05 ms, respectively, in agreement with previous results. In addition, some signals that had not previously been definitely observed were detected in this study, with periods and amplitudes of 9.13 d and 0.12 ms, and 13.69 yr and 0.10 ms, respectively. The mechanisms behind the LOD fluctuations at these two periods remain open questions.

  10. Advanced chip designs and novel cooling techniques for brightness scaling of industrial, high power diode laser bars

    Science.gov (United States)

    Heinemann, S.; McDougall, S. D.; Ryu, G.; Zhao, L.; Liu, X.; Holy, C.; Jiang, C.-L.; Modak, P.; Xiong, Y.; Vethake, T.; Strohmaier, S. G.; Schmidt, B.; Zimer, H.

    2018-02-01

    The advance of high power semiconductor diode laser technology is driven by the rapidly growing industrial laser market, with such high power solid state laser systems requiring ever more reliable diode sources with higher brightness and efficiency at lower cost. In this paper we report simulation and experimental data demonstrating the most recent progress in high brightness semiconductor laser bars for industrial applications. The advancements are in three principal areas: vertical laser chip epitaxy design, lateral laser chip current injection control, and chip cooling technology. With such improvements, we demonstrate disk laser pump bars with output power over 250 W at 60% efficiency at the operating current. Ion implantation was investigated for improved current confinement. Initial lifetime tests show excellent reliability. For direct diode applications, 96% polarization is an additional requirement. Double-sided cooling employing hard solder and an optimized laser design enable single emitter performance also for high fill factor bars and allow further power scaling to more than 350 W with 65% peak efficiency, less than 8 degrees slow axis divergence and high polarization.

  11. Measurement of residence time distribution of liquid phase in an industrial-scale continuous pulp digester using radiotracer technique

    International Nuclear Information System (INIS)

    Sheoran, Meenakshi; Goswami, Sunil; Pant, Harish J.; Biswal, Jayashree; Sharma, Vijay K.; Chandra, Avinash; Bhunia, Haripada; Bajpai, Pramod K.; Rao, S. Madhukar; Dash, A.

    2016-01-01

    A series of radiotracer experiments was carried out to measure residence time distribution (RTD) of liquid phase (alkali) in an industrial-scale continuous pulp digester in a paper industry in India. Bromine-82 as ammonium bromide was used as a radiotracer. Experiments were carried out at different biomass and white liquor flow rates. The measured RTD data were treated and mean residence times in individual digester tubes as well in the whole digester were determined. The RTD was also analyzed to identify flow abnormalities and investigate flow dynamics of the liquid phase in the pulp digester. Flow channeling was observed in the first section (tube 1) of the digester. Both axial dispersion and tanks-in-series with backmixing models preceded with a plug flow component were used to simulate the measured RTD and quantify the degree of axial mixing. Based on the study, optimum conditions for operating the digester were proposed. - Highlights: • Radiotracer experiments were conducted to measure RTD of liquid phase in a pulp digester • Mean residence times of white liquor were measured • Axial dispersion and tanks-in-series models were used to investigate flow patterns • Parallel flow paths were observed in first section of the digester • Optimized flow rates of biomass and liquor were obtained

  12. Study on development and actual application of scientific crime detection technique using small scale neutron radiation source

    International Nuclear Information System (INIS)

    Suzuki, Yasuhiro; Kishi, Toru; Tachikawa, Noboru; Ishikawa, Isamu.

    1997-01-01

    PGA (Prompt γ-ray Analysis) is a method that analyses the γ-rays emitted from the atomic nuclei of the elements in a specimen immediately (within 10⁻¹⁴ s) after neutron irradiation. Because neutrons, which have excellent transmission, are used as the excitation source, this method can be used to inspect the contents of closed containers non-destructively, and it can also detect non-destructively light elements such as boron and nitrogen that are difficult to measure by other non-destructive analyses. In particular, it is found that this method can detect the high concentrations of nitrogen, chlorine and other elements that are characteristic of explosives. However, as there are a number of limitations at a nuclear reactor site, development of an analytical apparatus based on a small scale neutron radiation source was begun first. In this fiscal year, analysis of light elements such as nitrogen and chlorine using PGA was attempted with ²⁵²Cf, the simplest neutron source to operate. As the ²⁵²Cf neutron flux is considerably lower than that of a nuclear reactor, its analytical sensitivity was also investigated. (G.K.)

  13. Validation of a scale for network therapy: a technique for systematic use of peer and family support in addiction treatment.

    Science.gov (United States)

    Keller, D S; Galanter, M; Weinberg, S

    1997-02-01

    Substance abuse treatments are increasingly employing standardized formats. This is especially the case for approaches that utilize an individual psychotherapy format but less so for family-based approaches. Network therapy, an approach that involves family members and peers in the patient's relapse prevention efforts, is theoretically and clinically differentiated in this paper from family systems therapy for addiction. Based on these conceptual differences, a Network Therapy Rating Scale (NTRS) was developed to measure the integrity and differentiability of network therapy from other family-based approaches to addiction treatment. Seven addictions faculty and 10 third- and fourth-year psychiatry residents recently trained in the network approach used the NTRS to rate excerpts of network and family systems therapy sessions. Data revealed the NTRS had high internal consistency reliability when utilized by both groups of raters. In addition, network and nonnetwork subscales within the NTRS rated congruent therapy excerpts significantly higher than noncongruent therapy excerpts, indicating that the NTRS subscales measure what they are designed to measure. Implications for research and training are discussed.

  14. Models of genus one curves

    OpenAIRE

    Sadek, Mohammad

    2010-01-01

    In this thesis we give insight into the minimisation problem of genus one curves defined by equations other than Weierstrass equations. We are interested in genus one curves given as double covers of P1, plane cubics, or complete intersections of two quadrics in P3. By minimising such a curve we mean making the invariants associated to its defining equations as small as possible using a suitable change of coordinates. We study the non-uniqueness of minimisations of the genus one curves des...

  15. The Small-scale Structure of Photospheric Convection Retrieved by a Deconvolution Technique Applied to Hinode /SP Data

    Energy Technology Data Exchange (ETDEWEB)

    Oba, T. [SOKENDAI (The Graduate University for Advanced Studies), 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan); Riethmüller, T. L.; Solanki, S. K. [Max-Planck-Institut für Sonnensystemforschung (MPS), Justus-von-Liebig-Weg 3, D-37077 Göttingen (Germany); Iida, Y. [Department of Science and Technology/Kwansei Gakuin University, Gakuen 2-1, Sanda, Hyogo, 669–1337 Japan (Japan); Quintero Noda, C.; Shimizu, T. [Institute of Space and Astronautical Science, Japan Aerospace Exploration Agency, 3-1-1 Yoshinodai, Chuo-ku, Sagamihara, Kanagawa 252–5210 (Japan)

    2017-11-01

    Solar granules are bright patterns surrounded by dark channels, called intergranular lanes, in the solar photosphere and are a manifestation of overshooting convection. Observational studies generally find stronger upflows in granules and weaker downflows in intergranular lanes. This trend is, however, inconsistent with the results of numerical simulations in which downflows are stronger than upflows through the joint action of gravitational acceleration/deceleration and pressure gradients. One cause of this discrepancy is the image degradation caused by optical distortion and light diffraction and scattering that takes place in an imaging instrument. We apply a deconvolution technique to Hinode/SP data in an attempt to recover the original solar scene. Our results show a significant enhancement in both the convective upflows and downflows but particularly for the latter. After deconvolution, the up- and downflows reach maximum amplitudes of −3.0 km s⁻¹ and +3.0 km s⁻¹ at an average geometrical height of roughly 50 km, respectively. We found that the velocity distributions after deconvolution match those derived from numerical simulations. After deconvolution, the net LOS velocity averaged over the whole field of view lies close to zero as expected in a rough sense from mass balance.

  16. Mechanical microencapsulation: The best technique in taste masking for the manufacturing scale - Effect of polymer encapsulation on drug targeting.

    Science.gov (United States)

    Al-Kasmi, Basheer; Alsirawan, Mhd Bashir; Bashimam, Mais; El-Zein, Hind

    2017-08-28

    Drug taste masking is a crucial process for the preparation of pediatric and geriatric formulations as well as fast dissolving tablets. Taste masking techniques aim to prevent drug release in saliva and at the same time to obtain the desired release profile in the gastrointestinal tract. Several taste masking methods are reported; however, this review has focused on a group of promising methods: complexation, encapsulation, and hot melting. The effects of each method on the physicochemical properties of the drug are described in detail. Furthermore, a scoring system was established to evaluate each process using recently published data on selected factors. These include input, process, and output factors that are related to each taste masking method. Input factors include the attributes of the materials used for taste masking. Process factors include equipment type and process parameters. Finally, output factors include taste masking quality and yield. As a result, mechanical microencapsulation obtained the highest score (5/8) along with complexation with cyclodextrin, suggesting that these methods are the most preferable for drug taste masking. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Flood evolution assessment and monitoring using hydrological modelling techniques: analysis of the inundation areas at a regional scale

    Science.gov (United States)

    Podhoranyi, M.; Kuchar, S.; Portero, A.

    2016-08-01

    The primary objective of this study is to present techniques that cover usage of a hydrodynamic model as the main tool for monitoring and assessment of flood events while focusing on modelling of inundation areas. We analyzed the 2010 flood event (14th May - 20th May) that occurred in the Moravian-Silesian region (Czech Republic). Under investigation were four main catchments: Opava, Odra, Olše and Ostravice. Four hydrodynamic models were created and implemented into the Floreon+ platform in order to map inundation areas that arose during the flood event. In order to study the dynamics of the water, we applied an unsteady flow simulation for the entire area (HEC-RAS 4.1). The inundation areas were monitored, evaluated and recorded semi-automatically by means of the Floreon+ platform. We focused on information about the extent and presence of the flood areas. The modeled flooded areas were verified by comparing them with real data from different sources (official reports, aerial photos and hydrological networks). The study confirmed that hydrodynamic modeling is a very useful tool for mapping and monitoring of inundation areas. Overall, our models detected 48 inundation areas during the 2010 flood event.

  18. Development of Somatic Embryo Maturation and Growing Techniques of Norway Spruce Emblings towards Large-Scale Field Testing

    Directory of Open Access Journals (Sweden)

    Mikko Tikkinen

    2018-06-01

    Full Text Available The possibility of utilizing non-additive genetic gain in planting stock has increased interest in vegetative propagation. In Finland, the increased planting of Norway spruce combined with fluctuating seed yields has resulted in shortages of improved regeneration material. Somatic embryogenesis is an attractive method to rapidly capture breeding results, not least because juvenile propagation material can be cryostored for decades. Further development of technology for the somatic embryogenesis of Norway spruce is essential, as the high cost of somatic embryo plants (emblings) limits deployment. We examined the effects of maturation media varying in abscisic acid (20, 30 or 60 µM) and polyethylene glycol 4000 (PEG) concentrations, as well as the effect of cryopreservation cycles on embryo production, and the effects of two growing techniques on embling survival and growth. Embryo production and nursery performance of 712 genotypes from 12 full-sib families were evaluated. The largest number of embryos per gram of fresh embryogenic mass (296 ± 31) was obtained by using 30 µM abscisic acid without PEG in the maturation media. Transplanting the emblings into the nursery after one week of in vitro germination resulted in 77% survival and the tallest emblings after the first growing season. Genotypes with good production properties were found in all families.

  19. IDF-curves for precipitation In Belgium

    International Nuclear Information System (INIS)

    Mohymont, Bernard; Demarde, Gaston R.

    2004-01-01

    The Intensity-Duration-Frequency (IDF) curves for precipitation constitute a relationship between the intensity, the duration and the frequency of rainfall amounts. The intensity of precipitation is expressed in mm/h, the duration or aggregation time is the length of the interval considered, while the frequency stands for the probability of occurrence of the event. IDF curves constitute a classical and useful tool that is primarily used to dimension hydraulic structures in general, e.g., sewer systems, and that is consequently used to assess the risk of inundation. In this presentation, the IDF relation for precipitation is studied for different locations in Belgium. These locations correspond to two long-term, high-quality precipitation networks of the RMIB: (a) the daily precipitation depths of the climatological network (more than 200 stations, 1951-2001 baseline period); (b) the high-frequency 10-minute precipitation depths of the hydrometeorological network (more than 30 stations, 15 to 33 years baseline period). For the station of Uccle, an uninterrupted time series of more than one hundred years of 10-minute rainfall data is available. The proposed technique for assessing the curves is based on maximum annual values of precipitation. A new analytical formula for the IDF curves was developed such that these curves stay valid for aggregation times ranging from 10 minutes to 30 days (when fitted with appropriate data). Moreover, all parameters of this formula have physical dimensions. Finally, adequate spatial interpolation techniques are used to provide nationwide extreme precipitation depths for short- to long-term durations with a given return period. These values are estimated on the grid points of the Belgian ALADIN domain used in the operational weather forecasts at the RMIB. (Author)
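    The paper's new analytical formula is not reproduced in this record. Purely as an illustration of how an IDF relationship for a single return period is typically fitted, the sketch below uses the common textbook form i(d) = a/(d + b)^c with synthetic annual-maximum intensities; the form, parameter values and data are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Common textbook IDF form for one return period: i(d) = a / (d + b)**c
def idf(duration_min, a, b, c):
    return a / (duration_min + b) ** c

# Synthetic "observed" intensities (mm/h) for several durations (illustrative only)
durations = np.array([10, 20, 30, 60, 120, 360, 1440], dtype=float)   # minutes
intensity = np.array([79.0, 60.0, 49.0, 32.0, 20.0, 9.4, 3.4])        # mm/h

params, _ = curve_fit(idf, durations, intensity, p0=[500.0, 10.0, 0.7],
                      bounds=(0, np.inf))
a, b, c = params
print(f"fitted IDF: i(d) = {a:.1f} / (d + {b:.1f})^{c:.2f}")
```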

  20. Trend analyses with river sediment rating curves

    Science.gov (United States)

    Warrick, Jonathan A.

    2015-01-01

    Sediment rating curves, which are fitted relationships between river discharge (Q) and suspended-sediment concentration (C), are commonly used to assess patterns and trends in river water quality. In many of these studies it is assumed that rating curves have a power-law form (i.e., C = aQ^b, where a and b are fitted parameters). Two fundamental questions about the utility of these techniques are assessed in this paper: (i) How well do the parameters, a and b, characterize trends in the data? (ii) Are trends in rating curves diagnostic of changes to river water or sediment discharge? As noted in previous research, the offset parameter, a, is not an independent variable for most rivers, but rather strongly dependent on b and Q. Here it is shown that a is a poor metric for trends in the vertical offset of a rating curve, and a new parameter, â, as determined by the discharge-normalized power function [C = â (Q/Q_GM)^b], where Q_GM is the geometric mean of the Q values sampled, provides a better characterization of trends. However, these techniques must be applied carefully, because curvature in the relationship between log(Q) and log(C), which exists for many rivers, can produce false trends in â and b. Also, it is shown that trends in â and b are not uniquely diagnostic of river water or sediment supply conditions. For example, an increase in â can be caused by an increase in sediment supply, a decrease in water supply, or a combination of these conditions. Large changes in water and sediment supplies can occur without any change in the parameters, â and b. Thus, trend analyses using sediment rating curves must include additional assessments of the time-dependent rates and trends of river water, sediment concentrations, and sediment discharge.
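    A short numerical sketch of the discharge-normalized fit described above, C = â(Q/Q_GM)^b with Q_GM the geometric mean of the sampled discharges, is given below; the paired Q and C values are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic paired river data: discharge Q (m^3/s) and suspended-sediment concentration C (mg/L)
Q = rng.lognormal(mean=3.0, sigma=0.8, size=150)
C = 0.05 * Q ** 1.4 * rng.lognormal(0.0, 0.3, size=150)

# Discharge-normalized rating curve: C = a_hat * (Q / Q_gm)^b
Q_gm = np.exp(np.mean(np.log(Q)))                 # geometric mean of sampled discharges
b, ln_a_hat = np.polyfit(np.log(Q / Q_gm), np.log(C), 1)
a_hat = np.exp(ln_a_hat)

print(f"Q_GM = {Q_gm:.1f} m^3/s, a_hat = {a_hat:.2f} mg/L, b = {b:.2f}")
```

    Because the regressor is centred on its geometric mean, a_hat reflects the concentration at a typical discharge and is far less entangled with b than the raw offset a.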

  1. New approach to the dosimetry of ionizing radiations by fluorescence measurement, according to the single photon counting technique, correlated in time at the nanosecond scale

    International Nuclear Information System (INIS)

    Sohier, Till

    2011-01-01

    This research thesis reports the first fundamental study of the dosimetry of charged-particle and gamma radiation in organic matter by time-resolved fluorescence measurement at the nanosecond scale. This method allows an in-depth, real-time analysis of the deposited dose, taking ionisation as well as excitation processes into account. The author describes the mechanisms of interaction and energy deposition in dense matter, and reports a detailed study of the ion-matter interaction and of the interaction of the secondary electrons produced within tracks. He addresses mechanisms of energy relaxation, and more particularly the study of organic scintillators. He then presents the adopted experimental approach: experimental observation with a statistical reconstruction of the curve representing the intensity of the emitted fluorescence over time, with nanosecond resolution, by using a scintillating sensor for time-correlated single photon counting (TCSPC). The next part reports the development of an experimental multi-modal platform for dosimetry by TCSPC aimed at the measurement of fluorescence decays under pulsed excitation (nanosecond pulsed ion beams) and continuous excitation (non-pulsed beams and radioactive sources). Experimental results are then presented for fluorescence measurements and compared with measurements obtained with an ionization chamber under the same irradiation conditions: dose deposited by helium and carbon ions within polyvinyl toluene and polyethylene terephthalate, and use of scintillating optical fibres under gamma irradiation from caesium-137 and cobalt-60. A new experimental approach is finally presented to perform dosimetry measurements while experimentally rejecting the luminescence produced by the Cerenkov effect [fr

  2. Multi-scale full-field measurements and near-wall modeling of turbulent subcooled boiling flow using innovative experimental techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hassan, Yassin A., E-mail: y-hassan@tamu.edu

    2016-04-01

    Highlights: • Near-wall full-field velocity components under subcooled boiling were measured. • Simultaneous shadowgraphy, infrared thermometry of the wall temperature and particle-tracking velocimetry techniques were combined. • Near-wall velocity modifications under subcooled boiling were observed. - Abstract: Multi-phase flows are one of the challenges on which the CFD simulation community has been working extensively with relatively low success. The phenomena behind the momentum and heat transfer mechanisms associated with multi-phase flows are highly complex, requiring multiple scales in time and space to be resolved simultaneously. Part of the reason behind the low predictive capability of CFD when studying multi-phase flows is the scarcity of CFD-grade experimental data for validation. The complexity of the phenomena and its sensitivity to small sources of perturbation make its measurement a difficult task. Non-intrusive and innovative measuring techniques are required to accurately measure multi-phase flow parameters while at the same time satisfying the high resolution required to validate CFD simulations. In this context, this work explores the feasible implementation of innovative measuring techniques that can provide whole-field and multi-scale measurements of two-phase flow turbulence, heat transfer, and boiling parameters. To this end, three visualization techniques are simultaneously implemented to study subcooled boiling flow through a vertical rectangular channel with a single heated wall. These techniques are listed next and are used as follows: (1) High-speed infrared thermometry (IR-T) is used to study the impact of the boiling level on the heat transfer coefficients at the heated wall, (2) Particle Tracking Velocimetry (PTV) is used to analyze the influence that boiling parameters have on the liquid phase turbulence statistics, (3) High-speed shadowgraphy with LED illumination is used to obtain the gas phase dynamics. To account

  3. Meter-scale Urban Land Cover Mapping for EPA EnviroAtlas Using Machine Learning and OBIA Remote Sensing Techniques

    Science.gov (United States)

    Pilant, A. N.; Baynes, J.; Dannenberg, M.; Riegel, J.; Rudder, C.; Endres, K.

    2013-12-01

    US EPA EnviroAtlas is an online collection of tools and resources that provides geospatial data, maps, research, and analysis on the relationships between nature, people, health, and the economy (http://www.epa.gov/research/enviroatlas/index.htm). Using EnviroAtlas, you can see and explore information related to the benefits (e.g., ecosystem services) that humans receive from nature, including clean air, clean and plentiful water, natural hazard mitigation, biodiversity conservation, food, fuel, and materials, recreational opportunities, and cultural and aesthetic value. EPA developed several urban land cover maps at very high spatial resolution (one-meter pixel size) for a portion of EnviroAtlas devoted to urban studies. This urban mapping effort supported analysis of relations among land cover, human health and demographics at the US Census Block Group level. Supervised classification of 2010 USDA NAIP (National Agricultural Imagery Program) digital aerial photos produced eight-class land cover maps for several cities, including Durham, NC, Portland, ME, Tampa, FL, New Bedford, MA, Pittsburgh, PA, Portland, OR, and Milwaukee, WI. Semi-automated feature extraction methods were used to classify the NAIP imagery: genetic algorithms/machine learning, random forest, and object-based image analysis (OBIA). In this presentation we describe the image processing and fuzzy accuracy assessment methods used, and report on some sustainability and ecosystem service metrics computed using this land cover as input (e.g., carbon sequestration from the USFS iTREE model; health and demographics in relation to road buffer forest width). We also discuss the land cover classification schema (a modified Anderson Level 1 after the National Land Cover Data (NLCD)), and offer some observations on lessons learned. [Figure: Meter-scale urban land cover in Portland, OR overlaid on a NAIP aerial photo; streets, buildings and individual trees are identifiable.]
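    As a hedged, toy illustration of the supervised per-pixel classification step mentioned above (here a random forest on band values plus an NDVI-like index), consider the sketch below; the features, class labels and data are assumptions, not the EnviroAtlas workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Toy per-pixel features: 4 NAIP-like bands (R, G, B, NIR) plus an NDVI-like index
n = 3000
bands = rng.uniform(0, 255, size=(n, 4))
ndvi = (bands[:, 3] - bands[:, 0]) / (bands[:, 3] + bands[:, 0] + 1e-6)
X = np.column_stack([bands, ndvi])

# Toy labels: "tree" where NDVI is high, "impervious" where it is low, "other" in between
y = np.where(ndvi > 0.2, "tree", np.where(ndvi < -0.2, "impervious", "other"))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```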

  4. Investigating sensitivity, specificity, and area under the curve of the Clinical COPD Questionnaire, COPD Assessment Test, and Modified Medical Research Council scale according to GOLD using St George's Respiratory Questionnaire cutoff 25 (and 20) as reference

    NARCIS (Netherlands)

    Tsiligianni, Ioanna G.; Alma, Harma J.; de Jong, Corina; Jelusic, Danijel; Wittmann, Michael; Schuler, Michael; Schultz, Konrad; Kollen, Boudewijn J.; van der Molen, Thys; Kocks, Janwillem W. H.

    2016-01-01

    Background: In the GOLD (Global initiative for chronic Obstructive Lung Disease) strategy document, the Clinical COPD Questionnaire (CCQ), COPD Assessment Test (CAT), or modified Medical Research Council (mMRC) scale are recommended for the assessment of symptoms using the cutoff points of CCQ >= 1,

  5. Advanced Fabrication Techniques for Precisely Controlled Micro and Nano Scale Environments for Complex Tissue Regeneration and Biomedical Applications

    Science.gov (United States)

    Holmes, Benjamin

    As modern medicine advances, it is still very challenging to cure joint defects due to their poor inherent regenerative capacity, complex stratified architecture, and disparate biomechanical properties. The current clinical standard for catastrophic or late-stage joint degradation is a total joint implant, in which the damaged joint is completely excised and replaced with a metallic or artificial joint. However, these procedures still last only 10-15 years, and there is a host of recovery complications which can occur. Thus, these studies have sought to employ advanced biomaterials and scaffold fabrication techniques to effectively regrow joint tissue, instead of merely replacing it with artificial materials. We hypothesize here that the inclusion of biomimetic and bioactive nanomaterials with highly functional electrospun and 3D printed scaffolds can improve physical characteristics (mechanical strength, surface interactions and nanotexture), enhance cellular growth and direct stem cell differentiation for bone, cartilage and vascular growth as well as cancer metastasis modeling. Nanomaterial inclusion and controlled 3D printed features effectively increased nano surface roughness and Young's modulus and provided effective flow paths for simulated arterial blood. All of the approaches explored proved highly effective for increasing cell growth, as a result of increasing micro-complexity and nanomaterial incorporation. Additionally, chondrogenic and osteogenic differentiation, cell migration, cell-to-cell interaction and vascular formation were enhanced. Finally, growth-factor (gf)-loaded polymer nanospheres greatly improved vascular cell behavior and provided a highly bioactive scaffold for mesenchymal stem cell (MSC) and human umbilical vein endothelial cell (HUVEC) co-culture and bone formation. In conclusion, electrospinning and 3D printing when combined effectively with biomimetic and bioactive nanomaterials (i.e. carbon nanomaterials, collagen, nHA, polymer

  6. Dual kinetic curves in reversible electrochemical systems.

    Directory of Open Access Journals (Sweden)

    Michael J Hankins

    Full Text Available We introduce dual kinetic chronoamperometry, in which reciprocal relations are established between the kinetic curves of electrochemical reactions that start from symmetrical initial conditions. We have performed numerical and experimental studies in which the kinetic curves of the electron-transfer processes are analyzed for a reversible first order reaction. Experimental tests were done with the ferrocyanide/ferricyanide system in which the concentrations of each component could be measured separately using the platinum disk/gold ring electrode. It is shown that the proper ratio of the transient kinetic curves obtained from cathodic and anodic mass transfer limited regions give thermodynamic time invariances related to the reaction quotient of the bulk concentrations. Therefore, thermodynamic time invariances can be observed at any time using the dual kinetic curves for reversible reactions. The technique provides a unique possibility to extract the non-steady state trajectory starting from one initial condition based only on the equilibrium constant and the trajectory which starts from the symmetrical initial condition. The results could impact battery technology by predicting the concentrations and currents of the underlying non-steady state processes in a wide domain from thermodynamic principles and limited kinetic information.

  7. Are Moral Disengagement, Neutralization Techniques, and Self-Serving Cognitive Distortions the Same? Developing a Unified Scale of Moral Neutralization of Aggression

    Directory of Open Access Journals (Sweden)

    Denis Ribeaud

    2010-12-01

    Full Text Available Can the three concepts of Neutralization Techniques, Moral Disengagement, and Secondary Self-Serving Cognitive Distortions be conceived theoretically and empirically as capturing the same cognitive processes and thus be measured with one single scale of Moral Neutralization? First, we show how the different approaches overlap conceptually. Second, in Study 1, we verify that four scales derived from the three conceptions of Moral Neutralization are correlated in such a way that they can be conceived as measuring the same phenomenon. Third, building on the results of Study 1, we derive a unified scale of Moral Neutralization which specifically focuses on the neutralization of aggression and test it in a large general population sample of preadolescents (Study 2). Confirmatory factor analyses suggest a good internal consistency and acceptable cross-gender factorial invariance. Correlation analyses with related behavioral and cognitive constructs corroborate the scale’s criterion and convergent validity. In the final section we present a possible integration of Moral Neutralization in a broader framework of crime causation.

  8. Extended analysis of cooling curves

    International Nuclear Information System (INIS)

    Djurdjevic, M.B.; Kierkus, W.T.; Liliac, R.E.; Sokolowski, J.H.

    2002-01-01

    Thermal Analysis (TA) is the measurement of changes in a physical property of a material that is heated through a phase transformation temperature range. The temperature changes in the material are recorded as a function of the heating or cooling time in such a manner that allows for the detection of phase transformations. In order to increase accuracy, characteristic points on the cooling curve have been identified using the first derivative curve plotted versus time. In this paper, an alternative approach to the analysis of the cooling curve has been proposed. The first derivative curve has been plotted versus temperature and all characteristic points have been identified with the same accuracy achieved using the traditional method. The new cooling curve analysis also enables the Dendrite Coherency Point (DCP) to be detected using only one thermocouple. (author)
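    A minimal sketch of the idea (synthetic data; the paper's characteristic-point criteria are not reproduced): compute the first derivative of a recorded cooling curve numerically and inspect it as a function of temperature rather than time.

```python
# Sketch: view dT/dt versus temperature instead of versus time for a recorded
# cooling curve. The data below are synthetic and purely illustrative.
import numpy as np

t = np.linspace(0.0, 600.0, 2001)                                  # time (s)
T = 700.0 - 0.5 * t + 15.0 * np.exp(-((t - 250.0) / 40.0) ** 2)    # fake thermal arrest

dTdt = np.gradient(T, t)                                           # first derivative

# Characteristic points show up as local extrema of dT/dt; locating them on the
# temperature axis is the essence of the "derivative versus temperature" view.
i = np.argmax(dTdt)
print("candidate characteristic point near T =", round(T[i], 1), "deg C")

# For plotting, plt.plot(T, dTdt) would replace the traditional plt.plot(t, dTdt).
```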

  9. Parameter Deduction and Accuracy Analysis of Track Beam Curves in Straddle-type Monorail Systems

    Directory of Open Access Journals (Sweden)

    Xiaobo Zhao

    2015-12-01

    Full Text Available The accuracy of the bottom curve of a PC track beam is strongly related to the production quality of the entire beam. Many factors may affect the parameters of the bottom curve, such as the superelevation of the curve and the deformation of a PC track beam. At present, no effective method has been developed to determine the bottom curve of a PC track beam; therefore, a new technique is presented in this paper to deduce the parameters of such a curve and to control the accuracy of the computation results. First, the domain of the bottom curve of a PC track beam is assumed to be a spindle plane. Then, the corresponding supposed top curve domain is determined based on a geometrical relationship that is the opposite of that identified by the conventional method. Second, several optimal points are selected from the supposed top curve domain according to the dichotomy algorithm; the supposed top curve is thus generated by connecting these points. Finally, one rigorous criterion is established in the fractal dimension to assess the accuracy of the assumed top curve deduced in the previous step. If this supposed curve coincides completely with the known top curve, then the assumed bottom curve corresponding to the assumed top curve is considered to be the real bottom curve. This technique of determining the bottom curve of a PC track beam is thus proven to be efficient and accurate.

  10. Developing Techniques for Small Scale Indigenous Molybdenum-99 Production Using LEU Fission at Tajoura Research Center-Libya [Country report: Libya

    International Nuclear Information System (INIS)

    Alwaer, Sami M.

    2015-01-01

    The object of this work was to assist the IAEA by providing the Libyan country report about the Coordination Research Project (CRP), on the subject of “Developing techniques for small scale indigenous Mo-99 production using LEU-foil” which took place over five years and four RCMs. A CRP on this subject was approved in early 2005. The objectives of this CRP are to: transfer know-how in the area of 99 Mo production using LEU targets based on reference technologies from leading laboratories in the field to the participating laboratories in the CRP; develop national work plans based on various stages of technical development and objectives in this field; establish the procedures and protocols to be employed, including quality control and assurance procedures; establish the coordinated activities and programme for preparation, irradiation, and processing of LEU targets [a]; and to compare results obtained in the implementation of the technique in order to provide follow up advice and assistance. Technetium-99m ( 99m Tc), the daughter product of molybdenum-99 ( 99 Mo), is the most commonly utilized medical radioisotope in the world, used for approximately 20-25 million medical diagnostic procedures annually, comprising some 80% of all diagnostic nuclear medicine procedures. National and international efforts are underway to shift the production of medical isotopes from highly enriched uranium (HEU) to low enriched uranium (LEU) targets. A small but growing amount of the current global 99 Mo production is derived from the irradiation of LEU targets. The IAEA became aware of the interest of a number of developing Member States that are seeking to become small scale, indigenous producers of 99 Mo to meet local nuclear medicine requirements. The IAEA initiated Coordinated Research Project (CRP) T.1.20.18 “Developing techniques for small-scale indigenous production of Mo-99 using LEU or neutron activation” in order to assist countries in this field. The more

  11. Hysteroscopic sterilization using a virtual reality simulator: assessment of learning curve.

    Science.gov (United States)

    Janse, Juliënne A; Goedegebuure, Ruben S A; Veersema, Sebastiaan; Broekmans, Frank J M; Schreuder, Henk W R

    2013-01-01

    To assess the learning curve using a virtual reality simulator for hysteroscopic sterilization with the Essure method. Prospective multicenter study (Canadian Task Force classification II-2). University and teaching hospital in the Netherlands. Thirty novices (medical students) and five experts (gynecologists who had performed >150 Essure sterilization procedures). All participants performed nine repetitions of bilateral Essure placement on the simulator. Novices returned after 2 weeks and performed a second series of five repetitions to assess retention of skills. Structured observations on performance using the Global Rating Scale and parameters derived from the simulator provided measurements for analysis. The learning curve is represented by improvement per procedure. Two-way repeated-measures analysis of variance was used to analyze learning curves. Effect size (ES) was calculated to express the practical significance of the results (ES ≥ 0.50 indicates a large learning effect). For all parameters, significant improvements were found in novice performance within nine repetitions. Large learning effects were established for six of eight parameters. The learning curve established in this study endorses future implementation of the simulator in curricula on hysteroscopic skill acquisition for clinicians who are interested in learning this sterilization technique. Copyright © 2013 AAGL. Published by Elsevier Inc. All rights reserved.

  12. Melting curves of gamma-irradiated DNA

    International Nuclear Information System (INIS)

    Hofer, H.; Altmann, H.; Kehrer, M.

    1978-08-01

    Melting curves of gamma-irradiated DNA, and data derived from them, are reported. The diminished stability is explained by base destruction. DNA denatures completely at room temperature if at least every fifth base pair is broken or weakened by irradiation. (author)

  13. Management of the learning curve

    DEFF Research Database (Denmark)

    Pedersen, Peter-Christian; Slepniov, Dmitrij

    2016-01-01

    Purpose – This paper focuses on the management of the learning curve in overseas capacity expansions. The purpose of this paper is to unravel the direct as well as indirect influences on the learning curve and to advance the understanding of how these affect its management. Design...... the dimensions of the learning process involved in a capacity expansion project and identified the direct and indirect labour influences on the production learning curve. On this basis, the study proposes solutions to managing learning curves in overseas capacity expansions. Furthermore, the paper concludes...... with measures that have the potential to significantly reduce the non-value-added time when establishing new capacities overseas. Originality/value – The paper uses a longitudinal in-depth case study of a Danish wind turbine manufacturer and goes beyond a simplistic treatment of the lead time and learning...

  14. Growth curves for Laron syndrome.

    OpenAIRE

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls co...

  15. Flow over riblet curved surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Loureiro, J B R; Freire, A P Silva, E-mail: atila@mecanica.ufrj.br [Mechanical Engineering Program, Federal University of Rio de Janeiro (COPPE/UFRJ), C.P. 68503, 21.941-972, Rio de Janeiro, RJ (Brazil)

    2011-12-22

    The present work studies the mechanics of turbulent drag reduction over curved surfaces by riblets. The effects of surface modification on flow separation over steep and smooth curved surfaces are investigated. Four types of two-dimensional surfaces are studied based on the morphometric parameters that describe the body of a blue whale. Local measurements of mean velocity and turbulence profiles are obtained through laser Doppler anemometry (LDA) and particle image velocimetry (PIV).

  16. Impact of entrainment and impingement on fish populations in the Hudson River estuary. Volume III. An analysis of the validity of the utilities' stock-recruitment curve-fitting exercise and prior estimation of beta technique. Environmental Sciences Division publication No. 1792

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.; Kirk, B.L.

    1982-03-01

    This report addresses the validity of the utilities' use of the Ricker stock-recruitment model to extrapolate the combined entrainment-impingement losses of young fish to reductions in the equilibrium population size of adult fish. In our testimony, a methodology was developed and applied to address a single fundamental question: if the Ricker model really did apply to the Hudson River striped bass population, could the utilities' estimates, based on curve-fitting, of the parameter alpha (which controls the impact) be considered reliable. In addition, an analysis is included of the efficacy of an alternative means of estimating alpha, termed the technique of prior estimation of beta (used by the utilities in a report prepared for regulatory hearings on the Cornwall Pumped Storage Project). This validation methodology should also be useful in evaluating inferences drawn in the literature from fits of stock-recruitment models to data obtained from other fish stocks
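    For readers unfamiliar with the model discussed above, a hedged sketch of fitting the Ricker stock-recruitment curve R = α·S·exp(−β·S) by log-linearisation is shown below; the data are synthetic and this is not the utilities' procedure or the Hudson River data set.

```python
# Sketch: fit the Ricker stock-recruitment model R = alpha * S * exp(-beta * S)
# by ordinary least squares on log(R/S) = log(alpha) - beta * S.
# Synthetic data only; not the Hudson River striped bass data.
import numpy as np

rng = np.random.default_rng(0)
S = np.linspace(5.0, 100.0, 30)                   # spawning stock (arbitrary units)
alpha_true, beta_true = 2.0, 0.02
R = alpha_true * S * np.exp(-beta_true * S) * rng.lognormal(0.0, 0.1, S.size)

y = np.log(R / S)                                 # log-linearised response
A = np.column_stack([np.ones_like(S), -S])        # columns for log(alpha) and beta
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
alpha_hat, beta_hat = np.exp(coef[0]), coef[1]
print(f"alpha ~ {alpha_hat:.2f}, beta ~ {beta_hat:.3f}")
```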

  17. Intersection numbers of spectral curves

    CERN Document Server

    Eynard, B.

    2011-01-01

    We compute the symplectic invariants of an arbitrary spectral curve with only one branch point in terms of integrals of characteristic classes in the moduli space of curves. Our formula associates to any spectral curve a characteristic class, which is determined by the Laplace transform of the spectral curve. This hints at the key role of the Laplace transform in mirror symmetry. When the spectral curve is y = \sqrt{x}, the formula gives the Kontsevich-Witten intersection numbers; when the spectral curve is chosen to be the Lambert function \exp{x} = y\exp{-y}, the formula gives the ELSV formula for Hurwitz numbers; and when one chooses the mirror of C^3 with framing f, i.e. \exp{-x} = \exp{-yf}(1-\exp{-y}), the formula gives the Marino-Vafa formula, i.e. the generating function of Gromov-Witten invariants of C^3. In some sense this formula generalizes the ELSV, Marino-Vafa, and Mumford formulas.

  18. Dissolution glow curve in LLD

    International Nuclear Information System (INIS)

    Haverkamp, U.; Wiezorek, C.; Poetter, R.

    1990-01-01

    Lyoluminescence dosimetry is based upon light emission during dissolution of previously irradiated dosimetric materials. The lyoluminescence signal is expressed in the dissolution glow curve. These curves begin, depending on the dissolution system, with a high peak followed by an exponentially decreasing intensity. System parameters that influence the shape of the dissolution glow curve are, for example, injection speed, temperature and pH value of the solution and the design of the dissolution cell. The initial peak does not significantly correlate with the absorbed dose; it is mainly an effect of the injection. The decay of the curve consists of two exponential components: one fast and one slow. The components depend on the absorbed dose and the dosimetric materials used. In particular, the slow component correlates with the absorbed dose. In contrast to the fast component, the argument of the exponential function of the slow component is independent of the dosimetric materials investigated: trehalose, glucose and mannitol. The maximum value, following the peak of the curve, and the integral light output are a measure of the absorbed dose. The reason for the different light outputs of various dosimetric materials after irradiation with the same dose is the differing solubility. The character of the dissolution glow curves is the same following irradiation with photons, electrons or neutrons. (author)
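    The two-component decay described above can be fitted with a standard nonlinear least-squares routine; the sketch below uses a synthetic signal with arbitrary parameter values, separates the fast and slow components and integrates the light output.

```python
# Sketch: fit a dissolution glow curve decay with two exponential components,
# I(t) = A_fast*exp(-t/tau_fast) + A_slow*exp(-t/tau_slow), then integrate the
# light output. Synthetic data; all parameter values are illustrative only.
import numpy as np
from scipy.optimize import curve_fit

def decay(t, a_f, tau_f, a_s, tau_s):
    return a_f * np.exp(-t / tau_f) + a_s * np.exp(-t / tau_s)

t = np.linspace(0.0, 60.0, 600)                        # time after the injection peak (s)
rng = np.random.default_rng(1)
signal = decay(t, 800.0, 2.0, 150.0, 20.0) + rng.normal(0.0, 5.0, t.size)

p0 = [500.0, 1.0, 100.0, 10.0]                         # rough initial guess
popt, _ = curve_fit(decay, t, signal, p0=p0)
a_f, tau_f, a_s, tau_s = popt

light_sum = np.trapz(signal, t)                        # integral light output
print(f"slow component: amplitude {a_s:.0f}, time constant {tau_s:.1f} s")
print(f"integrated light output: {light_sum:.0f} (dose-related)")
```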

  19. Detecting Neolithic Burial Mounds from LiDAR-Derived Elevation Data Using a Multi-Scale Approach and Machine Learning Techniques

    Directory of Open Access Journals (Sweden)

    Alexandre Guyot

    2018-02-01

    Full Text Available Airborne LiDAR technology is widely used in archaeology and over the past decade has emerged as an accurate tool to describe anthropogenic landforms. Archaeological features are traditionally emphasised on a LiDAR-derived Digital Terrain Model (DTM) using multiple Visualisation Techniques (VTs), and occasionally aided by automated feature detection or classification techniques. Such an approach offers limited results when applied to heterogeneous structures (different sizes, morphologies), which is often the case for archaeological remains that have been altered throughout the ages. This study proposes to overcome these limitations by developing a multi-scale analysis of topographic position combined with supervised machine learning algorithms (Random Forest). Rather than highlighting individual topographic anomalies, the multi-scalar approach allows archaeological features to be examined not only as individual objects, but within their broader spatial context. This innovative and straightforward method provides two levels of results: a composite image of topographic surface structure and a probability map of the presence of archaeological structures. The method was developed to detect and characterise megalithic funeral structures in the region of Carnac, the Bay of Quiberon, and the Gulf of Morbihan (France), which is currently considered for inclusion on the UNESCO World Heritage List. As a result, known archaeological sites have successfully been geo-referenced with a greater accuracy than before (even when located under dense vegetation), and a ground check confirmed the identification of a previously unknown Neolithic burial mound in the commune of Carnac.
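    A condensed sketch of the multi-scale idea, not the authors' exact pipeline: compute a topographic position index from the DTM at several neighbourhood sizes, stack the scales as per-pixel features, and train a Random Forest on labelled pixels. The file names and label raster below are hypothetical.

```python
# Sketch: multi-scale topographic position features from a DTM combined with a
# Random Forest classifier. Simplified stand-in for the paper's workflow; the
# DTM array and the training labels are assumed to be available as files.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.ensemble import RandomForestClassifier

def multiscale_tpi(dtm, scales=(3, 9, 21, 45)):
    """Topographic position index (cell minus neighbourhood mean) at several scales."""
    return np.stack([dtm - uniform_filter(dtm, size=s) for s in scales], axis=-1)

dtm = np.load("dtm.npy")              # hypothetical LiDAR-derived DTM (2D float array)
labels = np.load("labels.npy")        # hypothetical raster: 1 = mound, 0 = background

features = multiscale_tpi(dtm).reshape(-1, 4)
y = labels.ravel()

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(features, y)

# Probability map of the presence of archaeological structures
prob_map = clf.predict_proba(features)[:, 1].reshape(dtm.shape)
```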

  20. LEARNING CURVE IN ENDOSCOPIC TRANSNASAL SELLAR REGION SURGERY

    Directory of Open Access Journals (Sweden)

    Ananth G

    2016-07-01

    Full Text Available BACKGROUND The endoscopic endonasal approach for sellar region lesions is a novel technique and an effective surgical option. The evidence thus far has been conflicting, with reports both in favour of and against a learning curve. We attempt to determine the learning curve associated with this approach. METHODS Retrospective and prospective data on the patients who were surgically treated for sellar region lesions between 2013 and 2016 were collected; 32 patients were operated on by the endoscopic endonasal approach at Vydehi Institute of Medical Sciences and Research Centre, Bangalore. Age, sex, presenting symptoms, length of hospital stay, surgical approach, type of dissection, duration of surgery, sellar floor repair, intraoperative and postoperative complications were noted. All the procedures were performed by a single neurosurgeon. RESULTS A total of 32 patients were operated on, of whom 21 had non-functioning pituitary adenomas, 2 had growth hormone-secreting functional adenomas, 1 had an invasive pituitary adenoma, 4 had craniopharyngiomas, 2 had meningiomas, 1 had a Rathke’s cleft cyst and 1 had a clival chordoma. Headache was the mode of presentation in 12 patients, 12 patients had visual deficits, and 6 patients presented with hormonal disturbances, amongst which 4 presented with features of panhypopituitarism and 2 with acromegaly. Amongst the 4 patients with panhypopituitarism, 2 also had DI; two patients presented with CSF rhinorrhoea. There was a 100% improvement in the patients who presented with visual symptoms. Gross total resection was achieved in all 4 cases of craniopharyngiomas and 13 cases of pituitary adenomas. Postoperative CSF leak was seen in 4 patients, who underwent re-exploration and sellar floor repair; 9 patients had postoperative diabetes insipidus (DI), which was transient, and the incidence of DI reduced towards the end of the study. There was a 25% decrease in the operating time towards the end of

  1. Development of Techniques for Small Scale Indigenous 99Mo Production Using LEU Targets at ICN Pitesti-Romania [Country report: Romania

    International Nuclear Information System (INIS)

    2015-01-01

    Initiation of the IAEA Coordinated Research Project (CRP) “Development Techniques for Small Scale Indigenous 99 Mo Production Using LEU Fission or Neutron Activation” during 2005 allowed Member States to participate through their research organization on contractor arrangement to accomplish the CRP objectives. Among these, the participating research organization Institute for Nuclear Research Pitesti Romania (ICN), was the beneficiary of financial support and Argonne National Laboratory assistance provided by US Department of Energy to the CRP for development of techniques for fission 99 Mo production based on LEU modified CINTICHEM process. The Agency’s role in this field was to assist in the transfer and adaptation of existing technology in order to disseminate a technique, which advances international non-proliferation objectives and promotes sustainable development needs, while also contributing to extend the production capacity for addressing supply shortages from the latest years. The Institute for Nuclear Research, considering the existing good conditions of infrastructure of the research reactor with suitable irradiation conditions for radioisotopes, a post irradiation laboratory with direct transfer of irradiated targets from the reactor and handling of high radioactive sources, and simultaneously the existence of an expanding internal market, decided to undertake the necessary steps in order to produce fission molybdenum. The Institute intends to develop the capability to respond to the domestic needs in cooperation with the IFINN–HH from Bucharest, which is able to perform the last step consisting in the loading of fission molybdenum on chromatography generators and dispensing to the final client. The primary scope of the project is the development of the necessary technological steps and chemical processing steps in order to be able to cover the entire process for fission molybdenum production at the required standard of purity

  2. An Adaptive Pruning Algorithm for the Discrete L-Curve Criterion

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg; Rodriguez, Giuseppe

    2004-01-01

    SVD or regularizing CG iterations). Our algorithm needs no pre-defined parameters, and in order to capture the global features of the curve in an adaptive fashion, we use a sequence of pruned L-curves that correspond to considering the curves at different scales. We compare our new algorithm...
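    The adaptive pruning algorithm itself is more involved; as a simpler illustration of what "locating the corner of a discrete L-curve" means, the sketch below picks the point of maximum curvature of the curve of residual norms versus solution norms in log-log coordinates. This is not the paper's algorithm, only the goal it pursues, and the norm sequences are fabricated.

```python
# Sketch: locate the corner of a discrete L-curve as the point of maximum
# curvature in log-log coordinates. Illustrates the aim of the L-curve
# criterion; it is NOT the adaptive pruning algorithm of the paper.
import numpy as np

def lcurve_corner(residual_norms, solution_norms):
    x = np.log(np.asarray(residual_norms, dtype=float))
    y = np.log(np.asarray(solution_norms, dtype=float))
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    curvature = np.abs(dx * ddy - dy * ddx) / (dx * dx + dy * dy) ** 1.5
    return int(np.argmax(curvature))        # index of the chosen regularization parameter

# Example with fabricated norms for 30 regularization parameters:
rho = np.logspace(-4, 0, 30)                 # residual norms (small -> large)
eta = 1.0 / rho + 10.0                       # solution norms, shaped like an "L"
print("corner index:", lcurve_corner(rho, eta))
```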

  3. Curve Boxplot: Generalization of Boxplot for Ensembles of Curves.

    Science.gov (United States)

    Mirzargar, Mahsa; Whitaker, Ross T; Kirby, Robert M

    2014-12-01

    In simulation science, computational scientists often study the behavior of their simulations by repeated solutions with variations in parameters and/or boundary values or initial conditions. Through such simulation ensembles, one can try to understand or quantify the variability or uncertainty in a solution as a function of the various inputs or model assumptions. In response to a growing interest in simulation ensembles, the visualization community has developed a suite of methods for allowing users to observe and understand the properties of these ensembles in an efficient and effective manner. An important aspect of visualizing simulations is the analysis of derived features, often represented as points, surfaces, or curves. In this paper, we present a novel, nonparametric method for summarizing ensembles of 2D and 3D curves. We propose an extension of a method from descriptive statistics, data depth, to curves. We also demonstrate a set of rendering and visualization strategies for showing rank statistics of an ensemble of curves, which is a generalization of traditional whisker plots or boxplots to multidimensional curves. Results are presented for applications in neuroimaging, hurricane forecasting and fluid dynamics.
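    A small sketch of the data-depth idea for an ensemble of 1D curves (a simplified band depth, not the authors' full method): rank curves by how often each lies entirely inside the band spanned by pairs of other curves; the deepest curve plays the role of the median and the deepest half defines the "box".

```python
# Sketch: simplified band depth for an ensemble of 1D curves, used to rank the
# curves and pick a functional "median". A reduced version of the data-depth
# idea, not the full method of the paper. Synthetic ensemble only.
import numpy as np
from itertools import combinations

def band_depth(curves):
    """curves: array (n_curves, n_points). Returns a depth value per curve."""
    n = curves.shape[0]
    depth = np.zeros(n)
    for i, j in combinations(range(n), 2):
        lo = np.minimum(curves[i], curves[j])
        hi = np.maximum(curves[i], curves[j])
        inside = np.all((curves >= lo) & (curves <= hi), axis=1)  # whole curve in band
        depth += inside
    return depth / (n * (n - 1) / 2)

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
ensemble = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, (30, 1)) + rng.normal(0, 0.05, (30, 50))

d = band_depth(ensemble)
print("index of median curve:", int(np.argmax(d)))
print("deepest 50% (the 'box'):", np.argsort(d)[::-1][:15])
```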

  4. Multiphoton absorption coefficients in solids: a universal curve

    International Nuclear Information System (INIS)

    Brandi, H.S.; Araujo, C.B. de

    1983-04-01

    A universal curve for the frequency dependence of the multiphoton absorption coefficient is proposed, based on a 'non-perturbative' approach. Specific applications have been made to obtain the two-, three-, four- and five-photon absorption coefficients in different materials. Proper scaling of the two-photon absorption coefficient and the use of the universal curve yield results for the higher-order absorption coefficients in good agreement with the experimental data. (Author) [pt]

  5. Estimating Aquifer Transmissivity Using the Recession-Curve-Displacement Method in Tanzania’s Kilombero Valley

    Directory of Open Access Journals (Sweden)

    William Senkondo

    2017-12-01

    Full Text Available Information on aquifer processes and characteristics across scales has long been a cornerstone for understanding water resources. However, point measurements are often limited in extent and representativeness. Techniques that increase the support scale (footprint) of measurements or leverage existing observations in novel ways can thus be useful. In this study, we used a recession-curve-displacement method to estimate regional-scale aquifer transmissivity (T) from streamflow records across the Kilombero Valley of Tanzania. We compare these estimates to local-scale estimates made from pumping tests across the Kilombero Valley. The median T from the pumping tests was 0.18 m2/min. This was quite similar to the median T estimated from the recession-curve-displacement method applied during the wet season for the entire basin (0.14 m2/min) and for one of the two sub-basins tested (0.16 m2/min). On the basis of our findings, there appears to be reasonable potential to inform water resource management and hydrologic model development through streamflow-derived transmissivity estimates, which is promising for data-limited environments facing rapid development, such as the Kilombero Valley.

  6. A learning curve for solar thermal power

    Science.gov (United States)

    Platzer, Werner J.; Dinter, Frank

    2016-05-01

    Photovoltaics started its success story by predicting cost degression as a function of cumulated installed capacity. This so-called learning curve was first published and used for predictions for PV modules; predictions of system cost decrease were then developed as well. This approach is less sensitive to political decisions and changing market situations than predictions on the time axis. Cost degression due to innovation, scaling effects, improved project management, standardised procedures, the search for better sites and optimization of project size are learning effects which can only be utilised when projects are developed. Therefore a presentation of CAPEX versus cumulated installed capacity is proposed in order to show the possible future advancement of the technology to politics and the market. However, from the wide range of publications on cost for CSP it is difficult to derive a learning curve. A logical cost structure for direct and indirect capital expenditure is needed as the basis for further analysis. Using derived reference costs for typical power plant configurations, predictions of future cost have been made. Only on the basis of that cost structure and the learning curve should the levelised cost of electricity for solar thermal power plants be calculated for individual projects with different capacity factors in various locations.
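    The standard one-factor learning curve behind this kind of analysis can be written as C(X) = C0·(X/X0)^(−b) with learning rate LR = 1 − 2^(−b). A small hedged sketch follows; the numbers are illustrative, not the paper's reference costs.

```python
# Sketch: one-factor learning curve for CAPEX versus cumulative installed
# capacity, C(X) = C0 * (X / X0)**(-b), learning rate LR = 1 - 2**(-b).
# All numbers below are illustrative assumptions, not the paper's reference costs.
import numpy as np

C0 = 5000.0          # reference CAPEX (EUR/kW) at reference capacity X0, assumed
X0 = 5.0             # reference cumulative capacity (GW), assumed
LR = 0.10            # assumed 10 % cost reduction per doubling of capacity
b = -np.log2(1.0 - LR)

X = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # cumulative capacity scenarios (GW)
capex = C0 * (X / X0) ** (-b)
for x, c in zip(X, capex):
    print(f"{x:5.0f} GW -> {c:7.0f} EUR/kW")
```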

  7. Two-Point Codes for the Generalised GK curve

    DEFF Research Database (Denmark)

    Barelli, Élise; Beelen, Peter; Datta, Mrinmoy

    2017-01-01

    We improve previously known lower bounds for the minimum distance of certain two-point AG codes constructed using a Generalized Giulietti–Korchmaros curve (GGK). Castellanos and Tizziotti recently described such bounds for two-point codes coming from the Giulietti–Korchmaros curve (GK). Our results completely cover and in many cases improve on their results, using different techniques, while also supporting any GGK curve. Our method builds on the order bound for AG codes: to enable this, we study certain Weierstrass semigroups. This allows an efficient algorithm for computing our improved bounds. We...

  8. Real-Time Exponential Curve Fits Using Discrete Calculus

    Science.gov (United States)

    Rowe, Geoffrey

    2010-01-01

    An improved solution for curve fitting data to an exponential equation (y = A*e^(Bt) + C) has been developed. This improvement is in four areas -- speed, stability, determinant processing time, and the removal of limits. The solution presented avoids iterative techniques and their stability errors by using three mathematical ideas: discrete calculus, a special relationship (between exponential curves and the Mean Value Theorem for Derivatives), and a simple linear curve fit algorithm. This method can also be applied to fitting data to the general power law equation y = A*x^B + C and the general geometric growth equation y = A*k^(Bt) + C.
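    One way to realize the non-iterative idea (a hedged reading of the abstract, not necessarily the exact published algorithm) uses the identity dy/dt = B·(y − C): a linear regression of the numerical derivative on y yields B and C directly, after which A follows from a second linear fit.

```python
# Sketch: non-iterative fit of y = A*exp(B*t) + C using a discrete derivative
# and two linear fits. A hedged reading of the approach, not the published code.
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 5.0, 200)
A_true, B_true, C_true = 2.5, -0.8, 1.2
y = A_true * np.exp(B_true * t) + C_true + rng.normal(0.0, 0.01, t.size)

# Since dy/dt = B*(y - C) = B*y - B*C, regress the numerical derivative on y.
dydt = np.gradient(y, t)
slope, intercept = np.polyfit(y, dydt, 1)
B_hat = slope
C_hat = -intercept / B_hat

# With B and C known, A follows from a linear fit of (y - C) against exp(B*t).
basis = np.exp(B_hat * t)
A_hat = np.sum(basis * (y - C_hat)) / np.sum(basis * basis)

print(f"A ~ {A_hat:.2f}, B ~ {B_hat:.2f}, C ~ {C_hat:.2f}")
```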

  9. On the distribution of Weierstrass points on Gorenstein quintic curves

    Directory of Open Access Journals (Sweden)

    Kamel Alwaleed

    2016-07-01

    Full Text Available This paper is concerned with developing a technique to compute in a very precise way the distribution of Weierstrass points on the members of any 1-parameter family C_a, a ∈ C, of Gorenstein quintic curves with respect to the dualizing sheaf K_{C_a}. The nicest feature of the procedure is that it gives a way to produce examples of the existence of Weierstrass points with prescribed special gap sequences, by looking at plane curves or, more generally, at subcanonical curves embedded in some higher dimensional projective space.

  10. Shape optimization of self-avoiding curves

    Science.gov (United States)

    Walker, Shawn W.

    2016-04-01

    This paper presents a softened notion of proximity (or self-avoidance) for curves. We then derive a sensitivity result, based on shape differential calculus, for the proximity. This is combined with a gradient-based optimization approach to compute three-dimensional, parameterized curves that minimize the sum of an elastic (bending) energy and a proximity energy that maintains self-avoidance by a penalization technique. Minimizers are computed by a sequential-quadratic-programming (SQP) method where the bending energy and proximity energy are approximated by a finite element method. We then apply this method to two problems. First, we simulate adsorbed polymer strands that are constrained to be bound to a surface and be (locally) inextensible. This is a basic model of semi-flexible polymers adsorbed onto a surface (a current topic in material science). Several examples of minimizing curve shapes on a variety of surfaces are shown. An advantage of the method is that it can be much faster than using molecular dynamics for simulating polymer strands on surfaces. Second, we apply our proximity penalization to the computation of ideal knots. We present a heuristic scheme, utilizing the SQP method above, for minimizing rope-length and apply it in the case of the trefoil knot. Applications of this method could be for generating good initial guesses to a more accurate (but expensive) knot-tightening algorithm.

  11. Numerical Characterization of Piezoceramics Using Resonance Curves

    Science.gov (United States)

    Pérez, Nicolás; Buiochi, Flávio; Brizzotti Andrade, Marco Aurélio; Adamowski, Julio Cezar

    2016-01-01

    Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods. PMID:28787875
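    As a much-reduced illustration of "adjusting a model to the measured electrical impedance curve", the sketch below fits a Butterworth–Van Dyke equivalent circuit with a single resonance to an impedance magnitude curve by least squares. The paper's actual forward model is a full finite-element model of the disk; the lumped circuit and all numerical values here are only illustrative stand-ins.

```python
# Sketch: least-squares fit of a Butterworth-Van Dyke (BVD) equivalent circuit
# to an impedance magnitude curve. The BVD circuit is only a lumped stand-in
# for the paper's finite-element forward model; all values are illustrative.
import numpy as np
from scipy.optimize import least_squares

def bvd_impedance(f, R1, L1, C1, C0):
    w = 2.0 * np.pi * f
    z_motional = R1 + 1j * w * L1 + 1.0 / (1j * w * C1)   # series RLC (motional) branch
    y = 1j * w * C0 + 1.0 / z_motional                     # in parallel with C0
    return 1.0 / y

f = np.linspace(1.9e6, 2.1e6, 400)                          # frequency sweep (Hz)
true = (50.0, 8.0e-3, 0.8e-12, 1.0e-9)                      # "measured" device constants
noise = 1 + 0.01 * np.random.default_rng(4).normal(size=f.size)
z_meas = np.abs(bvd_impedance(f, *true)) * noise

def residual(p):
    return np.log(np.abs(bvd_impedance(f, *p))) - np.log(z_meas)

fit = least_squares(residual, x0=(80.0, 7.0e-3, 0.9e-12, 1.5e-9), method="lm")
print("fitted R1, L1, C1, C0:", fit.x)
```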

  12. Detection of flaws below curved surfaces

    International Nuclear Information System (INIS)

    Elsley, R.K.; Addison, R.C.; Graham, L.J.

    1983-01-01

    A measurement model has been developed to describe ultrasonic measurements made with circular piston transducers in parts with flat or cylindrically curved surfaces. The model includes noise terms to describe electrical noise, scatterer noise and echo noise as well as effects of attenuation, diffraction and Fresnel loss. An experimental procedure for calibrating the noise terms of the model was developed. Experimental measurements were made on a set of known flaws located beneath a cylindrically curved surface. The model was verified by using it to correct the experimental measurements to obtain the absolute scattering amplitude of the flaws. For longitudinal wave propagation within the part, the derived scattering amplitudes were consistent with predictions at internal angles of less than 30°. At larger angles, focusing and aberrations caused a lack of agreement; the model needs further refinement in this case. For shear waves, it was found that the frequency for optimum flaw detection in the presence of material noise is lower than that for longitudinal waves; lower frequency measurements are currently in progress. The measurement model was then used to make preliminary predictions of the best experimental measurement technique for the detection of cracks located under cylindrically curved surfaces

  13. Numerical Characterization of Piezoceramics Using Resonance Curves

    Directory of Open Access Journals (Sweden)

    Nicolás Pérez

    2016-01-01

    Full Text Available Piezoelectric materials characterization is a challenging problem involving physical concepts, electrical and mechanical measurements and numerical optimization techniques. Piezoelectric ceramics such as Lead Zirconate Titanate (PZT) belong to the 6 mm symmetry class, which requires five elastic, three piezoelectric and two dielectric constants to fully represent the material properties. If losses are considered, the material properties can be represented by complex numbers. In this case, 20 independent material constants are required to obtain the full model. Several numerical methods have been used to adjust the theoretical models to the experimental results. The continuous improvement of the computer processing ability has allowed the use of a specific numerical method, the Finite Element Method (FEM), to iteratively solve the problem of finding the piezoelectric constants. This review presents the recent advances in the numerical characterization of 6 mm piezoelectric materials from experimental electrical impedance curves. The basic strategy consists in measuring the electrical impedance curve of a piezoelectric disk, and then combining the Finite Element Method with an iterative algorithm to find a set of material properties that minimizes the difference between the numerical impedance curve and the experimental one. Different methods to validate the results are also discussed. Examples of characterization of some common piezoelectric ceramics are presented to show the practical application of the described methods.

  14. Curve Digitizer – A software for multiple curves digitizing

    Directory of Open Access Journals (Sweden)

    Florentin ŞPERLEA

    2010-06-01

    Full Text Available The Curve Digitizer is software that extracts data from an image file representing a graphic and returns them as pairs of numbers which can then be used for further analysis and applications. Numbers can be read on a computer screen, stored in files or copied on paper. The final result is a data set that can be used with other tools such as MS EXCEL. Curve Digitizer provides a useful tool for any researcher or engineer interested in quantifying data displayed graphically. The image file can be obtained by scanning a document.

  15. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Directory of Open Access Journals (Sweden)

    S. Ars

    2017-12-01

    Full Text Available This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping
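    A toy sketch of the two ingredients follows, with illustrative numbers only: a Gaussian plume forward model for a point source, and a one-parameter least-squares inversion of the emission rate from concentrations measured across the plume. In the study the tracer release constrains the plume-spread parameters and the error statistics; here they are simply assumed.

```python
# Sketch: Gaussian plume forward model plus a one-parameter inversion of the
# emission rate from cross-plume concentration measurements. Dispersion
# parameters and all numbers are assumed; in the paper the tracer release is
# used to constrain the transport model and its uncertainty.
import numpy as np

def gaussian_plume(q, x, y, z, u=3.0, h=2.0, a=0.08, b=0.06):
    """Concentration (kg/m^3) of a point source of strength q (kg/s) at (x, y, z)."""
    sigma_y, sigma_z = a * x, b * x                 # crude linear plume spread, assumed
    return (q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - h)**2 / (2 * sigma_z**2)) + np.exp(-(z + h)**2 / (2 * sigma_z**2))))

# "Measurements" across the plume at 200 m downwind, 1.5 m above ground:
y_obs = np.linspace(-60.0, 60.0, 25)
q_true = 0.002                                      # 2 g/s of methane, illustrative
c_obs = gaussian_plume(q_true, 200.0, y_obs, 1.5)
c_obs *= 1 + 0.05 * np.random.default_rng(5).normal(size=y_obs.size)

# The model is linear in q, so the least-squares emission estimate is a ratio:
g = gaussian_plume(1.0, 200.0, y_obs, 1.5)          # response to a unit source
q_hat = np.sum(g * c_obs) / np.sum(g * g)
print(f"inverted emission rate: {q_hat * 1000:.2f} g/s (true {q_true * 1000:.1f} g/s)")
```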

  16. Statistical atmospheric inversion of local gas emissions by coupling the tracer release technique and local-scale transport modelling: a test case with controlled methane emissions

    Science.gov (United States)

    Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe

    2017-12-01

    This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances

  17. Nonequilibrium recombination after a curved shock wave

    Science.gov (United States)

    Wen, Chihyung; Hornung, Hans

    2010-02-01

    The effect of nonequilibrium recombination after a curved two-dimensional shock wave in a hypervelocity dissociating flow of an inviscid Lighthill-Freeman gas is considered. An analytical solution is obtained with the effective shock values derived by Hornung (1976) [5] and the assumption that the flow is ‘quasi-frozen’ after a thin dissociating layer near the shock. The solution gives the expression of dissociation fraction as a function of temperature on a streamline. A rule of thumb can then be provided to check the validity of binary scaling for experimental conditions and a tool to determine the limiting streamline that delineates the validity zone of binary scaling. The effects on the nonequilibrium chemical reaction of the large difference in free stream temperature between free-piston shock tunnel and equivalent flight conditions are discussed. Numerical examples are presented and the results are compared with solutions obtained with two-dimensional Euler equations using the code of Candler (1988) [10].

  18. Vertex algebras and algebraic curves

    CERN Document Server

    Frenkel, Edward

    2004-01-01

    Vertex algebras are algebraic objects that encapsulate the concept of operator product expansion from two-dimensional conformal field theory. Vertex algebras are fast becoming ubiquitous in many areas of modern mathematics, with applications to representation theory, algebraic geometry, the theory of finite groups, modular functions, topology, integrable systems, and combinatorics. This book is an introduction to the theory of vertex algebras with a particular emphasis on the relationship with the geometry of algebraic curves. The notion of a vertex algebra is introduced in a coordinate-independent way, so that vertex operators become well defined on arbitrary smooth algebraic curves, possibly equipped with additional data, such as a vector bundle. Vertex algebras then appear as the algebraic objects encoding the geometric structure of various moduli spaces associated with algebraic curves. Therefore they may be used to give a geometric interpretation of various questions of representation theory. The book co...

  19. Rational points on elliptic curves

    CERN Document Server

    Silverman, Joseph H

    2015-01-01

    The theory of elliptic curves involves a pleasing blend of algebra, geometry, analysis, and number theory. This book stresses this interplay as it develops the basic theory, thereby providing an opportunity for advanced undergraduates to appreciate the unity of modern mathematics. At the same time, every effort has been made to use only methods and results commonly included in the undergraduate curriculum. This accessibility, the informal writing style, and a wealth of exercises make Rational Points on Elliptic Curves an ideal introduction for students at all levels who are interested in learning about Diophantine equations and arithmetic geometry. Most concretely, an elliptic curve is the set of zeroes of a cubic polynomial in two variables. If the polynomial has rational coefficients, then one can ask for a description of those zeroes whose coordinates are either integers or rational numbers. It is this number theoretic question that is the main subject of this book. Topics covered include the geometry and ...

  20. Theoretical melting curve of caesium

    International Nuclear Information System (INIS)

    Simozar, S.; Girifalco, L.A.; Pennsylvania Univ., Philadelphia

    1983-01-01

    A statistical-mechanical model is developed to account for the complex melting curve of caesium. The model assumes the existence of three different species of caesium defined by three different electronic states. On the basis of this model, the free energy of melting and the melting curve are computed up to 60 kbar, using the solid-state data and the initial slope of the fusion curve as input parameters. The calculated phase diagram agrees with experiment to within the experimental error. Other thermodynamic properties including the entropy and volume of melting were also computed, and they agree with experiment. Since the theory requires only one adjustable constant, this is taken as strong evidence that the three-species model is satisfactory for caesium. (author)

  1. Migration and the Wage Curve:

    DEFF Research Database (Denmark)

    Brücker, Herbert; Jahn, Elke J.

    Based on a wage curve approach we examine the labor market effects of migration in Germany. The wage curve relies on the assumption that wages respond to a change in the unemployment rate, albeit imperfectly. This allows one to derive the wage and employment effects of migration simultaneously in a general equilibrium framework. For the empirical analysis we employ the IABS, a two percent sample of the German labor force. We find that the elasticity of the wage curve is particularly high for young workers and workers with a university degree, while it is low for older workers and workers with a vocational degree. The wage and employment effects of migration are moderate: a 1 percent increase in the German labor force through immigration increases the aggregate unemployment rate by less than 0.1 percentage points and reduces average wages by less than 0.1 percent. While native workers benefit from...

  2. Laffer Curves and Home Production

    Directory of Open Access Journals (Sweden)

    Kotamäki Mauri

    2017-06-01

    Full Text Available In the earlier related literature, the consumption tax rate Laffer curve is found to be strictly increasing (see Trabandt and Uhlig (2011)). In this paper, a general equilibrium macro model is augmented by introducing a substitute for private consumption in the form of home production. The introduction of home production brings about an additional margin of adjustment – an increase in the consumption tax rate not only decreases labor supply and reduces the consumption tax base but also allows a substitution of market goods with home-produced goods. The main objective of this paper is to show that, after the introduction of home production, the consumption tax Laffer curve exhibits an inverse U-shape. The income tax Laffer curves are also significantly altered. The result shown in this paper casts doubt on some of the earlier results in the literature.

  3. Complexity of Curved Glass Structures

    Science.gov (United States)

    Kosić, T.; Svetel, I.; Cekić, Z.

    2017-11-01

    Despite the growing body of research on architectural structures of curvilinear form and the technological and practical improvements in glass production observed over recent years, there is still a lack of comprehensive codes and standards, recommendations and experience data linked to real-life curved glass structure applications regarding design, manufacture, use, performance and economy. However, more and more complex buildings and structures with large areas of geometrically complex glass envelope are built every year. The aim of the presented research is to collect data on the existing design philosophy in curved glass structure cases. The investigation includes a survey about how architects and engineers deal with different design aspects of curved glass structures, with a special focus on the design and construction process, glass types, and structural and fixing systems. The current paper gives a brief overview of the survey findings.

  4. Some fundamental questions about R-curves

    International Nuclear Information System (INIS)

    Kolednik, O.

    1992-01-01

    With the help of two simple thought experiments it is demonstrated that there exist two physically different types of fracture toughness. The crack-growth toughness, which is identical to the Griffith crack growth resistance, R, is a measure of the non-reversible energy which is needed to produce an increment of new crack area. The size of R is reflected by the slopes of the R-curves commonly used. So an increasing J-Δa curve does not mean that the crack-growth resistance increases. The fracture initiation toughness, J_i, is a normalized total energy (related to the ligament area) which must be put into the specimen up to fracture initiation. Only for ideally brittle materials do R and J_i have equal sizes. For small-scale yielding a relationship exists between R and J_i, so a one-parameter description of fracture processes is applicable. For large-scale yielding R and J_i are not strictly related and both parameters are necessary to describe the fracture process. (orig.) [de]

  5. The micro-scale synthesis of (117)Sn-enriched tributyltin chloride and its characterization by GC-ICP-MS and NMR techniques.

    Science.gov (United States)

    Peeters, Kelly; Iskra, Jernej; Zuliani, Tea; Ščančar, Janez; Milačič, Radmila

    2014-07-01

    Organotin compounds (OTCs) are among the most toxic substances ever introduced to the environment by man. They are common pollutants in marine ecosystems, but are also present in the terrestrial environment, accumulated mainly in sewage sludge and landfill leachates. In investigations of the degradation and methylation processes of OTC in environmental samples, the use of enriched isotopic tracers represents a powerful analytical tool. Sn-enriched OTC are also necessary in application of the isotope dilution mass spectrometry technique for their accurate quantification. Since Sn-enriched monobutyltin (MBT), dibutyltin (DBT) and tributyltin (TBT) are not commercially available as single species, "in house" synthesis of individual butyltin-enriched species is necessary. In the present work, the preparation of the most toxic butyltin, namely TBT, was performed via a simple synthetic path, starting with bromination of metallic Sn, followed by butylation with butyl lithium. The tetrabutyltin (TeBT) formed was transformed to tributyltin chloride (TBTCl) using concentrated hydrochloric acid (HCl). The purity of the synthesized TBT was verified by speciation analysis using the techniques of gas chromatography coupled to inductively coupled plasma mass spectrometry (GC-ICP-MS) and nuclear magnetic resonance (NMR). The results showed that TBT had a purity of more than 97%. The remaining 3% corresponded to DBT. TBT was quantified by reverse isotope dilution GC-ICP-MS. The synthesis yield was around 60%. The advantage of this procedure over those previously reported lies in its possibility to be applied on a micro-scale (starting with 10mg of metallic Sn). This feature is of crucial importance, since enriched metallic Sn is extremely expensive. The procedure is simple and repeatable, and was successfully applied for the preparation of (117)Sn-enriched TBTCl from (117)Sn-enriched metal. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Optimization on Spaces of Curves

    DEFF Research Database (Denmark)

    Møller-Andersen, Jakob

    in Rd, and methods to solve the initial and boundary value problem for geodesics allowing us to compute the Karcher mean and principal components analysis of data of curves. We apply the methods to study shape variation in synthetic data in the Kimia shape database, in HeLa cell nuclei and cycles...... of cardiac deformations. Finally we investigate a new application of Riemannian shape analysis in shape optimization. We setup a simple elliptic model problem, and describe how to apply shape calculus to obtain directional derivatives in the manifold of planar curves. We present an implementation based...

  7. Tracing a planar algebraic curve

    International Nuclear Information System (INIS)

    Chen Falai; Kozak, J.

    1994-09-01

    In this paper, an algorithm that determines a real algebraic curve is outlined. Its basic step is to divide the plane into subdomains that include only simple branches of the algebraic curve without singular points. Each of the branches is then stably and efficiently traced in the particular subdomain. Except for the tracing, the algorithm requires only a couple of simple operations on polynomials that can be carried out exactly if the coefficients are rational, and the determination of zeros of several polynomials of one variable. (author). 5 refs, 4 figs

  8. The New Keynesian Phillips Curve

    DEFF Research Database (Denmark)

    Ólafsson, Tjörvi

    This paper provides a survey of the recent literature on the new Keynesian Phillips curve: the controversies surrounding its microfoundation and estimation, the approaches that have been tried to improve its empirical fit and the challenges it faces adapting to the open-economy framework. The new......, learning or state-dependent pricing. The introduction of open-economy factors into the new Keynesian Phillips curve complicates matters further as it must capture the nexus between price setting, inflation and the exchange rate. This is nevertheless a crucial feature for any model to be used for inflation...... forecasting in a small open economy like Iceland....

  9. Determination of water retention curves of concrete

    International Nuclear Information System (INIS)

    Villar, M.V.; Romero, F.J.

    2015-01-01

    The water retention curves of concrete and mortar, obtained with two different techniques and following wetting and drying paths, were determined. The material was the same as that used to manufacture the disposal cells of the Spanish surface facility of El Cabril. The water retention capacity of mortar is clearly higher than that of concrete when expressed as gravimetric water content, but the difference is reduced when it is expressed as degree of saturation. Hysteresis between wetting and drying was observed for both materials, particularly for mortar. The tests went on for very long periods of time, and concerns were raised about the geochemical, mineralogical and porosity changes occurring in the materials during the determinations (changes in dry mass, grain density, sample volume) and their repercussions on the results obtained (water content and degree of saturation computations). Also, the fact that the techniques applied total and matrix suction, respectively, could have affected the results. (authors)

  10. Multiperiodicity in the light curve of Alpha Orionis

    International Nuclear Information System (INIS)

    Karovska, M.

    1987-01-01

    Alpha Ori, a supergiant star classified as M2 Iab, is characterized by pronounced variability encompassing most of its observed parameters. Variability on two different time scales has been observed in the light and velocity curves: a long-period variation of about 6 years and, superposed on this, irregular fluctuations having a time scale of several hundred days. This paper reports the results of Fourier analysis of more than 60 years of AAVSO (American Association of Variable Star Observers) data which suggest a multiperiodicity in the light curve of α Ori
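    For unevenly sampled visual light curves such as the AAVSO series, periodicities on the two time scales can be searched for with a Lomb-Scargle periodogram. A minimal sketch with synthetic magnitudes (not the actual AAVSO data, and not necessarily the paper's Fourier procedure) follows.

```python
# Sketch: Lomb-Scargle periodogram for an unevenly sampled light curve, looking
# for long (~years) and shorter (~hundreds of days) periodicities. Synthetic
# magnitudes are used here instead of the actual AAVSO observations.
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 60.0 * 365.25, 4000))          # observation epochs (days)
mag = (0.3 * np.sin(2 * np.pi * t / 2200.0)                 # ~6-year component
       + 0.15 * np.sin(2 * np.pi * t / 400.0)               # ~400-day component
       + rng.normal(0.0, 0.05, t.size))

periods = np.linspace(100.0, 4000.0, 2000)                  # trial periods (days)
freqs = 2.0 * np.pi / periods                                # angular frequencies
power = lombscargle(t, mag - mag.mean(), freqs)

best = periods[np.argmax(power)]
print(f"strongest periodicity near {best:.0f} days")
```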

  11. Predicting the effectiveness of different mulching techniques in reducing post-fire runoff and erosion at plot scale with the RUSLE, MMF and PESERA models.

    Science.gov (United States)

    Vieira, D C S; Serpa, D; Nunes, J P C; Prats, S A; Neves, R; Keizer, J J

    2018-08-01

    Wildfires have become a recurrent threat for many Mediterranean forest ecosystems. The characteristics of the Mediterranean climate, with its warm and dry summers and mild and wet winters, make this a region prone to wildfire occurrence as well as to post-fire soil erosion. This threat is expected to be aggravated in the future due to climate change and land management practices and planning. The wide recognition of wildfires as a driver for runoff and erosion in burnt forest areas has created a strong demand for model-based tools for predicting the post-fire hydrological and erosion response and, in particular, for predicting the effectiveness of post-fire management operations to mitigate these responses. In this study, the effectiveness of two post-fire treatments (hydromulch and natural pine needle mulch) in reducing post-fire runoff and soil erosion was evaluated against control conditions (i.e. untreated conditions), at different spatial scales. The main objective of this study was to use field data to evaluate the ability of different erosion models: (i) empirical (RUSLE), (ii) semi-empirical (MMF), and (iii) physically-based (PESERA), to predict the hydrological and erosive response as well as the effectiveness of different mulching techniques in fire-affected areas. The results of this study showed that all three models were reasonably able to reproduce the hydrological and erosive processes occurring in burned forest areas. In addition, it was demonstrated that the models can be calibrated at a small spatial scale (0.5 m2) but provide accurate results at greater spatial scales (10 m2). From this work, the RUSLE model seems to be ideal for fast and simple applications (i.e. prioritization of areas-at-risk) mainly due to its simplicity and reduced data requirements. On the other hand, the more complex MMF and PESERA models would be valuable as a base of a possible tool for assessing the risk of water contamination in fire-affected water bodies and
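    For reference, the empirical RUSLE estimate used in the comparison is simply a product of factors, A = R·K·LS·C·P. A one-line sketch with placeholder values follows; the post-fire factor values differ by treatment and are not the study's calibrated numbers.

```python
# Sketch: RUSLE soil-loss estimate A = R * K * LS * C * P with placeholder
# factor values; the calibrated post-fire factors of the study are not used here.
def rusle(R, K, LS, C, P=1.0):
    """Average annual soil loss (t/ha/yr) from the RUSLE factor product."""
    return R * K * LS * C * P

# Illustrative comparison of an untreated burnt plot against a mulched one,
# where mulching mainly lowers the cover-management factor C (assumed values).
print("untreated:", rusle(R=900.0, K=0.03, LS=2.5, C=0.20), "t/ha/yr")
print("mulched:  ", rusle(R=900.0, K=0.03, LS=2.5, C=0.05), "t/ha/yr")
```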

  12. Signature Curves Statistics of DNA Supercoils

    OpenAIRE

    Shakiban, Cheri; Lloyd, Peter

    2004-01-01

    In this paper we describe the Euclidean signature curves for two dimensional closed curves in the plane and their generalization to closed space curves. The focus will be on discrete numerical methods for approximating such curves. Further we will apply these numerical methods to plot the signature curves related to three-dimensional simulated DNA supercoils. Our primary focus will be on statistical analysis of the data generated for the signature curves of the supercoils. We will try to esta...

  13. Section curve reconstruction and mean-camber curve extraction of a point-sampled blade surface.

    Directory of Open Access Journals (Sweden)

    Wen-long Li

    The blade is one of the most critical parts of an aviation engine, and a small change in the blade geometry may significantly affect the dynamic performance of the aviation engine. Rapid advances in 3D scanning techniques have enabled the inspection of the blade shape using a dense and accurate point cloud. This paper proposes a new method for achieving two common tasks in blade inspection: section curve reconstruction and mean-camber curve extraction from a point-cloud representation. Mathematical morphology is extended and applied to suppress the effect of measurement defects and to generate an ordered sequence of 2D measured points in the section plane. Then, energy and distance are minimized to iteratively smooth the measured points, approximate the section curve and extract the mean-camber curve. In addition, a turbine blade is machined and scanned to observe the curvature variation, energy variation and approximation error, which demonstrates the effectiveness of the proposed method. The proposed method is simple to implement and can be applied in aviation casting-blade finish inspection, large forging-blade allowance inspection and vision-guided robot grinding localization.

  14. The link between the baryonic mass distribution and the rotation curve shape

    NARCIS (Netherlands)

    Swaters, R. A.; Sancisi, R.; van der Hulst, J. M.; van Albada, T. S.

    The observed rotation curves of disc galaxies, ranging from late-type dwarf galaxies to early-type spirals, can be fitted remarkably well simply by scaling up the contributions of the stellar and H I discs. This baryonic scaling model can explain the full breadth of observed rotation curves with
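
    A minimal sketch of the scaling idea, under the assumed functional form V_model^2(r) = a*V_star^2(r) + b*V_gas^2(r), is given below; the toy disc contributions and the grid search are illustrative and are not the authors' data or fitting procedure.

      import numpy as np

      def model_velocity(v_star, v_gas, a, b):
          """Rotation speed from scaled stellar and gas discs, added in quadrature."""
          return np.sqrt(a * v_star**2 + b * v_gas**2)

      r = np.linspace(1.0, 15.0, 30)                    # radius (kpc), illustrative
      v_star = 80.0 * r / (r + 2.0)                     # toy stellar-disc curve (km/s)
      v_gas = 30.0 * np.sqrt(r / 15.0)                  # toy HI-disc curve (km/s)
      v_obs = model_velocity(v_star, v_gas, 2.5, 1.4)   # synthetic "observed" curve

      grid = np.linspace(0.5, 5.0, 91)                  # candidate scale factors
      best = min(((np.sum((model_velocity(v_star, v_gas, a, b) - v_obs)**2), a, b)
                  for a in grid for b in grid))
      print("best-fit scalings (a, b):", best[1], best[2])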

  15. Genetic sexing strains in Mediterranean fruit fly, an example for other species amenable to large-scale rearing for the sterile insect technique

    International Nuclear Information System (INIS)

    Franz, G.

    2005-01-01

    Through genetic and molecular manipulations, strains can be developed that are more suitable for the sterile insect technique (SIT). In this chapter the development of genetic sexing strains (GSSs) is given as an example. GSSs increase the effectiveness of area-wide integrated pest management (AW-IPM) programmes that use the SIT by enabling the large-scale release of only sterile males. For species that transmit disease, the removal of females is mandatory. For the Mediterranean fruit fly Ceratitis capitata (Wiedemann), genetic sexing systems have been developed; they are stable enough to be used in operational programmes for extended periods of time. Until recently, the only way to generate such strains was through Mendelian genetics. In this chapter, the basic principle of translocation-based sexing strains is described, and Mediterranean fruit fly strains are used as examples to indicate the problems encountered in such strains. Furthermore, the strategies used to solve these problems are described. The advantages of following molecular strategies in the future development of sexing strains are outlined, especially for species where little basic knowledge of genetics exists. (author)

  16. Effects of processing parameters on the caffeine extraction yield during decaffeination of black tea using pilot-scale supercritical carbon dioxide extraction technique.

    Science.gov (United States)

    Ilgaz, Saziye; Sat, Ihsan Gungor; Polat, Atilla

    2018-04-01

    In this pilot-scale study, a supercritical carbon dioxide (SCCO2) extraction technique was used for decaffeination of black tea. Pressure (250, 375, 500 bar), extraction time (60, 180, 300 min), temperature (55, 62.5, 70 °C), CO2 flow rate (1, 2, 3 L/min) and modifier quantity (0, 2.5, 5 mol%) were selected as extraction parameters. A three-level, five-factor response surface methodology experimental design of the Box-Behnken type was employed to generate 46 different processing conditions. 100% of the caffeine in black tea was removed under two different extraction conditions: one consisted of 375 bar pressure, 62.5 °C temperature, 300 min extraction time, 2 L/min CO2 flow rate and 5 mol% modifier concentration, and the other was composed of the same temperature, pressure and extraction time with a 3 L/min CO2 flow rate and 2.5 mol% modifier concentration. Results showed that extraction time, pressure, CO2 flow rate and modifier quantity had a great impact on the decaffeination yield.
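
    The response-surface idea can be illustrated with a small Python sketch that fits a second-order polynomial by least squares; for brevity only two of the five factors are used, and the synthetic coefficients and noise are assumptions rather than the study's fitted model.

      import numpy as np

      rng = np.random.default_rng(1)
      pressure = rng.uniform(250, 500, 30)             # bar
      time_min = rng.uniform(60, 300, 30)              # min
      yield_pct = (20 + 0.08 * pressure + 0.15 * time_min
                   - 1e-4 * pressure**2 - 2e-4 * time_min**2
                   + rng.normal(0, 1.0, 30))           # synthetic decaffeination yield (%)

      # Second-order (quadratic + interaction) response surface fitted by least squares.
      X = np.column_stack([np.ones_like(pressure), pressure, time_min,
                           pressure**2, time_min**2, pressure * time_min])
      coef, *_ = np.linalg.lstsq(X, yield_pct, rcond=None)
      print("fitted response-surface coefficients:", np.round(coef, 5))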

  17. Dual Smarandache Curves of a Timelike Curve lying on Unit dual Lorentzian Sphere

    OpenAIRE

    Kahraman, Tanju; Hüseyin Ugurlu, Hasan

    2016-01-01

    In this paper, we give the Darboux approximation for dual Smarandache curves of a timelike curve on the unit dual Lorentzian sphere. Firstly, we define the four types of dual Smarandache curves of a timelike curve lying on the dual Lorentzian sphere.

  18. Construction of molecular potential energy curves by an optimization method

    Science.gov (United States)

    Wang, J.; Blake, A. J.; McCoy, D. G.; Torop, L.

    1991-01-01

    A technique for determining the potential energy curves for diatomic molecules from measurements of diffuse or continuum spectra is presented. It is based on a numerical procedure which minimizes the difference between the calculated spectra and the experimental measurements and can be used in cases where other techniques, such as the conventional RKR method, are not applicable. With the aid of suitable spectral data, the associated dipole electronic transition moments can be simultaneously obtained. The method is illustrated by modeling the "longest band" of molecular oxygen to extract the E ³Σu⁻ and B ³Σu⁻ potential curves in analytical form.

  19. Electro-Mechanical Resonance Curves

    Science.gov (United States)

    Greenslade, Thomas B., Jr.

    2018-01-01

    Recently I have been investigating the frequency response of galvanometers. These are direct-current devices used to measure small currents. By using a low-frequency function generator to supply the alternating-current signal and a stopwatch smartphone app to measure the period, I was able to take data to allow a resonance curve to be drawn. This…

  20. Texas curve margin of safety.

    Science.gov (United States)

    2013-01-01

    This software can be used to assist with the assessment of margin of safety for a horizontal curve. It is intended for use by engineers and technicians responsible for safety analysis or management of rural highway pavement or traffic control devices...

  1. Principal Curves on Riemannian Manifolds.

    Science.gov (United States)

    Hauberg, Soren

    2016-09-01

    Euclidean statistics are often generalized to Riemannian manifolds by replacing straight-line interpolations with geodesic ones. While these Riemannian models are familiar-looking, they are restricted by the inflexibility of geodesics, and they rely on constructions which are optimal only in Euclidean domains. We consider extensions of Principal Component Analysis (PCA) to Riemannian manifolds. Classic Riemannian approaches seek a geodesic curve passing through the mean that optimizes a criterion of interest. The requirements that the solution both be geodesic and pass through the mean tend to imply that the methods only work well when the manifold is mostly flat within the support of the generating distribution. We argue that instead of generalizing linear Euclidean models, it is more fruitful to generalize non-linear Euclidean models. Specifically, we extend the classic Principal Curves of Hastie & Stuetzle to data residing on a complete Riemannian manifold. We show that for elliptical distributions in the tangent of spaces of constant curvature, the standard principal geodesic is a principal curve. The proposed model is simple to compute and avoids many of the pitfalls of traditional geodesic approaches. We empirically demonstrate the effectiveness of the Riemannian principal curves on several manifolds and datasets.

  2. Elliptic curves and primality proving

    Science.gov (United States)

    Atkin, A. O. L.; Morain, F.

    1993-07-01

    The aim of this paper is to describe the theory and implementation of the Elliptic Curve Primality Proving algorithm. The problem of distinguishing prime numbers from composite numbers, and of resolving the latter into their prime factors, is so well known to belong to the most important and useful topics in all of arithmetic, and to have occupied the industry and sagacity of geometers both ancient and modern, that it would be superfluous to speak of it at length.

  3. A Curve for all Reasons

    Indian Academy of Sciences (India)

    from biology, feel that every pattern in the living world, ranging from the folding of ... curves b and c have the same rate of increase but reach different asymptotes. If these .... not at x = 0, but at x0, which is the minimum size at birth that will permit ...

  4. Survival curves for irradiated cells

    International Nuclear Information System (INIS)

    Gibson, D.K.

    1975-01-01

    The subject of the lecture is the probability of survival of biological cells which have been subjected to ionising radiation. The basic mathematical theories of cell survival as a function of radiation dose are developed. A brief comparison with observed survival curves is made. (author)
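
    One widely used functional form for such dose-survival relationships is the linear-quadratic model, S(D) = exp(-(alpha*D + beta*D^2)); the short sketch below evaluates it at a few doses with illustrative parameter values (not values from the lecture).

      import numpy as np

      def surviving_fraction(dose_gy, alpha=0.2, beta=0.02):
          """Linear-quadratic cell survival; alpha (1/Gy) and beta (1/Gy^2) are assumed."""
          return np.exp(-(alpha * dose_gy + beta * dose_gy**2))

      for dose in (1, 2, 4, 8):
          print(f"{dose} Gy -> surviving fraction {surviving_fraction(dose):.3f}")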

  5. Learning curves for solid oxide fuel cells

    International Nuclear Information System (INIS)

    Rivera-Tinoco, Rodrigo; Schoots, Koen; Zwaan, Bob van der

    2012-01-01

    Highlights: ► We present learning curves for fuel cells based on empirical data. ► We disentangle different cost reduction mechanisms for SOFCs. ► We distinguish between learning-by-doing, R and D, economies-of-scale and automation. - Abstract: In this article we present learning curves for solid oxide fuel cells (SOFCs). With data from fuel cell manufacturers we derive a detailed breakdown of their production costs. We develop a bottom-up model that allows for determining overall SOFC manufacturing costs with their respective cost components, among them material, energy, labor and capital charges. The results obtained from our model prove to deviate by at most 13% from total cost figures quoted in the literature. For the R and D stage of development and diffusion, we find local learning rates between 13% and 17% and we demonstrate that the corresponding cost reductions result essentially from learning-by-searching effects. When considering periods in time that focus on the pilot and early commercial production stages, we find regional learning rates of 27% and 1%, respectively, which we assume derive mainly from genuine learning phenomena. These figures turn out to be significantly higher, approximately 44% and 12% respectively, if the effects of economies-of-scale and automation are also included. When combining all production stages we obtain lr = 35%, which represents a mix of cost reduction phenomena. This high learning rate value and the potential to scale up production suggest that continued efforts in the development of SOFC manufacturing processes, as well as deployment and use of SOFCs, may lead to substantial further cost reductions.
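
    For reference, a single-factor learning curve of the kind summarized by lr = 35% can be written as cost(N) = C0 * N^(-b) with lr = 1 - 2^(-b); the sketch below evaluates this form for a few cumulative production volumes (C0 and the volumes are arbitrary assumptions).

      import numpy as np

      def unit_cost(cumulative_units, c0, lr):
          """Cost of the next unit after a given cumulative production, for learning rate lr."""
          b = -np.log2(1.0 - lr)                 # experience exponent
          return c0 * cumulative_units ** (-b)

      c0 = 10_000.0                              # assumed cost of the first unit
      for n in (1, 10, 100, 1000):
          print(n, round(unit_cost(n, c0, lr=0.35), 1))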

  6. Rational points, rational curves, and entire holomorphic curves on projective varieties

    CERN Document Server

    Gasbarri, Carlo; Roth, Mike; Tschinkel, Yuri

    2015-01-01

    This volume contains papers from the Short Thematic Program on Rational Points, Rational Curves, and Entire Holomorphic Curves and Algebraic Varieties, held from June 3-28, 2013, at the Centre de Recherches Mathématiques, Université de Montréal, Québec, Canada. The program was dedicated to the study of subtle interconnections between geometric and arithmetic properties of higher-dimensional algebraic varieties. The main areas of the program were, among others, proving density of rational points in Zariski or analytic topology on special varieties, understanding global geometric properties of rationally connected varieties, as well as connections between geometry and algebraic dynamics exploring new geometric techniques in Diophantine approximation.

  7. Experience curve for natural gas production by hydraulic fracturing

    International Nuclear Information System (INIS)

    Fukui, Rokuhei; Greenfield, Carl; Pogue, Katie; Zwaan, Bob van der

    2017-01-01

    From 2007 to 2012 shale gas production in the US expanded at an astounding average growth rate of over 50%/yr, and thereby increased nearly tenfold over this short time period alone. Hydraulic fracturing technology, or “fracking”, as well as new directional drilling techniques, played key roles in this shale gas revolution, by allowing for extraction of natural gas from previously unviable shale resources. Although hydraulic fracturing technology had been around for decades, it only recently became commercially attractive for large-scale implementation. As the production of shale gas rapidly increased in the US over the past decade, the wellhead price of natural gas dropped substantially. In this paper we express the relationship between wellhead price and cumulative natural gas output in terms of an experience curve, and obtain a learning rate of 13% for the industry using hydraulic fracturing technology. This learning rate represents a measure for the know-how and skills accumulated thus far by the US shale gas industry. The use of experience curves for renewable energy options such as solar and wind power has allowed analysts, practitioners, and policy makers to assess potential price reductions, and underlying cost decreases, for these technologies in the future. The reasons for price reductions of hydraulic fracturing are fundamentally different from those behind renewable energy technologies – hence they cannot be directly compared – and hydraulic fracturing may soon reach, or maybe has already attained, a lower bound for further price reductions, for instance as a result of its water requirements or environmental footprint. Yet, understanding learning-by-doing phenomena as expressed by an industry-wide experience curve for shale gas production can be useful for strategic planning in the gas sector, as well as assist environmental policy design, and serve more broadly as input for projections of energy system developments. - Highlights: • Hydraulic
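
    A learning rate such as the 13% quoted above is typically estimated by an ordinary least-squares fit of log price against log cumulative output; the sketch below does this on a synthetic series (the data and the underlying exponent are assumptions, not the wellhead-price record).

      import numpy as np

      cum_output = np.array([1, 2, 4, 8, 16, 32, 64], dtype=float)    # cumulative output (arbitrary units)
      noise = 1 + 0.02 * np.random.default_rng(2).standard_normal(7)
      price = 6.0 * cum_output ** (-0.2) * noise                      # synthetic price series

      slope, intercept = np.polyfit(np.log2(cum_output), np.log2(price), 1)
      learning_rate = 1.0 - 2.0 ** slope          # price reduction per doubling of output
      print(f"estimated learning rate: {learning_rate:.1%}")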

  8. Testing the validity of stock-recruitment curve fits

    International Nuclear Information System (INIS)

    Christensen, S.W.; Goodyear, C.P.

    1988-01-01

    The utilities relied heavily on the Ricker stock-recruitment model as the basis for quantifying biological compensation in the Hudson River power case. They presented many fits of the Ricker model to data derived from striped bass catch and effort records compiled by the National Marine Fisheries Service. Based on this curve-fitting exercise, a value of 4 was chosen for the parameter alpha in the Ricker model, and this value was used to derive the utilities' estimates of the long-term impact of power plants on striped bass populations. A technique was developed and applied to address a single fundamental question: if the Ricker model were applicable to the Hudson River striped bass population, could the estimates of alpha from the curve-fitting exercise be considered reliable. The technique involved constructing a simulation model that incorporated the essential biological features of the population and simulated the characteristics of the available actual catch-per-unit-effort data through time. The ability or failure to retrieve the known parameter values underlying the simulation model via the curve-fitting exercise was a direct test of the reliability of the results of fitting stock-recruitment curves to the real data. The results demonstrated that estimates of alpha from the curve-fitting exercise were not reliable. The simulation-modeling technique provides an effective way to identify whether or not particular data are appropriate for use in fitting such models. 39 refs., 2 figs., 3 tabs
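
    For reference, the Ricker stock-recruitment relation at the centre of this analysis is R = alpha * S * exp(-beta * S); the sketch below evaluates it for the alpha = 4 value discussed above (beta and the stock values are arbitrary assumptions).

      import numpy as np

      def ricker_recruits(stock, alpha, beta):
          """Ricker model: recruits produced by a given spawning stock."""
          return alpha * stock * np.exp(-beta * stock)

      stock = np.linspace(0, 500, 6)
      print(ricker_recruits(stock, alpha=4.0, beta=0.005).round(1))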

  9. A catalog of special plane curves

    CERN Document Server

    Lawrence, J Dennis

    2014-01-01

    Among the largest, finest collections available-illustrated not only once for each curve, but also for various values of any parameters present. Covers general properties of curves and types of derived curves. Curves illustrated by a CalComp digital incremental plotter. 12 illustrations.

  10. Computation of undulator tuning curves

    International Nuclear Information System (INIS)

    Dejus, Roger J.

    1997-01-01

    Computer codes for fast computation of on-axis brilliance tuning curves and flux tuning curves have been developed. They are valid for an ideal device (regular planar device or a helical device) using the Bessel function formalism. The effects of the particle beam emittance and the beam energy spread on the spectrum are taken into account. The applicability of the codes and the importance of magnetic field errors of real insertion devices are addressed. The validity of the codes has been experimentally verified at the APS and observed discrepancies are in agreement with predicted reduction of intensities due to magnetic field errors. The codes are distributed as part of the graphical user interface XOP (X-ray OPtics utilities), which simplifies execution and viewing of the results

  11. Invariance for Single Curved Manifold

    KAUST Repository

    Castro, Pedro Machado Manhaes de

    2012-01-01

    Recently, it has been shown that, for the Lambert illumination model, only scenes composed of developable objects with a very particular albedo distribution produce a (2D) image with isolines that are (almost) invariant to changes in light direction. In this work, we provide and investigate a more general framework, and we show that, in general, the requirement for such invariances is quite strong, and is related to the differential geometry of the objects. More precisely, it is proved that single curved manifolds, i.e., manifolds such that at each point there is at most one principal curvature direction, produce invariant isosurfaces for a certain relevant family of energy functions. In the three-dimensional case, the associated energy function corresponds to the classical Lambert illumination model with albedo. This result is also extended to finite-dimensional scenes composed of single curved objects. © 2012 IEEE.

  12. Invariance for Single Curved Manifold

    KAUST Repository

    Castro, Pedro Machado Manhaes de

    2012-08-01

    Recently, it has been shown that, for the Lambert illumination model, only scenes composed of developable objects with a very particular albedo distribution produce a (2D) image with isolines that are (almost) invariant to changes in light direction. In this work, we provide and investigate a more general framework, and we show that, in general, the requirement for such invariances is quite strong, and is related to the differential geometry of the objects. More precisely, it is proved that single curved manifolds, i.e., manifolds such that at each point there is at most one principal curvature direction, produce invariant isosurfaces for a certain relevant family of energy functions. In the three-dimensional case, the associated energy function corresponds to the classical Lambert illumination model with albedo. This result is also extended to finite-dimensional scenes composed of single curved objects. © 2012 IEEE.

  13. Identification and Prioritization of Important Attributes of Disease-Modifying Drugs in Decision Making among Patients with Multiple Sclerosis: A Nominal Group Technique and Best-Worst Scaling.

    Science.gov (United States)

    Kremer, Ingrid E H; Evers, Silvia M A A; Jongen, Peter J; van der Weijden, Trudy; van de Kolk, Ilona; Hiligsmann, Mickaël

    2016-01-01

    Understanding the preferences of patients with multiple sclerosis (MS) for disease-modifying drugs and involving these patients in clinical decision making can improve the concordance between medical decisions and patient values and may, subsequently, improve adherence to disease-modifying drugs. This study aims first to identify which characteristics, or attributes, of disease-modifying drugs influence patients' decisions about these treatments and second to quantify the attributes' relative importance among patients. First, three focus groups of relapsing-remitting MS patients were formed to compile a preliminary list of attributes using a nominal group technique. Based on this qualitative research, a survey with several choice tasks (best-worst scaling) was developed to prioritize attributes, asking a larger patient group to choose the most and least important attributes. The attributes' mean relative importance scores (RIS) were calculated. Nineteen patients reported 34 attributes during the focus groups and 185 patients evaluated the importance of the attributes in the survey. The effect on disease progression received the highest RIS (RIS = 9.64, 95% confidence interval: [9.48-9.81]), followed by quality of life (RIS = 9.21 [9.00-9.42]), relapse rate (RIS = 7.76 [7.39-8.13]), severity of side effects (RIS = 7.63 [7.33-7.94]) and relapse severity (RIS = 7.39 [7.06-7.73]). Subgroup analyses showed heterogeneity in the preferences of patients. For example, side effect-related attributes were statistically more important for patients who had no experience in using disease-modifying drugs than for experienced patients. Given this heterogeneity, shared decision making would be needed, and this requires eliciting individual preferences.

  14. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    Science.gov (United States)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

    Near real-time, high-resolution and high-precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. In order to capture the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure, the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely the Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results, discuss problems
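
    As a small illustration of the vertical building block mentioned above, the sketch below evaluates a Chapman-type electron-density profile; the peak density, peak height and scale height are illustrative assumptions, not MuSIK results.

      import numpy as np

      def chapman(h_km, n_peak=1e12, h_peak_km=350.0, scale_height_km=60.0):
          """Alpha-Chapman electron density (el/m^3) as a function of height."""
          z = (h_km - h_peak_km) / scale_height_km
          return n_peak * np.exp(0.5 * (1.0 - z - np.exp(-z)))

      heights = np.arange(150, 1000, 150)            # km
      print(chapman(heights))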

  15. Curved Folded Plate Timber Structures

    OpenAIRE

    Buri, Hans Ulrich; Stotz, Ivo; Weinand, Yves

    2011-01-01

    This work investigates the development of a Curved Origami Prototype made with timber panels. In the last fifteen years the timber industry has developed new, large size, timber panels. Composition and dimensions of these panels and the possibility of milling them with Computer Numerical Controlled machines shows great potential for folded plate structures. To generate the form of these structures we were inspired by Origami, the Japanese art of paper folding. Common paper tessellations are c...

  16. Projection-based curve clustering

    International Nuclear Information System (INIS)

    Auder, Benjamin; Fischer, Aurelie

    2012-01-01

    This paper focuses on unsupervised curve classification in the context of nuclear industry. At the Commissariat a l'Energie Atomique (CEA), Cadarache (France), the thermal-hydraulic computer code CATHARE is used to study the reliability of reactor vessels. The code inputs are physical parameters and the outputs are time evolution curves of a few other physical quantities. As the CATHARE code is quite complex and CPU time-consuming, it has to be approximated by a regression model. This regression process involves a clustering step. In the present paper, the CATHARE output curves are clustered using a k-means scheme, with a projection onto a lower dimensional space. We study the properties of the empirically optimal cluster centres found by the clustering method based on projections, compared with the 'true' ones. The choice of the projection basis is discussed, and an algorithm is implemented to select the best projection basis among a library of orthonormal bases. The approach is illustrated on a simulated example and then applied to the industrial problem. (authors)
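
    A minimal sketch of the projection-then-cluster idea is given below: each synthetic curve is expanded on a small orthogonal cosine basis and a bare-bones k-means loop is run on the coefficients. None of this is the CEA implementation; the curves, the basis and k = 2 are assumptions for illustration.

      import numpy as np

      rng = np.random.default_rng(3)
      t = np.linspace(0, 1, 200)
      curves = np.vstack(
          [np.sin(2 * np.pi * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)]
          + [np.exp(-3 * t) + 0.1 * rng.standard_normal(t.size) for _ in range(20)])

      basis = np.vstack([np.cos(np.pi * m * t) for m in range(8)])   # projection basis
      coeffs = curves @ basis.T / t.size                             # low-dimensional features

      def kmeans(x, k=2, iters=50):
          """Plain k-means on the projected coefficients."""
          centres = x[rng.choice(len(x), k, replace=False)]
          for _ in range(iters):
              labels = np.argmin(((x[:, None, :] - centres) ** 2).sum(-1), axis=1)
              centres = np.vstack([x[labels == j].mean(0) if np.any(labels == j) else centres[j]
                                   for j in range(k)])
          return labels

      print(kmeans(coeffs))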

  17. Growth curves for Laron syndrome.

    Science.gov (United States)

    Laron, Z; Lilos, P; Klinger, B

    1993-01-01

    Growth curves for children with Laron syndrome were constructed on the basis of repeated measurements made throughout infancy, childhood, and puberty in 24 (10 boys, 14 girls) of the 41 patients with this syndrome investigated in our clinic. Growth retardation was already noted at birth, the birth length ranging from 42 to 46 cm in the 12/20 available measurements. The postnatal growth curves deviated sharply from the normal from infancy on. Both sexes showed no clear pubertal spurt. Girls completed their growth between the age of 16-19 years to a final mean (SD) height of 119 (8.5) cm whereas the boys continued growing beyond the age of 20 years, achieving a final height of 124 (8.5) cm. At all ages the upper to lower body segment ratio was more than 2 SD above the normal mean. These growth curves constitute a model not only for primary, hereditary insulin-like growth factor-I (IGF-I) deficiency (Laron syndrome) but also for untreated secondary IGF-I deficiencies such as growth hormone gene deletion and idiopathic congenital isolated growth hormone deficiency. They should also be useful in the follow up of children with Laron syndrome treated with biosynthetic recombinant IGF-I. PMID:8333769

  18. Elementary particles in curved spaces

    International Nuclear Information System (INIS)

    Lazanu, I.

    2004-01-01

    Current theories in particle physics are developed in Minkowski space-time, starting from the Poincare group. A physical theory in flat space can be seen as the limit of a more general physical theory in a curved space. At the present time, a theory of particles in curved space does not exist, and thus the only possibility is to extend the existing theories to these spaces. A formidable obstacle to the extension of physical models is the absence of groups of motion in more general Riemann spaces. A space of constant curvature has a group of motion that, although it differs from that of a flat space, has the same number of parameters and could permit some generalisations. In this contribution we try to investigate some physical implications of the presumable existence of elementary particles in curved space. In de Sitter space (dS) the invariant rest mass is a combination of the Poincare rest mass and the generalised angular momentum of a particle, and this permits a correlation to be established with the vacuum energy and with the cosmological constant. The consequences are significant because in an experiment the local structure of space-time departs from the Minkowski space and becomes a dS or AdS space-time. Discrete symmetry characteristics of the dS/AdS group suggest some arguments for the possible existence of 'mirror matter'. (author)

  19. Earthquake induced rock shear through a deposition hole. Modelling of three model tests scaled 1:10. Verification of the bentonite material model and the calculation technique

    Energy Technology Data Exchange (ETDEWEB)

    Boergesson, Lennart (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T Engineering AB, Vaesteraas (Sweden))

    2010-11-15

    Three model shear tests of very high quality, simulating a horizontal rock shear through a deposition hole at the centre of a canister, were performed in 1986. The tests and the results are described by /Boergesson 1986/. The tests simulated a deposition hole at the scale 1:10, with the reference density of the buffer, a very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical with the exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. by a factor of more than 5,000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper and the movement of the top of the simulated canister were measured continuously during the shear. After the shear was finished, the equipment was dismantled and careful sampling of the bentonite, with measurement of water ratio and density, was carried out. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus with the same models and techniques that were used for the full-scale scenarios in SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevance of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite, where the stress-strain relations were derived from laboratory tests. The material model is made a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density does not change during the tests. However, the strain rate varies greatly with both the location of the elements and time. This can be taken into account in Abaqus by making the material model a function of the strain rate for each element. A similar model, based on tensile tests on the copper used in

  20. Equilibrium spherically curved two-dimensional Lennard-Jones systems

    NARCIS (Netherlands)

    Voogd, J.M.; Sloot, P.M.A.; van Dantzig, R.

    2005-01-01

    To learn about basic aspects of nano-scale spherical molecular shells during their formation, spherically curved two-dimensional N-particle Lennard-Jones systems are simulated, studying curvature evolution paths at zero-temperature. For many N-values (N < 800) equilibrium configurations are traced
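
    For reference, the pair interaction underlying such simulations is the Lennard-Jones potential, V(r) = 4*eps*((sigma/r)^12 - (sigma/r)^6); the sketch below evaluates it at a few separations in reduced units (eps = sigma = 1).

      import numpy as np

      def lj_energy(r, eps=1.0, sigma=1.0):
          """Lennard-Jones pair energy; minimum of -eps at r = 2**(1/6) * sigma."""
          sr6 = (sigma / r) ** 6
          return 4.0 * eps * (sr6 ** 2 - sr6)

      print(lj_energy(np.array([0.95, 2 ** (1 / 6), 1.5, 2.5])))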

  1. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    International Nuclear Information System (INIS)

    Zhou, C. D.; Betti, R.

    2008-01-01

    It is shown that the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form dependent on the only two parameters of the compressed fuel assembly that can be measured with existing techniques: the hot spot ion temperature (T_i^h) and the total areal density (ρR_tot), which includes the cold shell contribution. A marginal ignition curve is derived in the (ρR_tot, T_i^h) plane and current implosion experiments are compared with the ignition curve. On this plane, hydrodynamic equivalent curves show how a given implosion would perform with respect to the ignition condition when scaled up in the laser-driver energy. For burn-averaged hot-spot temperatures of a few keV (roughly 3 keV and above), the ignition condition takes the approximate form ⟨ρR_tot⟩_n · ⟨T_i^h⟩_n^2.6 > 50 g/cm^2 · keV^2.6, where ⟨ρR_tot⟩_n and ⟨T_i^h⟩_n are the burn-averaged total areal density and hot spot ion temperature, respectively. Both quantities are calculated without accounting for the alpha-particle energy deposition. Such a criterion can be used to determine how surrogate D2 and subignited DT target implosions perform with respect to the one-dimensional ignition threshold.

  2. Spiral blood flows in an idealized 180-degree curved artery model

    Science.gov (United States)

    Bulusu, Kartik V.; Kulkarni, Varun; Plesniak, Michael W.

    2017-11-01

    Understanding of cardiovascular flows has been greatly advanced by the Magnetic Resonance Velocimetry (MRV) technique and its potential for three-dimensional velocity encoding in regions of anatomic interest. The MRV experiments were performed on a 180-degree curved artery model using a Newtonian blood analog fluid at the Richard M. Lucas Center at Stanford University employing a 3 Tesla General Electric (Discovery 750 MRI system) whole body scanner with an eight-channel cardiac coil. Analysis in two regions of the model artery was performed for flow with Womersley number = 4.2. In the entrance region (or straight inlet pipe) the unsteady pressure drop per unit length, in-plane vorticity and wall shear stress for the pulsatile, carotid-artery-based flow rate waveform were calculated. Along the 180-degree curved pipe (curvature ratio = 1/7) the near-wall vorticity and the stretching of the particle paths in the vorticity field are visualized. The resultant flow behavior in the idealized curved artery model is associated with parameters such as the Dean number and Womersley number. Additionally, using length scales corresponding to the axial and secondary flow we attempt to understand the mechanisms leading to the formation of various structures observed during the pulsatile flow cycle. Supported by GW Center for Biomimetics and Bioinspired Engineering (COBRE), MRV measurements in collaboration with Prof. John K. Eaton and Dr. Chris Elkins at Stanford University.

  3. Dual Smarandache Curves and Smarandache Ruled Surfaces

    OpenAIRE

    Tanju KAHRAMAN; Mehmet ÖNDER; H. Hüseyin UGURLU

    2013-01-01

    In this paper, by considering dual geodesic trihedron (dual Darboux frame) we define dual Smarandache curves lying fully on dual unit sphere S^2 and corresponding to ruled surfaces. We obtain the relationships between the elements of curvature of dual spherical curve (ruled surface) x(s) and its dual Smarandache curve (Smarandache ruled surface) x1(s) and we give an example for dual Smarandache curves of a dual spherical curve.

  4. Aspherical Supernovae: Effects on Early Light Curves

    Science.gov (United States)

    Afsariardchi, Niloufar; Matzner, Christopher D.

    2018-04-01

    Early light from core-collapse supernovae, now detectable in high-cadence surveys, holds clues to a star and its environment just before it explodes. However, effects that alter the early light have not been fully explored. We highlight the possibility of nonradial flows at the time of shock breakout. These develop in sufficiently nonspherical explosions if the progenitor is not too diffuse. When they do develop, nonradial flows limit ejecta speeds and cause ejecta–ejecta collisions. We explore these phenomena and their observational implications using global, axisymmetric, nonrelativistic FLASH simulations of simplified polytropic progenitors, which we scale to representative stars. We develop a method to track photon production within the ejecta, enabling us to estimate band-dependent light curves from adiabatic simulations. Immediate breakout emission becomes hidden as an oblique flow develops. Nonspherical effects lead the shock-heated ejecta to release a more constant luminosity at a higher, evolving color temperature at early times, effectively mixing breakout light with the early light curve. Collisions between nonradial ejecta thermalize a small fraction of the explosion energy; we will address emission from these collisions in a subsequent paper.

  5. Injury risk curves for the WorldSID 50th male dummy.

    Science.gov (United States)

    Petitjean, Audrey; Trosseille, Xavier; Petit, Philippe; Irwin, Annette; Hassan, Joe; Praxl, Norbert

    2009-11-01

    The development of the WorldSID 50th percentile male dummy was initiated in 1997 by the International Organisation for Standardisation (ISO/SC12/TC22/WG5) with the objective of developing a more biofidelic side impact dummy and supporting the adoption of a harmonised dummy into regulations. More than 45 organizations from all around the world have contributed to this effort including governmental agencies, research institutes, car manufacturers and dummy manufacturers. The first production version of the WorldSID 50th male dummy was released in March 2004 and demonstrated an improved biofidelity over existing side impact dummies. Full scale vehicle tests covering a wide range of side impact test procedures were performed worldwide with the WorldSID dummy. However, the vehicle safety performance could not be assessed due to lack of injury risk curves for this dummy. The development of these curves was initiated in 2004 within the framework of ISO/SC12/TC22/WG6 (Injury criteria). In 2008, the ACEA Dummy Task Force (TFD) decided to contribute to this work and offered resources for a project manager to coordinate the effort of a group of volunteer biomechanical experts from international institutions (ISO, EEVC, VRTC/NHTSA, JARI, Transport Canada), car manufacturers (ACEA, Ford, General Motors, Honda, Toyota, Chrysler) and universities (Wayne State University, Ohio State University, Johns Hopkins University, Medical College of Wisconsin) to develop harmonized injury risk curves. An in-depth literature review was conducted. All the available PMHS datasets were identified, and the test configurations and the quality of the results were checked. Criteria were developed for the inclusion or exclusion of PMHS tests in the development of the injury risk curves. Data were processed to account for differences in mass and age of the subjects. Finally, injury risk curves were developed using the following statistical techniques: the certainty method, the Mertz/Weber method, the

  6. Reduced Calibration Curve for Proton Computed Tomography

    International Nuclear Information System (INIS)

    Yevseyeva, Olga; Assis, Joaquim de; Evseev, Ivan; Schelin, Hugo; Paschuk, Sergei; Milhoretto, Edney; Setti, Joao; Diaz, Katherin; Hormaza, Joel; Lopes, Ricardo

    2010-01-01

    pCT deals with relatively thick targets like the human head or trunk. Thus, the fidelity of pCT as a tool for proton therapy planning depends on the accuracy of the physical formulas used for proton interaction with thick absorbers. Although the actual overall accuracy of the proton stopping power in the Bethe-Bloch domain is about 1%, analytical calculations and Monte Carlo simulations with codes like TRIM/SRIM, MCNPX and GEANT4 do not agree with each other. Attempts to validate the codes against experimental data for thick absorbers run into difficulties: only a few data sets are available, and they have been acquired at different initial proton energies and for different absorber materials. In this work we compare the results of our Monte Carlo simulations with existing experimental data in terms of a reduced calibration curve, i.e. the range-energy dependence normalized, on the range scale, by the full projected CSDA range for the given initial proton energy in the given material (taken from the NIST PSTAR database) and, on the final proton energy scale, by the given initial energy of the protons. This approach is almost energy and material independent. The results of our analysis are important for pCT development because the contradictions observed at arbitrarily low initial proton energies can now be easily scaled to typical pCT energies.
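
    A minimal sketch of the normalisation described above is given below, using a Bragg-Kleeman power-law range-energy relation as a stand-in for the tabulated PSTAR data; the alpha and p values are water-like assumptions and the energies are illustrative.

      import numpy as np

      alpha, p = 0.0022, 1.77                      # assumed Bragg-Kleeman parameters for water

      def csda_range_cm(energy_mev):
          """Approximate projected CSDA range in water, R = alpha * E**p."""
          return alpha * energy_mev ** p

      for e0 in (100.0, 150.0, 200.0):
          r0 = csda_range_cm(e0)
          e_out = np.linspace(0.0, e0, 6)          # residual proton energy after the absorber
          depth = r0 - csda_range_cm(e_out)        # absorber thickness traversed
          # Reduced calibration curve: depth/R0 versus E_out/E0 (nearly material/energy independent).
          print(e0, np.round(depth / r0, 3), np.round(e_out / e0, 3))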

  7. Melting curve of materials: theory versus experiments

    International Nuclear Information System (INIS)

    Alfe, D; Vocadlo, L; Price, G D; Gillan, M J

    2004-01-01

    A number of melting curves of various materials have recently been measured experimentally and calculated theoretically, but the agreement between different groups is not always good. We discuss here some of the problems which may arise in both experiments and theory. We also report the melting curves of Fe and Al calculated recently using quantum mechanics techniques, based on density functional theory with generalized gradient approximations. For Al our results are in very good agreement with both low pressure diamond-anvil-cell experiments (Boehler and Ross 1997 Earth Planet. Sci. Lett. 153 223, Haenstroem and Lazor 2000 J. Alloys Compounds 305 209) and high pressure shock wave experiments (Shaner et al 1984 High Pressure in Science and Technology ed Homan et al (Amsterdam: North-Holland) p 137). For Fe our results agree with the shock wave experiments of Brown and McQueen (1986 J. Geophys. Res. 91 7485) and Nguyen and Holmes (2000 AIP Shock Compression of Condensed Matter 505 81) and the recent diamond-anvil-cell experiments of Shen et al (1998 Geophys. Res. Lett. 25 373). Our results are at variance with the recent calculations of Laio et al (2000 Science 287 1027) and, to a lesser extent, with the calculations of Belonoshko et al (2000 Phys. Rev. Lett. 84 3638). The reasons for these disagreements are discussed

  8. The GO Cygni system: photoelectric observations and light curves analysis

    International Nuclear Information System (INIS)

    Rovithis, P.; Rovithis-Livaniou, H.; Niarchos, P.G.

    1990-01-01

    Photoelectric observations, in B and V, of the system GO Cygni obtained during 1985 at the Kryonerion Astronomical Station of the National Observatory of Greece are given. The corresponding light curves (typical of β Lyrae systems) are analysed using frequency-domain techniques. New photoelectric and absolute elements for the system are given, and its period was found to continue increasing.

  9. Estimating daily flow duration curves from monthly streamflow data

    CSIR Research Space (South Africa)

    Smakhtin, VU

    2000-01-01

    The paper describes two techniques by which to establish 1-day (1d) flow duration curves at an ungauged site where only a simulated or calculated monthly flow time series is available. Both methods employ the straightforward relationships between...
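
    For reference, a 1-day flow duration curve is just the ranked daily flows paired with their exceedance probabilities; the sketch below builds one from a synthetic record (the lognormal flow series and the Weibull plotting position are assumptions for illustration).

      import numpy as np

      rng = np.random.default_rng(4)
      daily_flow = rng.lognormal(mean=1.0, sigma=0.8, size=3650)     # synthetic 10-year daily record

      sorted_flow = np.sort(daily_flow)[::-1]                        # highest flow first
      exceedance = np.arange(1, sorted_flow.size + 1) / (sorted_flow.size + 1)

      for prob in (0.1, 0.5, 0.9):                                   # Q10, Q50, Q90
          q = np.interp(prob, exceedance, sorted_flow)
          print(f"flow exceeded {prob:.0%} of the time: {q:.2f}")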

  10. Developing an empirical Environmental Kuznets Curve

    Directory of Open Access Journals (Sweden)

    Ferry Purnawan

    2015-04-01

    This study aims to develop a model of the Environmental Kuznets Curve (EKC) relating the level of environmental pollution to the level of prosperity in Tangerang City. The method uses two pooled-data (panel) regression models, namely the Random Effects Model (REM) and the Fixed Effects Model (FEM), in both quadratic and cubic forms. The period of observation is 2002-2012. The results suggest that the relationship between per capita income and environmental quality, reflected in the BOD (biological oxygen demand) and COD (chemical oxygen demand) concentrations, can be explained by the quadratic FEM model and follows the EKC hypothesis, even though the turning point is not identified.
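
    A minimal sketch of the quadratic specification behind the EKC hypothesis, fitted by pooled least squares on synthetic data, is shown below together with the implied turning point; all coefficients, units and the single-equation setup are assumptions, not the Tangerang City estimates.

      import numpy as np

      rng = np.random.default_rng(6)
      income = rng.uniform(1.0, 20.0, 200)                     # per-capita income (arbitrary units)
      pollution = 5 + 2.0 * income - 0.08 * income**2 + rng.normal(0, 1, 200)

      X = np.column_stack([np.ones_like(income), income, income**2])
      b, *_ = np.linalg.lstsq(X, pollution, rcond=None)
      print("turning-point income:", round(-b[1] / (2 * b[2]), 2))   # vertex of the fitted parabola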

  11. A note on families of fragility curves

    International Nuclear Information System (INIS)

    Kaplan, S.; Bier, V.M.; Bley, D.C.

    1989-01-01

    In the quantitative assessment of seismic risk, uncertainty in the fragility of a structural component is usually expressed by putting forth a family of fragility curves, with probability serving as the parameter of the family. Commonly, a lognormal shape is used both for the individual curves and for the expression of uncertainty over the family. A so-called composite single curve can also be drawn and used for purposes of approximation. This composite curve is often regarded as equivalent to the mean curve of the family. The equality seems intuitively reasonable but, according to the authors, had never been proven. The present paper proves this equivalence hypothesis mathematically. Moreover, the authors show that this equivalence hypothesis between fragility curves is itself equivalent to an identity property of the standard normal probability curve. Thus, in the course of proving the fragility curve hypothesis, the authors have also proved a rather obscure, but interesting and perhaps previously unrecognized, property of the standard normal curve.
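
    The equivalence can also be checked numerically; the sketch below (with assumed lognormal parameters) averages a family of fragility curves over the uncertainty in the median capacity and compares the result with the composite curve built from beta_C = sqrt(beta_R^2 + beta_U^2).

      import numpy as np
      from scipy.stats import norm

      A_m, beta_R, beta_U = 0.6, 0.3, 0.4            # assumed median capacity (g) and log-std values
      beta_C = np.hypot(beta_R, beta_U)
      rng = np.random.default_rng(5)
      eps_u = rng.standard_normal(100_000)           # epistemic uncertainty in the median capacity

      for a in (0.2, 0.6, 1.0):                      # peak ground acceleration (g)
          family_mean = norm.cdf((np.log(a / A_m) - beta_U * eps_u) / beta_R).mean()
          composite = norm.cdf(np.log(a / A_m) / beta_C)
          print(a, round(float(family_mean), 4), round(float(composite), 4))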

  12. Flood damage curves for consistent global risk assessments

    Science.gov (United States)

    de Moel, Hans; Huizinga, Jan; Szewczyk, Wojtek

    2016-04-01

    Assessing potential damage of flood events is an important component in flood risk management. Determining direct flood damage is commonly done using depth-damage curves, which denote the flood damage that would occur at specific water depths per asset or land-use class. Many countries around the world have developed flood damage models using such curves which are based on analysis of past flood events and/or on expert judgement. However, such damage curves are not available for all regions, which hampers damage assessments in those regions. Moreover, due to different methodologies employed for various damage models in different countries, damage assessments cannot be directly compared with each other, obstructing also supra-national flood damage assessments. To address these problems, a globally consistent dataset of depth-damage curves has been developed. This dataset contains damage curves depicting percent of damage as a function of water depth as well as maximum damage values for a variety of assets and land use classes (i.e. residential, commercial, agriculture). Based on an extensive literature survey concave damage curves have been developed for each continent, while differentiation in flood damage between countries is established by determining maximum damage values at the country scale. These maximum damage values are based on construction cost surveys from multinational construction companies, which provide a coherent set of detailed building cost data across dozens of countries. A consistent set of maximum flood damage values for all countries was computed using statistical regressions with socio-economic World Development Indicators from the World Bank. Further, based on insights from the literature survey, guidance is also given on how the damage curves and maximum damage values can be adjusted for specific local circumstances, such as urban vs. rural locations, use of specific building material, etc. This dataset can be used for consistent supra

  13. Observable Zitterbewegung in curved spacetimes

    Science.gov (United States)

    Kobakhidze, Archil; Manning, Adrian; Tureanu, Anca

    2016-06-01

    Zitterbewegung, as it was originally described by Schrödinger, is an unphysical, non-observable effect. We verify whether the effect can be observed in non-inertial reference frames/curved spacetimes, where the ambiguity in defining particle states results in a mixing of positive and negative frequency modes. We explicitly demonstrate that such a mixing is in fact necessary to obtain the correct classical value for a particle's velocity in a uniformly accelerated reference frame, whereas in cosmological spacetime a particle does indeed exhibit Zitterbewegung.

  14. Observable Zitterbewegung in curved spacetimes

    Energy Technology Data Exchange (ETDEWEB)

    Kobakhidze, Archil, E-mail: archilk@physics.usyd.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW 2006 (Australia); Manning, Adrian, E-mail: a.manning@physics.usyd.edu.au [ARC Centre of Excellence for Particle Physics at the Terascale, School of Physics, The University of Sydney, NSW 2006 (Australia); Tureanu, Anca, E-mail: anca.tureanu@helsinki.fi [Department of Physics, University of Helsinki, P.O. Box 64, 00014 Helsinki (Finland)

    2016-06-10

    Zitterbewegung, as it was originally described by Schrödinger, is an unphysical, non-observable effect. We verify whether the effect can be observed in non-inertial reference frames/curved spacetimes, where the ambiguity in defining particle states results in a mixing of positive and negative frequency modes. We explicitly demonstrate that such a mixing is in fact necessary to obtain the correct classical value for a particle's velocity in a uniformly accelerated reference frame, whereas in cosmological spacetime a particle does indeed exhibit Zitterbewegung.

  15. Differential geometry curves, surfaces, manifolds

    CERN Document Server

    Kühnel, Wolfgang

    2002-01-01

    This carefully written book is an introduction to the beautiful ideas and results of differential geometry. The first half covers the geometry of curves and surfaces, which provide much of the motivation and intuition for the general theory. Special topics that are explored include Frenet frames, ruled surfaces, minimal surfaces and the Gauss-Bonnet theorem. The second part is an introduction to the geometry of general manifolds, with particular emphasis on connections and curvature. The final two chapters are insightful examinations of the special cases of spaces of constant curvature and Einstein manifolds. The text is illustrated with many figures and examples. The prerequisites are undergraduate analysis and linear algebra.

  16. LINS Curve in Romanian Economy

    Directory of Open Access Journals (Sweden)

    Emilian Dobrescu

    2016-02-01

    The paper presents theoretical considerations and empirical evidence to test the validity of the Laffer in Narrower Sense (LINS) curve as a parabola with a maximum. Attention is focused on the so-called legal-effective tax gap (letg). The econometric application is based on statistical data (1990-2013) for Romania as an emerging European economy. Three cointegrating regressions (fully modified least squares, canonical cointegrating regression and dynamic least squares) and three algorithms based on instrumental variables (two-stage least squares, generalized method of moments, and limited information maximum likelihood) are involved.

  17. Interactions of cosmic rays in the atmosphere: growth curves revisited

    Energy Technology Data Exchange (ETDEWEB)

    Obermeier, A.; Boyle, P.; Müller, D. [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Hörandel, J., E-mail: a.obermeier@astro.ru.nl [Radboud Universiteit Nijmegen, 6525-HP Nijmegen (Netherlands)

    2013-12-01

    Measurements of cosmic-ray abundances on balloons are affected by interactions in the residual atmosphere above the balloon. Corrections for such interactions are particularly important for observations of rare secondary particles such as boron, antiprotons, and positrons. These corrections either can be calculated if the relevant cross sections in the atmosphere are known or may be empirically determined by extrapolation of the 'growth curves', i.e., the individual particle intensities as functions of atmospheric depth. The growth-curve technique is particularly attractive for long-duration balloon flights where the periodic daily altitude variations permit rather precise determinations of the corresponding particle intensity variations. We determine growth curves for nuclei from boron (Z = 5) to iron (Z = 26) using data from the 2006 Arctic balloon flight of the TRACER detector for cosmic-ray nuclei, and we compare the growth curves with predictions from published cross section values. In general, good agreement is observed. We then study the boron/carbon abundance ratio and derive a simple and energy-independent correction term for this ratio. We emphasize that the growth-curve technique can be developed further to provide highly accurate tests of published interaction cross section values.
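
    A minimal sketch of the growth-curve extrapolation is given below: fit ln(intensity) against residual atmospheric depth and read off the intercept at zero depth. The depth-intensity points are synthetic; the real correction would use the measured daily altitude variations.

      import numpy as np

      depth = np.array([4.0, 5.5, 7.0, 9.0, 12.0])         # residual atmosphere above the detector (g/cm^2)
      boron = np.array([105., 112., 118., 128., 147.])     # synthetic secondary-nucleus counts

      slope, intercept = np.polyfit(depth, np.log(boron), 1)
      print("extrapolated intensity at the top of the atmosphere:", round(float(np.exp(intercept)), 1))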

  18. Hot-blade cutting of EPS foam for double-curved surfaces—numerical simulation and experiments

    DEFF Research Database (Denmark)

    Petkov, Kiril P.; Hattel, Jesper Henri

    2017-01-01

    In the present paper, experimental and numerical studies of a newly developed process of Hot-Blade Cutting used for free forming of double-curved surfaces and cost effective rapid prototyping of expanded polystyrene foam is carried out. The experimental part of the study falls in two parts...... during the cutting process. A novel measurement method for determination of kerfwidth (i.e., the gap space after material removal) applying a commercially available large-scale optical 3D scanning technique was developed and used. A one-dimensional thermo-electro-mechanical numerical model for Hot...

  19. DECIPHERING THERMAL PHASE CURVES OF DRY, TIDALLY LOCKED TERRESTRIAL PLANETS

    Energy Technology Data Exchange (ETDEWEB)

    Koll, Daniel D. B.; Abbot, Dorian S., E-mail: dkoll@uchicago.edu [Department of the Geophysical Sciences, University of Chicago, Chicago, IL 60637 (United States)

    2015-03-20

    Next-generation space telescopes will allow us to characterize terrestrial exoplanets. To do so effectively it will be crucial to make use of all available data. We investigate which atmospheric properties can, and cannot, be inferred from the broadband thermal phase curve of a dry and tidally locked terrestrial planet. First, we use dimensional analysis to show that phase curves are controlled by six nondimensional parameters. Second, we use an idealized general circulation model to explore the relative sensitivity of phase curves to these parameters. We find that the feature of phase curves most sensitive to atmospheric parameters is the peak-to-trough amplitude. Moreover, except for hot and rapidly rotating planets, the phase amplitude is primarily sensitive to only two nondimensional parameters: (1) the ratio of dynamical to radiative timescales and (2) the longwave optical depth at the surface. As an application of this technique, we show how phase curve measurements can be combined with transit or emission spectroscopy to yield a new constraint for the surface pressure and atmospheric mass of terrestrial planets. We estimate that a single broadband phase curve, measured over half an orbit with the James Webb Space Telescope, could meaningfully constrain the atmospheric mass of a nearby super-Earth. Such constraints will be important for studying the atmospheric evolution of terrestrial exoplanets as well as characterizing the surface conditions on potentially habitable planets.

  20. Surgical treatment of double thoracic adolescent idiopathic scoliosis with a rigid proximal thoracic curve.

    Science.gov (United States)

    Sudo, Hideki; Abe, Yuichiro; Abumi, Kuniyoshi; Iwasaki, Norimasa; Ito, Manabu

    2016-02-01

    There is limited consensus on the optimal surgical strategy for double thoracic adolescent idiopathic scoliosis (AIS). Recent studies have reported that pedicle screw constructs used to maximize scoliosis correction cause further thoracic spine lordosis. The objective of this study was to apply a new surgical technique for double thoracic AIS with rigid proximal thoracic (PT) curves and assess its clinical outcomes. Twenty-one consecutive patients with Lenke 2 AIS and a rigid PT curve (Cobb angle ≥30° on side-bending radiographs, flexibility ≤30%) treated with the simultaneous double-rod rotation technique (SDRRT) were included. In this technique, a temporary rod is placed at the concave side of the PT curve. Then, distraction force is applied to correct the PT curve, which reforms a sigmoid double thoracic curve into an approximate single thoracic curve. As a result, the PT curve is typically converted from an apex-left to an apex-right curve before the correction rod is applied for the PT and main thoracic curves. All patients were followed for at least 2 years (average 2.7 years). The average main thoracic and PT Cobb angle correction rates at the final follow-up were 74.7% and 58.0%, respectively. The average preoperative T5-T12 thoracic kyphosis was 9.3°, which improved significantly to 19.0°. These results indicate that double thoracic AIS with a rigid PT curve can be satisfactorily corrected using SDRRT.

  1. Differential geometry and topology of curves

    CERN Document Server

    Animov, Yu

    2001-01-01

    Differential geometry is an actively developing area of modern mathematics. This volume presents a classical approach to the general topics of the geometry of curves, including the theory of curves in n-dimensional Euclidean space. The author investigates problems for special classes of curves and gives the working method used to obtain the conditions for closed polygonal curves. The proof of the Bakel-Werner theorem on conditions of boundedness for curves with periodic curvature and torsion is also presented. This volume also highlights the contributions made by great geometers, past and present, to differential geometry and the topology of curves.

  2. Test for the statistical significance of differences between ROC curves

    International Nuclear Information System (INIS)

    Metz, C.E.; Kronman, H.B.

    1979-01-01

    A test for the statistical significance of observed differences between two measured Receiver Operating Characteristic (ROC) curves has been designed and evaluated. The set of observer response data for each ROC curve is assumed to be independent and to arise from a ROC curve having a form which, in the absence of statistical fluctuations in the response data, graphs as a straight line on double normal-deviate axes. To test the significance of an apparent difference between two measured ROC curves, maximum likelihood estimates of the two parameters of each curve and the associated parameter variances and covariance are calculated from the corresponding set of observer response data. An approximate Chi-square statistic with two degrees of freedom is then constructed from the differences between the parameters estimated for each ROC curve and from the variances and covariances of these estimates. This statistic is known to be truly Chi-square distributed only in the limit of large numbers of trials in the observer performance experiments. Performance of the statistic for data arising from a limited number of experimental trials was evaluated. Independent sets of rating scale data arising from the same underlying ROC curve were paired, and the fraction of differences found (falsely) significant was compared to the significance level, α, used with the test. Although test performance was found to be somewhat dependent on both the number of trials in the data and the position of the underlying ROC curve in the ROC space, the results for various significance levels showed the test to be reliable under practical experimental conditions.
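
    As a sketch of how such a statistic can be assembled once the maximum likelihood fits are in hand, the snippet below combines the two parameter estimates and their covariance matrices into the two-degree-of-freedom Chi-square value; the function name and the numerical values are illustrative, not taken from the paper.

        import numpy as np
        from scipy import stats

        def roc_difference_test(params_1, cov_1, params_2, cov_2):
            """Chi-square test (2 dof) for the difference between two independent binormal ROC fits."""
            d = np.asarray(params_1, float) - np.asarray(params_2, float)  # (a1 - a2, b1 - b2)
            s = np.asarray(cov_1, float) + np.asarray(cov_2, float)        # covariance of the difference
            chi2 = float(d @ np.linalg.solve(s, d))                        # d^T S^-1 d
            return chi2, stats.chi2.sf(chi2, df=2)

        # Made-up maximum likelihood estimates for two measured ROC curves:
        chi2, p = roc_difference_test((1.20, 0.85), [[0.010, 0.002], [0.002, 0.008]],
                                      (0.95, 0.90), [[0.012, 0.001], [0.001, 0.009]])
        print(f"chi-square = {chi2:.2f}, p = {p:.3f}")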

  3. Testing MONDian dark matter with galactic rotation curves

    International Nuclear Information System (INIS)

    Edmonds, Doug; Farrah, Duncan; Minic, Djordje; Takeuchi, Tatsu; Ho, Chiu Man; Ng, Y. Jack

    2014-01-01

    MONDian dark matter (MDM) is a new form of dark matter quantum that naturally accounts for Milgrom's scaling, usually associated with modified Newtonian dynamics (MOND), and theoretically behaves like cold dark matter (CDM) at cluster and cosmic scales. In this paper, we provide the first observational test of MDM by fitting rotation curves to a sample of 30 local spiral galaxies (z ≈ 0.003). For comparison, we also fit the galactic rotation curves using MOND and CDM. We find that all three models fit the data well. The rotation curves predicted by MDM and MOND are virtually indistinguishable over the range of observed radii (∼1 to 30 kpc). The best-fit MDM and CDM density profiles are compared. We also compare with MDM the dark matter density profiles arising from MOND if Milgrom's formula is interpreted as Newtonian gravity with an extra source term instead of as a modification of inertia. We find that discrepancies between MDM and MOND will occur near the center of a typical spiral galaxy. In these regions, instead of continuing to rise sharply, the MDM mass density turns over and drops as we approach the center of the galaxy. Our results show that MDM, which restricts the nature of the dark matter quantum by accounting for Milgrom's scaling, accurately reproduces observed rotation curves.
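
    For readers who want a feel for one of the three fitted models, the sketch below evaluates a MOND rotation curve over the 1-30 kpc range quoted above, using the common "simple" interpolating function and a single point-mass baryonic term; the mass value, unit choices, and interpolating function are illustrative assumptions rather than the fitting setup of the paper.

        import numpy as np

        G = 4.301e-6   # gravitational constant in kpc (km/s)^2 / Msun
        A0 = 3.7e3     # Milgrom's a0 (~1.2e-10 m/s^2) expressed in (km/s)^2 / kpc

        def mond_velocity(r_kpc, m_baryon_msun):
            """Circular velocity from the simple interpolating function mu(x) = x / (1 + x)."""
            g_newton = G * m_baryon_msun / r_kpc ** 2                            # point-mass Newtonian acceleration
            g = 0.5 * g_newton + np.sqrt(0.25 * g_newton ** 2 + g_newton * A0)   # solves mu(g/a0) * g = g_newton
            return np.sqrt(g * r_kpc)

        r = np.linspace(1.0, 30.0, 60)               # radii covering the observed range (kpc)
        v = mond_velocity(r, 5.0e10)                 # ~5e10 Msun of baryons, illustrative only
        print(f"outer velocity ~ {v[-1]:.0f} km/s")  # flattens toward (G * M * a0)**0.25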

  4. Generation of response functions of a NaI detector by using an interpolation technique

    International Nuclear Information System (INIS)

    Tominaga, Shoji

    1983-01-01

    A computer method is developed for generating response functions of a NaI detector to monoenergetic γ-rays. The method is based on interpolation between response curves measured by a detector. The computer programs are constructed for Heath's response spectral library. The principle of the basic mathematics used for interpolation, reported previously by the author et al., is that response curves can be decomposed into a linear combination of intrinsic component patterns, so that the interpolation of curves reduces to a simple interpolation of the weighting coefficients needed to combine the component patterns. This technique offers advantages in data compression, computation time, and stability of the solution in comparison with the usual functional fitting method. A segmentation procedure for the spectrum is devised to generate useful and precise response curves. The spectral curve obtained for each γ-ray source is divided into regions defined by the physical processes, such as the photopeak area, the Compton continuum area, the backscatter peak area, and so on. Each segment curve is then processed separately for interpolation. Finally, the estimated curves for the respective areas are joined on a single channel scale. The generation programs are explained briefly. It is shown that the generated curve represents the overall shape of a response spectrum, including not only the photopeak but also the corresponding Compton area, with sufficient accuracy. (author)
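
    A minimal sketch of the component-pattern idea follows: a library of measured response curves is decomposed into a few patterns, and a response at an unmeasured energy is rebuilt by interpolating only the weighting coefficients. The use of a singular value decomposition to extract the patterns, and all variable names and synthetic spectra, are assumptions made for illustration.

        import numpy as np

        def build_components(spectra, n_components=4):
            """Decompose measured response spectra (rows) into a few intrinsic component patterns."""
            _, _, vt = np.linalg.svd(spectra, full_matrices=False)
            patterns = vt[:n_components]        # component patterns, shape (n_components, n_channels)
            coeffs = spectra @ patterns.T       # weighting coefficients for each measured curve
            return patterns, coeffs

        def interpolate_response(energy, energies, patterns, coeffs):
            """Estimate a response curve at an unmeasured energy by interpolating the coefficients."""
            w = np.array([np.interp(energy, energies, coeffs[:, k]) for k in range(coeffs.shape[1])])
            return w @ patterns                 # recombine coefficients with the component patterns

        # Synthetic stand-in for a measured response library (15 energies x 256 channels).
        energies = np.linspace(0.2, 3.0, 15)
        library = np.array([np.exp(-((np.arange(256) - 80.0 * e) / 25.0) ** 2) for e in energies])
        patterns, coeffs = build_components(library)
        estimated = interpolate_response(1.25, energies, patterns, coeffs)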

  5. Improved capacitive melting curve measurements

    International Nuclear Information System (INIS)

    Sebedash, Alexander; Tuoriniemi, Juha; Pentti, Elias; Salmela, Anssi

    2009-01-01

    Sensitivity of the capacitive method for determining the melting pressure of helium can be enhanced by loading the empty side of the capacitor with helium at a pressure nearly equal to that to be measured and by using a relatively thin and flexible membrane in between. This way one can achieve nanobar resolution at the level of 30 bar, which is two orders of magnitude better than that of the best gauges with vacuum reference. This extends the applicability of melting curve thermometry to lower temperatures and would allow detecting tiny anomalies in the melting pressure, which must be associated with any phenomena contributing to the entropy of the liquid or solid phases. We demonstrated this principle in measurements of the crystallization pressure of isotopic helium mixtures at millikelvin temperatures by using partly solid pure 4He as the reference substance, providing the best possible universal reference pressure. The achieved sensitivity was good enough for melting curve thermometry on mixtures down to 100 μK. A similar system can be used on pure isotopes by virtue of a blocked capillary giving a stable reference condition with liquid slightly below the melting pressure in the reference volume. This was tested with pure 4He at temperatures of 0.08-0.3 K. To avoid spurious heating effects, one must carefully choose and arrange any dielectric materials close to the active capacitor. We observed some 100 pW of loading at moderate excitation voltages.

  6. Classical optics and curved spaces

    International Nuclear Information System (INIS)

    Bailyn, M.; Ragusa, S.

    1976-01-01

    In the eikonal approximation of classical optics, the unit polarization 3-vector of light satisfies an equation that depends only on the index of refraction, n. It is known that if the original 3-space line element is dσ², then this polarization direction is parallel transported in the fictitious space n²dσ². Since the equation depends only on n, it is possible to invent a fictitious curved 4-space in which the light follows a null geodesic and the polarization 3-vector behaves as the 'shadow' of a parallel-transported 4-vector. The inverse, namely the reduction of Maxwell's equations on a curved (dielectric-free) space to a classical space with dielectric constant n = (-g₀₀)^(-1/2), is well known, but in the latter the dielectric constant ε and permeability μ must also equal (-g₀₀)^(-1/2). The rotation of polarization as light bends around the sun is calculated by utilizing the reduction to the classical space. This (non-)rotation may then be interpreted as parallel transport in the 3-space n²dσ².

  7. W-curve alignments for HIV-1 genomic comparisons.

    Directory of Open Access Journals (Sweden)

    Douglas J Cork

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison

  8. W-curve alignments for HIV-1 genomic comparisons.

    Science.gov (United States)

    Cork, Douglas J; Lembark, Steven; Tovanabutra, Sodsai; Robb, Merlin L; Kim, Jerome H

    2010-06-01

    The W-curve was originally developed as a graphical visualization technique for viewing DNA and RNA sequences. Its ability to render features of DNA also makes it suitable for computational studies. Its main advantage in this area is utilizing a single-pass algorithm for comparing the sequences. Avoiding recursion during sequence alignments offers advantages for speed and in-process resources. The graphical technique also allows for multiple models of comparison to be used depending on the nucleotide patterns embedded in similar whole genomic sequences. The W-curve approach allows us to compare large numbers of samples quickly. We are currently tuning the algorithm to accommodate quirks specific to HIV-1 genomic sequences so that it can be used to aid in diagnostic and vaccine efforts. Tracking the molecular evolution of the virus has been greatly hampered by gap-associated problems predominantly embedded within the envelope gene of the virus. Gaps and hypermutation of the virus slow conventional string-based alignments of the whole genome. This paper describes the W-curve algorithm itself, and how we have adapted it for comparison of similar HIV-1 genomes. A tree-building method is developed with the W-curve that utilizes a novel Cylindrical Coordinate distance method and gap analysis method. HIV-1 C2-V5 env sequence regions from a Mother/Infant cohort study are used in the comparison. The output distance matrix and neighbor results produced by the W-curve are functionally equivalent to those from Clustal for C2-V5 sequences in the mother/infant pairs infected with CRF01_AE. Significant potential exists for utilizing this method in place of conventional string-based alignment of HIV-1 genomes, such as Clustal X. With W-curve heuristic alignment, it may be possible to obtain clinically useful results in a short time, short enough to affect clinical choices for acute treatment. A description of the W-curve generation process, including a comparison technique of
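
    To make the single-pass character concrete, the sketch below maps a nucleotide string to 3D points in one forward pass: each base pulls the walk halfway toward a corner assigned to it while the z coordinate advances one step per residue. The corner assignments and the halfway-step rule are stated here as assumptions about the published construction, not taken from the paper.

        def w_curve(sequence):
            """Single-pass W-curve-style mapping of a DNA sequence to a list of (x, y, z) points."""
            corners = {"A": (1.0, 1.0), "T": (-1.0, 1.0), "G": (-1.0, -1.0), "C": (1.0, -1.0)}
            x, y = 0.0, 0.0
            points = []
            for z, base in enumerate(sequence.upper(), start=1):
                cx, cy = corners.get(base, (0.0, 0.0))   # ambiguous bases (e.g. N) pull toward the origin
                x, y = (x + cx) / 2.0, (y + cy) / 2.0    # move halfway toward the assigned corner
                points.append((x, y, float(z)))
            return points

        print(w_curve("ATGGCGTATTTAC")[:3])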

  9. Modeling of Triangular Lattice Space Structures with Curved Battens

    Science.gov (United States)

    Chen, Tzikang; Wang, John T.

    2005-01-01

    Techniques for simulating an assembly process of lattice structures with curved battens were developed. The shape of the curved battens, the tension in the diagonals, and the compression in the battens were predicted for the assembled model. To be able to perform the assembly simulation, a cable-pulley element was implemented, and geometrically nonlinear finite element analyses were performed. Three types of finite element models were created from assembled lattice structures for studying the effects of design and modeling variations on the load carrying capability. Discrepancies in the predictions from these models were discussed. The effects of diagonal constraint failure were also studied.

  10. Curves and surfaces for CAGD a practical guide

    CERN Document Server

    Farin, Gerald

    2002-01-01

    This fifth edition has been fully updated to cover the many advances made in CAGD and curve and surface theory since 1997, when the fourth edition appeared. Material has been restructured into theory and applications chapters. The theory material has been streamlined using the blossoming approach; the applications material includes least squares techniques in addition to the traditional interpolation methods. In all other respects, it is, thankfully, the same. This means you get the informal, friendly style and unique approach that has made Curves and Surfaces for CAGD: A Practical Gui

  11. Atlas of stress-strain curves

    CERN Document Server

    2002-01-01

    The Atlas of Stress-Strain Curves, Second Edition is substantially bigger in page dimensions, number of pages, and total number of curves than the previous edition. It contains over 1,400 curves, almost three times as many as in the 1987 edition. The curves are normalized in appearance to aid making comparisons among materials. All diagrams include metric (SI) units, and many also include U.S. customary units. All curves are captioned in a consistent format with valuable information including (as available) standard designation, the primary source of the curve, mechanical properties (including hardening exponent and strength coefficient), condition of sample, strain rate, test temperature, and alloy composition. Curve types include monotonic and cyclic stress-strain, isochronous stress-strain, and tangent modulus. Curves are logically arranged and indexed for fast retrieval of information. The book also includes an introduction that provides background information on methods of stress-strain determination, on...

  12. Transition curves for highway geometric design

    CERN Document Server

    Kobryń, Andrzej

    2017-01-01

    This book provides concise descriptions of the various solutions of transition curves, which can be used in geometric design of roads and highways. It presents mathematical methods and curvature functions for defining transition curves.

  13. Comparison and evaluation of mathematical lactation curve ...

    African Journals Online (AJOL)

    A mathematical model of the lactation curve provides summary information about culling and milking strategies ..... Table 2 Statistics of the edited data for first lactation Holstein cows ..... Application of different models to the lactation curves of.

  14. Folding of non-Euclidean curved shells

    Science.gov (United States)

    Bende, Nakul; Evans, Arthur; Innes-Gold, Sarah; Marin, Luis; Cohen, Itai; Santangelo, Christian; Hayward, Ryan

    2015-03-01

    Origami-based folding of 2D sheets has been of recent interest for a variety of applications ranging from deployable structures to self-folding robots. Though folding of planar sheets follows well-established principles, folding of curved shells involves an added level of complexity due to the inherent influence of curvature on mechanics. In this study, we use principles from differential geometry and thin shell mechanics to establish fundamental rules that govern folding of prototypical creased shells. In particular, we show how the normal curvature of a crease line controls whether the deformation is smooth or discontinuous, and investigate the influence of shell thickness and boundary conditions. We show that snap-folding of shells provides a route to rapid actuation on time-scales dictated by the speed of sound. The simple geometric design principles developed can be applied at any length-scale, offering potential for bio-inspired soft actuators for tunable optics, microfluidics, and robotics. This work was funded by the National Science Foundation through EFRI ODISSEI-1240441 with additional support to S.I.-G. through the UMass MRSEC DMR-0820506 REU program.

  15. Statistics for products of traces of high powers of the frobenius class of hyperelliptic curves

    OpenAIRE

    Roditty-Gershon, Edva

    2011-01-01

    We study the averages of products of traces of high powers of the Frobenius class of hyperelliptic curves of genus g over a fixed finite field. We show that for increasing genus g, the limiting expectation of these products equals the expectation when the curve varies over the unitary symplectic group USp(2g). We also consider the scaling limit of linear statistics for eigenphases of the Frobenius class of hyperelliptic curves, and show that their first few moments are Gaussian.

  16. Gelfond–Bézier curves

    KAUST Repository

    Ait-Haddou, Rachid; Sakane, Yusuke; Nomura, Taishin

    2013-01-01

    We show that the generalized Bernstein bases in Müntz spaces defined by Hirschman and Widder (1949) and extended by Gelfond (1950) can be obtained as pointwise limits of the Chebyshev–Bernstein bases in Müntz spaces with respect to an interval [a,1] as the positive real number a converges to zero. Such a realization allows for concepts of curve design such as the de Casteljau algorithm, blossom, and dimension elevation to be transferred from the general theory of Chebyshev blossoms in Müntz spaces to these generalized Bernstein bases that we termed here as Gelfond–Bernstein bases. The advantage of working with Gelfond–Bernstein bases lies in the simplicity of the obtained concepts and algorithms as compared to their Chebyshev–Bernstein bases counterparts.
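
    As a reminder of the classical algorithm whose Müntz-space analogue is discussed above, the snippet below evaluates an ordinary polynomial Bézier curve by de Casteljau's repeated linear interpolation; the Gelfond–Bernstein generalization itself is not reproduced here.

        def de_casteljau(control_points, t):
            """Evaluate a polynomial Bezier curve at parameter t by repeated linear interpolation."""
            pts = [tuple(p) for p in control_points]
            while len(pts) > 1:
                pts = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
                       for p, q in zip(pts, pts[1:])]
            return pts[0]

        # Quadratic example: the curve through three control points, evaluated at its midpoint.
        print(de_casteljau([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5))   # -> (1.0, 1.0)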

  17. Bubble Collision in Curved Spacetime

    International Nuclear Information System (INIS)

    Hwang, Dong-il; Lee, Bum-Hoon; Lee, Wonwoo; Yeom, Dong-han

    2014-01-01

    We study vacuum bubble collisions in curved spacetime, in which vacuum bubbles were nucleated in the initial metastable vacuum state by quantum tunneling. The bubbles materialize randomly at different times and then start to grow. It is known that percolation by true vacuum bubbles is not possible due to the exponential expansion of the space among the bubbles. In this paper, we consider two bubbles of the same size with a preferred axis and assume that the two bubbles form very near each other and collide. The two bubbles have the same field value. When the bubbles collide, the collided region oscillates back and forth and eventually decays and disappears. We discuss the radiation and gravitational waves resulting from the collision of two bubbles.

  18. Bacterial streamers in curved microchannels

    Science.gov (United States)

    Rusconi, Roberto; Lecuyer, Sigolene; Guglielmini, Laura; Stone, Howard

    2009-11-01

    Biofilms, generally identified as microbial communities embedded in a self-produced matrix of extracellular polymeric substances, are involved in a wide variety of health-related problems ranging from implant-associated infections to disease transmission and dental plaque. The usual picture of these bacterial films is that they grow and develop on surfaces. However, suspended biofilm structures, or streamers, have been found in natural environments (e.g., rivers, acid mines, hydrothermal hot springs) and have generally been suggested to stem from turbulent flow. We report the formation of bacterial streamers in curved microfluidic channels. By using confocal laser microscopy we are able to directly image and characterize the spatial and temporal evolution of these filamentous structures. Such streamers, which always connect the inner corners of opposite sides of the channel, are located in the middle plane. Numerical simulations of the flow provide evidence for an underlying hydrodynamic mechanism behind the formation of the streamers.

  19. Gelfond–Bézier curves

    KAUST Repository

    Ait-Haddou, Rachid

    2013-02-01

    We show that the generalized Bernstein bases in Müntz spaces defined by Hirschman and Widder (1949) and extended by Gelfond (1950) can be obtained as pointwise limits of the Chebyshev–Bernstein bases in Müntz spaces with respect to an interval [a,1] as the positive real number a converges to zero. Such a realization allows for concepts of curve design such as the de Casteljau algorithm, blossom, and dimension elevation to be transferred from the general theory of Chebyshev blossoms in Müntz spaces to these generalized Bernstein bases that we termed here as Gelfond–Bernstein bases. The advantage of working with Gelfond–Bernstein bases lies in the simplicity of the obtained concepts and algorithms as compared to their Chebyshev–Bernstein bases counterparts.

  20. [Evaluation of the learning curve of residents in localizing a phantom target with ultrasonography].

    Science.gov (United States)

    Dessieux, T; Estebe, J-P; Bloc, S; Mercadal, L; Ecoffey, C

    2008-10-01

    Little information is available regarding the learning curve in ultrasonography, and even less for ultrasound-guided regional anesthesia. This study aimed to evaluate, within a training program, the learning curve of 12 residents new to ultrasonography on a phantom. The twelve trainees, inexperienced in ultrasonography, were given introductory training consisting of didactic instruction on the various components of the portable ultrasound machine (i.e. on/off button, gain, depth, resolution, and image storage). The residents then performed three trials, in two sets of increasing difficulty, at executing predefined tasks: adjustments of the machine, then localization of a small plastic piece introduced into roasting pork (3 cm below the surface). At the end of the evaluation, the residents were asked to insert a 22 G needle into an exact predetermined target (i.e. point of fascia intersection). The progression of the needle was continuously controlled by ultrasound visualization using injection of a small volume of water (needle perpendicular to the longitudinal plane of the ultrasound beam). Two groups of two examiners evaluated the residents' skill for each of the three trials (quality, time to perform the machine adjustments, to localize the plastic target, and to hydrolocalize, and volume used for hydrolocalization). After each trial, residents evaluated their performance using a difficulty scale (0: easy to 10: difficult). All residents performed the adjustments from the last trial of each set, with a learning curve observed in terms of duration. Localization of the plastic piece was achieved by all residents at the 6th trial, with a shorter duration of localization. Hydrolocalization was achieved after the 4th trial by all subjects. The difficulty scale was correlated with the number of trials. All these results were independent of the residents' experience in regional anesthesia. Four trials were necessary to adjust the machine correctly, to localize a target, and to