WorldWideScience

Sample records for obtain accurate source

  1. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work in this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of the galaxy and the PSF. The remaining major source of error is source Poisson noise, due to the finite number of source photons. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images with short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized to images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images with signal-to-noise ratios below 5, making it the most promising technique for cosmic shear measurement in ongoing and upcoming large-scale galaxy surveys.
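
    To make the Fourier-space noise idea concrete, here is a toy numpy sketch (not the authors' estimator) of one underlying fact: the power spectrum of pure source Poisson noise is flat with a level equal to the total photon count, so that floor can be estimated from the image itself and subtracted. The galaxy model and all numbers below are assumed for illustration only.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "galaxy": an elliptical Gaussian surface-brightness profile (counts/pixel)
    n = 64
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    model = 50.0 * np.exp(-(x ** 2 / (2 * 6.0 ** 2) + y ** 2 / (2 * 3.0 ** 2)))

    image = rng.poisson(model).astype(float)     # source Poisson noise only

    power = np.abs(np.fft.fft2(image)) ** 2      # measured power spectrum
    noise_floor = image.sum()                    # flat Poisson-noise power level
    power_debiased = power - noise_floor         # debiased estimate of the signal power

    true_power = np.abs(np.fft.fft2(model)) ** 2
    k = np.fft.fftfreq(n)
    mask = np.hypot(*np.meshgrid(k, k)) > 0.2    # compare at high spatial frequencies
    print("biased high-k mean:  ", power[mask].mean().round(1))
    print("debiased high-k mean:", power_debiased[mask].mean().round(1))
    print("true high-k mean:    ", true_power[mask].mean().round(1))
    ```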

  2. Devices for obtaining information about radiation sources

    International Nuclear Information System (INIS)

    Tosswill, C.H.

    1981-01-01

    The invention provides a sensitive, fast, high-resolution device for obtaining information about the distribution of gamma and X-radiation sources and provides a radiation detector useful in such a device. It comprises a slit collimator with a multiplicity of slits, each with slit-defining walls of a material and thickness that absorb beam components impinging on them. The slits extend further in one direction than in the other. The detector for separately detecting beam components passing through the slits also provides data output signals. It comprises a plurality of radiation transducing portions, which are not photoconductor elements, each at the end of a slit. A positioner operates to change the transverse position of the slits and radiation transducing portions relative to the source, and each radiation transducing element is positioned within its respective slit between the slit-defining walls. Full details and preferred embodiments are given. (U.K.)

  3. Devices for obtaining information about radiation sources

    International Nuclear Information System (INIS)

    Tosswill, C.H.

    1981-01-01

    The invention provides a sensitive, fast, high-resolution device for obtaining information about the distribution of gamma and X-radiation sources and provides a radiation detector useful in such a device. It comprises a slit collimator with a multiplicity of slits each with slit-defining walls of material and thickness to absorb beam components impinging on them. The slits extend further in one transverse direction than the other. The detector for separately detecting beam components passing through the slits also provides data output signals. It comprises a plurality of radiation transducing portions, each at the end of a slit. A positioner changes the transverse position of the slits and radiation transducer (a photoconductor) relative to the source. Applications are in nuclear medicine and industry. Full details and preferred embodiments are given. (U.K.)

  4. Electrolytic hydrogen obtained from a photovoltaic source

    International Nuclear Information System (INIS)

    Pasculete, E.; Condrea, F.; Stanoiu, L.

    2005-01-01

    At present, developed countries allocate large funds to global programs of fundamental and applied research aimed at developing non-conventional hydrogen production technologies. One of these technologies is photo-assisted electrolysis, which was adopted in the research whose results are presented in this paper. The experimental model includes, as basic equipment, a 100 W photovoltaic source, a filter-press type electrolysis battery, a control unit for the discharged electric energy, an accumulator, and a hydrogen storage unit. Five types of material have been tested for the electrolysis cell diaphragm: asbestos; Netrom, a non-woven material made of polypropylene fibers; ion-exchange composite membranes, namely a polysulfone support with an active layer of sulfonated polysulfone (PSS/PSf) and a polysulfone support with an active layer of sulfonated poly-ether ketone (SPEEK/PSf); and an ion-exchange membrane made from sulfonated poly-ether ketone (SPEEK). The graphs and results from the test system are presented. The analysis of the experimental results led to the establishment of the optimal battery configuration and of the operating conditions of the assembly. The experimental results show that electrolytic hydrogen can be obtained from a photovoltaic source in an efficient system, and bring the Romanian research to the level of a demonstration installation.

  5. An efficient and accurate method to obtain the energy-dependent Green function for general potentials

    International Nuclear Information System (INIS)

    Kramer, T; Heller, E J; Parrott, R E

    2008-01-01

    Time-dependent quantum mechanics provides an intuitive picture of particle propagation in external fields. Semiclassical methods link the classical trajectories of particles with their quantum mechanical propagation. Many analytical results and a variety of numerical methods have been developed to solve the time-dependent Schrödinger equation. The time-dependent methods work for nearly arbitrarily shaped potentials, including sources and sinks via complex-valued potentials. Many quantities, however, are measured at fixed energy, which is seemingly not well suited to a time-dependent formulation. Very few methods exist to obtain the energy-dependent Green function for complicated potentials without resorting to ensemble averages or using certain lead-in arrangements. Here, we demonstrate in detail a time-dependent approach that can accurately and effectively construct the energy-dependent Green function for very general potentials. The applications of the method are numerous, including chemical, mesoscopic, and atomic physics.
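
    As general background for the approach described above (a standard textbook relation, not the specific construction of this paper), the retarded energy-dependent Green function is the half-sided Fourier transform of the time propagator, so propagating a wave packet in time and transforming at fixed energy yields the fixed-energy quantity:

    $$
    G^{+}(\mathbf{r},\mathbf{r}';E) \;=\; \frac{1}{i\hbar}\int_{0}^{\infty} dt\; e^{\,i(E+i\epsilon)t/\hbar}\,\langle \mathbf{r}\,|\,e^{-i\hat{H}t/\hbar}\,|\,\mathbf{r}'\rangle, \qquad \epsilon \to 0^{+},
    $$

    with sources and sinks accommodated through a complex-valued potential in \(\hat{H}\).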

  6. Latest Developments on Obtaining Accurate Measurements with Pitot Tubes in ZPG Turbulent Boundary Layers

    Science.gov (United States)

    Nagib, Hassan; Vinuesa, Ricardo

    2013-11-01

    The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30-40, and lead to disagreement in the von Kármán coefficient κ extracted from the profiles. New forms for the shear, near-wall and turbulence corrections are proposed, highlighting the importance of the last of these. Improved agreement in the mean velocity profiles is obtained with the new forms, where the shear and near-wall corrections contribute around 85% of the total correction and the turbulence correction the remaining 15%. Finally, available algorithms to correct the wall position in profile measurements of wall-bounded flows are tested, using as a benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB-Musker, which is able to locate the wall position accurately.

  7. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and rapidly space-varying conditions. Hence, their travel times, their amplitudes at the sensors, and even their manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem of finding the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, which is based on the computation of the posterior probability density function (PPDF) of the source location as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of the observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and for a lower computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm can be among the most accurate, provided an adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, which leads to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied to the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histograms" (CRHs). The other is the outcome of previous findings of linear mean travel times for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
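
    As a toy illustration of the grid-search flavour of Bayesian localization described above (not the authors' BISL implementation), the sketch below scores candidate source positions and origin times against observed arrival times using a Gaussian travel-time likelihood. The station layout, arrival times, mean celerity and noise level are assumed example values.

    ```python
    import numpy as np

    # Hypothetical station coordinates (km) and observed arrival times (s)
    stations = np.array([[0.0, 120.0], [200.0, 50.0], [150.0, -90.0], [-80.0, -60.0]])
    t_obs = np.array([405.0, 690.0, 580.0, 330.0])

    celerity = 0.30     # km/s, assumed mean celerity of the dominant phase
    sigma_t = 20.0      # s, assumed travel-time uncertainty

    # Candidate source grid (x, y in km) and origin times (s)
    xs = np.linspace(-150, 150, 61)
    ys = np.linspace(-150, 150, 61)
    t0s = np.linspace(-100, 100, 41)

    best = (None, -np.inf)
    for t0 in t0s:
        for x in xs:
            for y in ys:
                dist = np.hypot(stations[:, 0] - x, stations[:, 1] - y)
                t_pred = t0 + dist / celerity
                # Gaussian log-likelihood of the observed arrivals
                loglik = -0.5 * np.sum(((t_obs - t_pred) / sigma_t) ** 2)
                if loglik > best[1]:
                    best = ((x, y, t0), loglik)

    print("Maximum-likelihood source (x, y, t0):", best[0])
    ```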

  8. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    Science.gov (United States)

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO, corresponding to the pre-determined amount of Hg desired, in an electrolyte solution comprised of glacial acetic acid and H₂O. The mercuric ions are then electrolytically reduced and plated onto a cathode, producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg₂Cl₂. The method for doing this involves dissolving a precise amount of Hg₂Cl₂ in an electrolyte solution comprised of concentrated HCl and H₂O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire, producing the required, pre-determined quantity of Hg. 1 fig.

  9. Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry

    Directory of Open Access Journals (Sweden)

    Fiehn Oliver

    2007-03-01

    Background: Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. Results: An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions on the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratios of nitrogen, oxygen, phosphorus, and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the full set of eight billion theoretically possible C, H, N, S, O, P formulas up to 2000 Da to only 623 million most probable elemental compositions. Thirdly, 6,000 pharmaceutical, toxic and natural compounds were selected from the DrugBank, TSCA and DNP databases. The correct formulas were retrieved as the top hit at 80-99% probability when assuming data acquisition with complete resolution of unique compounds, 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as the top hit when combining the seven rules with database queries. Conclusion: The
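
    As an illustration of the kind of heuristic screening the rules describe, a candidate formula can be checked against element-count and element-ratio bounds. The numeric thresholds below are illustrative assumptions for the sketch, not the paper's exact published ranges.

    ```python
    # A minimal sketch of heuristic formula filtering; the thresholds are
    # illustrative assumptions, not the paper's exact published ranges.
    def plausible_formula(counts):
        """counts: dict of element symbol -> atom count, e.g. {'C': 6, 'H': 12, 'O': 6}."""
        c = counts.get('C', 0)
        h = counts.get('H', 0)
        if c == 0 or h == 0:
            return False
        # Rule-1 style caps on element counts (assumed limits)
        limits = {'C': 80, 'H': 130, 'N': 15, 'O': 25, 'P': 6, 'S': 8}
        if any(counts.get(el, 0) > cap for el, cap in limits.items()):
            return False
        # Rule-4 style hydrogen/carbon ratio check (assumed common range)
        if not (0.2 <= h / c <= 3.1):
            return False
        # Rule-5 style heteroatom/carbon ratio checks (assumed common ranges)
        if counts.get('N', 0) / c > 1.3 or counts.get('O', 0) / c > 1.2:
            return False
        if counts.get('P', 0) / c > 0.3 or counts.get('S', 0) / c > 0.8:
            return False
        return True

    print(plausible_formula({'C': 6, 'H': 12, 'O': 6}))   # glucose -> True
    print(plausible_formula({'C': 1, 'H': 40, 'O': 6}))   # implausible H/C -> False
    ```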

  10. Examining ERP correlates of recognition memory: Evidence of accurate source recognition without recollection

    Science.gov (United States)

    Addante, Richard J.; Ranganath, Charan; Yonelinas, Andrew P.

    2012-01-01

    Recollection is typically associated with high recognition confidence and accurate source memory. However, subjects sometimes make accurate source memory judgments even for items that are not confidently recognized, and it is not known whether these responses are based on recollection or some other memory process. In the current study, we measured event related potentials (ERPs) while subjects made item and source memory confidence judgments in order to determine whether recollection supported accurate source recognition responses for items that were not confidently recognized. In line with previous studies, we found that recognition memory was associated with two ERP effects: an early on-setting FN400 effect, and a later parietal old-new effect [Late Positive Component (LPC)], which have been associated with familiarity and recollection, respectively. The FN400 increased gradually with item recognition confidence, whereas the LPC was only observed for highly confident recognition responses. The LPC was also related to source accuracy, but only for items that had received a high confidence item recognition response; accurate source judgments to items that were less confidently recognized did not exhibit the typical ERP correlate of recollection or familiarity, but rather showed a late, broadly distributed negative ERP difference. The results indicate that accurate source judgments of episodic context can occur even when recollection fails. PMID:22548808

  11. Comparison of Vespula germanica venoms obtained from different sources.

    Science.gov (United States)

    Sanchez, F; Blanca, M; Miranda, A; Carmona, M J; Garcia, J; Fernandez, J; Torres, M J; Rondon, M C; Juarez, C

    1994-08-01

    This study was carried out to compare the allergenic potency of Vespula germanica (VG) venoms extracted by different methods and commercially available venoms from Vespula species currently used for in vivo and in vitro studies including immunotherapy. Pure VG venom was used as the reference material. Protein content and enzymatic and allergenic properties of all venoms studied were determined by dye stain reagent, hyaluronidase and phospholipase A1B enzyme activities, and radioallergosorbent test inhibition studies, respectively. Radioallergosorbent test discs sensitized with commercial and pure VG venom were compared using specific IgE antibodies from subjects allergic to VG venom. The data obtained indicate that there were important differences in the allergenic potency between the Vespula species venoms employed for in vivo and/or in vitro assays, VG venom obtained by sac dissection, and pure VG venom. These results indicate that venoms from Vespula species used for in vitro and in vivo tests have a lower concentration of allergens and contain nonvenom proteins. These data should be taken into account when these vespid venoms are used for diagnostic purposes and also when evaluating immunotherapy studies.

  12. LIGNOCELLULOSE AS AN ALTERNATIVE SOURCE FOR OBTAINING BIOBUTANOL

    Directory of Open Access Journals (Sweden)

    S. M. Shulga

    2013-04-01

    The energy and environmental crises facing the world force us to reconsider the effectiveness of, or find alternative uses for, renewable natural resources, especially organic «waste», by using environmentally friendly technologies. Microbial conversion of the renewable resources of the biosphere into useful products, including biofuels, is currently a topical biotechnological problem. Anaerobic bacteria of the Clostridiaceae family are known as butanol producers but, unfortunately, this microbiological synthesis is currently not economical. In order to make acetone-butanol-ethanol fermentation cost-effective, solvent-producing strains using available cheap raw materials, such as agricultural waste or plant biomass, are required. Opportunities and ways to obtain economical and ecological processing of lignocellulosic wastes for biobutanol production are described in this review.

  13. GENERATING ACCURATE 3D MODELS OF ARCHITECTURAL HERITAGE STRUCTURES USING LOW-COST CAMERA AND OPEN SOURCE ALGORITHMS

    Directory of Open Access Journals (Sweden)

    M. Zacharek

    2017-05-01

    These studies have been conducted using a non-metric digital camera and dense image matching algorithms as non-contact methods of creating monument documentation. In order to process the imagery, several open-source software packages and algorithms for generating a dense point cloud from images have been used. In the research, the OSM Bundler, the VisualSFM software, and the web application ARC3D were used. Images obtained for each of the investigated objects were processed using those applications, and then dense point clouds and textured 3D models were created. As a result of post-processing, the obtained models were filtered and scaled. The research showed that even using open-source software it is possible to obtain accurate 3D models of structures (with an accuracy of a few centimeters), but for the purpose of documentation and conservation of cultural and historical heritage, such accuracy can be insufficient.

  14. THE ARECIBO METHANOL MASER GALACTIC PLANE SURVEY. IV. ACCURATE ASTROMETRY AND SOURCE MORPHOLOGIES

    International Nuclear Information System (INIS)

    Pandian, J. D.; Momjian, E.; Xu, Y.; Menten, K. M.; Goldsmith, P. F.

    2011-01-01

    We present accurate absolute astrometry of 6.7 GHz methanol masers detected in the Arecibo Methanol Maser Galactic Plane Survey using MERLIN and the Expanded Very Large Array (EVLA). We estimate the absolute astrometry to be accurate to better than 15 and 80 mas for the MERLIN and EVLA observations, respectively. We also derive the morphologies of the maser emission distributions for sources stronger than ∼1 Jy. The median spatial extent along the major axis of the regions showing maser emission is ∼775 AU. We find a majority of methanol maser morphologies to be complex with some sources previously determined to have regular morphologies in fact being embedded within larger structures. This suggests that some maser spots do not have a compact core, which leads to them being resolved in high angular resolution observations. This also casts doubt on interpretations of the origin of methanol maser emission solely based on source morphologies. We also investigate the association of methanol masers with mid-infrared emission and find very close correspondence between methanol masers and 24 μm point sources. This adds further credence to theoretical models that predict methanol masers to be pumped by warm dust emission and firmly reinforces the finding that Class II methanol masers are unambiguous tracers of embedded high-mass protostars.

  15. Virtual Reality Based Accurate Radioactive Source Representation and Dosimetry for Training Applications

    International Nuclear Information System (INIS)

    Molto-Caracena, T.; Vendrell Vidal, E.; Goncalves, J.G.M.; Peerani, P.; )

    2015-01-01

    Virtual Reality (VR) technologies have much potential for training applications. Success relies on the capacity to provide a real-time immersive effect to a trainee. For a training application to be an effective and meaningful tool, realistic 3D scenarios are not enough. Indeed, it is paramount to have sufficiently accurate models of the behaviour of the instruments to be used by a trainee. This will enable the required level of user interactivity. Specifically, when dealing with the simulation of radioactive sources, a VR model-based application must compute the dose rate with equivalent accuracy and in about the same time as a real instrument. A conflicting requirement is the need to provide a smooth visual rendering enabling spatial interactivity and interaction. This paper presents a VR-based prototype which accurately computes the dose rate of radioactive and nuclear sources that can be selected from a wide library. Dose measurements reflect local conditions, i.e., the presence of (a) shielding materials of any shape and type and (b) sources of any shape and dimension. Due to a novel way of representing radiation sources, the system is fast enough to grant the necessary user interactivity. The paper discusses the application of this new method and its advantages in terms of time setting, cost and logistics. (author)
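
    As a minimal physics sketch of the kind of computation such a tool performs (a point source with inverse-square falloff and exponential shielding attenuation), the snippet below is generic and all numeric values are assumed placeholders, not the system's source library data or its representation method.

    ```python
    import math

    def dose_rate_uSv_per_h(activity_GBq, distance_m, gamma_const=90.0,
                            mu_per_cm=0.0, shield_cm=0.0):
        """Point-source dose rate: Gamma * A / r^2, attenuated by exp(-mu * x).

        gamma_const is an assumed dose-rate constant in (uSv/h)*m^2/GBq for this
        example; mu_per_cm is the linear attenuation coefficient of the shield.
        All numbers here are illustrative assumptions.
        """
        unshielded = gamma_const * activity_GBq / distance_m ** 2
        return unshielded * math.exp(-mu_per_cm * shield_cm)

    # 10 GBq source at 2 m behind 5 cm of a shield with mu = 0.5 / cm
    print(round(dose_rate_uSv_per_h(10.0, 2.0, mu_per_cm=0.5, shield_cm=5.0), 3))
    ```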

  16. Disambiguating past events: Accurate source memory for time and context depends on different retrieval processes.

    Science.gov (United States)

    Persson, Bjorn M; Ainge, James A; O'Connor, Akira R

    2016-07-01

    Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another. Recently though, it has been suggested that a more accurate model of human episodic memory would include contextual rather than temporal source information, as humans' memory for time is relatively poor. Here, two experiments were carried out investigating human memory for temporal and contextual source information, along with the underlying dual process retrieval processes, using an immersive virtual environment paired with a 'Remember-Know' memory task. Experiment 1 (n=28) showed that contextual information could only be retrieved accurately using recollection, while temporal information could be retrieved using either recollection or familiarity. Experiment 2 (n=24), which used a more difficult task, resulting in reduced item recognition rates and therefore less potential for contamination by ceiling effects, replicated the pattern of results from Experiment 1. Dual process theory predicts that it should only be possible to retrieve source context from an event using recollection, and our results are consistent with this prediction. That temporal information can be retrieved using familiarity alone suggests that it may be incorrect to view temporal context as analogous to other typically used source contexts. This latter finding supports the alternative proposal that time since presentation may simply be reflected in the strength of memory trace at retrieval - a measure ideally suited to trace strength interrogation using familiarity, as is typically conceptualised within the dual process framework. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Fast and Accurate Rat Head Motion Tracking With Point Sources for Awake Brain PET.

    Science.gov (United States)

    Miranda, Alan; Staelens, Steven; Stroobants, Sigrid; Verhaeghe, Jeroen

    2017-07-01

    To avoid the confounding effects of anesthesia and immobilization stress in rat brain positron emission tomography (PET), motion tracking-based unrestrained awake rat brain imaging is being developed. In this paper, we propose a fast and accurate rat head motion tracking method based on small PET point sources. PET point sources (3-4) attached to the rat's head are tracked in image space using 15-32-ms time frames. Our point source tracking (PST) method was validated using a manually moved microDerenzo phantom that was simultaneously tracked with an optical tracker (OT) for comparison. The PST method was further validated in three awake [18F]FDG rat brain scans. Compared with the OT, the PST-based correction at the same frame rate (31.2 Hz) reduced the reconstructed FWHM by 0.39-0.66 mm for the different tested rod sizes of the microDerenzo phantom. The FWHM could be further reduced by another 0.07-0.13 mm when increasing the PST frame rate (66.7 Hz). Regional brain [18F]FDG uptake in the motion-corrected scan was strongly correlated with that of the anesthetized reference scan in all three cases. The proposed PST method allowed excellent and reproducible motion correction in awake in vivo experiments. In addition, there is no need for specialized tracking equipment or additional calibrations to be performed, the point sources are practically imperceptible to the rat, and PST is ideally suited to small bore scanners, where optical tracking might be challenging.
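
    The core geometric step in marker-based tracking of this kind is estimating a rigid transform from tracked point positions. Below is a generic least-squares (Kabsch/SVD) sketch of that step, not the authors' PST pipeline; the marker coordinates are synthetic example values.

    ```python
    import numpy as np

    def rigid_transform(p_ref, p_cur):
        """Least-squares rotation R and translation t with p_cur ~ R @ p_ref + t.
        p_ref, p_cur: (N, 3) arrays of corresponding marker positions."""
        c_ref, c_cur = p_ref.mean(axis=0), p_cur.mean(axis=0)
        H = (p_ref - c_ref).T @ (p_cur - c_cur)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
        R = Vt.T @ D @ U.T
        t = c_cur - R @ c_ref
        return R, t

    # Synthetic example: 4 markers rotated by 10 degrees about z and shifted
    ref = np.array([[0, 0, 0], [10, 0, 0], [0, 8, 0], [0, 0, 6]], dtype=float)
    theta = np.radians(10)
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                       [np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 1]])
    cur = ref @ R_true.T + np.array([1.0, -2.0, 0.5])
    R_est, t_est = rigid_transform(ref, cur)
    print(np.allclose(R_est, R_true), np.round(t_est, 3))
    ```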

  18. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    International Nuclear Information System (INIS)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad

    2015-01-01

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To avoid the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out by taking into account small and large deformation models and using the extended Hamilton method. The system motion equations are obtained by using the nonlinear relationship between displacements and strains and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of the flexible manipulators along a given path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, taking two-link flexible and fixed-base manipulators on linear and circular paths into consideration. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  19. Accurate nonlinear modeling for flexible manipulators using mixed finite element formulation in order to obtain maximum allowable load

    Energy Technology Data Exchange (ETDEWEB)

    Esfandiar, Habib; KoraYem, Moharam Habibnejad [Islamic Azad University, Tehran (Iran, Islamic Republic of)

    2015-09-15

    In this study, the researchers examine nonlinear dynamic analysis and determine the dynamic load carrying capacity (DLCC) of flexible manipulators. Manipulator modeling is based on Timoshenko beam theory (TBT), considering the effects of shear and rotational inertia. To avoid the risk of shear locking, a new procedure is presented based on a mixed finite element formulation. In the proposed method, shear deformation is free from the risk of shear locking and independent of the number of integration points along the element axis. Dynamic modeling of the manipulators is carried out by taking into account small and large deformation models and using the extended Hamilton method. The system motion equations are obtained by using the nonlinear relationship between displacements and strains and the second Piola-Kirchhoff stress tensor. In addition, a comprehensive formulation is developed to calculate the DLCC of the flexible manipulators along a given path, considering the constraints of end-effector accuracy, maximum motor torque and maximum stress in the manipulators. Simulation studies are conducted to evaluate the efficiency of the proposed method, taking two-link flexible and fixed-base manipulators on linear and circular paths into consideration. Experimental results are also provided to validate the theoretical model. The findings demonstrate the efficiency and appropriate performance of the proposed method.

  20. Accurate source location from waves scattered by surface topography: Applications to the Nevada and North Korean test sites

    Science.gov (United States)

    Shen, Y.; Wang, N.; Bao, X.; Flinders, A. F.

    2016-12-01

    Scattered waves generated near the source contain energy converted from the near-field waves to the far-field propagating waves, which can be used to achieve location accuracy beyond the diffraction limit. In this work, we apply a novel full-wave location method that combines a grid-search algorithm with a 3D Green's tensor database to locate the Non-Proliferation Experiment (NPE) at the Nevada test site and the North Korean nuclear tests. We use the first arrivals (Pn/Pg) and their immediate codas, which are likely dominated by waves scattered at the surface topography near the source, to determine the source location. We investigate seismograms in the 1.0-2.0 Hz frequency band to reduce noise in the data and highlight topography-scattered waves. High-resolution topographic models constructed from 10 and 90 m grids are used for Nevada and North Korea, respectively. The reference velocity model is based on CRUST 1.0. We use the collocated-grid finite difference method on curvilinear grids to calculate the strain Green's tensor and obtain synthetic waveforms using source-receiver reciprocity. The 'best' solution is found based on the least-squares misfit between the observed and synthetic waveforms. To suppress random noise, an optimal weighting method for three-component seismograms is applied in the misfit calculation. Our results show that the scattered waves are crucial in improving resolution and allow us to obtain accurate solutions with a small number of stations. Since the scattered waves depend on topography, which is known at the wavelengths of regional seismic waves, our approach yields absolute, instead of relative, source locations. We compare our solutions with those of the USGS and other studies. Moreover, we use differential waveforms to locate pairs of the North Korean tests from 2006, 2009, 2013 and 2016 to further reduce the effects of unmodeled heterogeneities and errors in the reference velocity model.
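
    A stripped-down sketch of the grid-search idea (not the authors' full-wave implementation): score each candidate source location by the weighted least-squares misfit between observed traces and precomputed synthetics, and keep the minimum. The "Green function database" here is a stand-in dictionary of assumed random waveforms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_stations, n_samples = 5, 400
    grid = [(x, y) for x in range(10) for y in range(10)]  # candidate source cells

    # Stand-in "database": one synthetic trace per (grid cell, station).
    # In practice these would come from precomputed strain Green's tensors.
    synthetics = {g: rng.standard_normal((n_stations, n_samples)) for g in grid}

    true_cell = (3, 7)
    observed = synthetics[true_cell] + 0.2 * rng.standard_normal((n_stations, n_samples))

    weights = np.ones(n_stations)  # could down-weight noisy stations/components

    def misfit(obs, syn, w):
        return np.sum(w[:, None] * (obs - syn) ** 2)

    best_cell = min(grid, key=lambda g: misfit(observed, synthetics[g], weights))
    print("best grid cell:", best_cell, "true:", true_cell)
    ```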

  1. Reciprocity Method for Obtaining the Far Fields Generated by a Source Inside or Near a Microparticle

    National Research Council Canada - National Science Library

    Hill, Steven

    1997-01-01

    We show that the far fields generated by a source inside or near a microparticle can be obtained readily by using the reciprocity theorem along with the internal or near fields generated by plane wave illumination...

  2. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    Science.gov (United States)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  3. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    International Nuclear Information System (INIS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-01-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG

  4. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    2% for the homogeneous and heterogeneous block phantoms, and agreement for the transverse dose profiles was within 6%. Conclusions: The HVL and kVp are sufficient for characterizing a kV x-ray source spectrum for accurate dose computation. As these parameters can be easily and accurately measured, they provide for a clinically feasible approach to characterizing a kV energy spectrum to be used for patient specific x-ray dose computations. Furthermore, these results provide experimental validation of our novel hybrid dose computation algorithm.

  5. Study and realisation of an ion source obtained by electronic bombardment - experimentation with phosphorus

    International Nuclear Information System (INIS)

    Schneider, Philippe

    1979-01-01

    This research thesis reports the study and development of an ion source based on electron bombardment. In order to solve some practical difficulties (cathode destruction, source instability, and so on), the design of each component has been very careful, notably for the electron gun. The author first briefly discusses the existing ionisation processes and gives a list of ions which can be produced, with a focus on phosphorus, for which the ionisation cross section is defined and assessed. After an assessment of different ionisation processes, and an indication of the performance of the best existing sources, the author explains the choice of a totally different process. In the second part, he describes the experimental device, and particularly the electron gun, as its design has been an important part of this research work. The source operation is described and its characteristics and performance are studied. Finally, the author outlines that some improvements are still possible to obtain a totally exploitable source [fr]

  6. Theoretical galactic cosmic ray electron spectrum obtained for sources of varying geometry

    International Nuclear Information System (INIS)

    Cohen, M.E.

    1969-01-01

    Jokipii and Meyer have recently obtained an electron density energy spectrum of the cosmic rays originating in the Galaxy, using integral solutions of the steady-state transfer equations, by considering a circular cylindric galactic disc as the source and approximating the resulting fourth-order integral. In this report, we present general results obtained by using an arbitrary circular cylindric source, without restricting ourselves to the galactic disc. The integrals are treated exactly. The conclusions of Jokipii and Meyer form special cases of these results. We also obtain an exponential energy variation which, at the moment, is not observed experimentally. The second part of this work deals with more complicated, but perhaps more realistic, models of elliptic cylindric and ellipsoidal galactic disc sources. One may also note that a very large source concentrated in a very small region gives a spectrum not unlike that of a small source distributed throughout a large volume. Finally, it may be remarked that the model adopted is much less restrictive than the artificial conception of 'leakage time' followed by other workers. (author) [fr]

  7. Fast accurate MEG source localization using a multilayer perceptron trained with real brain noise

    International Nuclear Information System (INIS)

    Jun, Sung Chan; Pearlmutter, Barak A.; Nolte, Guido

    2002-01-01

    Iterative gradient methods such as Levenberg-Marquardt (LM) are in widespread use for source localization from electroencephalographic (EEG) and magnetoencephalographic (MEG) signals. Unfortunately, LM depends sensitively on the initial guess, necessitating repeated runs. This, combined with LM's high per-step cost, makes its computational burden quite high. To reduce this burden, we trained a multilayer perceptron (MLP) as a real-time localizer. We used an analytical model of quasistatic electromagnetic propagation through a spherical head to map randomly chosen dipoles to sensor activities according to the sensor geometry of a 4D Neuroimaging Neuromag-122 MEG system, and trained an MLP to invert this mapping in the absence of noise or in the presence of various sorts of noise such as white Gaussian noise, correlated noise, or real brain noise. An MLP structure was chosen to trade off computation and accuracy. This MLP was trained four times, once with each type of noise. We measured the effects of initial guesses on LM performance, which motivated a hybrid MLP-start-LM method, in which the trained MLP initializes LM. We also compared the localization performance of LM, MLPs, and hybrid MLP-start-LMs for realistic brain signals. Trained MLPs are much faster than other methods, while the hybrid MLP-start-LMs are faster and more accurate than fixed-4-start-LM. In particular, the hybrid MLP-start-LM initialized by an MLP trained with the real brain noise dataset is 60 times faster and is comparable in accuracy to random-20-start-LM, and this hybrid system (localization error: 0.28 cm, computation time: 36 ms) shows almost as good performance as optimal-1-start-LM (localization error: 0.23 cm, computation time: 22 ms), which initializes LM with the correct dipole location. MLPs trained with noise perform better than the MLP trained without noise, and the MLP trained with real brain noise is almost as good an initial guesser for LM as the correct dipole location. (author)
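
    A toy version of the train-an-inverse-map idea (not the paper's spherical-head forward model or Neuromag-122 geometry): generate random dipole positions, map them to sensor readings with an assumed stand-in linear forward operator plus noise, and fit an MLP regressor to invert the mapping in a single forward pass at localization time.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    n_sensors, n_train = 122, 5000

    # Assumed stand-in forward model: a fixed random linear map from dipole
    # position (x, y, z) to sensor activities, plus additive noise.
    A = rng.standard_normal((n_sensors, 3))
    positions = rng.uniform(-1.0, 1.0, size=(n_train, 3))
    sensors = positions @ A.T + 0.1 * rng.standard_normal((n_train, n_sensors))

    mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
    mlp.fit(sensors, positions)

    # Localize a new noisy measurement with one forward pass of the network
    test_pos = np.array([[0.3, -0.2, 0.5]])
    test_meas = test_pos @ A.T + 0.1 * rng.standard_normal((1, n_sensors))
    print("estimated dipole position:", np.round(mlp.predict(test_meas), 2))
    ```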

  8. Application of the source term code package to obtain a specific source term for the Laguna Verde Nuclear Power Plant

    International Nuclear Information System (INIS)

    Souto, F.J.

    1991-06-01

    The main objective of the project was to use the Source Term Code Package (STCP) to obtain a specific source term for those accident sequences deemed dominant as a result of probabilistic safety analyses (PSA) for the Laguna Verde Nuclear Power Plant (CNLV). The following programme has been carried out to meet this objective: (a) implementation of the STCP, (b) acquisition of specific data for CNLV to execute the STCP, and (c) calculation of specific source terms for accident sequences at CNLV. The STCP has been implemented and validated on CDC 170/815 and CDC 180/860 mainframes as well as on a MicroVAX 3800 system. In order to obtain a plant-specific source term, data on the CNLV, including the initial core inventory, burn-up, primary containment structures, and the materials used for the calculations, have been obtained. Because STCP does not explicitly model containment failure, dry well failure in the form of a catastrophic rupture has been assumed. One of the most significant sequences from the point of view of possible off-site risk is the loss of off-site power with failure of the diesel generators and simultaneous loss of the high pressure core spray and reactor core isolation cooling systems. The probability of that event is approximately 4.5 × 10⁻⁶. This sequence has been analysed in detail and the release fractions of the radioisotope groups are given in the full report. 18 refs, 4 figs, 3 tabs

  9. The Remote Food Photography Method accurately estimates dry powdered foods—the source of calories for many infants

    Science.gov (United States)

    Duhé, Abby F.; Gilmore, L. Anne; Burton, Jeffrey H.; Martin, Corby K.; Redman, Leanne M.

    2016-01-01

    Background: Infant formula is a major source of nutrition for infants, with over half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used prior to reconstitution. Objective: To assess the use of the Remote Food Photography Method (RFPM) to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. Methods: For each serving size (1-scoop, 2-scoop, 3-scoop, and 4-scoop), a set of seven test bottles and photographs were prepared, including the recommended gram weight of powdered formula of the respective serving size by the manufacturer, three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended, and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard RFPM analysis procedures. The ratio estimates and the United States Department of Agriculture (USDA) data tables were used to generate food and nutrient information to provide the RFPM estimates. Statistical Analyses Performed: Equivalence testing using the two one-sided t-tests (TOST) approach was used to determine equivalence between the actual gram weights and the RFPM-estimated weights for all samples, within each serving size, and within under-prepared and over-prepared bottles. Results: For all bottles, the gram weights estimated by the RFPM were within the 5% equivalence bounds, with a slight under-estimation of 0.05 g (90% CI [−0.49, 0.40]; p<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. Conclusion: The maximum observed mean error was an overestimation of 1.58% of powdered formula by the RFPM under
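
    A minimal sketch of the two one-sided tests (TOST) equivalence procedure referred to in the analysis, written generically with assumed example data and a ±5% equivalence bound; it is not the study's dataset or code.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical percent errors (RFPM estimate vs. actual weight), one per bottle
    percent_error = np.array([1.2, -0.8, 0.5, 2.1, -1.5, 0.9, 1.8, -0.3, 0.7, 1.1])
    low, high = -5.0, 5.0   # equivalence bounds (+/- 5%)

    n = percent_error.size
    mean, se = percent_error.mean(), percent_error.std(ddof=1) / np.sqrt(n)

    # Two one-sided t-tests: H0a: mean <= low, H0b: mean >= high
    t_low = (mean - low) / se
    t_high = (mean - high) / se
    p_low = 1 - stats.t.cdf(t_low, df=n - 1)    # evidence that mean is above the lower bound
    p_high = stats.t.cdf(t_high, df=n - 1)      # evidence that mean is below the upper bound

    p_tost = max(p_low, p_high)                 # equivalence holds if both tests reject
    print(f"mean error = {mean:.2f}%, TOST p = {p_tost:.4f}, equivalent: {p_tost < 0.05}")
    ```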

  10. The Remote Food Photography Method Accurately Estimates Dry Powdered Foods-The Source of Calories for Many Infants.

    Science.gov (United States)

    Duhé, Abby F; Gilmore, L Anne; Burton, Jeffrey H; Martin, Corby K; Redman, Leanne M

    2016-07-01

    Infant formula is a major source of nutrition for infants, with more than half of all infants in the United States consuming infant formula exclusively or in combination with breast milk. The energy in infant powdered formula is derived from the powder and not the water, making it necessary to develop methods that can accurately estimate the amount of powder used before reconstitution. Our aim was to assess the use of the Remote Food Photography Method to accurately estimate the weight of infant powdered formula before reconstitution among the standard serving sizes. For each serving size (1 scoop, 2 scoops, 3 scoops, and 4 scoops), a set of seven test bottles and photographs were prepared as follows: the recommended gram weight of powdered formula for the respective serving size according to the manufacturer; three bottles and photographs containing 15%, 10%, and 5% less powdered formula than recommended; and three bottles and photographs containing 5%, 10%, and 15% more powdered formula than recommended (n=28). Ratio estimates of the test photographs as compared to standard photographs were obtained using standard Remote Food Photography Method analysis procedures. The ratio estimates and the US Department of Agriculture data tables were used to generate food and nutrient information to provide the Remote Food Photography Method estimates. Equivalence testing using the two one-sided t tests approach was used to determine equivalence between the actual gram weights and the Remote Food Photography Method estimated weights for all samples, within each serving size, and within underprepared and overprepared bottles. For all bottles, the gram weights estimated by the Remote Food Photography Method were within the 5% equivalence bounds, with a slight underestimation of 0.05 g (90% CI -0.49 to 0.40; P<0.001) and mean percent error ranging between 0.32% and 1.58% among the four serving sizes. The maximum observed mean error was an overestimation of 1.58% of powdered formula by the Remote

  11. Accurately computing the optical pathlength difference for a michelson interferometer with minimal knowledge of the source spectrum.

    Science.gov (United States)

    Milman, Mark H

    2005-12-01

    Astrometric measurements using stellar interferometry rely on precise measurement of the central white-light fringe to accurately obtain the optical pathlength difference of incoming starlight to the two arms of the interferometer. One standard approach to stellar interferometry uses a channeled spectrum to determine phases at a number of different wavelengths that are then converted to the pathlength delay. When throughput is low, these channels are broadened to improve the signal-to-noise ratio. Ultimately, the ability to use monochromatic models and algorithms in each of the channels to extract phase becomes problematic, and knowledge of the spectrum must be incorporated to achieve the accuracies required of the astrometric measurements. To accomplish this, an optimization problem is posed to estimate simultaneously the pathlength delay and the spectrum of the source. Moreover, the nature of the parameterization of the spectrum that is introduced circumvents the need to solve directly for these parameters, so that the optimization problem reduces to a scalar problem in just the pathlength delay variable. A number of examples are given to show the robustness of the approach.
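
    For context on the channeled-spectrum approach mentioned above (phases at several wavelengths converted to a delay), here is a generic sketch: the interferometric phase is φ(σ) = 2π·σ·OPD in wavenumber σ, so a linear fit of the unwrapped phase versus wavenumber recovers the optical pathlength difference. The wavelengths, delay, and noise level are assumed example values, not the paper's estimator.

    ```python
    import numpy as np

    opd_true = 3.2e-6                               # metres, assumed example delay
    wavelengths = np.linspace(500e-9, 900e-9, 12)   # spectral channel centres (m)
    sigma = 1.0 / wavelengths                       # wavenumbers (1/m)

    # Simulated per-channel phases, measured modulo 2*pi with a little noise
    phase_meas = np.mod(2 * np.pi * sigma * opd_true, 2 * np.pi)
    phase_meas += 0.05 * np.random.default_rng(2).standard_normal(sigma.size)

    # Unwrap across channels (sorted by wavenumber) and fit phase = 2*pi*sigma*OPD
    order = np.argsort(sigma)
    phase_unwrapped = np.unwrap(phase_meas[order])
    slope, _ = np.polyfit(sigma[order], phase_unwrapped, 1)
    print(f"recovered OPD ~ {slope / (2 * np.pi) * 1e6:.2f} um (true 3.20 um)")
    ```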

  12. From global to local statistical shape priors novel methods to obtain accurate reconstruction results with a limited amount of training shapes

    CERN Document Server

    Last, Carsten

    2017-01-01

    This book proposes a new approach to handle the problem of limited training data. Common approaches to cope with this problem are to model the shape variability independently across predefined segments or to allow artificial shape variations that cannot be explained through the training data, both of which have their drawbacks. The approach presented uses a local shape prior in each element of the underlying data domain and couples all local shape priors via smoothness constraints. The book provides a sound mathematical foundation in order to embed this new shape prior formulation into the well-known variational image segmentation framework. The new segmentation approach so obtained allows accurate reconstruction of even complex object classes with only a few training shapes at hand.

  13. Charging and discharging tests for obtaining an accurate dynamic electro-thermal model of high power lithium-ion pack system for hybrid and EV applications

    DEFF Research Database (Denmark)

    Mihet-Popa, Lucian; Camacho, Oscar Mauricio Forero; Nørgård, Per Bromand

    2013-01-01

    This paper presents a battery test platform including two Li-ion batteries designed for hybrid and EV applications, and charging/discharging tests under different operating conditions carried out for developing an accurate dynamic electro-thermal model of a high power Li-ion battery pack system. The aim of the tests has been to study the impact of battery degradation and to find out the dynamic characteristics of the cells, including the nonlinear open circuit voltage, series resistance and parallel transient circuit at different charge/discharge currents and cell temperatures. An equivalent circuit model, based on the runtime battery model and the Thevenin circuit model, with parameters obtained from the tests and depending on SOC, current and temperature, has been implemented in MATLAB/Simulink and Power Factory. A good alignment between simulations and measurements has been found.
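
    A minimal sketch of a first-order Thevenin-style equivalent circuit of the kind described (series resistance plus one RC branch, with OCV as a function of SOC). All parameter values and the OCV curve are assumed placeholders, not the parameters identified from the tests.

    ```python
    import numpy as np

    # Assumed placeholder parameters (would normally be functions of SOC, I, T)
    R0, R1, C1 = 0.010, 0.015, 2000.0      # ohm, ohm, farad
    capacity_Ah = 40.0

    def ocv(soc):
        """Assumed simple open-circuit-voltage curve vs. state of charge."""
        return 3.0 + 1.2 * soc - 0.2 * np.exp(-10 * soc)

    dt, t_end = 1.0, 1800.0                 # s
    current = 20.0                          # A (positive = discharge)

    soc, v_rc = 0.9, 0.0
    log = []
    for t in np.arange(0.0, t_end, dt):
        soc -= current * dt / (capacity_Ah * 3600.0)          # coulomb counting
        v_rc += dt * (current / C1 - v_rc / (R1 * C1))        # RC branch dynamics
        v_term = ocv(soc) - current * R0 - v_rc               # terminal voltage
        log.append((t, soc, v_term))

    print("final SOC = %.3f, terminal voltage = %.3f V" % (log[-1][1], log[-1][2]))
    ```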

  14. About possibilities of obtaining focused beams of thermal neutrons of radionuclide source

    International Nuclear Information System (INIS)

    Aripov, G.A.; Kurbanov, B.I.; Sulaymanov, N.T.; Ergashev, A.

    2004-01-01

    In recent years, significant progress has been achieved in the development of neutron focusing methods (concentrating neutrons in a given direction and onto a small area). Attention has mainly been given to the focusing of reactor neutron beams, particularly cold neutrons, and their applications [1,2]. However, isotope sources also make it possible to obtain intense neutron beams and to solve quite important problems (e.g. neutron capture therapy for malignant tumors) [3], so the focusing of their neutrons is an actual problem as well. We developed a device, based on a californium neutron source, that allows a preliminarily focused beam of thermal neutrons to be obtained through an appropriate choice of moderators, reflectors and the geometry of their arrangement. Fast neutrons and gamma rays in the beam are minimized. With the aid of the Monte-Carlo based model we developed, it is possible to modify the aforementioned device and the dynamics of the output neutrons over a wide energy range, and to analyze ways of optimizing the neutron beams of isotope sources with different neutron outputs. The device for preliminary focusing of thermal neutrons can serve as a basis for further focusing of neutrons using micro- and nano-capillary systems. It is known that capillary systems made with appropriate technology can form a beam of thermal neutrons, increasing its density by more than two orders of magnitude, and can effectively deflect beams by up to 20° with a system length of 15 cm.

  15. About possibilities of obtaining focused beams of thermal neutrons of radionuclide source

    International Nuclear Information System (INIS)

    Aripov, G.A.; Kurbanov, B.I.; Sulaymanov, N.T.; Ergashev, A.

    2004-01-01

    In recent years, significant progress has been achieved in the development of neutron focusing methods (concentrating neutrons in a given direction and onto a small area). Attention has mainly been given to the focusing of reactor neutron beams, particularly cold neutrons, and their applications [1,2]. However, isotope sources also make it possible to obtain intense neutron beams and to solve quite important problems (e.g. neutron capture therapy for malignant tumors) [3], so the focusing of their neutrons is an actual problem as well. We developed a device, based on a californium neutron source, that allows a preliminarily focused beam of thermal neutrons to be obtained through an appropriate choice of moderators, reflectors and the geometry of their arrangement. Fast neutrons and gamma rays in the beam are minimized. With the aid of the Monte-Carlo based model we developed, it is possible to modify the aforementioned device and the dynamics of the output neutrons over a wide energy range, and to analyze ways of optimizing the neutron beams of isotope sources with different neutron outputs. The device for preliminary focusing of thermal neutrons can serve as a basis for further focusing of neutrons using micro- and nano-capillary systems. It is known that capillary systems made with appropriate technology can form a beam of thermal neutrons, increasing its density by more than two orders of magnitude, and can effectively deflect beams by up to 20° with a system length of 15 cm. (author)

  16. Polymeric polyelectrolytes obtained from renewable sources for biodiesel wastewater treatment by dual-flocculation

    Directory of Open Access Journals (Sweden)

    E. A. M. Ribeiro

    2017-06-01

    Biodiesel wastewater generally contains high levels of oils, soaps and glycerol residues and therefore requires treatment. In this study, the treatment of biodiesel wastewater (industrial wastewater (EFID) and laboratory wastewater (EFLB)) was tested by performing flocculation and dual-flocculation with renewable polymers. Tannin and cationic hemicellulose (CH) were used as cationic flocculants, and cellulose acetate sulfate (CAS) was used as an anionic flocculant. Polyacrylamide (PAM) was used as a reference anionic flocculant for comparison with the efficiencies obtained with CAS (the renewable-source flocculant). The treatment efficacy was evaluated by turbidity removal, volume of sludge formed, chemical oxygen demand (COD) and total suspended solids (TSS). The sludge obtained was studied using thermogravimetric analysis (TG). The dual-flocculation condition with a 25% proportion of tannin (T) and a 75% proportion of cationic hemicellulose (i.e., T25/CH75) showed EFLB turbidity removals of 89.1% and 89.5% for CAS and PAM additions, respectively, and EFID turbidity removals of 67% and 41% for CAS and PAM additions, respectively. The dual-flocculation performance suggests that polyelectrolytes obtained from renewable sources can be used for treating biodiesel wastewater.

  17. Disambiguating past events: accurate source memory for time and context depends on different retrieval processes

    OpenAIRE

    Persson, Bjorn Martin; Ainge, James Alexander; O'Connor, Akira Robert

    2016-01-01

    Participant payment was provided by the School of Psychology and Neuroscience ResPay scheme. Current animal models of episodic memory are usually based on demonstrating integrated memory for what happened, where it happened, and when an event took place. These models aim to capture the testable features of the definition of human episodic memory which stresses the temporal component of the memory as a unique piece of source information that allows us to disambiguate one memory from another...

  18. Fast and accurate detection of spread source in large complex networks.

    Science.gov (United States)

    Paluch, Robert; Lu, Xiaoyan; Suchecki, Krzysztof; Szymański, Bolesław K; Hołyst, Janusz A

    2018-02-06

    Spread over complex networks is a ubiquitous process with increasingly wide applications. Locating spread sources is often important, e.g. finding patient zero in an epidemic, or the source of rumor spreading in a social network. Pinto, Thiran and Vetterli introduced an algorithm (PTVA) to solve the important case of this problem in which a limited set of nodes act as observers and report the times at which the spread reached them. PTVA uses all observers to find a solution. Here we propose a new approach in which observers with low-quality information (i.e. with large spread encounter times) are ignored and potential sources are selected based on the likelihood gradient from high-quality observers. The original complexity of PTVA is O(N^α), where α ∈ (3,4) depends on the network topology and the number of observers (N denotes the number of nodes in the network). Our Gradient Maximum Likelihood Algorithm (GMLA) reduces this complexity to O(N^2 log N). Extensive numerical tests performed on synthetic networks and a real Gnutella network, with the limitation that the identities of the spreaders are unknown to the observers, demonstrate that for scale-free networks with such a limitation GMLA yields higher-quality localization results than PTVA does.
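
    To illustrate the general observer-based localization setting (a simple correlation-with-distance baseline, not PTVA or GMLA themselves), the sketch below scores each candidate node by how well the observers' infection times line up with their hop distances from that node; the network model and spreading simulation are assumed toy choices.

    ```python
    import random
    import networkx as nx

    random.seed(3)
    G = nx.barabasi_albert_graph(300, 3, seed=3)
    true_source = 42

    # Toy spread: infection time grows with hop distance plus random jitter
    dist_from_source = nx.single_source_shortest_path_length(G, true_source)
    infect_time = {v: d + random.uniform(0.0, 0.5) for v, d in dist_from_source.items()}

    observers = random.sample(list(G.nodes()), 20)
    obs_times = [infect_time[o] for o in observers]

    def score(candidate):
        """Negative squared error between observer times and candidate's hop distances
        (after removing the unknown start-time offset)."""
        d = nx.single_source_shortest_path_length(G, candidate)
        dists = [d[o] for o in observers]
        offset = sum(t - x for t, x in zip(obs_times, dists)) / len(observers)
        return -sum((t - (x + offset)) ** 2 for t, x in zip(obs_times, dists))

    estimate = max(G.nodes(), key=score)
    print("estimated source:", estimate, "true source:", true_source)
    ```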

  19. Accurate Reconstruction of the Roman Circus in Milan by Georeferencing Heterogeneous Data Sources with GIS

    Directory of Open Access Journals (Sweden)

    Gabriele Guidi

    2017-09-01

    Full Text Available This paper presents the methodological approach and the actual workflow for creating the 3D digital reconstruction in time of the ancient Roman Circus of Milan, which is presently covered completely by the urban fabric of the modern city. The diachronic reconstruction is based on a proper mix of quantitative data originating from current 3D surveys and historical sources, such as ancient maps, drawings, archaeological reports, restriction decrees, and old photographs. Where possible, such heterogeneous sources have been georeferenced and stored in a GIS system. In this way the sources have been analyzed in depth, allowing the deduction of geometrical information not explicitly revealed by the available material. A reliable reconstruction of the area in different historical periods has therefore been hypothesized. This research has been carried out in the framework of the project Cultural Heritage Through Time—CHT2, funded by the Joint Programming Initiative on Cultural Heritage (JPI-CH), supported by the Italian Ministry for Cultural Heritage (MiBACT), the Italian Ministry for University and Research (MIUR), and the European Commission.

  20. An Improved Cambridge Filter Pad Extraction Methodology to Obtain More Accurate Water and “Tar” Values: In Situ Cambridge Filter Pad Extraction Methodology

    Directory of Open Access Journals (Sweden)

    Ghosh David

    2014-07-01

    conventional cigarettes is required the in situ extraction methodology must be used for the aerosol of the PMI HTP to obtain accurate NFDPM/”tar” values. This would be for example the case if there were a need to print “tar” yields on packs or compare yields to ceilings. Failure to use the in situ extraction methodology will result in erroneous and overestimated NFDPM/”tar” values.

  1. Minimally processed beetroot waste as an alternative source to obtain functional ingredients.

    Science.gov (United States)

    Costa, Anne Porto Dalla; Hermes, Vanessa Stahl; Rios, Alessandro de Oliveira; Flôres, Simone Hickmann

    2017-06-01

    Large amounts of waste are generated by the minimally processed vegetables industry, such as the waste from beetroot processing. The aim of this study was to determine the best method to obtain flour from minimally processed beetroot waste dried at different temperatures, and to produce a colorant from such waste and assess its stability over 45 days. Beetroot waste dried at 70 °C yields flour with significant antioxidant activity and higher betalain content than flour produced from waste dried at 60 and 80 °C, while chlorination had no impact on the process since microbiological results were consistent for its application. The colorant obtained from beetroot waste showed color stability for 20 days and potential antioxidant activity over the analysis period; thus it can be used as a functional additive to improve the nutritional characteristics and appearance of food products. These results are promising since minimally processed beetroot waste can be used as an alternative source of natural and functional ingredients with high antioxidant activity and betalain content.

  2. CHANGES IN THE QUALITY OF DRESSED CHICKEN OBTAINED FROM DIFFERENT SOURCES DURING FROZEN STORAGE

    Directory of Open Access Journals (Sweden)

    Santosh Kumar HT

    2014-06-01

    Full Text Available The present study examines the preservation quality of dressed chicken procured from different processing sources during storage at –18±1 °C. Breast portions of the dressed birds obtained from three different sources, viz. market/roadside-slaughtered chicken (MSC), retail-slaughtered chicken (RSC), and scientifically slaughtered chicken (SSC), were cut into chunks, divided into 250 g portions, packed in polyethylene bags, stored at –18±1 °C and evaluated at 30-day intervals for changes in quality attributes. Frozen storage had no marked influence on the pH of the samples. SSC samples had a higher extract release volume (15.34±0.08 to 13.45±0.93 ml) than MSC (13.00±0.19 to 9.91±0.97 ml) and RSC samples (13.65±0.24 to 11.70±1.21 ml). There was a significant increase (P<0.05) in thiobarbituric acid values of all three sample types during storage, but the values remained well below the threshold level of spoilage. SSC samples showed lower tyrosine content throughout frozen storage compared with MSC and RSC samples. A significant decline in microbial load, viz. total viable count, coliform count, psychrophilic count and yeast and mould count, was noticed during frozen storage. Organoleptic attributes, viz. appearance, flavour, texture and overall palatability, were not affected by frozen storage, except juiciness in MSC samples, which decreased (P<0.05) from 6.53±0.13 to 5.96±0.11 by 90 days of storage. Although the scientifically slaughtered chicken had better quality, all the sample types could be stored at –18±1 °C for up to 90 days without much deterioration in quality.

  3. The contribution of an asthma diagnostic consultation service in obtaining an accurate asthma diagnosis for primary care patients: results of a real-life study.

    Science.gov (United States)

    Gillis, R M E; van Litsenburg, W; van Balkom, R H; Muris, J W; Smeenk, F W

    2017-05-19

    Previous studies have shown that general practitioners have problems diagnosing asthma accurately, resulting in both under- and overdiagnosis. To support general practitioners in their diagnostic process, an asthma diagnostic consultation service was set up. We evaluated the performance of this service by analysing the (dis)concordance between the general practitioners' working hypotheses and the service's diagnoses, and the possible consequences this had for the patients' pharmacotherapy. In total, 659 patients were included in this study. At the service, the patients' medical history was taken and a physical examination and a histamine challenge test were carried out. We compared the general practitioners' working hypotheses with the service's diagnoses and the change in medication that was incurred. In 52% (n = 340) an asthma diagnosis was excluded. The diagnosis was confirmed in 42% (n = 275). Furthermore, chronic rhinitis was diagnosed in 40% (n = 261) of the patients, whereas this had been noted in 25% (n = 163) by their general practitioner. The adjusted diagnosis resulted in a change of medication for more than half of all patients. In 10% (n = 63) medication was started because of a new asthma diagnosis. The 'one-stop-shop' principle was met for 53% of patients, and 91% (n = 599) were referred back to their general practitioner, mostly within 6 months. Only 6% (n = 41) remained under the control of the asthma diagnostic consultation service because of severe unstable asthma. In conclusion, the asthma diagnostic consultation service helped general practitioners significantly in setting accurate diagnoses for their patients with an asthma hypothesis. This may help diminish the problem of over- and underdiagnosis and may result in more appropriate treatment regimens. SERVICE HELPS GENERAL PRACTITIONERS MAKE ACCURATE DIAGNOSES: A consultation service can

  4. Means for obtaining a metal ion beam from a heavy-ion cyclotron source

    Science.gov (United States)

    Hudson, E.D.; Mallory, M.L.

    1975-08-01

    A description is given of a modification to a cyclotron ion source used in producing a high intensity metal ion beam. A small amount of an inert support gas maintains the usual plasma arc, except that it is necessary for the support gas to have a heavy mass, e.g., xenon or krypton as opposed to neon. A plate, fabricated from the metal (or anything that can be sputtered) to be ionized, is mounted on the back wall of the ion source arc chamber and is bombarded by returning energetic low-charged gas ions that fail to cross the initial accelerating gap between the ion source and the accelerating electrode. Some of the atoms that are dislodged from the plate by the returning gas ions become ionized and are extracted as a useful beam of heavy ions. (auth)

  5. An accurate discontinuous Galerkin method for solving point-source Eikonal equation in 2-D heterogeneous anisotropic media

    Science.gov (United States)

    Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.

    2018-03-01

    Accurate numerical computation of wave traveltimes in heterogeneous media is of major interest for a large range of applications in seismics, such as phase identification, data windowing, traveltime tomography and seismic imaging. A high level of precision is needed for traveltimes and their derivatives in applications which require quantities such as amplitude or take-off angle. Even more challenging is the anisotropic case, where the general Eikonal equation is a quartic in the derivatives of traveltimes. Despite their efficiency on Cartesian meshes, finite-difference solvers are inappropriate when dealing with unstructured meshes and irregular topographies. Moreover, reaching high orders of accuracy generally requires wide stencils and high additional computational load. To go beyond these limitations, we propose a discontinuous-finite-element-based strategy which has the following advantages: (1) the Hamiltonian formalism is general enough for handling the full anisotropic Eikonal equations; (2) the scheme is suitable for any desired high-order formulation or mixing of orders (p-adaptivity); (3) the solver is explicit whatever Hamiltonian is used (no need to find the roots of the quartic); (4) the use of unstructured meshes provides the flexibility for handling complex boundary geometries such as topographies (h-adaptivity) and radiation boundary conditions for mimicking an infinite medium. The point-source factorization principles are extended to this discontinuous Galerkin formulation. Extensive tests in smooth analytical media demonstrate the high accuracy of the method. Simulations in strongly heterogeneous media illustrate the solver robustness to realistic Earth-sciences-oriented applications.
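
    For context, the isotropic eikonal equation and the standard point-source factorization referred to above can be written as follows (a generic textbook form; the paper itself treats the more general anisotropic Hamiltonian):

        \[
          \lVert \nabla T(\mathbf{x}) \rVert = \frac{1}{v(\mathbf{x})}, \qquad
          T(\mathbf{x}) = T_0(\mathbf{x})\,\tau(\mathbf{x}), \qquad
          T_0(\mathbf{x}) = \frac{\lVert \mathbf{x} - \mathbf{x}_s \rVert}{v(\mathbf{x}_s)},
        \]
        % T is the traveltime, v the velocity and x_s the source position; T_0 carries
        % the point-source singularity analytically, so the smooth factor tau is the
        % quantity actually discretized by the solver.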

  6. Obtaining laser safety at a synchrotron radiation user facility: The Advanced Light Source

    International Nuclear Information System (INIS)

    Barat, K.

    1996-01-01

    The Advanced Light Source (ALS) is a US national facility for scientific research and development located at the Lawrence Berkeley National Laboratory in California. The ALS delivers the world's brightest synchrotron radiation in the far ultraviolet and soft X-ray regions of the spectrum. As a user facility it is available to researchers from industry, academia, and laboratories from around the world. Consequently, a wide range of safety concerns becomes involved. This article relates not only to synchrotron facilities but to any user facility. A growing number of US centers are attracting organizations and individuals to use the equipment on site, for a fee. These include synchrotron radiation and/or free-electron facilities, specialty research centers, and laser job shops. Personnel coming to such a facility bring with them a broad spectrum of safety cultures. Upon entering, guests must adapt to the host facility's safety procedures. This article describes a successful method to deal with that responsibility

  7. Development of unfolding method to obtain pin-wise source strength distribution from PWR spent fuel assembly measurement

    International Nuclear Information System (INIS)

    Sitompul, Yos Panagaman; Shin, Hee-Sung; Park, Se-Hwan; Oh, Jong Myeong; Seo, Hee; Kim, Ho Dong

    2013-01-01

    An unfolding method has been developed to obtain a pin-wise source strength distribution of a 14 × 14 pressurized water reactor (PWR) spent fuel assembly. Sixteen measured gamma dose rates at 16 control rod guide tubes of an assembly are unfolded to 179 pin-wise source strengths of the assembly. The method calculates and optimizes five coefficients of the quadratic fitting function for X-Y source strength distribution, iteratively. The pin-wise source strengths are obtained at the sixth iteration, with a maximum difference between two sequential iterations of about 0.2%. The relative distribution of pin-wise source strength from the unfolding is checked using a comparison with the design code (Westinghouse APA code). The result shows that the relative distribution from the unfolding and design code is consistent within a 5% difference. The absolute value of the pin-wise source strength is also checked by reproducing the dose rates at the measurement points. The result shows that the pin-wise source strengths from the unfolding reproduce the dose rates within a 2% difference. (author)
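
    As an illustration of the five-coefficient quadratic fit mentioned above, the sketch below fits S(x, y) = c0 + c1·x + c2·y + c3·x² + c4·y² to a handful of guide-tube values by plain least squares and evaluates it at pin positions. It is a simplified, non-iterative stand-in for the paper's unfolding, and all coordinates and values are hypothetical.

        # Least-squares fit of a five-coefficient quadratic source-strength surface.
        # Simplified stand-in for the iterative unfolding; data below are synthetic.
        import numpy as np

        def design_matrix(xy):
            x, y = xy[:, 0], xy[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x**2, y**2])  # 5 coefficients

        def fit_quadratic_surface(xy_meas, values):
            coeffs, *_ = np.linalg.lstsq(design_matrix(xy_meas), values, rcond=None)
            return coeffs

        # 16 guide-tube measurements -> 179 pin-wise values (shapes only)
        rng = np.random.default_rng(0)
        xy_tubes = rng.uniform(-1.0, 1.0, size=(16, 2))
        dose_rates = 1.0 - 0.3 * (xy_tubes**2).sum(axis=1)       # synthetic example
        coeffs = fit_quadratic_surface(xy_tubes, dose_rates)
        xy_pins = rng.uniform(-1.0, 1.0, size=(179, 2))
        pin_strengths = design_matrix(xy_pins) @ coeffs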

  8. Do Skilled Elementary Teachers Hold Scientific Conceptions and Can They Accurately Predict the Type and Source of Students' Preconceptions of Electric Circuits?

    Science.gov (United States)

    Lin, Jing-Wen

    2016-01-01

    Holding scientific conceptions and having the ability to accurately predict students' preconceptions are a prerequisite for science teachers to design appropriate constructivist-oriented learning experiences. This study explored the types and sources of students' preconceptions of electric circuits. First, 438 grade 3 (9 years old) students were…

  9. Obtaining source current density related to irregularly structured electromagnetic target field inside human body using hybrid inverse/FDTD method.

    Science.gov (United States)

    Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang

    2017-01-01

    The inverse method is inherently suitable for calculating the distribution of source current density related to an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot handle complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method is proposed that can account for the complex field-tissue interactions in the inverse design of the source current density related to an irregularly structured electromagnetic target field. A Huygens' equivalent surface is established as a bridge combining the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field obtained on the Huygens' equivalent surface is then regarded as the new target. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.

  10. Design, operational experiences and beam results obtained with the SNS H- ion source and LEBT at Berkeley Lab

    International Nuclear Information System (INIS)

    Keller, R.; Thomae, R.; Stockli, M.; Welton, R.

    2002-01-01

    The ion source and Low-Energy Beam Transport (LEBT) system that will provide H- ion beams to the Spallation Neutron Source (SNS) Front End and the accelerator chain have been developed into a mature unit that fully satisfies the operational requirements through the commissioning and early operating phases of SNS. Compared to the early R and D version, many features of the ion source have been improved, and reliable operation at 6% duty factor has been achieved, producing beam currents in the 35-mA range and above. LEBT operation proved that the purely electrostatic focusing principle is well suited to injecting the ion beam into the RFQ accelerator, including the steering and pre-chopping functions. This paper will discuss the latest design features of the ion source and LEBT, give performance data for the integrated system, and report on commissioning results obtained with the SNS RFQ and Medium-Energy Beam Transport (MEBT) system. Prospects for further improvements will be outlined in concluding remarks

  11. Vasorelaxant activity of extracts obtained from Apium graveolens:Possible source for vasorelaxant molecules isolation with potential antihypertensive effect

    Institute of Scientific and Technical Information of China (English)

    Vergara-Galicia Jorge; Jimenez-Ramirez Luis Ángel; Tun-Suarez Adrián; Aguirre-Crespo Francisco; Salazar-Gómez Anuar; Estrada-Soto Samuel; Sierra-Ovando Ángel; Hernandez-Nuñez Emmanuel

    2013-01-01

    Objective: To investigate the vasorelaxant effect of organic extracts from Apium graveolens (A. graveolens), which is part of a group of plants subjected to pharmacological and phytochemical study with the purpose of offering it as an ideal source for obtaining lead compounds for the design of new therapeutic agents with potential vasorelaxant and antihypertensive effects. Methods: An ex vivo method was employed to assess the vasorelaxant activity, using rat aortic rings with and without endothelium precontracted with norepinephrine. Results: All extracts caused concentration-dependent relaxation in precontracted aortic rings with and without endothelium; the most active were the dichloromethane and ethyl acetate extracts from A. graveolens. These results suggest that the secondary metabolites responsible for the vasorelaxant activity belong to a group of compounds of medium polarity. Our evidence also showed that the effect induced by the dichloromethane and ethyl acetate extracts from A. graveolens is probably mediated by calcium antagonism. Conclusions: A. graveolens represents an ideal source for obtaining lead compounds for the design of new therapeutic agents with potential vasorelaxant and antihypertensive effects.

  12. Comparative energy content and amino acid digestibility of barley obtained from diverse sources fed to growing pigs

    Directory of Open Access Journals (Sweden)

    Hong Liang Wang

    2017-07-01

    Full Text Available Objective Two experiments were conducted to determine the content of digestible energy (DE) and metabolizable energy (ME) as well as the apparent ileal digestibility (AID) and standardized ileal digestibility (SID) of crude protein (CP) and amino acids (AA) in barley grains obtained from Australia, France or Canada. Methods In Exp. 1, 18 growing barrows (Duroc×Landrace×Yorkshire; 31.5±3.2 kg) were individually placed in stainless-steel metabolism crates (1.4×0.7×0.6 m) and randomly allotted to 1 of 3 test diets. In Exp. 2, eight crossbred pigs (30.9±1.8 kg) were allotted to a replicated 3×4 Youden Square design with three periods and four diets. Two pigs received each diet during each test period. The diets included one nitrogen-free diet and three test diets. Results The relative amounts of gross energy (GE), CP, and all AA in the Canadian barley were higher than those in the Australian and French barley, while higher concentrations of neutral detergent fiber, acid detergent fiber, total dietary fiber, insoluble dietary fiber and β-glucan as well as lower concentrations of GE and ether extract were observed in the French barley compared with the other two barley sources. The DE and ME as well as the SID of histidine, isoleucine, leucine and phenylalanine in the Canadian barley were higher (p<0.05) than those in the French barley but did not differ from the Australian barley. Conclusion Differences in the chemical composition, energy content and the SID and AID of AA were observed among barley sources obtained from the three countries. The feeding value of barley from Canada and Australia was superior to that of barley obtained from France, which is important information for developing feeding systems for growing pigs where imported grains are used.

  13. Comparative energy content and amino acid digestibility of barley obtained from diverse sources fed to growing pigs.

    Science.gov (United States)

    Wang, Hong Liang; Shi, Meng; Xu, Xiao; Ma, Xiao Kang; Liu, Ling; Piao, Xiang Shu

    2017-07-01

    Two experiments were conducted to determine the content of digestible energy (DE) and metabolizable energy (ME) as well as the apparent ileal digestibility (AID) and standardized ileal digestibility (SID) of crude protein (CP) and amino acids (AA) in barley grains obtained from Australia, France or Canada. In Exp. 1, 18 growing barrows (Duroc×Landrace×Yorkshire; 31.5±3.2 kg) were individually placed in stainless-steel metabolism crates (1.4×0.7×0.6 m) and randomly allotted to 1 of 3 test diets. In Exp. 2, eight crossbred pigs (30.9±1.8 kg) were allotted to a replicated 3×4 Youden Square design with three periods and four diets. Two pigs received each diet during each test period. The diets included one nitrogen-free diet and three test diets. The relative amounts of gross energy (GE), CP, and all AA in the Canadian barley were higher than those in the Australian and French barley, while higher concentrations of neutral detergent fiber, acid detergent fiber, total dietary fiber, insoluble dietary fiber and β-glucan as well as lower concentrations of GE and ether extract were observed in the French barley compared with the other two barley sources. The DE and ME as well as the SID of histidine, isoleucine, leucine and phenylalanine in the Canadian barley were higher (p<0.05) than those in the French barley but did not differ from the Australian barley. Differences in the chemical composition, energy content and the SID and AID of AA were observed among barley sources obtained from the three countries. The feeding value of barley from Canada and Australia was superior to that of barley obtained from France, which is important information for developing feeding systems for growing pigs where imported grains are used.
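
    For reference, the apparent and standardized ileal digestibility values reported above are conventionally defined as follows (standard formulas from swine nutrition, not quoted from the paper; the basal endogenous loss is the quantity estimated with the nitrogen-free diet):

        \[
          \mathrm{AID}\,(\%) =
            \frac{\mathrm{AA}_{\mathrm{intake}} - \mathrm{AA}_{\mathrm{ileal\ outflow}}}
                 {\mathrm{AA}_{\mathrm{intake}}} \times 100, \qquad
          \mathrm{SID}\,(\%) =
            \mathrm{AID} + \frac{\mathrm{AA}_{\mathrm{basal\ endogenous}}}
                                {\mathrm{AA}_{\mathrm{intake}}} \times 100.
        \]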

  14. Preparation and electrical properties of boron and boron phosphide films obtained by gas source molecular beam deposition

    Energy Technology Data Exchange (ETDEWEB)

    Kumashiro, Y.; Yokoyama, T.; Sakamoto, T.; Fujita, T. [Yokohama National Univ. (Japan)

    1997-10-01

    Boron and boron phosphide films were prepared by gas source molecular beam deposition on sapphire crystal at various substrate temperatures up to 800 °C using cracked B₂H₆ (2% in H₂) at 300 °C and cracked PH₃ (20% in H₂) at 900 °C. The substrate temperatures and gas flow rates of the reactant gases determined the film growth. The boron films with amorphous structure are p type. Increasing growth times lead to increasing mobilities and decreasing carrier concentrations. Boron phosphide film with the maximum P/B ratio is obtained at a substrate temperature of 600 °C; below and above this temperature the films become phosphorus deficient, due to insufficient supply of phosphorus and to thermal desorption of the phosphorus as P₂, respectively, but they are all n-type conductors due to phosphorus vacancies.

  15. Comparison of Keratometry Obtained by a Swept Source OCT-Based Biometer with a Standard Optical Biometer and Scheimpflug Imaging.

    Science.gov (United States)

    Asena, Leyla; Akman, Ahmet; Güngör, Sirel Gür; Dursun Altınörs, Dilek

    2018-04-09

    To assess the agreement of a swept-source optical coherence tomography (SS-OCT) based biometer with a standard IOLMaster device and Scheimpflug imaging (SI) in acquiring keratometric measurements in cataract patients. In this prospective comparative study, 101 eyes of 101 cataract surgery candidates, aged 24-81 years, were sequentially examined using the three devices. Keratometry values at the flat (K1) and steep (K2) axes, mean corneal power (Km) and the magnitude of corneal astigmatism, as well as the J0 and J45 vectoral components of astigmatism obtained with the SS-OCT based biometer (IOLMaster 700), were compared with those obtained with the IOLMaster 500 and SI. The agreement between measurements was evaluated by the Bland-Altman method, intraclass correlation coefficients (ICCs) and repeated-measures analysis of variance. Mean K1 values from the three devices were similar (p = 0.09). Mean K2 and Km values from the IOLMaster 700 were higher than those from SI and lower than those from the IOLMaster 500 (p = 0.04 for K2 and p = 0.02 for Km). There was a strong, statistically significant correlation between K1, K2, Km and the magnitude of astigmatism obtained with all devices (r ≥ 0.80), and the agreement between devices was excellent for keratometric measurements. Mean K2, Km and astigmatism measurements from the IOLMaster 700 were lower than those from the IOLMaster 500 and higher than those from SI. However, the differences were quite small and are not expected to affect the final IOL power.
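
    A minimal sketch of the Bland-Altman limits-of-agreement calculation used above is given below; the ICC and repeated-measures analysis of variance are not reproduced, and the Km values shown are hypothetical.

        # Bland-Altman agreement between paired keratometry readings from two devices.
        import numpy as np

        def bland_altman(measure_a, measure_b):
            a = np.asarray(measure_a, dtype=float)
            b = np.asarray(measure_b, dtype=float)
            diff = a - b
            bias = diff.mean()                          # mean inter-device difference
            sd = diff.std(ddof=1)
            return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement

        # hypothetical Km values (diopters) for five eyes measured on two biometers
        km_device1 = [43.1, 44.0, 42.7, 45.2, 43.8]
        km_device2 = [43.0, 44.1, 42.9, 45.0, 43.7]
        print(bland_altman(km_device1, km_device2))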

  16. Obtainment and evaluation of antisera raised against crotalic whole venom or crotoxin irradiated in a 60Co source

    International Nuclear Information System (INIS)

    Paula, Regina A. de.

    1995-01-01

    Snake bite is a major public health problem in our country. Accidents involving snakes of the Crotalus genus are the most severe; about 1% of the victims die without serotherapy. The antivenoms are obtained from hyperimmune horse plasma. During production these animals show signs of envenoming that result in a decrease of organic resistance; in addition, maintaining the horses is very expensive and the producers are few, so serum production is restricted. Many techniques that could reduce venom toxicity and increase serum production using chemical and physical agents have been studied. Gamma rays are an excellent tool to detoxify venoms and toxins: they can modify protein structures, decreasing lethal, toxic and enzymatic activities without modifying immunogenicity. It is therefore important to evaluate serum production in rabbits using gamma-ray-detoxified venom and crotoxin as immunogens, and their power as reagents in immunoassays. In order to obtain the antisera, Crotalus durissus terrificus whole venom or isolated crotoxin was irradiated with 2,000 Gy in a 60Co source, in a 150 mM NaCl solution, and inoculated into rabbits. Serum production was screened by immunoprecipitation, immunoenzymatic (ELISA) and immunoradiometric (IRMA) assays. Specificity was studied by immunoelectrophoresis, ELISA and western blot techniques. The neutralizing power was evaluated by neutralization of the phospholipase A2 activity of the toxin in vitro. The antisera were used as reagents in antigen-capture ELISA and IRMA immunoassays to detect circulating antigens in sera of mice experimentally inoculated with crotalic venom or crotoxin. The results showed that both the detoxified venom and the detoxified crotoxin were good immunogens, able to induce antibodies that recognize non-irradiated venom or isolated crotoxin. The data suggest that those antibodies present greater specificity and higher in vitro neutralizing power when compared with commercial

  17. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    Energy Technology Data Exchange (ETDEWEB)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu [National Radio Astronomy Observatory, Socorro, NM-87801 (United States)

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy to which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
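
    For readers unfamiliar with the wideband formalism, MT-MFS models each pixel's spectrum with a Taylor expansion about the reference frequency and derives the spectral index from the first two coefficient images (standard formulation, stated here for context):

        \[
          I(\nu) = I(\nu_0)\left(\frac{\nu}{\nu_0}\right)^{\alpha}
          \;\approx\; \sum_{t=0}^{N_t-1} I_t \left(\frac{\nu-\nu_0}{\nu_0}\right)^{t},
          \qquad \alpha \approx \frac{I_1}{I_0},
        \]
        % I_0 and I_1 are the first two Taylor-coefficient images recovered by the
        % multi-term deconvolution.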

  18. A generalized operational formula based on total electronic densities to obtain 3D pictures of the dual descriptor to reveal nucleophilic and electrophilic sites accurately on closed-shell molecules.

    Science.gov (United States)

    Martínez-Araya, Jorge I

    2016-09-30

    By means of conceptual density functional theory, the so-called dual descriptor (DD) has been adapted for use in any closed-shell molecule that presents degeneracy in its frontier molecular orbitals. The latter is of paramount importance because a correct description of local reactivity allows prediction of the most favorable sites on a molecule for nucleophilic or electrophilic attack; on the contrary, an incomplete description of local reactivity might have serious consequences, particularly for experimental chemists who need insight into the reactivity of chemical reagents before using them in synthesis to obtain a new compound. In the present work, the old approach based only on the electronic densities of frontier molecular orbitals is replaced by a more accurate procedure that uses total electronic densities, thus keeping consistency with the essential principle of DFT, in which the electronic density is the fundamental variable rather than the molecular orbitals. As a result of the present work, the DD is able to properly describe local reactivities solely in terms of total electronic densities. To test the proposed operational formula, 12 very common molecules were selected for which the original definition of the DD was not able to describe local reactivities properly. The ethylene molecule was additionally used to test the capability of the proposed operational formula to reveal a correct local reactivity even in the absence of degeneracy in the frontier molecular orbitals. © 2016 Wiley Periodicals, Inc.
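
    In finite-difference form, the dual descriptor written with total electronic densities (the quantity the abstract argues should replace frontier-orbital densities) reads as follows; this is the generic expression, not the paper's full generalized operational formula for degenerate frontier orbitals:

        \[
          f^{(2)}(\mathbf{r}) = f^{+}(\mathbf{r}) - f^{-}(\mathbf{r})
          \;\simeq\; \rho_{N+1}(\mathbf{r}) + \rho_{N-1}(\mathbf{r}) - 2\,\rho_{N}(\mathbf{r}),
        \]
        % rho_N, rho_{N+1} and rho_{N-1} are total electronic densities of the N-,
        % (N+1)- and (N-1)-electron systems at fixed geometry; regions with f^(2) > 0
        % favour nucleophilic attack and regions with f^(2) < 0 favour electrophilic attack.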

  19. Comparison of TG‐43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes

    Science.gov (United States)

    Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.

    2016-01-01

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code: MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460

  20. Comparison of TG-43 dosimetric parameters of brachytherapy sources obtained by three different versions of MCNP codes.

    Science.gov (United States)

    Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S

    2016-03-08

    Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
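
    The radial dose function gL(r) compared in the two records above is defined in the TG-43 formalism as (standard definition, given for reference):

        \[
          g_L(r) = \frac{\dot{D}(r,\theta_0)\,G_L(r_0,\theta_0)}
                        {\dot{D}(r_0,\theta_0)\,G_L(r,\theta_0)},
          \qquad r_0 = 1\ \mathrm{cm}, \quad \theta_0 = \pi/2,
        \]
        % D-dot is the dose rate in water and G_L the line-source geometry function.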

  1. Study of the precision of the gamma-ray burst source locations obtained with the Ulysses/PVO/CGRO network

    International Nuclear Information System (INIS)

    Cline, T.L.; Hurley, K.C.; Sommer, M.; Boer, M.; Niel, M.; Fishman, G.; Kouveliotou, C.; Meegan, C.; Paciesas, W.S.; Wilson, R.B.; Laros, J.G.; Klebesadel, R.W.

    1994-01-01

    The interplanetary gamma-ray burst network of the Ulysses, Compton-GRO, and Pioneer-Venus Orbiter missions has made source localizations with fractional-arc-minute precision for a number of events, and, with auxiliary data, will provide useful annular-segment loci for many more. These studies have thus far yielded one possible counterpart, a ROSAT X-ray association with the 1992 May 1 burst. Similar to the historic 1978 November 19 burst/Einstein association, this possibility gives hope that network studies will provide a fundamental source clue for 'classical' bursts, just as a second supernova remnant in a network-defined source field has done for SGR events
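
    The localization geometry behind such network studies follows the standard triangulation relation (not quoted from the abstract): the difference in burst arrival times at two spacecraft confines the source to an annulus on the sky,

        \[
          \cos\theta = \frac{c\,\Delta t}{D},
        \]
        % Delta t is the measured arrival-time difference, D the spacecraft separation
        % and theta the half-angle of the annulus about the inter-spacecraft vector;
        % intersecting annuli from several baselines yields the small error box.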

  2. Agronomic Performance of Flue-Cured Tobacco F1 Hybrids Obtained with Different Sources of Male Sterile Cytoplasm

    Directory of Open Access Journals (Sweden)

    Berbec A

    2014-12-01

    Full Text Available Four cytoplasmic male sterile (cms) F1 flue-cured hybrids of cv. Wislica × cv. Virginia Golta (VG), the male-fertile analogue and the parental varieties were tested at two locations in Poland in a replicated field trial. The cms sources in the hybrids were N. suaveolens, N. amplexicaulis, N. bigelovii and a N. tabacum cms mutant. Under the slight to moderate pressure from black root rot present at the trial sites, the hybrids showed the moderate tolerance of the disease characteristic of VG, as opposed to the medium-strong susceptibility of Wislica. Apart from the effect of black root rot tolerance, the vegetative vigor of the hybrids (plant height, leaf size, earliness) was affected by cytoplasm source. The F1 hybrid with N. suaveolens cytoplasm flowered approximately three days later than the remaining hybrids. Of the cms hybrids tested, cms N. bigelovii produced the tallest plants with the largest mid-position leaves. Yields of cured leaves were largely influenced by black root rot and were generally higher in VG and in the hybrids than in Wislica. Leaf yields and curability were generally little affected by cms source under low pressure from black root rot. At the site with a relatively high level of black root rot infestation, the yields of cms N. suaveolens were slightly lower but the percentage of light grades slightly higher compared with those of the other cms hybrids. Cms N. suaveolens was the best hybrid in terms of money returns in the low black root rot field, but it was the poorest performer under high pressure from the disease. Contents of nitrogen, sugars, nicotine and ash were little affected by cms source. There was an increased incidence of potato virus Y (PVY) and white spots in cms N. suaveolens and, to a lesser extent, in cms N. bigelovii compared with the remaining disease-free entries.

  3. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman to speak without bias and prejudice for the public good; technical jargon with unclear definitions within the radioactive nomenclature; and a scientific community that keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Other concerns include the numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the resources of the Federal and State Health Agencies to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  4. [Accuracy of attenuation coefficient obtained by 137Cs single-transmission scanning in PET: comparison with conventional germanium line source].

    Science.gov (United States)

    Matsumoto, Keiichi; Kitamura, Keishi; Mizuta, Tetsuro; Shimizu, Keiji; Murase, Kenya; Senda, Michio

    2006-02-20

    Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energy. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of the attenuation coefficients measured on the transmission images, and to compare the results with the conventional germanium line-source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and the ECAT EXACT HR+ (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used the Cs-137 point source and the ECAT HR+ used the Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings, the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR+ was 350-650 keV. Transmission data for a uniform phantom with cross-sectional areas of 201 cm², 314 cm², 628 cm² (two 20 cm diameter phantoms in apposition) and 943 cm² (three 20 cm diameter phantoms stacked) were acquired without emission activity. First, we evaluated the attenuation coefficients of the two types of transmission scan using region of interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for the Cs-137 transmission images using the same analysis. The segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom. In the Ge-68/Ga-68 experiment, the difference in mean attenuation

  5. Accuracy of attenuation coefficient obtained by 137Cs single-transmission scanning in PET. Comparison with conventional germanium line source

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Shimizu, Keiji; Senda, Michio; Kitamura, Keishi; Mizuta, Tetsuro; Murase, Kenya

    2006-01-01

    Transmission scanning can be successfully performed with a Cs-137 single-photon-emitting point source for three-dimensional PET imaging. This method is effective for postinjection transmission scanning because of the difference in photon energy. However, scatter contamination in the transmission data lowers the measured attenuation coefficients. The purpose of this study was to investigate the influence of object scatter on the accuracy of the attenuation coefficients measured on the transmission images, and to compare the results with the conventional germanium line-source method. Two different types of PET scanner, the SET-3000 G/X (Shimadzu Corp.) and the ECAT EXACT HR+ (Siemens/CTI), were used. For transmission scanning, the SET-3000 G/X used the Cs-137 point source and the ECAT HR+ used the Ge-68/Ga-68 line source. With the SET-3000 G/X, we performed transmission measurements at two energy gate settings, the standard 600-800 keV as well as 500-800 keV. The energy gate setting of the ECAT HR+ was 350-650 keV. Transmission data for a uniform phantom with cross-sectional areas of 201 cm², 314 cm², 628 cm² (two 20 cm diameter phantoms in apposition) and 943 cm² (three 20 cm diameter phantoms stacked) were acquired without emission activity. First, we evaluated the attenuation coefficients of the two types of transmission scan using region of interest (ROI) analysis. In addition, we evaluated the attenuation coefficients with and without segmentation for the Cs-137 transmission images using the same analysis. The segmentation method was a histogram-based soft-tissue segmentation process that can also be applied to reconstructed transmission images. In the Cs-137 experiment, the maximum underestimation was 3% without segmentation, which was reduced to less than 1% with segmentation at the center of the largest phantom. In the Ge-68/Ga-68 experiment, the difference in mean attenuation coefficients
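
    The attenuation coefficients discussed in the two records above are reconstructed from the basic transmission-scan relation (standard relation, given for context):

        \[
          \frac{B}{T} = \exp\!\left(\int_{\mathrm{LOR}} \mu(\mathbf{x})\,\mathrm{d}l\right),
        \]
        % B and T are the blank- and transmission-scan counts along a line of response;
        % the reconstructed mu images are the ones whose ROI means are compared, and any
        % scatter contamination in T biases mu downward, as reported.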

  6. "Right-sourcing" or obtaining the correct balance between in-house activity and the purchase of external services

    CERN Document Server

    Ninin, P

    2003-01-01

    During the last few years, and more particularly to face the LHC construction, several Information Technology activities of the ST Division have been outsourced. This concerns various domains such as desktop support, application software development, system maintenance, as well as turn-key control systems. Among other motivations, this tactical approach was seen as a way to achieve higher product quality and service rationalization. The success of outsourcing IT activities resides in mastering a complex process that includes, among other things, specification, purchasing, negotiation and contract management skills on top of advanced technical knowledge. The perception of the success of outsourcing also differs from one stakeholder to another. Nowadays, as CERN faces a cash-flow issue, in-sourcing is being investigated as an alternative path for savings. From this experience and a survey of current practice in industry, this paper analyses various parameters that should be considered to find the correct balance b...

  7. Obtaining better performance in the measurement-device-independent quantum key distribution with heralded single-photon sources

    Science.gov (United States)

    Zhou, Xing-Yu; Zhang, Chun-Hui; Zhang, Chun-Mei; Wang, Qin

    2017-11-01

    Measurement-device-independent quantum key distribution (MDI-QKD) has been widely investigated due to its remarkable advantages in achievable transmission distance and practical security. However, its relatively low key generation rate limits real-life implementations. In this work, we adopt the newly proposed four-intensity decoy-state scheme [Phys. Rev. A 93, 042324 (2016), 10.1103/PhysRevA.93.042324] to study the performance of MDI-QKD with heralded single-photon sources (HSPS). The corresponding simulation results demonstrate that the four-intensity decoy-state scheme combined with HSPS can drastically improve both the key generation rate and the transmission distance of MDI-QKD, which may be very promising for future MDI-QKD systems.

  8. Temperature-responsive grafted polymer brushes obtained from renewable sources with potential application as substrates for tissue engineering

    Science.gov (United States)

    Raczkowska, Joanna; Stetsyshyn, Yurij; Awsiuk, Kamil; Lekka, Małgorzata; Marzec, Monika; Harhay, Khrystyna; Ohar, Halyna; Ostapiv, Dmytro; Sharan, Mykola; Yaremchuk, Iryna; Bodnar, Yulia; Budkowski, Andrzej

    2017-06-01

    Novel temperature-responsive poly(cholesteryl methacrylate) (PChMa) coatings derived from renewable sources were synthesized and characterized. Temperature-induced changes in wettability were accompanied by surface roughness modifications, traced with AFM. Topographies recorded for temperatures increasing from 5 to 25 °C showed a slight but noticeable increase in the calculated root mean square (RMS) roughness by a factor of 1.5, suggesting a horizontal rearrangement in the structure of the PChMa coatings. Another structural reordering was observed in the 55-85 °C temperature range: the recorded topography changed noticeably from smooth at 55 °C to very structured and rough at 60 °C and eventually returned to relatively smooth at 85 °C. In addition, temperature transitions of PChMa molecules were revealed by DSC measurements. The biocompatibility of the PChMa-grafted coatings was shown for cultures of granulosa cells and a non-malignant bladder cancer cell line (HCV29) culture.

  9. Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems

    Science.gov (United States)

    Longmore, S. N.; Collins, R. P.; Pfeifer, S.; Fox, S. E.; Mulero-Pazmany, M.; Bezombes, F.; Goodwind, A.; de Juan Ovelar, M.; Knapen, J. H.; Wich, S. A.

    2017-02-01

    In this paper we describe an unmanned aerial system equipped with a thermal-infrared camera and software pipeline that we have developed to monitor animal populations for conservation purposes. Taking a multi-disciplinary approach to tackle this problem, we use freely available astronomical source detection software and the associated expertise of astronomers, to efficiently and reliably detect humans and animals in aerial thermal-infrared footage. Combining this astronomical detection software with existing machine learning algorithms into a single, automated, end-to-end pipeline, we test the software using aerial video footage taken in a controlled, field-like environment. We demonstrate that the pipeline works reliably and describe how it can be used to estimate the completeness of different observational datasets to objects of a given type as a function of height, observing conditions etc. - a crucial step in converting video footage to scientifically useful information such as the spatial distribution and density of different animal species. Finally, having demonstrated the potential utility of the system, we describe the steps we are taking to adapt the system for work in the field, in particular systematic monitoring of endangered species at National Parks around the world.
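
    As an illustration of applying astronomical source detection to thermal frames, the sketch below uses photutils' DAOStarFinder on a synthetic image; this library choice and the parameters are assumptions for demonstration purposes and not necessarily the detection software used by the authors.

        # Detect warm, point-like objects in a thermal-IR frame with an
        # astronomical source finder (photutils used here as an illustrative choice).
        import numpy as np
        from astropy.stats import sigma_clipped_stats
        from photutils.detection import DAOStarFinder

        def detect_warm_objects(frame, fwhm_pix=4.0, nsigma=5.0):
            """frame: 2-D numpy array of thermal-camera pixel values."""
            mean, median, std = sigma_clipped_stats(frame, sigma=3.0)  # robust background
            finder = DAOStarFinder(fwhm=fwhm_pix, threshold=nsigma * std)
            return finder(frame - median)          # astropy Table of detections, or None

        # synthetic frame with one warm "animal"
        rng = np.random.default_rng(1)
        frame = rng.normal(100.0, 2.0, size=(256, 256))
        frame[120:124, 60:64] += 40.0
        sources = detect_warm_objects(frame)
        if sources is not None:
            print(sources["xcentroid", "ycentroid", "peak"])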

  10. Erythrocytes and cell line-based assays to evaluate the cytoprotective activity of antioxidant components obtained from natural sources.

    Science.gov (United States)

    Botta, Albert; Martínez, Verónica; Mitjans, Montserrat; Balboa, Elena; Conde, Enma; Vinardell, M Pilar

    2014-02-01

    Oxidative stress can damage cellular components including DNA, proteins and lipids, and may cause several skin diseases. To protect against this damage, and to address consumers' appeal for natural products, antioxidants obtained from algal and plant extracts are being proposed for incorporation into formulations. Thus, the development of reliable, quick and economical in vitro methods to study the cytoactivity of these products is a meaningful requirement. A combination of erythrocyte and cell line-based assays was performed on two extracts from Sargassum muticum, one from Ulva lactuca, and one from Castanea sativa. Antioxidant properties were assessed in erythrocytes by the TBARS and AAPH assays, and cytotoxicity and antioxidant cytoprotection were assessed in HaCaT and 3T3 cells by the MTT assay. The extracts showed no antioxidant activity in the TBARS assay, whereas their antioxidant capacity in the AAPH assay was demonstrated. In the cytotoxicity assays, the extracts showed low toxicity, with IC50 values higher than 200 μg/mL. The C. sativa extract showed the most favourable antioxidant properties in the antioxidant cytoprotection assays, while the S. muticum and U. lactuca extracts showed a slight antioxidant activity. This battery of methods was useful to characterise the biological antioxidant properties of these natural extracts. Copyright © 2013 Elsevier Ltd. All rights reserved.

  11. Correlation of lithium levels between drinking water obtained from different sources and scalp hair samples of adult male subjects.

    Science.gov (United States)

    Baloch, Shahnawaz; Kazi, Tasneem Gul; Afridi, Hassan Imran; Baig, Jameel Ahmed; Talpur, Farah Naz; Arain, Muhammad Balal

    2017-10-01

    There is some evidence that natural levels of lithium (Li) in drinking water may have a protective effect on neurological health. In the present study, we evaluated the Li levels in drinking water of different origins and in bottled mineral water. To evaluate the association between lithium levels in drinking water and human health, scalp hair samples were collected from male subjects (25-45 years) consuming drinking water obtained from ground water (GW), municipal treated water (MTW) and bottled mineral water (BMW) in rural and urban areas of Sindh, Pakistan. The water samples were pre-concentrated five- to tenfold at 60 °C using a temperature-controlled electric hot plate, while the scalp hair samples were oxidized by acid in a microwave oven, prior to determination by flame atomic absorption spectrometry. The Li content in the different types of drinking water, GW, MTW and BMW, was found in the ranges 5.12-22.6, 4.2-16.7 and 0.0-16.3 µg/L, respectively. The Li concentration in the scalp hair samples of adult males consuming ground water, ranging from 292 to 393 μg/kg, was higher than in those drinking municipal treated and bottled mineral water (212-268 and 145-208 μg/kg, respectively).

  12. Nectandra grandiflora By-Products Obtained by Alternative Extraction Methods as a Source of Phytochemicals with Antioxidant and Antifungal Properties

    Directory of Open Access Journals (Sweden)

    Daniela Thomas da Silva

    2018-02-01

    Full Text Available Nectandra grandiflora Nees (Lauraceae) is a Brazilian native tree recognized for its durable wood and the antioxidant compounds of its leaves. Taking into account that the forest industry offers the opportunity to recover active compounds from its residues and by-products, this study identifies and underlines the potential of natural products from Nectandra grandiflora that can add value to forest exploitation. The study shows the effect of three different extraction methods, conventional (CE), ultrasound-assisted (UAE) and microwave-assisted (MAE), on the chemical yields, phenolic and flavonoid composition, physical characteristics, and antioxidant and antifungal properties of Nectandra grandiflora leaf extracts (NGLE). Results indicate that CE achieves the highest phytochemical extraction yield (22.16%), but with a chemical composition similar to that obtained by UAE and MAE. Moreover, CE also provided superior thermal stability of the NGLE. The phenolic composition of the NGLE was confirmed first by colorimetric assays and infrared spectra and then by chromatographic analysis, in which quercetin-3-O-rhamnoside was detected as the major compound (57.75–65.14%). Furthermore, the antioxidant capacity of the NGLE was not altered by the extraction methods, with high radical inhibition found in all NGLE (>80% at 2 mg/mL). Regarding the antifungal activity, it was observed that the NGLE possess effective bioactive compounds, which inhibit Aspergillus niger growth.

  13. A new natural source for obtainment of inulin and fructo-oligosaccharides from industrial waste of Stevia rebaudiana Bertoni.

    Science.gov (United States)

    Lopes, Sheila Mara Sanches; Krausová, Gabriela; Carneiro, José Walter Pedroza; Gonçalves, José Eduardo; Gonçalves, Regina Aparecida Correia; de Oliveira, Arildo José Braz

    2017-06-15

    Fructan-type inulin and fructo-oligosaccharides (FOS) are reserve polysaccharides that offer an interesting combination of nutritional and technological properties for the food industry. Stevia rebaudiana is used commercially in the sweetener industry due to the high content of steviol glycosides in its leaves. With the aim of making use of industrial waste, the objective of the present study was to isolate, characterize and evaluate the prebiotic activity of inulin and FOS from S. rebaudiana stems. The chemical characterization of the samples by GC-MS, NMR and off-line ESI-MS showed that it was possible to obtain inulin molecules with a degree of polymerization (DP) of 12, and FOS with a DP<6, from the S. rebaudiana stems. The in vitro prebiotic assay of these molecules indicates a strain specificity in the capacity to ferment fructans as substrate: FOS molecules with a low DP are preferentially fermented by the beneficial microbiota over inulin molecules with a higher DP. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Review of the scientific results obtained at the research reactor-booster IBR-30 and the Program of investigations at the neutron source IREN

    CERN Document Server

    Furman, W

    2002-01-01

    A brief review of the main scientific results obtained at the research reactor-booster IBR-30 and its predecessors, IBR and IBR-1, for the period 1960-2001 is presented. The theses of the scientific program for the IREN resonance neutron source, the upgrade of IBR-30, are outlined.

  15. Detection of a putative virulence cadF gene of Campylobacter jejuni obtained from different sources using a microfabricated PCR chip

    DEFF Research Database (Denmark)

    Poulsen, Claus Riber; El-Ali, Jamil; Perch-Nielsen, Ivan R.

    2005-01-01

    A microfabricated polymerase chain reaction (PCR) chip made of epoxy-based photoresist (SU-8) was recently designed and developed. In this study, we tested whether the PCR chip could be used for rapid detection of a potential virulence determinant, the cadF gene of Campylobacter jejuni. PCR...... was performed using published PCR conditions and primers for the C. jejuni cadF gene. DNA isolated from a C. jejuni reference strain CCUG 11284, C. jejuni isolates obtained from different sources (chicken and human), and Campylobacter whole cells were used as templates in the PCR tests. Conventional PCR in tube...... was used as the control. After optimization of the PCR chip, PCR positives on the chip were obtained from 91.0% (10/11) of the tested chips. A fast transition time was achieved with the PCR chip, and therefore a faster cycling time and a shorter PCR program were obtained. Using the PCR chip, the cadF gene...

  16. Literature review on production process to obtain extra virgin olive oil enriched in bioactive compounds. Potential use of byproducts as alternative sources of polyphenols.

    Science.gov (United States)

    Frankel, Edwin; Bakhouche, Abdelhakim; Lozano-Sánchez, Jesús; Segura-Carretero, Antonio; Fernández-Gutiérrez, Alberto

    2013-06-05

    This review describes the olive oil production process to obtain extra virgin olive oil (EVOO) enriched in polyphenol and byproducts generated as sources of antioxidants. EVOO is obtained exclusively by mechanical and physical processes including collecting, washing, and crushing of olives, malaxation of olive paste, centrifugation, storage, and filtration. The effect of each step is discussed to minimize losses of polyphenols from large quantities of wastes. Phenolic compounds including phenolic acids, alcohols, secoiridoids, lignans, and flavonoids are characterized in olive oil mill wastewater, olive pomace, storage byproducts, and filter cake. Different industrial pilot plant processes are developed to recover phenolic compounds from olive oil byproducts with antioxidant and bioactive properties. The technological information compiled in this review will help olive oil producers to improve EVOO quality and establish new processes to obtain valuable extracts enriched in polyphenols from byproducts with food ingredient applications.

  17. Benchmarking singlet and triplet excitation energies of molecular semiconductors for singlet fission: Tuning the amount of HF exchange and adjusting local correlation to obtain accurate functionals for singlet-triplet gaps

    Science.gov (United States)

    Brückner, Charlotte; Engels, Bernd

    2017-01-01

    Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps, which are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it, yet it requires efficient methods that accurately predict singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is addressed in detail.

  18. The use of prescription medications obtained from non-medical sources among immigrant Latinos in the rural southeastern U.S.

    Science.gov (United States)

    Song, Eun-Young; Leichliter, Jami S; Bloom, Frederick R; Vissman, Aaron T; O'Brien, Mary Claire; Rhodes, Scott D

    2012-05-01

    We explored the relationships between behavioral, socio-cultural, and psychological characteristics and the use of prescription medications obtained from non-medical sources among predominantly Spanish-speaking Latinos in the rural southeastern U.S. Respondent-driven sampling (RDS) was used to identify, recruit, and enroll immigrant Latinos to participate in an interviewer-administered assessment. A total of 164 respondents were interviewed in 2009. The average age was 34 years, 64% of respondents were female, and nearly 85% reported being from Mexico. Unweighted and RDS-weighted prevalence estimates of any non-medical source of prescription medications were 22.6% and 15.1%, respectively. In multivariable modeling, respondents who perceived their documentation status as a barrier to health care and those with higher educational attainment were significantly more likely to report use of non-medical sources. Interventions are needed to increase knowledge of eligibility for sources of medical care and treatment and to ensure culturally congruent services for immigrant communities in the U.S.

  19. Theoretical galactic cosmic ray electron spectrum obtained for sources of varying geometry; Spectre theorique des electrons du rayonnement cosmique dans la galaxie obtenu pour des sources a geometrie variable

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, M E [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1969-07-01

    Jokipii and Meyer have recently obtained an electron density energy spectrum of the cosmic rays, originating in the Galaxy, using integral solutions of the steady state transfer equations, by considering a circular cylindric galactic disc as source and approximating the resulting fourth order integral. In this report, we present general results, obtained by using an arbitrary circular cylindric source, without restricting ourselves to the galactic disc. The integrals are treated exactly. The conclusions of Jokipii and Meyer form special cases of these results. We also obtain an exponential energy variation which, at the moment, is not observed experimentally. The second part of this work deals with more complicated, but perhaps more realistic models of elliptic cylindric and ellipsoidal galactic disc sources. One may also note that a very large source concentrated in a very small region gives a spectrum not unlike that for a small source distributed throughout a large volume. Finally, it may be remarked that the model adopted is much less restrictive than the artificial conception of 'leakage time' followed by other workers. (author)

  20. Fermentation Results and Chemical Composition of Agricultural Distillates Obtained from Rye and Barley Grains and the Corresponding Malts as a Source of Amylolytic Enzymes and Starch.

    Science.gov (United States)

    Balcerek, Maria; Pielech-Przybylska, Katarzyna; Dziekońska-Kubczak, Urszula; Patelski, Piotr; Strąk, Ewelina

    2016-10-01

    The objective of this study was to determine the efficiency of rye and barley starch hydrolysis in mashing processes using cereal malts as a source of amylolytic enzymes and starch, and to establish the volatile profile of the obtained agricultural distillates. In addition, the effects of the pretreatment method of unmalted cereal grains on the physicochemical composition of the prepared mashes, fermentation results, and the composition of the obtained distillates were investigated. The raw materials used were unmalted rye and barley grains, as well as the corresponding malts. All experiments were first performed on a semi-technical scale, and then verified under industrial conditions in a Polish distillery. The fermentable sugars present in sweet mashes mostly consisted of maltose, followed by glucose and maltotriose. Pressure-thermal treatment of unmalted cereals, and especially rye grains, resulted in higher ethanol content in mashes in comparison with samples subjected to pressureless liberation of starch. All agricultural distillates originating from mashes containing rye and barley grains and the corresponding malts were characterized by low concentrations of undesirable compounds, such as acetaldehyde and methanol. The distillates obtained under industrial conditions contained lower concentrations of higher alcohols (apart from 1-propanol) than those obtained on a semi-technical scale.

  1. Fermentation Results and Chemical Composition of Agricultural Distillates Obtained from Rye and Barley Grains and the Corresponding Malts as a Source of Amylolytic Enzymes and Starch

    Directory of Open Access Journals (Sweden)

    Maria Balcerek

    2016-10-01

    Full Text Available The objective of this study was to determine the efficiency of rye and barley starch hydrolysis in mashing processes using cereal malts as a source of amylolytic enzymes and starch, and to establish the volatile profile of the obtained agricultural distillates. In addition, the effects of the pretreatment method of unmalted cereal grains on the physicochemical composition of the prepared mashes, fermentation results, and the composition of the obtained distillates were investigated. The raw materials used were unmalted rye and barley grains, as well as the corresponding malts. All experiments were first performed on a semi-technical scale, and then verified under industrial conditions in a Polish distillery. The fermentable sugars present in sweet mashes mostly consisted of maltose, followed by glucose and maltotriose. Pressure-thermal treatment of unmalted cereals, and especially rye grains, resulted in higher ethanol content in mashes in comparison with samples subjected to pressureless liberation of starch. All agricultural distillates originating from mashes containing rye and barley grains and the corresponding malts were characterized by low concentrations of undesirable compounds, such as acetaldehyde and methanol. The distillates obtained under industrial conditions contained lower concentrations of higher alcohols (apart from 1-propanol) than those obtained on a semi-technical scale.

  2. A Fiji multi-coral δ18O composite approach to obtaining a more accurate reconstruction of the last two-centuries of the ocean-climate variability in the South Pacific Convergence Zone region

    Science.gov (United States)

    Dassié, Emilie P.; Linsley, Braddock K.; Corrège, Thierry; Wu, Henry C.; Lemley, Gavin M.; Howe, Steve; Cabioch, Guy

    2014-12-01

    The limited availability of oceanographic data in the tropical Pacific Ocean prior to the satellite era makes coral-based climate reconstructions a key tool for extending the instrumental record back in time, thereby providing a much needed test for climate models and projections. We have generated a unique regional network consisting of five Porites coral δ18O time series from different locations in the Fijian archipelago. Our results indicate that using a minimum of three Porites coral δ18O records from Fiji is statistically sufficient to obtain a reliable signal for climate reconstruction, and that application of an approach used in tree ring studies is a suitable tool to determine this number. The coral δ18O composite indicates that while sea surface temperature (SST) variability is the primary driver of seasonal δ18O variability in these Fiji corals, annual average coral δ18O is more closely correlated to sea surface salinity (SSS), as previously reported. Our results highlight the importance of water mass advection in controlling Fiji coral δ18O and salinity variability at interannual and decadal time scales despite being located in the heavy rainfall region of the South Pacific Convergence Zone (SPCZ). The Fiji δ18O composite presents a secular freshening and warming trend since the 1850s coupled with changes in both interannual (IA) and decadal/interdecadal (D/I) variance. The changes in IA and D/I variance suggest a re-organization of climatic variability in the SPCZ region beginning in the late 1800s towards a period of more dominant interannual variability, which could correspond to a southeast expansion of the SPCZ.
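
    The "approach used in tree ring studies" referenced above is commonly the Expressed Population Signal (EPS) criterion; the abstract does not name it explicitly, so the following minimal Python sketch is only a hedged illustration of how such a minimum number of records can be estimated, using a hypothetical mean inter-series correlation.

      # Illustrative sketch (not from the paper): how many coral d18O records are
      # needed for a reliable composite, using the Expressed Population Signal
      # criterion borrowed from tree-ring studies.
      # Assumption: EPS = n*rbar / (n*rbar + (1 - rbar)), customary threshold 0.85.

      def eps(n_series, rbar):
          # Expressed Population Signal for n series with mean inter-series correlation rbar.
          return n_series * rbar / (n_series * rbar + (1.0 - rbar))

      rbar = 0.70   # hypothetical mean correlation between the coral records
      for n in range(1, 6):
          print(n, round(eps(n, rbar), 3))
      # With rbar around 0.7, EPS first exceeds 0.85 at n = 3, matching the
      # abstract's conclusion that three Porites records suffice (the rbar value
      # here is illustrative, not taken from the paper).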

  3. High accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed, which includes GPS, a 1 MC frequency source and a self-made clock system. The one-second signal of GPS is used synchronously in the clock system, and the information can be collected by a computer automatically. The difficulty of dispensing with the time keeper can be overcome by using this system.

  4. A feasible, economical, and accurate analytical method for simultaneous determination of six alkaloid markers in Aconiti Lateralis Radix Praeparata from different manufacturing sources and processing ways.

    Science.gov (United States)

    Zhang, Yi-Bei; DA, Juan; Zhang, Jing-Xian; Li, Shang-Rong; Chen, Xin; Long, Hua-Li; Wang, Qiu-Rong; Cai, Lu-Ying; Yao, Shuai; Hou, Jin-Jun; Wu, Wan-Ying; Guo, De-An

    2017-04-01

    Aconiti Lateralis Radix Praeparata (Fuzi) is a traditional Chinese medicine commonly used in the clinic for its potency in restoring yang and rescuing from collapse. Aconiti alkaloids, mainly including monoester-diterpenoid aconitines (MDAs) and diester-diterpenoid aconitines (DDAs), are considered to act as both bioactive and toxic constituents. In the present study, a feasible, economical, and accurate HPLC method for simultaneous determination of six alkaloid markers using the Single Standard for Determination of Multi-Components (SSDMC) approach was developed and fully validated. Benzoylmesaconine was used as the single reference standard. The method was proven to be accurate (recovery varying between 97.5%-101.8%, RSD 0.999 9) over the concentration ranges, and was subsequently applied to the quantitative evaluation of 62 batches of samples, among which 45 batches were from good manufacturing practice (GMP) facilities and 17 batches from the drug market. The contents were then analyzed by principal component analysis (PCA) and a homogeneity test. The present study provides valuable information for improving the quality standard of Aconiti Lateralis Radix Praeparata. The developed method also has potential for the analysis of other Aconitum species, such as Aconitum carmichaelii (prepared parent root) and Aconitum kusnezoffii (prepared root). Copyright © 2017 China Pharmaceutical University. Published by Elsevier B.V. All rights reserved.

  5. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues and the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
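
    As an illustration of the Richardson extrapolation step named in this abstract, the following minimal Python sketch extrapolates two second-order central-difference estimates; it is a generic example, not the authors' Schrödinger-equation code.

      # Minimal illustration of Richardson extrapolation: combine two second-order
      # finite-difference estimates computed with steps h and h/2 to cancel the
      # leading O(h^2) error term. Generic sketch, not the authors' solver.
      import math

      def second_derivative(f, x, h):
          # Standard central difference, error O(h^2).
          return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

      def richardson(f, x, h):
          # For an O(h^2) method the extrapolated value is (4*A(h/2) - A(h)) / 3,
          # which removes the h^2 term and leaves an O(h^4) error.
          coarse = second_derivative(f, x, h)
          fine = second_derivative(f, x, h / 2.0)
          return (4.0 * fine - coarse) / 3.0

      f, x = math.sin, 1.0
      exact = -math.sin(x)
      for h in (0.1, 0.05):
          print(h, abs(second_derivative(f, x, h) - exact), abs(richardson(f, x, h) - exact))
      # The extrapolated column converges much faster, mirroring how the authors
      # extrapolate eigenvalues and expectation values from crude meshes.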

  6. Estimating the four-factor product (ε p Pfnl Ptnl) for the accurate calculation of xenon and samarium reactivities in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.

    2007-01-01

    The modified 135Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p Pfnl Ptnl). Then, precise calculations of 135Xe and 149Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the 135Xe and 149Sm reactivities did not reach their equilibrium reactivities during the daily operating time of the reactor. The 149Sm reactivities could be neglected compared to 135Xe reactivities during the reactor operating time and after shutdown. (author)

  7. Estimating the four-factor product (ε p Pfnl Ptnl) for the accurate calculation of xenon and samarium reactivities in the Syrian Miniature Neutron Source Reactor

    International Nuclear Information System (INIS)

    Khattab, K.

    2007-01-01

    The modified 135Xe equilibrium reactivity in the Syrian Miniature Neutron Source Reactor (MNSR) was calculated first by using the WIMSD4 and CITATION codes to estimate the four-factor product (ε p Pfnl Ptnl). Then, precise calculations of 135Xe and 149Sm concentrations and reactivities were carried out and compared during the reactor operation time and after shutdown. It was found that the 135Xe and 149Sm reactivities did not reach their equilibrium reactivities during the daily operating time of the reactor. The 149Sm reactivities could be neglected compared to 135Xe reactivities during the reactor operating time and after shutdown. (author)

  8. Spectroscopic confirmation of the optical identification of X-ray sources used to determine accurate positions for the anomalous X-ray pulsars 1E 2259+58.6 and 4U 0142+61

    Science.gov (United States)

    van den Berg, M.; Verbunt, F.

    2001-03-01

    Optical spectra show that two proposed counterparts for X-ray sources detected near 1E 2259+58.6 are late G stars, and a proposed counterpart for a source near 4U 0142+61 is a dMe star. The X-ray luminosities are as expected for such stars. We thus confirm the optical identification of the three X-ray objects, and thereby the correctness of the accurate positions for 1E 2259+58.6 and 4U 0142+61 based on them. Based on observations made with the William Herschel Telescope operated on the island of La Palma by the Isaac Newton Group in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.

  9. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Those having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in the measurement of these spectra. This report aims to provide background on these spectroscopic limitations and to discuss how accelerator operations may be selected to permit attaining intrinsically limited data

  10. Radionuclide Data Centre. Tasks and problems of obtaining the most reliable values of the nuclear physics characteristics of radionuclides and radiation physics parameters of radionuclide sources

    International Nuclear Information System (INIS)

    Chechev, V.P.

    1994-01-01

    Information is provided on the establishment of the Radionuclide Data Centre under the V.G. Khlopin Radium Institute. Its functions and areas of activity are discussed. The paper focuses on the procedure of obtaining the evaluated values of the decay and radiative characteristics of the widely used radionuclides. (author)

  11. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on these models, the performance measures are analyzed for different output service schemes.

  12. Agreement of Anterior Segment Parameters Obtained From Swept-Source Fourier-Domain and Time-Domain Anterior Segment Optical Coherence Tomography.

    Science.gov (United States)

    Chansangpetch, Sunee; Nguyen, Anwell; Mora, Marta; Badr, Mai; He, Mingguang; Porco, Travis C; Lin, Shan C

    2018-03-01

    To assess the interdevice agreement between swept-source Fourier-domain and time-domain anterior segment optical coherence tomography (AS-OCT). Fifty-three eyes from 41 subjects underwent CASIA2 and Visante OCT imaging. One hundred eighty-degree axis images were measured with the built-in two-dimensional analysis software for the swept-source Fourier-domain AS-OCT (CASIA2) and a customized program for the time-domain AS-OCT (Visante OCT). In both devices, we examined the angle opening distance (AOD), trabecular iris space area (TISA), angle recess area (ARA), anterior chamber depth (ACD), anterior chamber width (ACW), and lens vault (LV). Bland-Altman plots and intraclass correlation (ICC) were performed. Orthogonal linear regression assessed any proportional bias. ICC showed strong correlation for LV (0.925) and ACD (0.992) and moderate agreement for ACW (0.801). ICC suggested good agreement for all angle parameters (0.771-0.878) except temporal AOD500 (0.743) and ARA750 (nasal 0.481; temporal 0.481). There was a proportional bias in nasal ARA750 (slope 2.44, 95% confidence interval [CI]: 1.95-3.18), temporal ARA750 (slope 2.57, 95% CI: 2.04-3.40), and nasal TISA500 (slope 1.30, 95% CI: 1.12-1.54). Bland-Altman plots demonstrated a minimal mean difference between the two devices for all measured parameters (-0.089 to 0.063); however, evidence of constant bias was found in nasal AOD250, nasal AOD500, nasal AOD750, nasal ARA750, temporal AOD500, temporal AOD750, temporal ARA750, and ACD. Among the parameters with constant biases, CASIA2 tends to give the larger numbers. Both devices had generally good agreement. However, there were proportional and constant biases in most angle parameters. Thus, it is not recommended that values be used interchangeably.
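
    For readers unfamiliar with the two agreement statistics used above, the following Python sketch computes a Bland-Altman bias with limits of agreement and a two-way ICC on synthetic paired measurements; the specific ICC variant shown, ICC(2,1), is an assumption, since the abstract does not state which model was used.

      # Sketch of the agreement statistics named in the abstract, on synthetic data.
      import numpy as np

      def icc_2_1(data):
          # data: (n_subjects, k_raters). Two-way random effects, absolute agreement,
          # single measurement -- ICC(2,1) in Shrout & Fleiss notation (an assumption
          # about which variant the study used).
          n, k = data.shape
          grand = data.mean()
          ss_total = ((data - grand) ** 2).sum()
          ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
          ss_rater = n * ((data.mean(axis=0) - grand) ** 2).sum()
          ss_err = ss_total - ss_subj - ss_rater
          ms_subj = ss_subj / (n - 1)
          ms_rater = ss_rater / (k - 1)
          ms_err = ss_err / ((n - 1) * (k - 1))
          return (ms_subj - ms_err) / (ms_subj + (k - 1) * ms_err + k * (ms_rater - ms_err) / n)

      def bland_altman(a, b):
          diff = a - b
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, bias - 1.96 * sd, bias + 1.96 * sd   # mean difference and limits of agreement

      # Hypothetical paired ACD readings (mm) from two devices, for illustration only.
      rng = np.random.default_rng(0)
      casia = rng.normal(3.1, 0.3, 53)
      visante = casia + rng.normal(0.02, 0.05, 53)   # small constant bias plus noise
      print("ICC(2,1):", round(icc_2_1(np.column_stack([casia, visante])), 3))
      print("Bland-Altman bias and limits:", bland_altman(casia, visante))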

  13. Source apportionment by PMF on elemental concentrations obtained by PIXE analysis of PM10 samples collected at the vicinity of lignite power plants and mines in Megalopolis, Greece

    International Nuclear Information System (INIS)

    Manousakas, M.; Diapouli, E.; Papaefthymiou, H.; Migliori, A.; Karydas, A.G.; Padilla-Alvarez, R.; Bogovac, M.; Kaiser, R.B.; Jaksic, M.; Bogdanovic-Radovic, I.; Eleftheriadis, K.

    2015-01-01

    Particulate matter (PM) is an important constituent of atmospheric pollution, especially in areas under the influence of industrial emissions. Megalopolis is a small city of 10,000 inhabitants located in central Peloponnese in close proximity to three coal opencast mines and two lignite fired power plants. 50 PM10 samples were collected in Megalopolis during the years 2009–11 for elemental and multivariate analysis. For the elemental analysis PIXE was used, as one of the most effective techniques for APM analytical characterization. Altogether, the concentrations of 22 elements (Z = 11–33) were determined, and Black Carbon was also determined for each sample using a reflectometer. Factorization software (EPA PMF 3.0) was used for the source apportionment analysis. The analysis revealed that the major emission sources were soil dust 33% (7.94 ± 0.27 μg/m3), biomass burning 19% (4.43 ± 0.27 μg/m3), road dust 15% (3.63 ± 0.37 μg/m3), power plant emissions 13% (3.01 ± 0.44 μg/m3), traffic 12% (2.82 ± 0.37 μg/m3), and sea spray 8% (1.99 ± 0.41 μg/m3). Wind trajectories suggested that metals associated with emissions from the power plants came mainly from the west and were connected with the locations of the lignite mines in this area. Soil resuspension, road dust and power plant emissions increased during the warm season of the year, while traffic/secondary, sea spray and biomass burning became dominant during the cold season.

  14. Source apportionment by PMF on elemental concentrations obtained by PIXE analysis of PM10 samples collected at the vicinity of lignite power plants and mines in Megalopolis, Greece

    Science.gov (United States)

    Manousakas, M.; Diapouli, E.; Papaefthymiou, H.; Migliori, A.; Karydas, A. G.; Padilla-Alvarez, R.; Bogovac, M.; Kaiser, R. B.; Jaksic, M.; Bogdanovic-Radovic, I.; Eleftheriadis, K.

    2015-04-01

    Particulate matter (PM) is an important constituent of atmospheric pollution, especially in areas under the influence of industrial emissions. Megalopolis is a small city of 10,000 inhabitants located in central Peloponnese in close proximity to three coal opencast mines and two lignite fired power plants. 50 PM10 samples were collected in Megalopolis during the years 2009-11 for elemental and multivariate analysis. For the elemental analysis PIXE was used, as one of the most effective techniques for APM analytical characterization. Altogether, the concentrations of 22 elements (Z = 11-33) were determined, and Black Carbon was also determined for each sample using a reflectometer. Factorization software (EPA PMF 3.0) was used for the source apportionment analysis. The analysis revealed that the major emission sources were soil dust 33% (7.94 ± 0.27 μg/m3), biomass burning 19% (4.43 ± 0.27 μg/m3), road dust 15% (3.63 ± 0.37 μg/m3), power plant emissions 13% (3.01 ± 0.44 μg/m3), traffic 12% (2.82 ± 0.37 μg/m3), and sea spray 8% (1.99 ± 0.41 μg/m3). Wind trajectories suggested that metals associated with emissions from the power plants came mainly from the west and were connected with the locations of the lignite mines in this area. Soil resuspension, road dust and power plant emissions increased during the warm season of the year, while traffic/secondary, sea spray and biomass burning became dominant during the cold season.
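
    EPA PMF 3.0 is a dedicated receptor-modelling tool that weights each measurement by its uncertainty; as a rough, hedged analogue of the underlying factorization idea only, the following Python sketch decomposes a synthetic samples-by-species matrix with scikit-learn's plain NMF and reports the mass share of each factor.

      # Rough analogue of the PMF factorization idea (X ~ G x F with non-negative
      # factors): decompose a samples-by-species concentration matrix into source
      # contributions (G) and source profiles (F). EPA PMF additionally weights by
      # measurement uncertainties and uses a different solver; this sketch uses
      # scikit-learn's plain NMF purely for illustration, on synthetic data.
      import numpy as np
      from sklearn.decomposition import NMF

      rng = np.random.default_rng(1)
      n_samples, n_species, n_sources = 50, 22, 6   # e.g. 50 PM10 filters, 22 elements
      true_profiles = rng.random((n_sources, n_species))
      true_contrib = rng.random((n_samples, n_sources))
      X = true_contrib @ true_profiles + 0.01 * rng.random((n_samples, n_species))

      model = NMF(n_components=n_sources, init="nndsvda", max_iter=1000, random_state=0)
      G = model.fit_transform(X)          # source contributions per sample
      F = model.components_               # source profiles (species fingerprints)

      # Percent of total measured mass attributed to each factor (cf. 33% soil dust,
      # 19% biomass burning, ... in the abstract; numbers here are synthetic).
      mass_per_factor = (G * F.sum(axis=1)).sum(axis=0)
      print(np.round(100 * mass_per_factor / mass_per_factor.sum(), 1))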

  15. Egg-yolk protein by-product as a source of ACE-inhibitory peptides obtained with using unconventional proteinase from Asian pumpkin (Cucurbita ficifolia).

    Science.gov (United States)

    Eckert, Ewelina; Zambrowicz, Aleksandra; Pokora, Marta; Setner, Bartosz; Dąbrowska, Anna; Szołtysik, Marek; Szewczuk, Zbigniew; Polanowski, Antoni; Trziszka, Tadeusz; Chrzanowska, Józefa

    2014-10-14

    In the present study, angiotensin I-converting enzyme (ACE) inhibitory peptides were isolated from an egg-yolk protein preparation (YP). Enzymatic hydrolysis conducted using an unconventional enzyme from Cucurbita ficifolia (dose: 1000 U/mg of hydrolyzed YP (E/S (w/w)=1:7.52)) was employed to obtain protein hydrolysates. The 4-h hydrolysate exhibited significant (IC₅₀=482.5 μg/mL) ACE inhibitory activity. Moreover, the hydrolysate showed no cytotoxic activity on human and animal cell lines, which makes it a very useful preparation of multifunctional peptides. The compiled isolation procedure (ultrafiltration, size-exclusion chromatography and RP-HPLC) applied to the YP hydrolysate resulted in peptides with strong ACE inhibitory activity. One homogeneous and three heterogeneous peptide fractions were identified. The peptides were composed of 9-18 amino-acid residues, including mainly arginine and leucine at the N-terminal positions. To confirm the selected bioactive peptide sequences, their analogs were chemically synthesized and tested. Peptide LAPSLPGKPKPD showed the strongest ACE inhibitory activity, with an IC₅₀ value of 1.97 μmol/L. Peptides with specific biological activity can be used in the pharmaceutical, cosmetic or food industries. Because of their potential role as physiological modulators, as well as their high safety profile, they can be used as natural pharmacological compounds or functional food ingredients. The development of biotechnological solutions to obtain peptides with desired biological activity is already in progress. Studies in this area are focused on using unconventional, highly specific enzymes and more efficient methods developed to conduct food process technologies. Natural peptides have many advantages. They are mainly toxicologically safe, have wide spectra of therapeutic actions, exhibit fewer side effects compared to synthetic drugs and are more efficiently absorbed in the intestinal tract. The complexity of

  16. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images leading to significantly high radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without reduction of reconstructed image quality. This research is focused on using a combination of gradient-based Douglas-Rachford splitting and discrete wavelet packet shrinkage image denoising methods to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative-based performance assessment of a synthetic head phantom and a femoral cortical bone sample imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source demonstrates that the proposed algorithm is superior to the existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time which improves in vivo imaging protocols.
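
    The algorithm above combines Douglas-Rachford splitting with wavelet-packet shrinkage on projection data; as a much-simplified stand-in for the general "gradient step plus shrinkage in a sparsifying transform" pattern, the following Python sketch runs ISTA on a 1-D toy problem with a DCT sparsity basis. It is not the authors' method and uses synthetic data.

      # Toy illustration of the "gradient step + shrinkage in a sparsifying
      # transform" pattern that underlies many undersampled reconstruction schemes.
      import numpy as np
      from scipy.fft import dct, idct

      rng = np.random.default_rng(0)
      n, m, k = 256, 96, 8                          # signal length, measurements, sparsity
      coeffs = np.zeros(n)
      coeffs[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)
      x_true = idct(coeffs, norm="ortho")           # signal sparse in the DCT domain
      A = rng.normal(0, 1.0 / np.sqrt(m), (m, n))
      y = A @ x_true                                # undersampled measurements

      lam = 0.05
      step = 1.0 / np.linalg.norm(A, 2) ** 2
      x = np.zeros(n)
      for _ in range(1000):
          x = x + step * A.T @ (y - A @ x)          # gradient step on the data term
          c = dct(x, norm="ortho")                  # move to the sparsifying domain
          c = np.sign(c) * np.maximum(np.abs(c) - lam * step, 0.0)   # soft shrinkage
          x = idct(c, norm="ortho")
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))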

  17. Accurate radioimmunoassay of human growth hormone with separation on polyacrylamide gel electrophoresis of free antigen; antigen-anti-body complex and damaged labelled antigen: a study is performed on this last one for the purpose of obtaining long lasting labelled products

    International Nuclear Information System (INIS)

    Bartolini, P.; Assis, L.M.; Schwarz, I.; Macchione, M.; Pieroni, R.R.

    1977-01-01

    The aim of this work was to obtain a radioimmunoassay method accurate and precise enough to provide a suitable way of determining human growth hormone (HGH) in extracts or in physiological fluids, useful more for specific research purposes than for routine clinical assays, and in which the labelled products could be used for as long as possible. Only one technique was found that could meet these requirements, though in some respects more laborious than others: polyacrylamide gel electrophoresis (PAGE). This was used with some modifications to the original method of Davis, and it was possible, using tubes 11 cm long, to separate on the same gel the free, the antibody-bound, and the damaged labelled antigen. The method, being able to detect these three components separately and independently and to give better control over the analytically dangerous 'damaged' component, furnished accurate and reproducible curves. An example of determination is the one on KABI-Crescormon, which compares the results obtained with the present technique to those presented by another laboratory. Thanks to this method the labelled antigen could be used for up to one month. After this period a re-purification on Sephadex was introduced, so that the same labelled product remained usable for two more months. In parallel with this work, a study was performed on the various components originating in this so-called 'damaging' process, and particular importance was also given to a more precise knowledge of the amount of antigen, in terms of mass, present in an assay. (orig.)

  18. Kinetic and thermodynamic studies on the adsorption of heavy metals from aqueous solution by melanin nanopigment obtained from marine source: Pseudomonas stutzeri.

    Science.gov (United States)

    Manirethan, Vishnu; Raval, Keyur; Rajan, Reju; Thaira, Harsha; Balakrishnan, Raj Mohan

    2018-05-15

    The difficulty in removal of heavy metals at concentrations below 10 mg/L has led to the exploration of efficient adsorbents for removal of heavy metals. The adsorption capacity of biosynthesized melanin for Mercury (Hg(II)), Chromium (Cr(VI)), Lead (Pb(II)) and Copper (Cu(II)) was investigated at different operating conditions like pH, time, initial concentration and temperature. The heavy metals adsorption process was well illustrated by the Lagergren's pseudo-second-order kinetic model and the equilibrium data fitted excellently to Langmuir isotherm. Maximum adsorption capacity obtained from Langmuir isotherm for Hg(II) was 82.4 mg/g, Cr(VI) was 126.9 mg/g, Pb(II) was 147.5 mg/g and Cu(II) was 167.8 mg/g. The thermodynamic parameters revealed that the adsorption of heavy metals on melanin is favorable, spontaneous and endothermic in nature. Binding of heavy metals on melanin surface was proved by Fourier Transform Infrared Spectroscopy (FT-IR) and X-ray Photoelectron Spectroscopy (XPS). Contemplating the results, biosynthesized melanin can be a potential adsorbent for efficient removal of Hg(II), Cr(VI), Pb(II) and Cu(II) ions from aqueous solution. Copyright © 2018 Elsevier Ltd. All rights reserved.
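
    The two models named above have simple closed forms, and the reported maximum adsorption capacities come from Langmuir fits; the following Python sketch fits the pseudo-second-order kinetic model and the Langmuir isotherm to hypothetical data with scipy's curve_fit, purely to illustrate how such parameters are obtained.

      # Sketch of fitting the two models named in the abstract: the pseudo-second-
      # order kinetic model q(t) = k*qe^2*t / (1 + k*qe*t) and the Langmuir isotherm
      # qe = qmax*KL*Ce / (1 + KL*Ce), using hypothetical measurements.
      import numpy as np
      from scipy.optimize import curve_fit

      def pseudo_second_order(t, qe, k):
          return (k * qe**2 * t) / (1.0 + k * qe * t)

      def langmuir(ce, qmax, kl):
          return qmax * kl * ce / (1.0 + kl * ce)

      # Hypothetical data (mg/g and mg/L), for illustration only.
      t = np.array([5, 10, 20, 40, 60, 120, 240], dtype=float)
      q_t = np.array([35, 55, 75, 95, 105, 115, 119], dtype=float)
      ce = np.array([2, 5, 10, 20, 40, 80], dtype=float)
      q_e = np.array([45, 80, 110, 135, 150, 158], dtype=float)

      (qe_fit, k_fit), _ = curve_fit(pseudo_second_order, t, q_t, p0=[120, 0.001])
      (qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, q_e, p0=[160, 0.1])
      print("pseudo-second-order qe, k:", qe_fit, k_fit)
      print("Langmuir qmax (cf. 82-168 mg/g reported above), KL:", qmax_fit, kl_fit)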

  19. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use

  20. SU-E-T-212: Comparison of TG-43 Dosimetric Parameters of Low and High Energy Brachytherapy Sources Obtained by MCNP Code Versions of 4C, X and 5

    Energy Technology Data Exchange (ETDEWEB)

    Zehtabian, M; Zaker, N; Sina, S [Shiraz University, Shiraz, Fars (Iran, Islamic Republic of); Meigooni, A Soleimani [Comprehensive Cancer Center of Nevada, Las Vegas, Nevada (United States)

    2015-06-15

    Purpose: Different versions of MCNP code are widely used for dosimetry purposes. The purpose of this study is to compare different versions of the MCNP codes in dosimetric evaluation of different brachytherapy sources. Methods: The TG-43 parameters such as dose rate constant, radial dose function, and anisotropy function of different brachytherapy sources, i.e. Pd-103, I-125, Ir-192, and Cs-137 were calculated in water phantom. The results obtained by three versions of Monte Carlo codes (MCNP4C, MCNPX, MCNP5) were compared for low and high energy brachytherapy sources. Then the cross section library of MCNP4C code was changed to ENDF/B-VI release 8 which is used in MCNP5 and MCNPX codes. Finally, the TG-43 parameters obtained using the MCNP4C-revised code, were compared with other codes. Results: The results of these investigations indicate that for high energy sources, the differences in TG-43 parameters between the codes are less than 1% for Ir-192 and less than 0.5% for Cs-137. However for low energy sources like I-125 and Pd-103, large discrepancies are observed in the g(r) values obtained by MCNP4C and the two other codes. The differences between g(r) values calculated using MCNP4C and MCNP5 at the distance of 6cm were found to be about 17% and 28% for I-125 and Pd-103 respectively. The results obtained with MCNP4C-revised and MCNPX were similar. However, the maximum difference between the results obtained with the MCNP5 and MCNP4C-revised codes was 2% at 6cm. Conclusion: The results indicate that using MCNP4C code for dosimetry of low energy brachytherapy sources can cause large errors in the results. Therefore it is recommended not to use this code for low energy sources, unless its cross section library is changed. Since the results obtained with MCNP4C-revised and MCNPX were similar, it is concluded that the difference between MCNP4C and MCNPX is their cross section libraries.
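
    The TG-43 quantities being compared enter the dose-rate calculation through the standard formalism; the following Python sketch shows the 1-D (point-source) approximation with hypothetical parameter values, not data for any specific source model in the abstract.

      # The TG-43 quantities compared in the abstract enter the dose-rate formalism
      # roughly as below (1-D point-source approximation):
      #   Ddot(r) = S_K * Lambda * (G(r)/G(r0)) * g(r) * phi_an(r),  G(r) = 1/r^2, r0 = 1 cm.
      # Parameter values here are hypothetical placeholders.
      import numpy as np

      def dose_rate(r_cm, S_K, Lambda, g_of_r, phi_an_of_r, r0=1.0):
          geometry = (r0 / r_cm) ** 2          # point-source geometry function ratio
          return S_K * Lambda * geometry * g_of_r(r_cm) * phi_an_of_r(r_cm)

      # Hypothetical radial dose function and anisotropy factor (tabulated Monte
      # Carlo values would be interpolated here in practice).
      g = lambda r: np.interp(r, [0.5, 1, 2, 4, 6], [1.04, 1.00, 0.87, 0.63, 0.45])
      phi_an = lambda r: np.interp(r, [0.5, 1, 2, 4, 6], [0.97, 0.96, 0.95, 0.95, 0.94])

      S_K, Lambda = 1.0, 0.965                 # air-kerma strength (U), cGy h^-1 U^-1 (illustrative)
      for r in (1.0, 2.0, 6.0):
          print(r, "cm:", dose_rate(r, S_K, Lambda, g, phi_an), "cGy/h per U")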

  1. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author)
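
    The partition idea described above can be illustrated with any quadrature routine that accepts breakpoints; the following Python sketch uses scipy's quad with its points argument on a toy integrand with derivative discontinuities (the integrand is a stand-in, not the scattering kernel).

      # Toy illustration of the partitioning idea: tell the quadrature routine where
      # the integrand has a discontinuous derivative so no subinterval straddles a kink.
      from scipy.integrate import quad

      def kinked(x):
          # Piecewise-smooth integrand with derivative jumps at x = 1.0 and x = 2.5.
          return abs(x - 1.0) + 0.5 * abs(x - 2.5)

      naive, naive_err = quad(kinked, 0.0, 4.0)
      partitioned, part_err = quad(kinked, 0.0, 4.0, points=[1.0, 2.5])
      print("naive estimate   :", naive, "reported error", naive_err)
      print("with breakpoints :", partitioned, "reported error", part_err)
      # Supplying the kink locations lets the routine integrate each smooth piece
      # separately, which is the spirit of the partition scheme described above.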

  2. Obtaining and Investigating Unconventional Sources of Radioactivity

    Science.gov (United States)

    Lapp, David R.

    2010-01-01

    This paper provides examples of naturally radioactive items that are likely to be found in most communities. Additionally, there is information provided on how to acquire many of these items inexpensively. I have found that the presence of these materials in the classroom is not only useful for teaching about nuclear radiation and debunking the…

  3. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  4. Accurate scaling on multiplicity

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The commonly used formula of KNO scaling for discrete (multiplicity) distributions, ⟨n⟩P_n = Ψ(n/⟨n⟩), is shown to contradict mathematically the condition ΣP_n = 1. The effect is essential even at ISR energies. A consistent generalization of the concept of similarity for multiplicity distributions is obtained. The multiplicity distributions of negative particles in PP and also e+e- inelastic interactions are similar over the whole studied energy range. Collider data are discussed. 14 refs.; 8 figs.

  5. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  6. Keep it Accurate and Diverse

    DEFF Research Database (Denmark)

    Ali Bagheri, Mohammad; Gao, Qigang; Guerrero, Sergio Escalera

    2015-01-01

    the performance of an ensemble of action learning techniques, each performing the recognition task from a different per- spective. The underlying idea is that instead of aiming a very sophisticated and powerful representation/learning technique, we can learn action categories using a set of relatively simple...... to improve the recognition perfor- mance, a powerful combination strategy is utilized based on the Dempster-Shafer theory, which can effectively make use of diversity of base learners trained on different sources of information. The recognition results of the individual clas- sifiers are compared with those...... obtained from fusing the classifiers’ output, showing enhanced performance of the proposed methodology....

  7. Fast sweeping algorithm for accurate solution of the TTI eikonal equation using factorization

    KAUST Repository

    bin Waheed, Umair

    2017-06-10

    Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, rely on highly accurate traveltime solutions. However, the finite-difference based eikonal solution for a point-source initial condition has an upwind source-singularity at the source position, since the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even the high-order ones, show inaccuracies since the errors due to source-singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively, each time by updating the right hand side function. At each iteration, we factor the unknown TEA traveltime into two factors. One of the factors is specified analytically, such that the other factor is smooth in the source neighborhood. Therefore, through the iterative procedure we obtain accurate solution to the TTI eikonal equation. Numerical tests show significant improvement in accuracy due to factorization. The idea can be easily extended to compute accurate traveltimes for models with lower anisotropic symmetries, such as orthorhombic, monoclinic or even triclinic media.
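
    The benefit of factorization can be seen with a closed-form example: writing T = T0·τ with T0 analytic for a homogeneous background leaves a factor τ that is smooth and finite at the source. The Python sketch below assumes the standard closed-form traveltime for a linear vertical velocity gradient and is only an illustration of the idea, not the authors' TTI solver.

      # Illustration (not the authors' solver) of the factorization idea: the ratio
      # tau = T / T0 is smooth near the source, which is what removes the
      # source-singularity problem for finite-difference eikonal solvers.
      # Assumption: closed-form traveltime for v(z) = v0 + a*z from a source at the origin,
      #   T(x, z) = (1/a) * arccosh(1 + a^2*(x^2 + z^2) / (2*v0*(v0 + a*z))).
      import numpy as np

      v0, a = 2.0, 0.5                              # km/s and 1/s (hypothetical values)
      x = np.linspace(-0.2, 0.2, 5)
      z = np.linspace(0.0, 0.2, 5)
      X, Z = np.meshgrid(x, z)
      R = np.sqrt(X**2 + Z**2)

      T = (1.0 / a) * np.arccosh(1.0 + a**2 * R**2 / (2.0 * v0 * (v0 + a * Z)))
      T0 = R / v0                                   # analytic background traveltime
      with np.errstate(invalid="ignore", divide="ignore"):
          tau = np.where(R > 0, T / T0, 1.0)        # tau -> 1 at the source point
      print(np.round(tau, 4))                       # smooth, close to 1 near the source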

  8. More accurate picture of human body organs

    International Nuclear Information System (INIS)

    Kolar, J.

    1985-01-01

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow a more accurate image of human body organs to be obtained. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  9. Obtaining of inulin acetate

    OpenAIRE

    Khusenov, Arslonnazar; Rakhmanberdiev, Gappar; Rakhimov, Dilshod; Khalikov, Muzaffar

    2014-01-01

    In this article the inulin ester inulin acetate, obtained for the first time by esterification of inulin with acetic anhydride, is described. The obtained product has been studied using elemental analysis and IR spectroscopy.

  10. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  11. Accurate multiplicity scaling in isotopically conjugate reactions

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The generation of accurate scaling of multiplicity distributions is presented. The distributions of π- mesons (negative particles) and π+ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs.

  12. The €100 lab: A 3D-printable open-source platform for fluorescence microscopy, optogenetics, and accurate temperature control during behaviour of zebrafish, Drosophila, and Caenorhabditis elegans.

    Directory of Open Access Journals (Sweden)

    Andre Maia Chagas

    2017-07-01

    Full Text Available Small, genetically tractable species such as larval zebrafish, Drosophila, or Caenorhabditis elegans have become key model organisms in modern neuroscience. In addition to their low maintenance costs and easy sharing of strains across labs, one key appeal is the possibility of monitoring single animals or groups of animals in a behavioural arena while controlling the activity of select neurons using optogenetic or thermogenetic tools. However, the purchase of a commercial solution for these types of experiments, including an appropriate camera system as well as a controlled behavioural arena, can be costly. Here, we present a low-cost and modular open-source alternative called 'FlyPi'. Our design is based on a 3D-printed mainframe, a Raspberry Pi computer, and a high-definition camera system as well as Arduino-based optical and thermal control circuits. Depending on the configuration, FlyPi can be assembled for well under €100 and features optional modules for light-emitting diode (LED)-based fluorescence microscopy and optogenetic stimulation as well as a Peltier-based temperature stimulator for thermogenetics. The complete version with all modules costs approximately €200 or substantially less if the user is prepared to 'shop around'. All functions of FlyPi can be controlled through a custom-written graphical user interface. To demonstrate FlyPi's capabilities, we present its use in a series of state-of-the-art neurogenetics experiments. In addition, we demonstrate FlyPi's utility as a medical diagnostic tool as well as a teaching aid at Neurogenetics courses held at several African universities. Taken together, the low cost and modular nature as well as the fully open design of FlyPi make it a highly versatile tool in a range of applications, including the classroom, diagnostic centres, and research labs.

  13. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  14. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  15. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is to define accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described and the properties of the tasks to be solved using the appropriate methods are specified.

  16. How to efficiently obtain accurate estimates of flower visitation rates by pollinators

    NARCIS (Netherlands)

    Fijen, Thijs P.M.; Kleijn, David

    2017-01-01

    Regional declines in insect pollinators have raised concerns about crop pollination. Many pollinator studies use visitation rate (pollinators/time) as a proxy for the quality of crop pollination. Visitation rate estimates are based on observation durations that vary significantly between studies.

  17. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    Science.gov (United States)

    2015-06-09

    …the game play was recorded and participants rated their subjective mood along several axes, including arousal and valence…

  18. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, only little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
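
    The objective analysed above, a squared loss on the sampled nodes plus the total variation of the graph signal over the edges, can be written down directly as a small convex program; the Python sketch below does so with cvxpy on a synthetic chain graph (cvxpy and the toy data are assumptions of this sketch, not part of the paper).

      # Minimal sketch of the network Lasso objective described above, solved with
      # cvxpy for convenience; graph, sampling set and data are synthetic.
      import cvxpy as cp
      import numpy as np

      # Chain graph with two constant "clusters" in the underlying signal.
      n = 10
      edges = [(i, i + 1) for i in range(n - 1)]
      x_true = np.array([1.0] * 5 + [3.0] * 5)
      sampled = [0, 2, 7, 9]                          # nodes where noisy samples exist
      rng = np.random.default_rng(0)
      y = x_true[sampled] + 0.05 * rng.normal(size=len(sampled))

      lam = 0.5
      x = cp.Variable(n)
      loss = cp.sum_squares(cp.hstack([x[i] for i in sampled]) - y)
      tv = sum(cp.abs(x[i] - x[j]) for i, j in edges)   # graph total variation
      prob = cp.Problem(cp.Minimize(loss + lam * tv))
      prob.solve()
      print(np.round(x.value, 2))   # approximately recovers the two plateaus (TV shrinks the jump slightly)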

  19. Application of Molecular Typing Results in Source Attribution Models: The Case of Multiple Locus Variable Number Tandem Repeat Analysis (MLVA) of Salmonella Isolates Obtained from Integrated Surveillance in Denmark

    DEFF Research Database (Denmark)

    de Knegt, Leonardo; Pires, Sara Monteiro; Löfström, Charlotta

    2016-01-01

    , and antibiotic resistance profiles for the Salmonella source attribution, and assess the utility of the results for the food safety decisionmakers. Full and simplified MLVA schemes from surveillance data were tested, and model fit and consistency of results were assessed using statistical measures. We conclude...

  20. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  1. Geopolymer obtained from coal ash

    International Nuclear Information System (INIS)

    Conte, V.; Bissari, E.S.; Uggioni, E.; Bernardin, A.M.

    2011-01-01

    Geopolymers are three-dimensional aluminosilicates that can be rapidly formed at low temperature from naturally occurring aluminosilicates, with a structure similar to zeolites. In this work, coal ash (Tractebel Energy) was used as the source of aluminosilicate according to a two-level full factorial design (eight formulations) with three factors: hydroxide type, hydroxide concentration and temperature. The ash was dried and hydroxide was added according to type and concentration. The geopolymer was poured into cylindrical molds, cured (14 days) and subjected to compression testing. The coal ash from power plants belongs to the Si-Al system and thus can easily form geopolymers. The compression tests showed that it is possible to obtain samples with strength comparable to conventional Portland cement. As a result, temperature and molarity are the main factors affecting the compressive strength of the obtained geopolymer. (author)

  2. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
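
    The quantity being estimated above is the directivity D = 4πU_max / ∫U dΩ computed from power-density samples on the far-field sphere; the Python sketch below checks this definition by simple quadrature for a Hertzian dipole (exact D = 1.5). The spherical-wave-expansion formula of the paper is more accurate for sparse sampling; this is only a numerical illustration.

      # Numerical check of the quantity the paper estimates: directivity from power
      # density sampled on the far-field sphere, D = 4*pi*U_max / integral(U dOmega),
      # using a Hertzian dipole pattern (exact directivity 1.5) as the test case.
      import numpy as np

      n_theta, n_phi = 181, 361
      theta = np.linspace(0.0, np.pi, n_theta)
      phi = np.linspace(0.0, 2.0 * np.pi, n_phi)
      U = np.sin(theta)[:, None] ** 2 * np.ones((1, n_phi))   # dipole intensity, phi-independent
      integrand = U * np.sin(theta)[:, None]                  # include the sin(theta) Jacobian

      total = np.trapz(np.trapz(integrand, phi, axis=1), theta)
      D = 4.0 * np.pi * U.max() / total
      print("estimated directivity:", D, "(exact value for a Hertzian dipole: 1.5)")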

  3. Production and characterization of pectinases obtained from ...

    African Journals Online (AJOL)

    Production and characterization of pectinases obtained from Aspergillus fumigatus in submerged fermentation system using pectin extracted from mango peels as carbon source. AL Ezugwu, SOO Eze, FC Chilaka, CU Anyanwu ...

  4. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  5. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for measuring the magnetic field of quadrupole magnets. The method of obtaining the field gradient and the effective focusing length is given. A new scheme for obtaining the skew field components is also proposed. The relative accuracy of the measurement was 1 x 10^-4 or less. (author)
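
    For orientation only: the integrated gradient and effective length that such a measurement targets can be reduced from a measured gradient profile as in the sketch below; the profile, plateau gradient and fringe fall-off are invented placeholders, not the authors' data or scheme.

```python
import numpy as np

# Hypothetical measured gradient profile G(z) along the magnet axis (T/m):
# a plateau with smooth fringe-field fall-off stands in for real data.
z = np.linspace(-0.6, 0.6, 241)                       # m
G0 = 12.0                                             # plateau gradient, T/m (assumed)
G = G0 / (1.0 + np.exp((np.abs(z) - 0.25) / 0.02))    # fringe roll-off

GL = np.trapz(G, z)        # integrated gradient, T
L_eff = GL / G0            # effective (focusing) length, m
print(f"integrated gradient = {GL:.3f} T, effective length = {L_eff:.3f} m")
```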

  6. Process for preparing conifers, particularly conifers with little wood content to obtain energy sources and raw materials. Verfahren zur Aufbereitung von Koniferen, insbesondere holzarmer Koniferen zur Gewinnung von Energietraegern und Rohstoffen

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, K.O.P.

    1981-11-26

    The object of the invention is a process for preparing root stocks, roots, bark and branches and twigs carrying needles or scales and seed capsules of conifers, where fuel and raw materials for hydrotherapy are obtained. The material used is reduced in size by beating and rubbing in pulverisers to a coarse grained mixture, which is reduced in size in further grinding processes in a mill to a mean grain size of 0.5 to 1 mm. The material dried during grinding by waste heat can be used directly as a powdery or fine-grained fuel, made into briquettes or non-wearing shapes or can be taken to a hydrocarbon conversion process or made into a bath extract.

  7. Comparison of corneal power, astigmatism, and wavefront aberration measurements obtained by a point-source color light-emitting diode-based topographer, a Placido-disk topographer, and a combined Placido and dual Scheimpflug device.

    Science.gov (United States)

    Ventura, Bruna V; Wang, Li; Ali, Shazia F; Koch, Douglas D; Weikert, Mitchell P

    2015-08-01

    To evaluate and compare the performance of a point-source color light-emitting diode (LED)-based topographer (color-LED) in measuring anterior corneal power and aberrations with that of a Placido-disk topographer and a combined Placido and dual Scheimpflug device. Cullen Eye Institute, Baylor College of Medicine, Houston, Texas USA. Retrospective observational case series. Normal eyes and post-refractive-surgery eyes were consecutively measured using color-LED, Placido, and dual-Scheimpflug devices. The main outcome measures were anterior corneal power, astigmatism, and higher-order aberrations (HOAs) (6.0 mm pupil), which were compared using the t test. There were no statistically significant differences in corneal power measurements in normal and post-refractive surgery eyes and in astigmatism magnitude in post-refractive surgery eyes between the color-LED device and Placido or dual Scheimpflug devices (all P > .05). In normal eyes, there were no statistically significant differences in 3rd-order coma and 4th-order spherical aberration between the color-LED and Placido devices and in HOA root mean square, 3rd-order coma, 3rd-order trefoil, 4th-order spherical aberration, and 4th-order secondary astigmatism between the color-LED and dual Scheimpflug devices (all P > .05). In post-refractive surgery eyes, the color-LED device agreed with the Placido and dual-Scheimpflug devices regarding 3rd-order coma and 4th-order spherical aberration (all P > .05). In normal and post-refractive surgery eyes, all 3 devices were comparable with respect to corneal power. The agreement in corneal aberrations varied. Drs. Wang, Koch, and Weikert are consultants to Ziemer Ophthalmic Systems AG. Dr. Koch is a consultant to Abbott Medical Optics, Inc., Alcon Surgical, Inc., and i-Optics Corp. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  8. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly while improving the confinement of the energetic runaway beam.
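
    APT's actual pushers are not reproduced here; as a generic illustration of the volume-preserving geometric algorithms such codes rely on for long-term accuracy, a minimal non-relativistic Boris step is sketched below (all parameters illustrative).

```python
import numpy as np

def boris_push(x, v, q_m, E, B, dt):
    """One non-relativistic Boris step: a volume-preserving rotation/kick scheme
    with excellent long-term energy behaviour (generic illustration, not APT code)."""
    v_minus = v + 0.5 * q_m * E * dt            # first half electric kick
    t = 0.5 * q_m * B * dt                      # rotation vector
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    v_new = v_plus + 0.5 * q_m * E * dt         # second half electric kick
    return x + v_new * dt, v_new

# Proton-like particle gyrating in a uniform magnetic field (illustrative values)
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])        # m, m/s
E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])          # V/m, T
q_m, dt = 9.58e7, 1.0e-10                              # C/kg, s
for _ in range(10000):
    x, v = boris_push(x, v, q_m, E, B, dt)
print(np.linalg.norm(v))   # speed stays at 1e5 m/s: no numerical energy drift
```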

  9. How Accurately can we Calculate Thermal Systems?

    International Nuclear Information System (INIS)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-01-01

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as k_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors

  10. Accurate measurements of neutron activation cross sections

    International Nuclear Information System (INIS)

    Semkova, V.

    1999-01-01

    The application of some recent achievements of the neutron activation method on high-intensity neutron sources is considered from the viewpoint of the associated errors of cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of the deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measuring the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV range allows the energy of neutrons from the D-T reaction to be measured with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E_d = 9.53 MeV and Be(d,n) at E_d = 9.72 MeV are measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)

  11. Neutron Sources for Standard-Based Testing

    Energy Technology Data Exchange (ETDEWEB)

    Radev, Radoslav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); McLean, Thomas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-11-10

    The DHS TC Standards and the consensus ANSI Standards use 252Cf as the neutron source for performance testing because its energy spectrum is similar to the 235U and 239Pu fission sources used in nuclear weapons. An emission rate of 20,000 ± 20% neutrons per second is used for testing of the radiological requirements both in the ANSI standards and the TCS. Determination of the accurate neutron emission rate of the test source is important for maintaining consistency and agreement between testing results obtained at different testing facilities. Several characteristics in the manufacture and the decay of the source need to be understood and accounted for in order to make an accurate measurement of the performance of the neutron detection instrument. Additionally, neutron response characteristics of the particular instrument need to be known and taken into account as well as neutron scattering in the testing environment.
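
    A hedged sketch of the decay correction implied above, using the approximate 2.645-year half-life of 252Cf; the calibration value and elapsed time are illustrative, not taken from the report.

```python
import math

T_HALF_CF252_YEARS = 2.645    # approximate 252Cf half-life

def emission_rate(r0_n_per_s: float, elapsed_years: float) -> float:
    """Decay-correct a calibrated 252Cf neutron emission rate to the test date.
    Simple exponential decay only; source impurities such as 250Cf, which a
    full characterization would account for, are ignored here."""
    return r0_n_per_s * math.exp(-math.log(2.0) * elapsed_years / T_HALF_CF252_YEARS)

# Example: a source calibrated at 20,000 n/s and used 18 months later
r = emission_rate(20_000.0, 1.5)
print(f"{r:.0f} n/s")   # about 13,500 n/s, already below the 16,000 n/s lower
                        # edge of a 20,000 +/- 20% requirement, so decay matters
```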

  12. Accurate calculation of high harmonics generated by relativistic Thomson scattering

    International Nuclear Information System (INIS)

    Popa, Alexandru

    2008-01-01

    The recent emergence of the field of ultraintense laser pulses, corresponding to beam intensities higher than 10^18 W cm^-2, brings about the problem of the high harmonic generation (HHG) by the relativistic Thomson scattering of the electromagnetic radiation by free electrons. Starting from the equations of the relativistic motion of the electron in the electromagnetic field, we give an exact solution of this problem. Taking into account the Lienard-Wiechert equations, we obtain a periodic scattered electromagnetic field. Without loss of generality, the solution is strongly simplified by observing that the electromagnetic field is always normal to the direction electron-detector. The Fourier series expansion of this field leads to accurate expressions of the high harmonics generated by the Thomson scattering. Our calculations lead to a discrete HHG spectrum, whose shape and angular distribution are in agreement with the experimental data from the literature. Since no approximations were made, our approach is also valid in the ultrarelativistic regime, corresponding to intensities higher than 10^23 W cm^-2, where it predicts a strong increase of the HHG intensities and of the order of harmonics. In this domain, the nonlinear Thomson scattering could be an efficient source of hard x-rays.

  13. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied based on determination of carbon in gallium arsenide using the ^12C(d,n)^13N reaction and the following results are obtained: (1) Average stopping power method with thick target yield is useful as an accurate and practical standardization method. (2) Front surface of sample has to be etched for accurate estimate of incident energy. (3) CPAA is utilized for calibration of light element analysis by physical method. (4) Calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) x 10^15 cm^-1. (author)

  14. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion precession of Mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDEs). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.
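
    The puncture equations themselves are well beyond a snippet; the sketch below only illustrates the pseudo-spectral (collocation) machinery on a 1D model boundary-value problem, to show the spectral accuracy such solvers rely on. Everything in it is a generic textbook construction, not code from the thesis.

```python
import numpy as np

def cheb(N):
    """Chebyshev differentiation matrix and nodes on [-1, 1] (standard construction)."""
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    D = np.outer(c, 1.0 / c) / (X - X.T + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

# Model elliptic problem u'' = exp(x), u(-1) = u(1) = 0, standing in for the
# nonlinear puncture equations (which additionally require Newton iteration).
N = 16
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                       # Dirichlet BCs imposed by deletion
u = np.zeros(N + 1)
u[1:-1] = np.linalg.solve(D2, np.exp(x[1:-1]))
u_exact = np.exp(x) - np.sinh(1.0) * x - np.cosh(1.0)
print(f"max error at N={N}: {np.abs(u - u_exact).max():.1e}")   # roughly 1e-13
```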

  15. Geodetic analysis of disputed accurate qibla direction

    Science.gov (United States)

    Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah

    2018-04-01

    Performing the prayers facing the correct qibla direction is one of the practical issues in linking theoretical studies with religious practice. The concept of facing towards the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The use of an ellipsoidal model of the Earth could be the best method for determining the accurate direction of the Kaaba from anywhere on the Earth's surface. A Muslim who cannot see the Kaaba cannot orient himself towards the qibla exactly, and the setting-out process and certain motions during the prayer can significantly shift the facing direction away from the actual position of the Kaaba. The requirement that Muslims pray facing the Kaaba is therefore more of a spiritual prerequisite than a matter of physical evidence.
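
    For comparison only, the simple spherical great-circle bearing toward the Kaaba (taken here as approximately 21.4225 N, 39.8262 E) can be computed as below; the paper's point is that an ellipsoidal geodesic azimuth is the more accurate choice, which this sketch does not implement.

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262   # approximate coordinates of the Kaaba

def qibla_bearing_spherical(lat_deg: float, lon_deg: float) -> float:
    """Initial great-circle bearing (degrees clockwise from true north) toward
    the Kaaba on a spherical Earth; the ellipsoidal geodesic azimuth favoured
    by the paper would differ slightly from this baseline."""
    lat1, lat2 = math.radians(lat_deg), math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon_deg)
    y = math.sin(dlon) * math.cos(lat2)
    x = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

print(qibla_bearing_spherical(-6.2, 106.8))   # e.g. Jakarta: roughly 295 degrees
```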

  16. Towards accurate de novo assembly for genomes with repeats

    NARCIS (Netherlands)

    Bucur, Doina

    2017-01-01

    De novo genome assemblers designed for short k-mer length or using short raw reads are unlikely to recover complex features of the underlying genome, such as repeats hundreds of bases long. We implement a stochastic machine-learning method which obtains accurate assemblies with repeats and

  17. Dynamic weighing for accurate fertilizer application and monitoring

    NARCIS (Netherlands)

    Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.

    2001-01-01

    The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application manual calibration of actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on

  18. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    Science.gov (United States)

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed.
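
    The core two-gate rapid lifetime determination arithmetic reads as follows; gate widths and intensities are synthetic, and neither the IRF deconvolution nor the HiLo sectioning described above is reproduced.

```python
import numpy as np

def rld_lifetime(d0, d1, gate_width):
    """Two-gate rapid lifetime determination: for a single-exponential decay
    sampled by two contiguous gates of equal width dt, tau = dt / ln(D0/D1).
    d0 and d1 are the gated intensity images; returns a lifetime image."""
    with np.errstate(divide="ignore", invalid="ignore"):
        tau = gate_width / np.log(d0 / d1)
    return np.where((d0 > 0) & (d1 > 0) & (d0 > d1), tau, np.nan)

# Synthetic check: a 2.5 ns decay sampled with 2 ns gates
tau_true, dt = 2.5, 2.0
d0 = 1000.0 * np.ones((4, 4))
d1 = 1000.0 * np.exp(-dt / tau_true) * np.ones((4, 4))
print(rld_lifetime(d0, d1, dt)[0, 0])   # recovers 2.5
```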

  19. Accurate light-time correction due to a gravitating mass

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Neil [Department of Physics, University of Colorado, Boulder, CO (United States); Bertotti, Bruno, E-mail: ashby@boulder.nist.go [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia (Italy)

    2010-07-21

    This technical paper of mathematical physics arose as an aftermath of the 2002 Cassini experiment (Bertotti et al 2003 Nature 425 374-6), in which the PPN parameter γ was measured with an accuracy σ_γ = 2.3 x 10^-5 and found consistent with the prediction γ = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression (8) for the gravitational delay Δt that differs from the standard formula (2); this difference is of second order in powers of m (the gravitational radius of the Sun), but in Cassini's case it was much larger than the expected order of magnitude m^2/b, where b is the distance of the closest approach of the ray. Since the ODP does not take into account any other second-order terms, it is necessary, also in view of future more accurate experiments, to revisit the whole problem, to systematically evaluate higher order corrections and to determine which terms, and why, are larger than the expected value. We note that light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat's action functional at its minimum is just the light-time between the two end points A and B. A new and powerful formulation is thus obtained. This method is closely connected with the much more general approach of Le Poncin-Lafitte et al (2004 Class. Quantum Grav. 21 4463-83), which is based on Synge's world function. Asymptotic power series are necessary to provide a safe and automatic way of selecting which terms to keep at each order. Higher order approximations to the required quantities, in particular the delay and the deflection, are easily obtained. We also show that in a close superior conjunction, when b is much smaller than the distances of A and B from the Sun, say of order R, the second-order correction has an enhanced part of order m^2 R/b^2, which
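
    For reference, the standard first-order gravitational (Shapiro) light-time delay between end points A and B (the "standard formula" the abstract refers to) can be written as below, with r_A, r_B their heliocentric distances and R their mutual distance; the paper's subject is the second-order terms, of order m^2 R/b^2, that go beyond this expression.

```latex
% Standard first-order gravitational (Shapiro) light-time delay, with
% m = GM/c^2 the gravitational radius of the Sun, r_A and r_B the heliocentric
% distances of the end points and R their mutual distance:
\Delta t \,=\, (1+\gamma)\,\frac{GM}{c^{3}}\,
  \ln\!\left(\frac{r_A + r_B + R}{r_A + r_B - R}\right)
```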

  20. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    Energy Technology Data Exchange (ETDEWEB)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr [Department of Geophysical Engineering, Dokuz Eylul University, Izmir (Turkey); Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr [Dokuz Eylul University Rectorate, Izmir (Turkey)

    2016-04-18

    The properties of the soil column down to bedrock must be described accurately and reliably in order to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance contrast between soil and bedrock. To capture this change, the shear-wave velocity and thickness of the layers above bedrock are needed first. Shear-wave velocity can be obtained by inversion of Rayleigh-wave dispersion curves obtained from surface wave methods (MASW - Multichannel Analysis of Surface Waves, ReMi - Refraction Microtremor, SPAC - Spatial Autocorrelation). While the investigation depth of active-source surveys is limited, passive-source methods reach depths that active sources cannot. The ReMi method is used to determine layer thickness and velocity down to about 100 m using seismic refraction measurement systems. With SPAC, which is easily applied under the conditions that restrict seismic surveying in a city, the investigation depth depends on the array radius. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh-wave dispersion curves obtained from active- and passive-source methods. In this study, surface wave data were collected with MASW, ReMi and SPAC measurements in the Bornova region of İzmir. The dispersion curves obtained from the surface wave methods were combined over a wide frequency band and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was checked by comparing the theoretical transfer function obtained from the soil parameters with the observed soil transfer function from the Nakamura technique and examining the fit between these functions. Vs values range between 200 and 830 m/s and the engineering bedrock (Vs > 760 m/s) depth is approximately 150 m.
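
    As a rough consistency check of the kind the transfer-function comparison implies, the quarter-wavelength rule gives the fundamental site frequency from an average Vs and the bedrock depth; the average velocity below is an assumed value within the reported 200-830 m/s range, not a result from the paper.

```python
# Quarter-wavelength estimate of the fundamental site frequency, f0 = Vs / (4 H),
# for a soil column of average shear-wave velocity Vs over engineering bedrock.
vs_avg = 400.0      # m/s, assumed average within the reported 200-830 m/s profile
h_bedrock = 150.0   # m, reported engineering-bedrock depth
f0 = vs_avg / (4.0 * h_bedrock)
print(f"f0 = {f0:.2f} Hz")   # about 0.67 Hz for these assumed values
```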

  1. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    Science.gov (United States)

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

    The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency, η_t, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of η_t by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine η_t, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of η_t because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the η_t values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of η_t values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for
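
    The bookkeeping behind η_t can be written down in a few lines; the sketch below uses the simple hole-scavenger-style ratio of two photocurrents with invented numbers, not the paper's IMPS analysis.

```python
def charge_transfer_efficiency(j_water: float, j_total_surface: float) -> float:
    """eta_t = (hole flux oxidizing water) / (total hole flux reaching the surface).
    In a hole-scavenger measurement the denominator is approximated by the
    photocurrent with a fast scavenger, i.e. with surface recombination suppressed."""
    return j_water / j_total_surface

# Illustrative photocurrents in mA/cm^2 (not taken from the paper)
j_h2o, j_scavenger = 0.42, 0.70
print(f"eta_t = {charge_transfer_efficiency(j_h2o, j_scavenger):.2f}")   # 0.60
```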

  2. Possibility of gas flow measurements using ionization produced by radioactive sources. Performance obtained using continuous and pulsed ionization; Etude des possibilites de mesure des debits gazeux par l'ionisation creee au moyen de sources radioactives performances obtenues par ionisation continue et par ionisation pulsee

    Energy Technology Data Exchange (ETDEWEB)

    Toudoire, B. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires

    1969-07-01

    Two methods for measuring gas flow have been studied, based on the ionization of the fluid by a radioactive source. In the first one, called the continuous method, use is made of the relationship between the flow and the ionic density at a point situated down-stream from the ionizing source. In the second method, called 'pulsed', the time for a burst of ions to pass between two points in the circuit is measured. An attempt has been made to predict and to justify theoretically the experimental results, and to determine to what extent these methods can provide absolute measurements or measurements requiring a calibration using known gas flows. These methods are characterized by the absence of moving parts or of parts under reduced pressure and can yield results with an accuracy of between a few per cent to a few tenths of a per cent. The information, provided in either analog or digital form, can be adapted for use in servo-mechanisms or automatic systems. Two applications of an industrial type are described; they concern gas-flow measurements in a railway braking circuit, and in tubes of 20 and 30 cm diameter. (author)

  4. Analysis of an Internet Community about Pneumothorax and the Importance of Accurate Information about the Disease.

    Science.gov (United States)

    Kim, Bong Jun; Lee, Sungsoo

    2018-04-01

    The huge improvements in the speed of data transmission and the increasing amount of data available as the Internet has expanded have made it easy to obtain information about any disease. Since pneumothorax frequently occurs in young adolescents, patients often search the Internet for information on pneumothorax. This study analyzed an Internet community for exchanging information on pneumothorax, with an emphasis on the importance of accurate information and doctors' role in providing such information. This study assessed 599,178 visitors to the Internet community from June 2008 to April 2017. There was an average of 190 visitors, 2.2 posts, and 4.5 replies per day. A total of 6,513 posts were made, and 63.3% of them included questions about the disease. The visitors mostly searched for terms such as 'pneumothorax,' 'recurrent pneumothorax,' 'pneumothorax operation,' and 'obtaining a medical certification of having been diagnosed with pneumothorax.' However, 22% of the pneumothorax-related posts by visitors contained inaccurate information. Internet communities can be an important source of information. However, incorrect information about a disease can be harmful for patients. We, as doctors, should try to provide more in-depth information about diseases to patients and to disseminate accurate information about diseases in Internet communities.

  5. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels of the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described.

  6. Accurate Ne-heavier rare gas interatomic potentials

    International Nuclear Information System (INIS)

    Candori, R.; Pirani, F.; Vecchiocattivi, F.

    1983-01-01

    Accurate interatomic potential curves for Ne-heavier rare gas systems are obtained by a multiproperty analysis. The curves are given via a parametric function which consists of a modified Dunham expansion connected at long range with the van der Waals expansion. The experimental properties considered in the analysis are the differential scattering cross sections at two different collision energies, the integral cross sections in the glory energy range and the second virial coefficients. The transport properties are considered indirectly by using the potential energy values recently obtained by inversion of the transport coefficients. (author)

  7. The Chandra Source Catalog : Automated Source Correlation

    Science.gov (United States)

    Hain, Roger; Evans, I. N.; Evans, J. D.; Glotfelty, K. J.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McCollough, M. L.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Nowak, M. A.; Plummer, D. A.; Primini, F. A.; Refsdal, B. L.; Rots, A. H.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-01-01

    Chandra Source Catalog (CSC) master source pipeline processing seeks to automatically detect sources and compute their properties. Since Chandra is a pointed mission and not a sky survey, different sky regions are observed for a different number of times at varying orientations, resolutions, and other heterogeneous conditions. While this provides an opportunity to collect data from a potentially large number of observing passes, it also creates challenges in determining the best way to combine different detection results for the most accurate characterization of the detected sources. The CSC master source pipeline correlates data from multiple observations by updating existing cataloged source information with new data from the same sky region as they become available. This process sometimes leads to relatively straightforward conclusions, such as when single sources from two observations are similar in size and position. Other observation results require more logic to combine, such as one observation finding a single, large source and another identifying multiple, smaller sources at the same position. We present examples of different overlapping source detections processed in the current version of the CSC master source pipeline. We explain how they are resolved into entries in the master source database, and examine the challenges of computing source properties for the same source detected multiple times. Future enhancements are also discussed. This work is supported by NASA contract NAS8-03060 (CXC).

  8. A method of accurate determination of voltage stability margin

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology, Wroclaw (Poland); Klimek, A. [AREVA Transmission and Distribution, Stafford (United Kingdom)

    2008-07-01

    During a developing power system disturbance, voltage instability at the receiving substations often contributes to deteriorating system stability, which eventually may lead to severe blackouts. The voltage stability margin at receiving substations may be used to determine measures to prevent voltage collapse, primarily by operating or blocking the transformer tap changing device, or by load shedding. The best measure of the stability margin is the actual load to source impedance ratio and its critical value, which is unity. This paper presented an accurate method of calculating the load to source impedance ratio, derived from the Thevenin equivalent circuit of the system, which led to calculation of the stability margin. The paper described the calculation of the load to source impedance ratio including the supporting equations. The calculation was based on the very definition of voltage stability, which says that system stability is maintained as long as the change of power that follows an increase of admittance is positive. The testing of the stability margin assessment method was performed by simulation for a number of power network structures and simulation scenarios. Results of the simulations revealed that this method is accurate and stable for all possible events occurring downstream of the device location. 3 refs., 8 figs.
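
    A minimal sketch of the underlying idea, assuming the textbook two-measurement Thevenin identification rather than the paper's specific tracking method; the voltage and current phasors are illustrative per-unit values.

```python
import numpy as np

# Two load-bus measurements (voltage and current phasors, per unit) taken at
# different load levels; values are illustrative. With V = E - Z_th * I at both
# operating points, the Thevenin source E and impedance Z_th follow directly.
V1, I1 = 0.98 * np.exp(1j * 0.00), 0.60 * np.exp(-1j * 0.30)
V2, I2 = 0.95 * np.exp(-1j * 0.02), 0.80 * np.exp(-1j * 0.32)

Z_th = (V1 - V2) / (I2 - I1)       # Thevenin (source) impedance
E_th = V1 + Z_th * I1              # Thevenin EMF
Z_load = V2 / I2                   # apparent load impedance at the second point

margin = abs(Z_load) / abs(Z_th)   # load-to-source impedance ratio
print(f"|Z_load| / |Z_th| = {margin:.2f} (voltage collapse is approached as this ratio falls to 1)")
```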

  9. Experimental results obtained at GANIL

    International Nuclear Information System (INIS)

    Borrel, V.

    1993-01-01

    A review of experimental results obtained at GANIL on the study of nuclear structure and nuclear reactions with secondary radioactive beams is presented. Mass measurements by means of the GANIL cyclotrons are described. The possibilities of GANIL/LISE3 for the production and separation of radioactive beams are illustrated through a large variety of experiments. (author). 19 refs., 8 figs

  10. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438 ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  11. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision. inertial tracking and low-latency

  12. Accurate activity recognition in a home setting

    NARCIS (Netherlands)

    van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.

    2008-01-01

    A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its

  13. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single line trace profilometer, which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains only the low-order form error, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  14. Truncated States Obtained by Iteration

    International Nuclear Information System (INIS)

    Cardoso, W. B.; Almeida, N. G. de

    2008-01-01

    We introduce the concept of truncated states obtained via iterative processes (TSI) and study its statistical features, making an analogy with dynamical systems theory (DST). As a specific example, we have studied TSI for the doubling and the logistic functions, which are standard functions in studying chaos. TSI for both the doubling and logistic functions exhibit certain similar patterns when their statistical features are compared from the point of view of DST

  15. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases in a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, though the step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.

  16. Reliability of "Google" for obtaining medical information

    Directory of Open Access Journals (Sweden)

    Mihir Kothari

    2015-01-01

    Full Text Available The Internet is used by many patients to obtain relevant medical information. We assessed the impact of a "Google" search on the knowledge of the parents whose ward suffered from squint. In 21 consecutive patients, the "Google" search improved the mean score of correct answers from 47% to 62%. We found that the "Google" search was a useful and reliable source of information for the patients with regard to the etiopathogenesis of the disease and the problems it causes. The Internet-based information, however, was incomplete and unreliable with regard to treatment of the disease.

  17. An accurate Rb density measurement method for a plasma wakefield accelerator experiment using a novel Rb reservoir

    CERN Document Server

    Öz, E.; Muggli, P.

    2016-01-01

    A method to accurately measure the density of Rb vapor is described. We plan on using this method for the Advanced Wakefield (AWAKE) project at CERN, which will be the world's first proton-driven plasma wakefield experiment. The method is similar to the hook method and has been described in great detail in the work by W. Tendell Hill et al. In this method a cosine fit is applied to the interferogram to obtain a relative accuracy on the order of 1% for the vapor density-length product. A single-mode, fiber-based Mach-Zehnder interferometer will be built and used near the ends of the 10 meter-long AWAKE plasma source to be able to make an accurate relative density measurement between these two locations. This can then be used to infer the vapor density gradient along the AWAKE plasma source and also to change it to the value desired for the plasma wakefield experiment. Here we describe the plan in detail and show preliminary results obtained using a prot...
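
    The cosine-fit step can be sketched as a generic least-squares fit; the fringes below are synthetic, and converting the fitted phase into a density-length product is instrument-specific and not shown.

```python
import numpy as np
from scipy.optimize import curve_fit

def fringe(x, amp, freq, phase, offset):
    """Generic cosine model for an interferogram trace."""
    return offset + amp * np.cos(freq * x + phase)

# Synthetic fringes standing in for a measured interferogram
rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 500)
y = fringe(x, 1.0, 4.2, 0.7, 2.0) + 0.05 * rng.normal(size=x.size)

popt, pcov = curve_fit(fringe, x, y, p0=[1.0, 4.0, 0.0, 2.0])
phase, phase_err = popt[2], np.sqrt(np.diag(pcov))[2]
print(f"fitted phase = {phase:.4f} +/- {phase_err:.4f} rad")
```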

  18. Long characteristics with piecewise linear sources designed for unstructured grids

    International Nuclear Information System (INIS)

    Pandya, Tara M.; Adams, Marvin L.; Hawkins, W. Daryl

    2011-01-01

    We present a method of long characteristics (MOC or LC) that employs a piece-wise linear (PWL) finite-element representation of the total source in each cell. PWL basis functions were designed to allow discontinuous finite-element methods (DFEMs) and characteristic methods to obtain accurate solutions in optically thick diffusive regions with polygonal (2D) or polyhedral (3D) cells. Our work is motivated by the following observations. Our PWL-LC should reproduce the excellent diffusion-limit behavior of the PWL DFEM but should be more accurate in streaming regions. As an LC method, it also offers the potential for improved performance of transport sweeps on massively parallel architectures, because it allows face-based and track-based sweeps in addition to cell-based sweeps. We have implemented the two-dimensional (x, y) polygonal-cell version of this method in the parallel transport code PDT. The rectangular-grid results shown here demonstrate that the method with PWL sources is accurate for thick diffusive problems, for which methods with piece-wise constant or higher-order polynomial sources fail. Our results also demonstrate that the PWL-LC method is more accurate than the PWL-DFEM in streaming dominated steady-state problems. We discuss options for time discretization and present results from time-dependent problems that illustrate pros and cons of some options. Our results suggest that the most accurate solutions will be obtained via long characteristics in space and time but that less memory-intensive treatments can provide MOC solutions that are at least as robust and accurate as those obtained by PWL-DFEM. (author)
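
    For orientation, the basic long-characteristics update along one track with a flat (piecewise-constant) source is sketched below; the paper's contribution, the piecewise-linear source representation and its DFEM basis, is not reproduced.

```python
import numpy as np

def sweep_track(psi_in, sigma_t, q, s):
    """Attenuate the angular flux along one characteristic track whose segments
    have lengths s, total cross sections sigma_t and flat angular sources q:
    psi_out = psi_in * exp(-sigma_t * s) + (q / sigma_t) * (1 - exp(-sigma_t * s)).
    The paper replaces the flat source with a piecewise-linear one."""
    psi, outgoing = psi_in, []
    for st, qi, si in zip(sigma_t, q, s):
        att = np.exp(-st * si)
        psi = psi * att + (qi / st) * (1.0 - att)
        outgoing.append(psi)
    return np.array(outgoing)

# Three cells crossed by one track (illustrative data)
print(sweep_track(1.0, np.array([0.5, 2.0, 0.5]),
                  np.array([0.2, 1.0, 0.2]), np.array([1.0, 0.5, 1.0])))
```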

  19. Obtaining zircaloy powder through hydriding

    International Nuclear Information System (INIS)

    Dupim, Ivaldete da Silva; Moreira, Joao M.L.

    2009-01-01

    Zirconium alloys are good options for the metal matrix in dispersion fuels for power reactors due to their low thermal neutron absorption cross-section, good corrosion resistance, good mechanical strength and high thermal conductivity. A necessary step for obtaining such fuels is producing Zr alloy powder for the metal matrix composite material. This article presents results from Zircaloy-4 hydrogenation tests with the purpose of embrittling the alloy as a first step for comminuting. Several hydrogenation tests were performed and studied through thermogravimetric analysis. They included H2 pressures of 25 and 50 kPa and temperatures ranging from 20 to 670 deg C. X-ray diffraction analysis showed in the hydrogenated samples the predominant presence of ZrH2 and some ZrO2. Some kinetics parameters for the Zircaloy-4 hydrogenation reaction were obtained: the time required to reach the equilibrium state at the dwell temperature was about 100 minutes; the hydrogenation rate during the heating process from 20 to 670 deg C was about 21 mg/h, and at the constant temperature of 670 deg C the hydriding rate was about 1.15 mg/h. The hydrogenation rate is largest during the heating process and most of the hydrogenation occurs during this period. After hydrogenation, the samples could easily be comminuted, indicating that this is a possible technology to obtain Zircaloy powder. The results show that only a few minutes of hydrogenation are necessary to reach the hydride levels required for comminuting the Zircaloy. The final hydride stoichiometry was between 2.7 and 2.8 H for each Zr atom in the sample. (author)

  20. Time-Accurate Simulations of Synthetic Jet-Based Flow Control for An Axisymmetric Spinning Body

    National Research Council Canada - National Science Library

    Sahu, Jubaraj

    2004-01-01

    .... A time-accurate Navier-Stokes computational technique has been used to obtain numerical solutions for the unsteady jet-interaction flow field for a spinning projectile at a subsonic speed, Mach...

  1. Atom interferometry experiments with lithium. Accurate measurement of the electric polarizability

    International Nuclear Information System (INIS)

    Miffre, A.

    2005-06-01

    Atom interferometers are very sensitive tools for making precise measurements of physical quantities. This study presents a measurement of the static electric polarizability of lithium by atom interferometry. Our result, α = (24.33 ± 0.16) x 10^-30 m^3, improves by a factor of 3 on the most accurate previous measurements of this quantity. This work describes the tuning and the operation of a Mach-Zehnder atom interferometer in detail. The two interfering arms are separated by the elastic diffraction of the atomic wave by a laser standing wave, almost resonant with the first resonance transition of the lithium atom. A set of experimental techniques, often complicated to implement, is necessary to build the experimental set-up. After a detailed study of the atom source (a supersonic beam of lithium seeded in argon), we present our experimental atom signals, which exhibit a very high fringe visibility, up to 84.5 % for first order diffraction. A wide variety of signals has been observed by diffraction of the bosonic isotope at higher diffraction orders and by diffraction of the less abundant fermionic isotope. The quality of these signals is then used to perform very accurate phase measurements. A first experiment investigates how the atom interferometer signals are modified by a magnetic field gradient. An absolute measurement of the electric polarizability of the lithium atom is then achieved by applying a static electric field to one of the two interfering arms, separated by only 90 micrometers. The construction of such a capacitor, its alignment in the experimental set-up and its operation are fully detailed. We obtain a very accurate measurement of the induced Lo Surdo - Stark phase shift (0.07 % precision). For this first measurement, the final uncertainty on the electric polarizability of lithium is only 0.66 %, and is dominated by the uncertainty on the atom beam mean velocity, so that a further reduction of the uncertainty can be expected. (author)

  2. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.

  3. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...

  4. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    ...based and size-based estimates is able to accurately plan, launch, and execute on schedule. (Bob Sinclair, Chris Rickets, Brad Hodgins, NAWCWD)

  5. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
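
    A sketch of the pipeline described above, assuming scikit-learn's GaussianMixture and DecisionTreeClassifier and synthetic job data; the features and the equal-responsibility threshold rule are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)

# Synthetic job log: log-runtimes drawn from two overlapping populations, plus
# two placeholder submission-time features (one correlated, one not).
log_rt = np.concatenate([rng.normal(2.0, 0.5, 500), rng.normal(6.0, 1.0, 500)])
features = np.column_stack([log_rt + rng.normal(0.0, 1.0, 1000),   # e.g. requested wallclock
                            rng.uniform(0.0, 24.0, 1000)])         # e.g. hour of day

# 1) Fit a two-component Gaussian mixture to the runtimes and take the point of
#    equal responsibility between the components as the short/long threshold.
gmm = GaussianMixture(n_components=2, random_state=0).fit(log_rt.reshape(-1, 1))
grid = np.linspace(log_rt.min(), log_rt.max(), 2000).reshape(-1, 1)
resp = gmm.predict_proba(grid)
threshold = grid[np.argmin(np.abs(resp[:, 0] - resp[:, 1]))][0]
labels = (log_rt > threshold).astype(int)          # 0 = short, 1 = long

# 2) Train a CART-style classifier to predict the class from the features.
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(features, labels)
print(f"threshold (log runtime) = {threshold:.2f}, "
      f"training accuracy = {clf.score(features, labels):.2f}")
```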

  6. Source rock

    Directory of Open Access Journals (Sweden)

    Abubakr F. Makky

    2014-03-01

    Full Text Available The West Beni Suef Concession is located in the western part of the Beni Suef Basin, a relatively under-explored basin about 150 km south of Cairo. The major goal of this study is to evaluate the source rock using different techniques such as Rock-Eval pyrolysis, vitrinite reflectance (%Ro) and well log data for some Cretaceous sequences, including the Abu Roash (E, F and G) members and the Kharita and Betty formations. The BasinMod 1D program is used in this study to construct the burial history and calculate the levels of thermal maturity of the Fayoum-1X well, based on calibration of measured %Ro and Tmax against the calculated %Ro model. The Total Organic Carbon (TOC) content calculated from well log data, compared with the TOC measured by Rock-Eval pyrolysis in the Fayoum-1X well, matches well for the shale source rock but gives high values for the limestone source rock. For that reason, a new model is derived from the well log data to calculate the TOC content accurately for the limestone source rock in the study area. The organic matter in the Abu Roash (F) member is fair to excellent and capable of generating a significant amount of hydrocarbons (oil prone), produced from mixed type I/II kerogen. The generation potential of kerogen in the Abu Roash (E and G) members and the Betty Formation ranges from poor to fair, generating oil- and gas-prone hydrocarbons (mixed type II/III kerogen). Finally, the kerogen (type III) of the Kharita Formation has poor to very good generation potential and mainly produces gas. Thermal maturation based on the measured %Ro, the calculated %Ro model, Tmax and the production index (PI) indicates that the Abu Roash (F) member is at the onset of oil generation, whereas the Abu Roash (E and G) members and the Kharita and Betty formations have entered the peak of oil generation.
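
    The paper derives its own limestone-specific model, which is not reproduced here; purely for orientation, the widely used Passey "delta log R" relation for estimating TOC from resistivity and sonic logs is sketched below with assumed baselines and an assumed maturity level.

```python
import numpy as np

def toc_delta_log_r(resistivity, sonic, res_baseline, sonic_baseline, lom):
    """Passey 'delta log R' estimate of total organic carbon (wt%) from
    resistivity (ohm.m) and sonic (us/ft) logs. Generic published relation,
    NOT the limestone-specific model derived in the paper; the baselines and
    the level of organic metamorphism (LOM) must be chosen for the interval."""
    dlog_r = np.log10(resistivity / res_baseline) + 0.02 * (sonic - sonic_baseline)
    return dlog_r * 10.0 ** (2.297 - 0.1688 * lom)

# Purely illustrative log values (not from the Fayoum-1X well)
print(toc_delta_log_r(resistivity=20.0, sonic=95.0,
                      res_baseline=5.0, sonic_baseline=80.0, lom=9.0))
```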

  7. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  8. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  9. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on the two example fault trees
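
    The CUTREE implementation is not available here, but the contrast between the common rare-event (min-cut upper bound) approximation and a Monte Carlo estimate of the top-event probability from minimal cut sets can be sketched as follows; the cut sets and basic-event probabilities are hypothetical.

      import numpy as np

      # Hypothetical minimal cut sets over basic events 0..4 and their failure probabilities
      cut_sets = [(0, 1), (1, 2), (3,), (2, 4)]
      p = np.array([0.05, 0.10, 0.02, 0.01, 0.20])

      # Min-cut upper bound (the usual MCS approximation)
      mcub = 1.0 - np.prod([1.0 - np.prod(p[list(cs)]) for cs in cut_sets])

      # Monte Carlo estimate of the exact top-event probability
      rng = np.random.default_rng(42)
      states = rng.random((200_000, p.size)) < p       # sampled basic-event failures
      top = np.zeros(states.shape[0], dtype=bool)
      for cs in cut_sets:
          top |= states[:, list(cs)].all(axis=1)       # top event: any cut set fully failed
      print(f"min-cut upper bound: {mcub:.5f}, Monte Carlo: {top.mean():.5f}")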

  10. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts have been tested and compared, to determine which was the most accurate to monitor plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC= 0.95; AP SCC=0.99; vGRF SCC=0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for a plantar pressure recording outside clinical or laboratory settings, for long time monitoring, real time feedback or for whatever activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
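
    The barycenter of pressures compared against the force-plate centre of pressure is simply the pressure-weighted mean of the sensor coordinates; a minimal sketch follows, with illustrative (not measured) sensor positions and readings.

      import numpy as np

      def barycenter_of_pressure(xy, pressures):
          """Pressure-weighted mean position of discrete insole sensors (same units as xy)."""
          p = np.asarray(pressures, dtype=float)
          return (np.asarray(xy, dtype=float) * p[:, None]).sum(axis=0) / p.sum()

      # Hypothetical layout: (medio-lateral, antero-posterior) positions in cm and kPa readings
      xy = [(1.0, 2.0), (3.5, 2.5), (2.0, 9.0), (4.0, 10.0), (3.0, 16.0)]
      pressures = [40.0, 55.0, 20.0, 25.0, 80.0]
      print(barycenter_of_pressure(xy, pressures))     # BoP in the ML/AP plane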

  11. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
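
    A minimal sketch of the synergy-extrapolation idea, assuming non-negative matrix factorization for the synergy analysis and non-negative least squares for reconstructing the excluded channels; the data are random placeholders and the study's exact processing steps are not reproduced.

      import numpy as np
      from sklearn.decomposition import NMF
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      emg = rng.random((1000, 24))                 # time x muscles (placeholder excitations)
      included, excluded = emg[:, :8], emg[:, 8:]

      # Extract time-varying synergy excitations from the eight included muscles
      model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
      synergy_excitations = model.fit_transform(included)      # time x synergies

      # Reconstruct each excluded muscle from the synergy excitations (non-negative weights)
      recon = np.column_stack([synergy_excitations @ nnls(synergy_excitations, y)[0]
                               for y in excluded.T])
      vaf = 1.0 - np.sum((excluded - recon) ** 2) / np.sum(excluded ** 2)
      print(f"extrapolation VAF: {100 * vaf:.1f}%")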

  12. A Modeling Approach to Enhance Animal-Obtained Oceanographic Data Geo- Position

    Science.gov (United States)

    Tremblay, Y.; Robinson, P.; Weise, M. J.; Costa, D. P.

    2006-12-01

    Diving animals are increasingly being used as platforms to collect oceanographic data such as CTD profiles. Animal borne sensors provide an amazing amount of data that have to be spatially referenced. Because of technical limitations geo-position of these data mostly comes from the interpolation of locations obtained through the ARGOS positioning system. This system lacks spatio-temporal resolution compared to the Global Positioning System (GPS) and therefore, the positions of these oceanographic data are not well defined. A consequence of this is that many data collected in coastal regions are discarded, because many casts' records fell on land. Using modeling techniques, we propose a method to deal with this problem. The method is rather intuitive, and instead of deleting unreasonable or low-quality locations, it uses them by taking into account their lack of precision as a source of information. In a similar way, coastlines are used as sources of information, because marine animals do not travel over land. The method was evaluated using simultaneously obtained tracks with the Argos and GPS system. The tracks obtained from this method are considerably enhanced and allow a more accurate geo-reference of oceanographic data. In addition, the method provides a way to evaluate spatial errors for each cast that is not otherwise possible with classical filtering methods.

  13. Drugs obtained by biotechnology processing

    Directory of Open Access Journals (Sweden)

    Hugo Almeida

    2011-06-01

    Full Text Available In recent years, the number of drugs of biotechnological origin available for many different diseases has increased exponentially, including different types of cancer, diabetes mellitus, infectious diseases (e.g. the AIDS virus/HIV), as well as cardiovascular, neurological, respiratory, and autoimmune diseases, among others. The pharmaceutical industry has used different technologies to obtain new and promising active ingredients, as exemplified by the fermentation technique, the recombinant DNA technique and the hybridoma technique. The expiry of the patents of the first drugs of biotechnological origin and the consequent emergence of biosimilar products have posed various questions to health authorities worldwide regarding the definition, framework, and requirements for authorization to market such products.

  14. Cyanophycin production from nitrogen-containing chemicals obtained from biomass

    NARCIS (Netherlands)

    Elbahloul, Y.A.K.B.; Scott, E.L.; Mooibroek, H.; Sanders, J.P.M.; Obsts, M.; Steinbüchel, A.

    2006-01-01

    The present invention relates to fermentation processes for the production of cyanophycin in a microorganism whereby a plant-derived nitrogen source is converted by the microorganism into cyanophycin. The plant-derived nitrogen source preferably is a process stream being obtained in the processing

  15. 48 CFR 509.105-1 - Obtaining information.

    Science.gov (United States)

    2010-10-01

    ... Obtaining information. (a) From a prospective contractor. FAR 9.105-1 lists a number of sources of..., Contractor's Qualifications and Financial Information, but only after exhausting other available sources of... finance, and auditors before determining that an offeror is responsible. [74 FR 12732, Mar. 25, 2009] ...

  16. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
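
    The authors' random-sampling segmentation is not reproduced here; as a rough illustration of a generic vectorization pipeline (thresholded drawing, skeletonization, then line extraction), one could do something like the following with scikit-image, which swaps in a probabilistic Hough transform for the segmentation step. The synthetic stroke and all parameters are illustrative.

      import numpy as np
      from skimage.morphology import skeletonize
      from skimage.transform import probabilistic_hough_line

      # Placeholder binary drawing: a thick diagonal stroke on a blank page
      image = np.zeros((200, 200), dtype=bool)
      for offset in range(-3, 4):
          rr = np.arange(20, 180)
          image[rr, np.clip(rr + offset, 0, 199)] = True

      skeleton = skeletonize(image)                    # one-pixel-wide centerlines
      segments = probabilistic_hough_line(skeleton, threshold=10,
                                          line_length=30, line_gap=3)
      for (x0, y0), (x1, y1) in segments:              # extracted vector primitives
          print(f"segment from ({x0}, {y0}) to ({x1}, {y1})")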

  17. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting glimpse into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  18. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

    Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance

  19. Arbitrarily accurate twin composite π-pulse sequences

    Science.gov (United States)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.

  20. An easy way to measure accurately the direct magnetoelectric voltage coefficient of thin film devices

    Energy Technology Data Exchange (ETDEWEB)

    Poullain, Gilles, E-mail: gilles.poullain@ensicaen.fr; More-Chevalier, Joris; Cibert, Christophe; Bouregba, Rachid

    2017-01-15

    TbxDy1−xFe2/Pt/Pb(Zrx,Ti1−x)O3 thin films were grown on a Pt/TiO2/SiO2/Si substrate by multi-target sputtering. The magnetoelectric voltage coefficient α^H_ME was determined at room temperature using a lock-in amplifier. By adding, in series in the circuit, a capacitor of the same value as that of the device under test, we were able to demonstrate that the magnetoelectric device behaves as a voltage source. Furthermore, a simple way to subtract the stray voltage arising from the flow of eddy currents in the measurement set-up is proposed. This allows the easy and accurate determination of the true magnetoelectric voltage coefficient. A large α^H_ME of 8.3 V/(cm·Oe) was thus obtained for a Terfenol-D/Pt/PZT thin film device, without a DC magnetic field or mechanical resonance. - Highlights: • The magnetoelectric device behaves as a voltage source. • A simple way to subtract eddy currents during the measurement is proposed.

  1. Optimization of light source parameters in the photodynamic therapy of heterogeneous prostate

    International Nuclear Information System (INIS)

    Li Jun; Altschuler, Martin D; Hahn, Stephen M; Zhu, Timothy C

    2008-01-01

    The three-dimensional (3D) heterogeneous distributions of optical properties in a patient prostate can now be measured in vivo. Such data can be used to obtain a more accurate light-fluence kernel. (For specified sources and points, the kernel gives the fluence delivered to a point by a source of unit strength.) In turn, the kernel can be used to solve the inverse problem that determines the source strengths needed to deliver a prescribed photodynamic therapy (PDT) dose (or light-fluence) distribution within the prostate (assuming uniform drug concentration). We have developed and tested computational procedures to use the new heterogeneous data to optimize delivered light-fluence. New problems arise, however, in quickly obtaining an accurate kernel following the insertion of interstitial light sources and data acquisition. (1) The light-fluence kernel must be calculated in 3D and separately for each light source, which increases kernel size. (2) An accurate kernel for light scattering in a heterogeneous medium requires ray tracing and volume partitioning, thus significant calculation time. To address these problems, two different kernels were examined and compared for speed of creation and accuracy of dose. Kernels derived more quickly involve simpler algorithms. Our goal is to achieve optimal dose planning with patient-specific heterogeneous optical data applied through accurate kernels, all within clinical times. The optimization process is restricted to accepting the given (interstitially inserted) sources, and determining the best source strengths with which to obtain a prescribed dose. The Cimmino feasibility algorithm is used for this purpose. The dose distribution and source weights obtained for each kernel are analyzed. In clinical use, optimization will also be performed prior to source insertion to obtain initial source positions, source lengths and source weights, but with the assumption of homogeneous optical properties. For this reason, we compare the
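
    The record is truncated above; as a minimal sketch of how a Cimmino-style simultaneous-projection iteration can find non-negative source strengths that satisfy prescribed fluence bounds, consider the following. The kernel matrix, prescription band, relaxation parameter and iteration count are random placeholders, not patient data or the paper's implementation.

      import numpy as np

      def cimmino(kernel, lower, upper, iters=2000, relax=1.8):
          """Simultaneous-projection (Cimmino) iteration for non-negative source weights w
          such that lower <= kernel @ w <= upper at every dose point."""
          w = np.ones(kernel.shape[1])
          row_norm2 = (kernel ** 2).sum(axis=1)
          for _ in range(iters):
              dose = kernel @ w
              residual = np.clip(lower - dose, 0.0, None) - np.clip(dose - upper, 0.0, None)
              step = (kernel * (residual / row_norm2)[:, None]).mean(axis=0)
              w = np.maximum(w + relax * step, 0.0)    # project back onto w >= 0
          return w

      # Random placeholder kernel and a prescription band around an achievable dose
      rng = np.random.default_rng(1)
      K = rng.random((200, 12))                        # dose points x light sources
      target = K @ rng.uniform(0.5, 1.5, size=12)      # dose from some feasible weighting
      w = cimmino(K, lower=0.9 * target, upper=1.1 * target)
      dose = K @ w
      violation = np.maximum(0.9 * target - dose, dose - 1.1 * target).max()
      print(f"worst-case violation of the prescription band: {violation:.4f}")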

  2. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users would be larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
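
    A rough sketch of the directed-similarity idea (not the exact HDCF formulation from the paper): make user similarity depend on direction by normalizing the overlap of selected items by one particular user's degree, then rank items by the directed similarities of the users who collected them. The user-item matrix and the chosen normalization are placeholders.

      import numpy as np

      rng = np.random.default_rng(0)
      A = (rng.random((50, 200)) < 0.05).astype(float)   # user-item selection matrix

      overlap = A @ A.T                                  # items in common between users
      degree = A.sum(axis=1)
      directed_sim = overlap / np.maximum(degree, 1.0)[:, None]   # s[i, j] normalized by user i's degree

      def recommend(user, top_n=5):
          """Score unseen items by the directed similarity from `user` to the users who hold them."""
          scores = directed_sim[user] @ A                # similarity-weighted popularity
          scores[A[user] > 0] = -np.inf                  # do not re-recommend collected items
          return np.argsort(scores)[::-1][:top_n]

      print(recommend(user=0))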

  3. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for liner performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.

  4. Caracterización de la Funcionalidad Tecnológica de una Fuente Rica en Fibra Dietaria Obtenida a partir de Cáscara de Plátano / Characterization of Technological Functionality of Dietary Fiber Rich Source Obtained from Plantain Peel

    Directory of Open Access Journals (Sweden)

    Alarcón García Miguel Ángel

    2013-08-01

    Full Text Available With the aim of obtaining a dietary fiber source and characterizing it, green plantain peels (Musa AAB) were subjected to an industrial process involving selection, washing, chopping, drying (to a final moisture content of 5%) and grinding steps. The complete process showed a yield of 2% of plantain peel fiber source (FSPP). Values for total dietary fiber (TDF; 46.79%), soluble dietary fiber (SDF; 1.68%) and insoluble dietary fiber (IDF; 45.12%) were determined. The resulting material was subjected to two temperature levels (environmental temperature, 20 °C, and the scalding temperature for meat products, 74 °C) and was characterized under these conditions in terms of water absorption capacity, oil absorption capacity, water holding capacity and organic molecule absorption capacity; these variables showed no statistically significant differences, with the exception of oil absorption capacity. It can therefore be concluded that the FSPP is a resource suitable for inclusion in meat-type food matrices.

  5. Accurate estimation of the RMS emittance from single current amplifier data

    International Nuclear Information System (INIS)

    Stockli, Martin P.; Welton, R.F.; Keller, R.; Letchford, A.P.; Thomae, R.W.; Thomason, J.W.G.

    2002-01-01

    This paper presents the SCUBEEx rms emittance analysis, a self-consistent, unbiased elliptical exclusion method, which combines traditional data-reduction methods with statistical methods to obtain accurate estimates for the rms emittance. Rather than considering individual data, the method tracks the average current density outside a well-selected, variable boundary to separate the measured beam halo from the background. The average outside current density is assumed to be part of a uniform background and not part of the particle beam. Therefore the average outside current is subtracted from the data before evaluating the rms emittance within the boundary. As the boundary area is increased, the average outside current and the inside rms emittance form plateaus when all data containing part of the particle beam are inside the boundary. These plateaus mark the smallest acceptable exclusion boundary and provide unbiased estimates for the average background and the rms emittance. Small, trendless variations within the plateaus allow for determining the uncertainties of the estimates caused by variations of the measured background outside the smallest acceptable exclusion boundary. The robustness of the method is established with complementary variations of the exclusion boundary. This paper presents a detailed comparison between traditional data reduction methods and SCUBEEx by analyzing two complementary sets of emittance data obtained with a Lawrence Berkeley National Laboratory and an ISIS H⁻ ion source
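
    A simplified sketch of the plateau idea behind SCUBEEx (not the production analysis): grow an elliptical exclusion boundary in phase space, treat the mean signal outside the boundary as a uniform background, subtract it, and watch the inside rms emittance level off. The synthetic beam, noise level and ellipse parameters are placeholders.

      import numpy as np

      rng = np.random.default_rng(3)
      x = np.linspace(-20, 20, 200)                    # position grid (mm)
      xp = np.linspace(-20, 20, 200)                   # divergence grid (mrad)
      X, XP = np.meshgrid(x, xp, indexing="ij")
      beam = np.exp(-(X**2 / 18 + XP**2 / 8))          # synthetic beam current density
      data = beam + 0.02 + 0.005 * rng.standard_normal(beam.shape)   # uniform background + noise

      def rms_emittance(d):
          w = np.clip(d, 0.0, None)
          xm, xpm = np.average(X, weights=w), np.average(XP, weights=w)
          vx = np.average((X - xm) ** 2, weights=w)
          vxp = np.average((XP - xpm) ** 2, weights=w)
          cov = np.average((X - xm) * (XP - xpm), weights=w)
          return np.sqrt(vx * vxp - cov**2)

      for scale in (1.0, 2.0, 3.0, 4.0, 5.0):          # growing elliptical exclusion boundary
          inside = (X / (3.0 * scale)) ** 2 + (XP / (2.0 * scale)) ** 2 <= 1.0
          background = data[~inside].mean()            # assumed uniform background outside
          emit = rms_emittance(np.where(inside, data - background, 0.0))
          print(f"scale {scale:.1f}: background {background:.4f}, rms emittance {emit:.3f}")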

  6. Can tritiated water-dilution space accurately predict total body water in chukar partridges?

    International Nuclear Information System (INIS)

    Crum, B.G.; Williams, J.B.; Nagy, K.A.

    1985-01-01

    Total body water (TBW) volumes determined from the dilution space of injected tritiated water have consistently overestimated actual water volumes (determined by desiccation to constant mass) in reptiles and mammals, but results for birds are controversial. We investigated potential errors in both the dilution method and the desiccation method in an attempt to resolve this controversy. Tritiated water dilution yielded an accurate measurement of water mass in vitro. However, in vivo, this method yielded a 4.6% overestimate of the amount of water (3.1% of live body mass) in chukar partridges, apparently largely because of loss of tritium from body water to sites of dissociable hydrogens on body solids. An additional source of overestimation (approximately 2% of body mass) was loss of tritium to the solids in blood samples during distillation of blood to obtain pure water for tritium analysis. Measuring tritium activity in plasma samples avoided this problem but required measurement of, and correction for, the dry matter content in plasma. Desiccation to constant mass by lyophilization or oven-drying also overestimated the amount of water actually in the bodies of chukar partridges by 1.4% of body mass, because these values included water adsorbed onto the outside of feathers. When desiccating defeathered carcasses, oven-drying at 70 degrees C yielded TBW values identical to those obtained from lyophilization, but TBW was overestimated (0.5% of body mass) by drying at 100 degrees C due to loss of organic substances as well as water

  7. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    International Nuclear Information System (INIS)

    Perley, R. A.; Butler, B. J.

    2013-01-01

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ∼5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
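
    The scale expresses the spectral flux density of each standard as a polynomial in the logarithm of frequency; a minimal evaluation sketch is below. The coefficients shown are placeholders for illustration only, not the published values for 3C286 or any other source.

      import numpy as np

      def flux_density(freq_ghz, coeffs):
          """Evaluate log10(S/Jy) = a0 + a1*log10(f) + a2*log10(f)**2 + ... and return S in Jy."""
          logf = np.log10(np.asarray(freq_ghz, dtype=float))
          logs = sum(a * logf**i for i, a in enumerate(coeffs))
          return 10.0 ** logs

      # Placeholder polynomial coefficients (illustrative only; use the published values)
      coeffs_example = [1.25, -0.46, -0.15, 0.03]
      for f in (1.4, 5.0, 15.0, 43.0):
          print(f"{f:5.1f} GHz: {flux_density(f, coeffs_example):6.3f} Jy")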

  8. WAYS OF OBTAINING FINANCING BY TOUR OPERATORS

    Directory of Open Access Journals (Sweden)

    CARLAN ADRIANA

    2015-12-01

    Full Text Available Romania is a country with high touristic potential that is not exploited to the maximum. In order to reach a high quality level of tourism, permanent development and modernization are needed, as well as the establishment of new businesses that conduct activities other than those already taking place in the country. The ways of obtaining funds are multiple, depending on individual needs. To develop tourism activities it is necessary to secure funding, which can come from various sources: self-financing, loans from banks or third parties, and grants offered by the European Union. There are many programs designed to support the development of tourism, such as the ROP, which allows operators to access grants in order to implement projects for the establishment and development of activity in the tourism field. The purpose of this article is to highlight funding opportunities for tour operators, to assist them in choosing the appropriate form of financing for their current activity or an activity they want to implement in the future, and to describe how to obtain the necessary funds from various sources.

  9. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  10. An accurate nonlinear Monte Carlo collision operator

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.

    1995-03-01

    A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws, and is equivalent to the exact Fokker-Planck operator by correctly reproducing the friction coefficient and diffusion tensor; in addition, it can effectively ensure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that the operator is practically applicable. It may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)

  11. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.

  12. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  13. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully incorporated and universally expressed. In addition, we have developed specific test patterns for the extraction of the model parameters, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  14. Study of geometry to obtain the volume fraction of multiphase flows using the MCNP-X code

    International Nuclear Information System (INIS)

    Peixoto, Philippe N.B.; Salgado, Cesar M.

    2015-01-01

    The gamma-ray attenuation technique is used in many works to obtain the volume fractions of multiphase flows in the oil industry, because it is a noninvasive technique with good precision. In these studies, various geometries with different flow regimes, material compositions, source-detector positions and types of source collimation are simulated. This work aims to evaluate how changes in geometry interfere with the results and to obtain the best measuring geometry for providing the volume fractions accurately, by evaluating different simulated geometries (varying the source-detector position, flow regime and homogeneity of the mixture) in the MCNP-X code. The study was performed for two biphasic material compositions (oil-water and oil-air) and two flow regimes (annular and smooth stratified), and the position of each material relative to the source and detector was varied. Another study, evaluating the interference of the homogeneity of the compositions in the results, was also conducted in order to verify the possibility of removing part of the composition and making a homogeneous blend using a mixer. All these variations were simulated with two different types of beam, a divergent beam and a pencil beam. From the simulated geometries, it was possible to compare the differences between the areas of the spectra generated for each model. The results indicate that the flow regime and the differences in the materials' densities interfere with the results, making it necessary to establish a specific simulation geometry for each flow regime. However, the simulations indicate that changing the type of source collimation does not affect the results, but improves the counting statistics, increasing the accuracy. (author)

  15. Study of geometry to obtain the volume fraction of multiphase flows using the MCNP-X code

    Energy Technology Data Exchange (ETDEWEB)

    Peixoto, Philippe N.B.; Salgado, Cesar M., E-mail: phbelache@hotmail.com, E-mail: otero@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)

    2015-07-01

    The gamma-ray attenuation technique is used in many works to obtain the volume fractions of multiphase flows in the oil industry, because it is a noninvasive technique with good precision. In these studies, various geometries with different flow regimes, material compositions, source-detector positions and types of source collimation are simulated. This work aims to evaluate how changes in geometry interfere with the results and to obtain the best measuring geometry for providing the volume fractions accurately, by evaluating different simulated geometries (varying the source-detector position, flow regime and homogeneity of the mixture) in the MCNP-X code. The study was performed for two biphasic material compositions (oil-water and oil-air) and two flow regimes (annular and smooth stratified), and the position of each material relative to the source and detector was varied. Another study, evaluating the interference of the homogeneity of the compositions in the results, was also conducted in order to verify the possibility of removing part of the composition and making a homogeneous blend using a mixer. All these variations were simulated with two different types of beam, a divergent beam and a pencil beam. From the simulated geometries, it was possible to compare the differences between the areas of the spectra generated for each model. The results indicate that the flow regime and the differences in the materials' densities interfere with the results, making it necessary to establish a specific simulation geometry for each flow regime. However, the simulations indicate that changing the type of source collimation does not affect the results, but improves the counting statistics, increasing the accuracy. (author)

  16. A highly accurate method for determination of dissolved oxygen: Gravimetric Winkler method

    International Nuclear Information System (INIS)

    Helm, Irja; Jalukse, Lauri; Leito, Ivo

    2012-01-01

    Highlights: ► Probably the most accurate method available for dissolved oxygen concentration measurement was developed. ► Careful analysis of uncertainty sources was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. ► This development enables more accurate calibration of dissolved oxygen sensors for routine analysis than has been possible before. - Abstract: A high-accuracy Winkler titration method has been developed for determination of dissolved oxygen concentration. Careful analysis of uncertainty sources relevant to the Winkler method was carried out and the method was optimized for minimizing all uncertainty sources as far as practical. The most important improvements were: gravimetric measurement of all solutions, pre-titration to minimize the effect of iodine volatilization, accurate amperometric end point detection and careful accounting for dissolved oxygen in the reagents. As a result, the developed method is possibly the most accurate method of determination of dissolved oxygen available. Depending on measurement conditions and on the dissolved oxygen concentration, the combined standard uncertainties of the method are in the range of 0.012–0.018 mg dm−3, corresponding to the k = 2 expanded uncertainty in the range of 0.023–0.035 mg dm−3 (0.27–0.38%, relative). This development enables more accurate calibration of electrochemical and optical dissolved oxygen sensors for routine analysis than has been possible before.
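
    The reported combined and expanded uncertainties follow the usual GUM-style combination of independent uncertainty components; a toy illustration is shown below, where the component names and magnitudes are made up rather than taken from the paper.

      import math

      # Hypothetical standard-uncertainty components for one measurement (mg/dm3)
      components = {"weighing": 0.004, "titrant": 0.008, "end point": 0.006, "reagent O2": 0.007}

      u_combined = math.sqrt(sum(u**2 for u in components.values()))   # root-sum-of-squares
      u_expanded = 2.0 * u_combined                                    # coverage factor k = 2
      print(f"u_c = {u_combined:.3f} mg/dm3, U(k=2) = {u_expanded:.3f} mg/dm3")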

  17. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  18. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: with accurate information travelers prefer the route in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. The bounded rationality is helpful to improve the efficiency in terms of capacity, oscillation and the gap from the system equilibrium.
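
    A toy simulation of the boundedly rational route-choice rule described above; the linear congestion model, the feedback delay, the threshold value and all other parameters are illustrative assumptions, not the paper's setup.

      import numpy as np

      rng = np.random.default_rng(0)
      N, BR, delay, steps = 1000, 2.0, 3, 200
      history = []                                   # reported (possibly delayed) travel times

      flows = np.array([N // 2, N - N // 2], dtype=float)
      for t in range(steps):
          times = 10.0 + 0.05 * flows                # simple linear congestion model per route
          history.append(times)
          reported = history[max(0, t - delay)]      # travelers see delayed feedback
          if abs(reported[0] - reported[1]) < BR:    # bounded rationality: indifferent below BR
              choices = rng.integers(0, 2, size=N)
          else:
              choices = np.full(N, int(np.argmin(reported)))
          flows = np.array([(choices == 0).sum(), (choices == 1).sum()], dtype=float)

      print("final flows:", flows, "final travel times:", 10.0 + 0.05 * flows)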

  19. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position sensitive long counter, which simultaneously counts thermalized neutrons with high thermal efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy: a dose error less than 15% for bare 252Cf, bare AmBe, and epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy dependent dose equivalent response and resultant energy dependent dose equivalent error of the present dosimeter to commercially-available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparison of the simulated output resulting from applications of several known neutron sources and dose rates.
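
    The key design step of forming a linear combination of detector responses that approximates a dose-equivalent response can be sketched with a non-negative least-squares fit; the energy grid, the per-detector response functions and the target conversion curve below are synthetic placeholders, not the instrument's data.

      import numpy as np
      from scipy.optimize import nnls

      # Synthetic energy grid (thermal to 15 MeV) and placeholder curves
      energy = np.logspace(-8, np.log10(15.0), 60)                    # MeV
      depths = np.array([0.5, 2.5, 4.5, 6.5, 8.5])                    # detector positions (cm)
      responses = np.exp(-((np.log10(energy)[:, None] + 7.0 - depths) ** 2) / 4.5)  # per-detector response
      target = 1.0 + 0.4 * (np.log10(energy) + 8.0)                   # stand-in dose-equivalent curve

      weights, residual = nnls(responses, target)      # electronic weights applied to detector counts
      fitted = responses @ weights
      print("weights:", weights.round(3))
      print("max relative deviation from target:",
            float(np.max(np.abs(fitted - target) / target)))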

  20. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are e.g. the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields and parameterizations of sub-grid scale phenomena (as e.g. parameterizations of the 2nd and higher order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)

  1. GHM method for obtaining rational solutions of nonlinear differential equations.

    Science.gov (United States)

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic methods or numerical methods. The obtained results show that GHM is a powerful tool, capable to generate highly accurate rational solutions. AMS subject classification 34L30.

  2. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.

  3. A stiffly accurate integrator for elastodynamic problems

    KAUST Repository

    Michels, Dominik L.

    2017-07-21

    We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.

  4. The determination of the pressure-viscosity coefficient of a lubricant through an accurate film thickness formula and accurate film thickness measurements : part 2 : high L values

    NARCIS (Netherlands)

    Leeuwen, van H.J.

    2011-01-01

    The pressure-viscosity coefficient of a traction fluid is determined by fitting calculation results on accurate film thickness measurements, obtained at different speeds, loads, and temperatures. Through experiments, covering a range of 5.6

  5. Extended radio sources in the cluster environment

    International Nuclear Information System (INIS)

    Burns, J.O. Jr.

    1979-01-01

    Extended radio galaxies that lie in rich and poor clusters were studied. A sample of 3CR and 4C radio sources that spatially coincide with poor Zwicky clusters of galaxies was observed to obtain accurate positions and flux densities. Then interferometer observations at a resolution of approximately 10 arcsec were performed on the sample. The resulting maps were used to determine the nature of the extended source structure, to make secure optical identifications, and to eliminate possible background sources. The results suggest that the environments around both classical double and head-tail radio sources are similar in rich and poor clusters. The majority of the poor cluster sources exhibit some signs of morphological distortion (i.e., head-tails) indicative of dynamic interaction with a relatively dense intracluster medium. A large fraction (60 to 100%) of all radio sources appear to be members of clusters of galaxies if one includes both poor and rich cluster sources. Detailed total intensity and polarization observations for a more restricted sample of two classical double sources and nine head-tail galaxies were also performed. The purpose was to examine the spatial distributions of spectral index and polarization. Thin streams of radio emission appear to connect the nuclear radio-point components to the more extended structures in the head-tail galaxies. It is suggested that a non-relativistic plasma beam can explain both the appearance of the thin streams and larger-scale structure as well as the energy needed to generate the observed radio emission. The rich and poor radio cluster samples are combined to investigate the relationship between source morphology and the scale sizes of clustering. There is some indication that a large fraction of radio sources, including those in these samples, are in superclusters of galaxies

  6. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    Kearny, C.H.; Barnes, P.R.; Chester, C.V.; Cortner, M.W.

    1978-01-01

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM

  7. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict traffic state trends over a certain period in the future. From the perspective of dynamic decision making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors are compared to verify their prediction performance against each other. Finally, the model is verified with actual data in the empirical analysis phase, and the test results show that the proposed SVM model has a good ability for traffic flow prediction and that the SVM-HPT model outperforms the other three models.
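
    A rough sketch of direct multi-step prediction with support vector regression (one model per prediction horizon); the lag structure, kernel settings and the synthetic traffic series are illustrative assumptions, not the paper's SVM-HPT configuration.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(0)
      t = np.arange(2000)
      volume = 300 + 120 * np.sin(2 * np.pi * t / 96) + 15 * rng.standard_normal(t.size)  # 15-min counts

      lags, horizons = 8, 3
      X = np.column_stack([volume[i:i - lags] for i in range(lags)])   # past volumes as features
      X, future = X[:-horizons], volume[lags:]

      models = []
      for h in range(1, horizons + 1):                                 # one SVR per step ahead
          y = future[h - 1:h - 1 + X.shape[0]]
          models.append(SVR(kernel="rbf", C=100.0, epsilon=1.0).fit(X, y))

      last_window = volume[-lags:].reshape(1, -1)
      print([float(m.predict(last_window)[0]) for m in models])        # 1-, 2-, 3-step-ahead forecasts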

  8. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
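
    The core of composition-based classification is turning each fragment into an oligonucleotide (k-mer) frequency vector and training a classifier on vectors from reference genomes. The sketch below is a hedged, generic illustration using tetramer frequencies and a linear SVM on toy GC-biased fragments; it is not the PhyloPythia implementation or training set.

      # Minimal sketch of composition-based classification: k-mer frequency
      # vectors fed to a linear SVM. The "genomes" are toy GC-biased fragments
      # standing in for two clades; this is not PhyloPythia itself.
      from itertools import product
      import numpy as np
      from sklearn.svm import LinearSVC

      K = 4
      KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

      def kmer_profile(seq):
          v = np.zeros(len(KMERS))
          for i in range(len(seq) - K + 1):
              idx = KMERS.get(seq[i:i + K])
              if idx is not None:
                  v[idx] += 1
          return v / max(v.sum(), 1.0)       # normalized k-mer frequencies

      rng = np.random.default_rng(1)
      def fragment(gc, n=1000):
          p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]  # A, C, G, T
          return "".join(rng.choice(list("ACGT"), size=n, p=p))

      X = [kmer_profile(fragment(0.65)) for _ in range(50)] + \
          [kmer_profile(fragment(0.35)) for _ in range(50)]
      y = [0] * 50 + [1] * 50
      clf = LinearSVC().fit(X, y)
      print(clf.predict([kmer_profile(fragment(0.6))]))   # expect clade 0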

  9. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the overlay (OVL) budget gets tighter at these advanced nodes, accuracy in each nanometer of OVL error becomes critical. When process owners select OVL targets and methods for their process, they must do so wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  10. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
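
    As a hedged illustration of the probabilistic (Bayesian) treatment sketched above, the toy example below rates candidate source strengths against simulated detector readings with a Gaussian likelihood. The "forward model" is a trivial inverse-square stand-in, and all numbers are placeholders; the project itself uses state-of-the-art radiation-transport codes, not this simplification.

      # Hedged sketch of Bayesian rating of candidate holdup configurations.
      # The forward model is an inverse-square placeholder, not a real
      # radiation-transport calculation.
      import numpy as np

      detectors = np.array([0.5, 1.0, 2.0])          # detector distances (m)
      measured  = np.array([8.2, 2.1, 0.55])         # measured rates (a.u.)
      sigma     = 0.3                                 # measurement noise (a.u.)

      def forward(source_strength):
          return source_strength / detectors**2       # placeholder transport model

      candidates = np.linspace(0.5, 5.0, 200)         # candidate source strengths
      prior = np.ones_like(candidates) / candidates.size

      # Gaussian likelihood of the measurements for each candidate
      log_like = np.array([-0.5 * np.sum((measured - forward(s))**2) / sigma**2
                           for s in candidates])
      posterior = prior * np.exp(log_like - log_like.max())
      posterior /= posterior.sum()                    # Bayes' theorem, normalized

      best = candidates[np.argmax(posterior)]
      print(f"most plausible source strength: {best:.2f} (a.u.)")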

  11. Vessel calibration for accurate material accountancy at RRP

    International Nuclear Information System (INIS)

    Yanagisawa, Yuu; Ono, Sawako; Iwamoto, Tomonori

    2004-01-01

    RRP has a capacity to reprocess 800 t·Upr a year and handles large amounts of nuclear material in solution. A large-scale plant like RRP requires an accurate materials accountancy system, so high-precision initial vessel calibration before operation is very important. In order to obtain the calibration curve, each incremental volume must be accurately known as a function of liquid height. We therefore performed at least two or three calibration runs with water per vessel, and the calibration data were carefully evaluated. We calibrated 210 vessels overall, and the calibration of 81 vessels, including the IAT and OAT, was carried out in the presence of JSGO and IAEA inspectors, given their importance to materials accountancy. This paper describes the outline of the initial vessel calibration and the calibration results, which are based on back-pressure measurements with dip tubes. (author)

  12. Fast and accurate automated cell boundary determination for fluorescence microscopy

    Science.gov (United States)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  13. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

    The multi-objective optimization of inverse planning based on the Pareto solution set, reflecting the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed and that the dose distribution corresponding to each solution not only approached the expected dose distribution but also met the dose-volume constraints. This indicates that the clinical requirements are better satisfied using this method and that the planner can select the optimal treatment plan from the non-dominated solution set. (authors)
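
    The building block of NSGA-II is non-dominated (Pareto) sorting of candidate solutions. The sketch below shows that step alone for two minimization objectives; the plan scores are made-up numbers, not a real treatment plan, and the full NSGA-II machinery (crowding distance, crossover, mutation) is not reproduced here.

      # Minimal sketch of Pareto (non-dominated) filtering for two
      # minimization objectives, e.g. target-dose deviation vs. organ-at-risk
      # dose. Plan scores are illustrative.
      import numpy as np

      def non_dominated(scores):
          """Return indices of plans not dominated by any other plan."""
          keep = []
          for i, s in enumerate(scores):
              dominated = any(np.all(t <= s) and np.any(t < s)
                              for j, t in enumerate(scores) if j != i)
              if not dominated:
                  keep.append(i)
          return keep

      plans = np.array([[0.10, 0.80],   # [target deviation, OAR dose]
                        [0.15, 0.40],
                        [0.30, 0.35],
                        [0.12, 0.90],
                        [0.25, 0.30]])
      front = non_dominated(plans)
      print("Pareto-optimal plans:", front, plans[front].tolist())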

  14. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D.; Äystö, J.; Beck, D.; Blaum, K.; Bollen, G.; Herfurth, F.; Jokinen, A.; Kellerbauer, A.; Kluge, H.-J.; Kolhinen, V.S.; Oinonen, M.; Sauvan, E.; Schwarz, S.

    2006-01-01

    The masses of $^{72–78,80,82,86}$Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for $^{72–75}$Kr are more precise than the previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input for improving mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  15. A Modified Proportional Navigation Guidance for Accurate Target Hitting

    Directory of Open Access Journals (Sweden)

    A. Moharampour

    2010-03-01

    First, pure proportional navigation guidance (PPNG) in the 3-dimensional case is explained from a new point of view. The main idea is based on the distinction between the angular-rate-vector and rotation-vector conceptions. The current innovation is based on the selection of line-of-sight (LOS) coordinates, and a comparison between the two available choices of LOS coordinate system is proposed. An improvement is then made by adding two additional terms. The first term is a cross-range compensator which is used to provide and enhance path observability and to obtain convergent estimates of the state variables. The second term is a new lead-bias term, calculated by assuming an equivalent acceleration along the target longitudinal axis. Simulation results indicate that the lead-bias term properly provides the terminal conditions for accurate target interception.
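
    For reference, the classical PPN command applies a lateral acceleration a = N·Vc·λ̇ perpendicular to the line of sight. The 2-D sketch below computes that command for an illustrative engagement geometry; the gain, speeds and positions are assumptions, and the paper's cross-range compensator and lead-bias terms are not modeled.

      # 2-D sketch of a pure proportional navigation command,
      # a_cmd = N * Vc * LOS_rate, applied perpendicular to the line of sight.
      # Gain and geometry are illustrative only.
      import numpy as np

      N = 4.0                                     # navigation gain
      r_m, v_m = np.array([0., 0.]), np.array([250., 50.])        # missile
      r_t, v_t = np.array([5000., 1000.]), np.array([-150., 0.])  # target

      r_rel = r_t - r_m
      v_rel = v_t - v_m
      los = np.arctan2(r_rel[1], r_rel[0])        # line-of-sight angle
      los_rate = (r_rel[0] * v_rel[1] - r_rel[1] * v_rel[0]) / np.dot(r_rel, r_rel)
      v_closing = -np.dot(r_rel, v_rel) / np.linalg.norm(r_rel)

      a_mag = N * v_closing * los_rate            # commanded lateral acceleration
      a_cmd = a_mag * np.array([-np.sin(los), np.cos(los)])   # perpendicular to LOS
      print("LOS rate %.4f rad/s, lateral accel %.1f m/s^2" % (los_rate, a_mag))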

  16. Simultaneous head tissue conductivity and EEG source location estimation.

    Science.gov (United States)

    Akalin Acar, Zeynep; Acar, Can E; Makeig, Scott

    2016-01-01

    Accurate electroencephalographic (EEG) source localization requires an electrical head model incorporating accurate geometries and conductivity values for the major head tissues. While consistent conductivity values have been reported for scalp, brain, and cerebrospinal fluid, measured brain-to-skull conductivity ratio (BSCR) estimates have varied between 8 and 80, likely reflecting both inter-subject and measurement method differences. In simulations, mis-estimation of skull conductivity can produce source localization errors as large as 3 cm. Here, we describe an iterative gradient-based approach to Simultaneous tissue Conductivity And source Location Estimation (SCALE). The scalp projection maps used by SCALE are obtained from near-dipolar effective EEG sources found by adequate independent component analysis (ICA) decomposition of sufficient high-density EEG data. We applied SCALE to simulated scalp projections of 15 cm²-scale cortical patch sources in an MR image-based electrical head model with simulated BSCR of 30. Initialized either with a BSCR of 80 or 20, SCALE estimated BSCR as 32.6. In Adaptive Mixture ICA (AMICA) decompositions of (45-min, 128-channel) EEG data from two young adults we identified sets of 13 independent components having near-dipolar scalp maps compatible with a single cortical source patch. Again initialized with either BSCR 80 or 25, SCALE gave BSCR estimates of 34 and 54 for the two subjects respectively. The ability to accurately estimate skull conductivity non-invasively from any well-recorded EEG data in combination with a stable and non-invasively acquired MR imaging-derived electrical head model could remove a critical barrier to using EEG as a sub-cm²-scale accurate 3-D functional cortical imaging modality. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Accurate deuterium spectroscopy for fundamental studies

    Science.gov (United States)

    Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.

    2018-07-01

    We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α⁵. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.

  18. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  19. How flatbed scanners upset accurate film dosimetry

    Science.gov (United States)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and of the dose delivered to the film.

  20. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
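
    One standard route from downhole resistivity to water saturation is Archie's equation; the hedged sketch below shows that step with generic textbook constants and log readings (not data from this article). It also shows why the connate-water resistivity Rw has to be known accurately, which is exactly what the tritium-labelled mud filtrate helps to establish.

      # Sketch of the resistivity-to-water-saturation step (Archie's equation),
      #   Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
      # Constants and log values are generic illustrative numbers.
      def archie_sw(rw, rt, phi, a=1.0, m=2.0, n=2.0):
          return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

      rw  = 0.05    # connate-water resistivity (ohm*m), must be known accurately
      rt  = 20.0    # true formation resistivity from the deep log (ohm*m)
      phi = 0.22    # porosity (fraction)

      sw = archie_sw(rw, rt, phi)
      print(f"water saturation: {sw:.2f}  ->  hydrocarbon saturation: {1 - sw:.2f}")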

  1. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE of two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focusing on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate the effect of light polarization on the CCD signal in the absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels at the extreme lateral position. Light polarization due to the film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channels. We conclude that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE and therefore determination of the LSE per color channel and of the dose delivered to the film. (paper)

  2. Critical assessment of pediatric neurosurgery patient/parent educational information obtained via the Internet.

    Science.gov (United States)

    Garcia, Michael; Daugherty, Christopher; Ben Khallouq, Bertha; Maugans, Todd

    2018-05-01

    OBJECTIVE The Internet is used frequently by patients and family members to acquire information about pediatric neurosurgical conditions. The sources, nature, accuracy, and usefulness of this information have not been examined recently. The authors analyzed the results from searches of 10 common pediatric neurosurgical terms using a novel scoring test to assess the value of the educational information obtained. METHODS Google and Bing searches were performed for 10 common pediatric neurosurgical topics (concussion, craniosynostosis, hydrocephalus, pediatric brain tumor, pediatric Chiari malformation, pediatric epilepsy surgery, pediatric neurosurgery, plagiocephaly, spina bifida, and tethered spinal cord). The first 10 "hits" obtained with each search engine were analyzed using the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) test, which assigns a numerical score in each of 5 domains. Agreement between results was assessed for 1) concurrent searches with Google and Bing; 2) Google searches over time (6 months apart); 3) Google searches using mobile and PC platforms concurrently; and 4) searches using privacy settings. Readability was assessed with an online analytical tool. RESULTS Google and Bing searches yielded information with similar CRAAP scores (mean 72% and 75%, respectively), but with frequently differing results (58% concordance/matching results). There was a high level of agreement (72% concordance) over time for Google searches and also between searches using general and privacy settings (92% concordance). Government sources scored the best in both CRAAP score and readability. Hospitals and universities were the most prevalent sources, but these sources had the lowest CRAAP scores, due in part to an abundance of self-marketing. The CRAAP scores for mobile and desktop platforms did not differ significantly (p = 0.49). CONCLUSIONS Google and Bing searches yielded useful educational information, using either mobile or PC platforms. Most

  3. Modelling of classical ghost images obtained using scattered light

    International Nuclear Information System (INIS)

    Crosby, S; Castelletto, S; Aruldoss, C; Scholten, R E; Roberts, A

    2007-01-01

    The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and the spatial resolution obtainable depends on the size of the scattering spheres

  4. Modelling of classical ghost images obtained using scattered light

    Energy Technology Data Exchange (ETDEWEB)

    Crosby, S; Castelletto, S; Aruldoss, C; Scholten, R E; Roberts, A [School of Physics, University of Melbourne, Victoria, 3010 (Australia)

    2007-08-15

    The images obtained in ghost imaging with pseudo-thermal light sources are highly dependent on the spatial coherence properties of the incident light. Pseudo-thermal light is often created by reducing the coherence length of a coherent source by passing it through a turbid mixture of scattering spheres. We describe a model for simulating ghost images obtained with such partially coherent light, using a wave-transport model to calculate the influence of the scattering on initially coherent light. The model is able to predict important properties of the pseudo-thermal source, such as the coherence length and the amplitude of the residual unscattered component of the light which influence the resolution and visibility of the final ghost image. We show that the residual ballistic component introduces an additional background in the reconstructed image, and the spatial resolution obtainable depends on the size of the scattering spheres.

  5. Multipurpose discriminator with accurate time coupling

    International Nuclear Information System (INIS)

    Baldin, B.Yu.; Krumshtejn, Z.V.; Ronzhin, A.I.

    1977-01-01

    The schematic diagram of a multipurpose discriminator, designed on the basis of a wide-band differential amplifier, is described. The discriminator has three independent channels: the timing channel, the lower-level discriminator and the control channel. The timing channel and the lower-level discriminator are connected to a coincidence circuit. Three methods of timing are used: a single threshold, a double threshold with timing on the pulse front, and constant-fraction timing. The lower-level discriminator is a wide-band amplifier with an adjustable threshold. The investigation of the compensation characteristics of the discriminator has shown that the time shift of the discriminator output in the constant-fraction timing regime does not exceed ±75 ns for an input signal range of 1:85. The time resolution was found to be 20 ns in the 20% energy range near the photopeak maximum of a ⁶⁰Co γ source.

  6. Accurate computer simulation of a drift chamber

    International Nuclear Information System (INIS)

    Killian, T.J.

    1980-01-01

    A general purpose program for drift chamber studies is described. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. Results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR

  7. Accurate computer simulation of a drift chamber

    CERN Document Server

    Killian, T J

    1980-01-01

    The author describes a general purpose program for drift chamber studies. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. The results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR. (1 refs).

  8. The use of nuclear energy for obtaining petroleum

    International Nuclear Information System (INIS)

    Waldmann, H.; Koch, C.; Thelen, H.J.; Kappe, P.

    1982-01-01

    After some basic considerations of petroleum demand, supply and reserves, the article gives a survey of the various methods of obtaining petroleum. The use of energy in the form of steam and electricity in the processes used to date and in conventional deposits requires up to 50% of the energy contained in the oil obtained. Unconventional sources of petroleum (tertiary petroleum, heavy fractions and shale oil) could become of interest to West Germany in the near future. The economics of production are determined to a large extent by the energy source used. A series of possibilities for using nuclear steam-raising systems for this purpose is discussed. (UA) [de

  9. Accurate color measurement methods for medical displays.

    Science.gov (United States)

    Saha, Anindita; Kelley, Edward F; Badano, Aldo

    2010-01-01

    The necessity for standard instrumentation and measurements of color that are repeatable and reproducible is the major motivation behind this work. Currently, different instrumentation and methods can yield very different results when measuring the same feature, such as color uniformity or color difference. As color increasingly comes into play in medical imaging diagnostics, display color will have to be quantified in order to assess whether the display should be used for imaging purposes. The authors report on the characterization of three novel probes for measuring display color with minimal contamination from screen areas outside the measurement spot or from off-normal emissions. They compare three probe designs: a modified small-spot luminance probe and two conic probe designs based on black frusta. To compare the three color probe designs, spectral and luminance measurements were taken with specialized instrumentation to determine the luminance changes and color separation abilities of the probes. The probes were characterized with a scanning slit method, a veiling glare test, and a moving laser and LED arrangement. The scanning slit measurement was done using a black slit plate over a white line on an LCD monitor. The luminance was measured in 1 mm increments from the center of the slit to +/- 15 mm above and below the slit at different distances between the probe and the slit. The veiling glare setup consisted of measurements of the luminance of a black spot pattern with a white disk of 100 mm radius, as the black spot increased in 1 mm radius increments. The moving LED and laser method consisted of red and green lights positioned orthogonally to the probe tip so that the light shone directly into the probe. The green light source was moved away from the red source in 1 cm increments to measure color stray-light contamination at different probe distances. The results of the color testing using the LED and laser methods suggest a better performance of one of the frusta probes

  10. Rapid, accurate, and direct determination of total lycopene content in tomato paste

    Science.gov (United States)

    Bicanic, D.; Anese, M.; Luterotti, S.; Dadarlat, D.; Gibkes, J.; Lubbers, M.

    2003-01-01

    Lycopene, which imparts the red color to the tomato fruit, is the most potent antioxidant among the carotenes, an important nutrient, and is also used as a color ingredient in many food formulations. Since cooked and processed foods derived from tomatoes have been shown to provide an optimal lycopene boost, products such as paste, puree and juice are nowadays gaining popularity as dietary sources. The analysis of lycopene in tomato paste (a partially dehydrated product prepared by vacuum-concentrating tomato juice) is carried out using either high-pressure liquid chromatography (HPLC), spectrophotometry, or color evaluation. The instability of lycopene during extraction, together with the handling and disposal of organic solvents, makes the preparation of a sample for analysis a delicate task. Despite a recognized need for accurate and rapid assessment of lycopene in tomato products, no such method is available at present. The study described here focuses on the direct determination of total lycopene content in different tomato pastes by means of the laser optothermal window (LOW) method at 502 nm. The concentration of lycopene in tomato paste ranged between 25 and 150 mg per 100 g of product; the results are in excellent agreement with those obtained by spectrophotometry. The time needed to complete a LOW analysis is very short, so that decomposition of the pigment and the formation of artifacts are minimized. Preliminary results indicate a good degree of reproducibility, making the LOW method suitable for routine assays of lycopene content in tomato paste.

  11. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    Science.gov (United States)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of these digital devices in comparison with that of conventional cameras (with photographic film). For example, the diffraction arising from the miniaturization of the optical modules tends to decrease the image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is provided, that of the optical module can then be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of the light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors; the corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy against various methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
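
    As a hedged illustration of the underlying quantity, the sketch below estimates modulation transfer at one spatial frequency as the modulation depth M = (Imax - Imin)/(Imax + Imin) of a recorded sinusoidal profile, normalized to the modulation at a near-zero frequency. The profiles are synthetic; the actual system of the paper derives them from camera images of SLM-generated patterns.

      # Sketch: modulation transfer at one spatial frequency from a recorded
      # sinusoidal intensity profile, normalized to a low-frequency reference.
      import numpy as np

      def modulation(profile):
          i_max, i_min = profile.max(), profile.min()
          return (i_max - i_min) / (i_max + i_min)

      x = np.linspace(0, 1, 512)
      reference = 0.5 + 0.45 * np.cos(2 * np.pi * 2 * x)    # low-frequency pattern
      blurred   = 0.5 + 0.18 * np.cos(2 * np.pi * 40 * x)   # attenuated by the lens

      mtf_at_f = modulation(blurred) / modulation(reference)
      print(f"MTF at the test frequency: {mtf_at_f:.2f}")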

  12. OpenMC In Situ Source Convergence Detection

    Energy Technology Data Exchange (ETDEWEB)

    Aldrich, Garrett Allen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Univ. of California, Davis, CA (United States); Dutta, Soumya [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); The Ohio State Univ., Columbus, OH (United States); Woodring, Jonathan Lee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-05-07

    We designed and implemented an in situ version of particle source convergence detection for the OpenMC particle transport simulator. OpenMC is a Monte Carlo-based particle simulator for neutron criticality calculations. For the transport simulation to be accurate, source particles must converge on a spatial distribution. Typically, convergence is obtained by iterating the simulation for a user-settable, fixed number of steps, and it is assumed that convergence is achieved. We instead implement a method to detect convergence, using a stochastic oscillator to identify convergence of the source particles based on their accumulated Shannon entropy. Using our in situ convergence detection, we are able to detect the point of convergence and begin tallying results for the full simulation once the proper source distribution has been confirmed. Our method ensures that the simulation is not started too early, through a user setting overly optimistic parameters, or too late, through setting an overly conservative parameter.
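
    The Shannon-entropy diagnostic behind this approach bins the fission-source sites on a spatial mesh each batch and watches the entropy settle. The sketch below illustrates the idea on synthetic source sites with a crude plateau check; the site generator, mesh and threshold are illustrative assumptions, and it is not the stochastic-oscillator detector or OpenMC's internal implementation.

      # Sketch of the Shannon-entropy source-convergence diagnostic on
      # synthetic source sites (not OpenMC code).
      import numpy as np

      def shannon_entropy(sites, edges):
          counts, _ = np.histogramdd(sites, bins=edges)
          p = counts.ravel() / counts.sum()
          p = p[p > 0]
          return -np.sum(p * np.log2(p))

      rng = np.random.default_rng(2)
      edges = [np.linspace(0, 1, 9)] * 3                 # 8x8x8 mesh on unit cube
      entropies = []
      for batch in range(50):
          spread = 0.05 + 0.30 * min(batch / 20, 1.0)    # source slowly spreads out
          sites = np.clip(rng.normal(0.5, spread, size=(5000, 3)), 0, 1)
          entropies.append(shannon_entropy(sites, edges))

      # crude convergence check: entropy change over the last 10 batches is small
      converged = abs(entropies[-1] - entropies[-10]) < 0.05
      print(f"final entropy {entropies[-1]:.2f} bits, converged: {converged}")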

  13. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.

  14. Accurate mass and velocity functions of dark matter haloes

    Science.gov (United States)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ~10¹¹ M⊙, with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ~515 Gpc³, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias and the halo mass function. We obtain a very accurate model of the halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z, publicly available in the Skies and Universes database.

  15. Overview of galactic results obtained by MAGIC

    Energy Technology Data Exchange (ETDEWEB)

    Zanin, Roberta

    2013-06-15

    MAGIC is a system of two atmospheric Cherenkov telescopes which explores the very-high-energy sky, from some tens of GeV up to tens of TeV. Located on the Canary island of La Palma, MAGIC has the lowest energy threshold among the instruments of its kind, well suited to study the still poorly explored energy band below 100 GeV. Although the space-borne gamma-ray telescope Fermi/LAT is sensitive up to 300 GeV, gamma-ray rates drop rapidly with increasing energy, so γ-ray collection areas larger than 10⁴ m², such as those provided by ground-based instruments, are crucial above a few GeV. The combination of MAGIC and Fermi/LAT observations has provided the first astrophysical spectra sampled in the inverse-Compton peak region, resulting in a complete coverage from MeV up to TeV energies, as well as the discovery of pulsed emission in the very-high-energy band. This paper focuses on the latest results on Galactic sources obtained by MAGIC, highlighted by the detection of pulsed gamma-ray emission from the Crab pulsar up to 400 GeV. In addition, we present a morphological study of the W51 complex which allowed us to pinpoint the location of the majority of the emission around the interaction point between the supernova remnant W51C and the star-forming region W51B, and also to find a possible contribution from the associated pulsar wind nebula. Other important scientific achievements involve the Crab Nebula, with an unprecedented spectrum covering three decades in energy starting from 50 GeV, and a morphological study of the unidentified source HESS J1857+026 which supports the pulsar wind nebula scenario. Finally, we report on searches for very-high-energy signals from gamma-ray binaries, mainly LS I +61 303 and HESS J0632+057.

  16. MEG source imaging method using fast L1 minimum-norm and its applications to signals with brain noise and human resting-state source amplitude images.

    Science.gov (United States)

    Huang, Ming-Xiong; Huang, Charles W; Robb, Ashley; Angeles, AnneMarie; Nichols, Sharon L; Baker, Dewleen G; Song, Tao; Harrington, Deborah L; Theilmann, Rebecca J; Srinivasan, Ramesh; Heister, David; Diwakar, Mithun; Canive, Jose M; Edgar, J Christopher; Chen, Yu-Han; Ji, Zhengwei; Shen, Max; El-Gabalawy, Fady; Levy, Michael; McLay, Robert; Webb-Murphy, Jennifer; Liu, Thomas T; Drake, Angela; Lee, Roland R

    2014-01-01

    The present study developed a fast MEG source imaging technique based on Fast Vector-based Spatio-Temporal Analysis using an L1-minimum-norm (Fast-VESTAL) and then used the method to obtain the source amplitude images of resting-state magnetoencephalography (MEG) signals for different frequency bands. The Fast-VESTAL technique consists of two steps. First, L1-minimum-norm MEG source images were obtained for the dominant spatial modes of the sensor-waveform covariance matrix. Next, accurate source time-courses with millisecond temporal resolution were obtained using an inverse operator constructed from the spatial source images of Step 1. Using simulations, Fast-VESTAL's performance was assessed for its 1) ability to localize multiple correlated sources; 2) ability to faithfully recover source time-courses; 3) robustness to different SNR conditions including SNR with negative dB levels; 4) capability to handle correlated brain noise; and 5) statistical maps of MEG source images. An objective pre-whitening method was also developed and integrated with Fast-VESTAL to remove correlated brain noise. Fast-VESTAL's performance was then examined in the analysis of human median-nerve MEG responses. The results demonstrated that this method easily distinguished sources in the entire somatosensory network. Next, Fast-VESTAL was applied to obtain the first whole-head MEG source-amplitude images from resting-state signals in 41 healthy control subjects, for all standard frequency bands. Comparisons between resting-state MEG source images and known neurophysiology were provided. Additionally, in simulations and cases with human MEG responses, the results obtained from using the conventional beamformer technique were compared with those from Fast-VESTAL, which highlighted the beamformer's problems of signal leaking and distorted source time-courses. © 2013.
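
    To give a sense of what an L1 (sparse) minimum-norm inverse does, the sketch below solves a generic sparse source-recovery problem with iterative soft thresholding (ISTA). The lead-field matrix and sensor data are random toys, and this is explicitly not the Fast-VESTAL algorithm, only the type of L1 problem it builds on.

      # Generic sketch of an L1 (sparse) minimum-norm inverse solved with ISTA:
      # minimize 0.5*||b - L s||^2 + lam*||s||_1. Toy data, not MEG.
      import numpy as np

      rng = np.random.default_rng(3)
      n_sensors, n_sources = 64, 500
      L = rng.normal(size=(n_sensors, n_sources))
      s_true = np.zeros(n_sources)
      s_true[rng.choice(n_sources, 3, replace=False)] = [5.0, -4.0, 3.0]
      b = L @ s_true + rng.normal(scale=0.1, size=n_sensors)

      lam = 2.0
      step = 1.0 / np.linalg.norm(L, 2) ** 2      # 1 / Lipschitz constant
      s = np.zeros(n_sources)
      for _ in range(2000):
          grad = L.T @ (L @ s - b)
          z = s - step * grad
          s = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold

      print("recovered support:", np.nonzero(np.abs(s) > 0.5)[0])
      print("true support:     ", np.nonzero(s_true)[0])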

  17. How accurate is the 14C method

    International Nuclear Information System (INIS)

    Nydal, R.

    1979-01-01

    Radiocarbon daters have in recent years focussed their interest on the accuracy and reliability of ¹⁴C dates. The use of dates for resolving fine chronological structures that are not dateable otherwise has stressed this point. The total uncertainty in dating an event is composed of errors relating to the dating of the sample (i.e. uncertainty in the measured quantities and deviations from the assumed ¹⁴C content of the material when alive) and errors related to the quality of the sample material (i.e. contamination by carbon of a different age, and a diffuse context between sample and event). Statistical variability in counting the ¹⁴C activity gives the most important contribution to measurement uncertainty, increasing with age and shortage of sample material. Corrections for isotopic fractionation and reservoir effects must be performed, and, most important when dates are compared with historical ages, the dendrochronological calibration will correct for past variations in the atmospheric ¹⁴C content. Future improvement of dating precision can however only be obtained by the combined efforts of both daters and submitters of samples, thus minimizing errors related to the selection and handling of sample material as well as those related to the ¹⁴C method and measurements. (Auth.)
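
    For reference, the conventional radiocarbon age follows from the measured activity ratio via the Libby mean life (8033 yr): t = -8033 ln(A_sample/A_standard). The minimal sketch below also propagates the counting uncertainty into an age uncertainty; the input values are illustrative, and the dendrochronological calibration to calendar years discussed above is not included.

      # Minimal sketch of a conventional radiocarbon age and its uncertainty.
      import math

      def c14_age(activity_ratio, ratio_sigma):
          age = -8033.0 * math.log(activity_ratio)        # Libby mean life
          sigma = 8033.0 * ratio_sigma / activity_ratio   # error propagation
          return age, sigma

      age, sigma = c14_age(0.780, 0.004)   # sample at 78.0 +/- 0.4 % of modern
      print(f"conventional age: {age:.0f} +/- {sigma:.0f} 14C yr BP")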

  18. Accurate Ambient Noise Assessment Using Smartphones

    Science.gov (United States)

    Zamora, Willian; Calafate, Carlos T.; Cano, Juan-Carlos; Manzoni, Pietro

    2017-01-01

    Nowadays, smartphones have become ubiquitous and one of the main communication resources for human beings. Their widespread adoption was due to the huge technological progress and to the development of multiple useful applications. Their characteristics have also experienced a substantial improvement as they now integrate multiple sensors able to convert the smartphone into a flexible and multi-purpose sensing unit. The combined use of multiple smartphones endowed with several types of sensors gives the possibility to monitor a certain area with fine spatial and temporal granularity, a procedure typically known as crowdsensing. In this paper, we propose using smartphones as environmental noise-sensing units. For this purpose, we focus our study on the sound capture and processing procedure, analyzing the impact of different noise calculation algorithms, as well as in determining their accuracy when compared to a professional noise measurement unit. We analyze different candidate algorithms using different types of smartphones, and we study the most adequate time period and sampling strategy to optimize the data-gathering process. In addition, we perform an experimental study comparing our approach with the results obtained using a professional device. Experimental results show that, if the smartphone application is well tuned, it is possible to measure noise levels with a accuracy degree comparable to professional devices for the entire dynamic range typically supported by microphones embedded in smartphones, i.e., 35–95 dB. PMID:28430126
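
    The core computation in such an application is turning a frame of microphone samples into an equivalent sound level, L = 20·log10(rms/ref) plus a device calibration offset. The sketch below is a hedged illustration; the calibration offset is a placeholder that in practice must be determined against a reference sound level meter, and no frequency weighting is applied.

      # Sketch: equivalent sound level from audio samples.
      # The calibration offset is device-specific and assumed here.
      import numpy as np

      def leq_db(samples, calibration_offset_db=90.0):
          rms = np.sqrt(np.mean(np.square(samples.astype(float))))
          return 20.0 * np.log10(max(rms, 1e-12)) + calibration_offset_db

      rng = np.random.default_rng(4)
      frame = 0.05 * rng.normal(size=44100)      # one second of normalized audio
      print(f"estimated level: {leq_db(frame):.1f} dB")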

  19. Can Selforganizing Maps Accurately Predict Photometric Redshifts?

    Science.gov (United States)

    Way, Michael J.; Klose, Christian

    2012-01-01

    We present an unsupervised machine-learning approach that can be employed for estimating photometric redshifts. The proposed method is based on a vector quantization called the self-organizing-map (SOM) approach. A variety of photometrically derived input values were utilized from the Sloan Digital Sky Survey's main galaxy sample, luminous red galaxy, and quasar samples, along with the PHAT0 data set from the Photo-z Accuracy Testing project. Regression results obtained with this new approach were evaluated in terms of root-mean-square error (RMSE) to estimate the accuracy of the photometric redshift estimates. The results demonstrate competitive RMSE and outlier percentages when compared with several other popular approaches, such as artificial neural networks and Gaussian process regression. SOM RMSE results (using Δz = z_phot − z_spec) are 0.023 for the main galaxy sample, 0.027 for the luminous red galaxy sample, 0.418 for quasars, and 0.022 for PHAT0 synthetic data. The results demonstrate that there are nonunique solutions for estimating SOM RMSEs. Further research is needed in order to find more robust estimation techniques using SOMs, but the results herein are a positive indication of their capabilities when compared with other well-known methods.
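
    A toy version of the SOM regression idea is sketched below: train a small map on photometric colors, attach the mean spectroscopic redshift of the training objects to each cell, and predict by best-matching cell. The data are synthetic and the map size, learning schedule and RMSE definition are assumptions for illustration; this is not the paper's pipeline or the SDSS/PHAT0 data.

      # Toy SOM-based photometric-redshift sketch on synthetic data.
      import numpy as np

      rng = np.random.default_rng(5)
      n, grid, dim = 2000, 10, 4
      z_spec = rng.uniform(0.0, 0.6, n)
      colors = np.column_stack([z_spec * c + rng.normal(0, 0.02, n)
                                for c in (1.0, 0.7, 0.4, 0.2)])   # fake colors

      # train the SOM
      weights = rng.uniform(0, 0.6, (grid, grid, dim))
      gy, gx = np.mgrid[0:grid, 0:grid]
      for it in range(5000):
          x = colors[rng.integers(n)]
          d = np.linalg.norm(weights - x, axis=2)
          by, bx = np.unravel_index(np.argmin(d), d.shape)   # best-matching unit
          lr = 0.5 * (1 - it / 5000)
          radius = 3.0 * (1 - it / 5000) + 0.5
          h = np.exp(-((gy - by) ** 2 + (gx - bx) ** 2) / (2 * radius ** 2))
          weights += lr * h[..., None] * (x - weights)

      # attach mean z_spec to each cell, then predict by best-matching cell
      cell_sum = np.zeros((grid, grid))
      cell_cnt = np.zeros((grid, grid))
      for x, z in zip(colors, z_spec):
          idx = np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=2)),
                                 (grid, grid))
          cell_sum[idx] += z
          cell_cnt[idx] += 1
      cell_z = cell_sum / np.maximum(cell_cnt, 1)

      z_phot = np.array([cell_z[np.unravel_index(
          np.argmin(np.linalg.norm(weights - x, axis=2)), (grid, grid))]
          for x in colors])
      print("RMSE(z_phot - z_spec): %.3f" % np.sqrt(np.mean((z_phot - z_spec) ** 2)))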

  20. Establishing Accurate and Sustainable Geospatial Reference Layers in Developing Countries

    Science.gov (United States)

    Seaman, V. Y.

    2017-12-01

    Accurate geospatial reference layers (settlement names & locations, administrative boundaries, and population) are not readily available for most developing countries. This critical information gap makes it challenging for governments to efficiently plan, allocate resources, and provide basic services. It also hampers international agencies' response to natural disasters, humanitarian crises, and other emergencies. The current work involves a recent successful effort, led by the Bill & Melinda Gates Foundation and the Government of Nigeria, to obtain such data. The data collection began in 2013, with local teams collecting names, coordinates, and administrative attributes for over 100,000 settlements using ODK-enabled smartphones. A settlement feature layer extracted from satellite imagery was used to ensure all settlements were included. Administrative boundaries (Ward, LGA) were created using the settlement attributes. These "new" boundary layers were much more accurate than existing shapefiles used by the government and international organizations. The resulting data sets helped Nigeria eradicate polio from all areas except in the extreme northeast, where security issues limited access and vaccination activities. In addition to the settlement and boundary layers, a GIS-based population model was developed, in partnership with Oak Ridge National Laboratory and Flowminder, that used the extracted settlement areas and characteristics, along with targeted microcensus data. This model provides population and demographics estimates independent of census or other administrative data, at a resolution of 90 meters. These robust geospatial data layers found many other uses, including establishing catchment area settlements and populations for health facilities, validating denominators for population-based surveys, and applications across a variety of government sectors. Based on the success of the Nigeria effort, a partnership between DfID and the Bill & Melinda Gates

  1. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry bucket" in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  2. Obtaining cementitious material from municipal solid waste

    Directory of Open Access Journals (Sweden)

    Macías, A.

    2007-06-01

    Full Text Available The primary purpose of the present study was to determine the viability of using incinerator ash and slag from municipal solid waste (MSW) as a secondary source of cementitious materials. The combustion products used were taken from two types of Spanish MSW incinerators, one located at Valdemingómez, in Madrid, and the other in Melilla, with different incineration systems: one with fluidised bed combustion and the other with a mass-burn waterwall. The effect of temperature (from 800 to 1,200 ºC) on washed and unwashed incinerator residue, used without prior treatment, was studied, in particular with regard to phase formation in washed products with a high NaCl and KCl content. The solid phases obtained were characterized by X-ray diffraction and BET-N2 specific surface procedures.

  3. Stability of wheat germ oil obtained by supercritical carbon dioxide ...

    African Journals Online (AJOL)

    심정은

    For determination of stability, wheat germ oil obtained by ethanolysis reactants was characterized by ... extract non-polar lipids with lipid-soluble bioactive compounds from different sources (Esquivel et al., 1997; ... thin layer of cotton was placed at the bottom of the extraction vessel. Before plugging with cap another layer of ...

  4. 41 CFR 101-26.308 - Obtaining filing cabinets.

    Science.gov (United States)

    2010-07-01

    ... 41 Public Contracts and Property Management 2 2010-07-01 2010-07-01 true Obtaining filing cabinets. 101-26.308 Section 101-26.308 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 26-PROCUREMENT SOURCES AND...

  5. Weather Satellite Pictures and How to Obtain Them.

    Science.gov (United States)

    Petit, Noel J.; Johnson, Philip

    1982-01-01

    An introduction to satellite meteorology is presented to promote use of live weather satellite photographs in the classroom. Topics addressed include weather satellites, how they work, earth emissions, satellite photography, satellite image analysis, obtaining satellite pictures, and future considerations. Includes sources for materials to…

  6. Mass Media Campaign Impacts Influenza Vaccine Obtainment of University Students

    Science.gov (United States)

    Shropshire, Ali M.; Brent-Hotchkiss, Renee; Andrews, Urkovia K.

    2013-01-01

    Objective: To describe the effectiveness of a mass media campaign in increasing the rate of college student influenza vaccine obtainment. Participants/Methods: Students ("N" = 721) at a large southern university completed a survey between September 2011 and January 2012 assessing what flu clinic media sources were visualized and if they…

  7. Immediate Dose Assessment for Radiation Accident in Laboratory Containing Gamma Source and/or Neutron Source

    International Nuclear Information System (INIS)

    Ahmed, E.M.

    2012-01-01

    One of the most important safety requirements for any place containing radiation sources is an accurate and fast way to assess the dose rate in both normal and accident conditions. In the normal case, the source is completely protected inside its surrounding shield when not in use. In some cases the source may become stuck outside its shield; the walls of the room then act as a shield. Many studies have been carried out to identify the most appropriate shielding materials, based on their efficiency and also their cost. As concrete (with different densities) is the most readily available construction material, this study presents a theoretical model, using the MCNP-4B code based on the Monte Carlo method, to estimate the dose rate distribution in a laboratory with concrete walls in the case of a source-stuck accident. The study dealt with Cs-137 as a gamma source and Am-Be-241 as a neutron source. Two different concrete densities and several wall thicknesses were studied. The model was verified by comparing the results with a practical study on the effect of adding carbon powder to the concrete. The results showed good agreement.
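
    As a rough companion to such calculations, the sketch below estimates the uncollided gamma dose rate from a point Cs-137 source behind a concrete wall using the inverse-square law and exponential attenuation. The dose-rate constant, attenuation coefficient, and build-up treatment are approximate, illustrative values only; a real assessment would use a transport code such as MCNP, as in the paper.

```python
import math

# Approximate, illustrative constants (not taken from the paper):
GAMMA_CONST_CS137 = 0.09   # mSv*m^2/(h*GBq), approximate dose-rate constant for Cs-137
MU_CONCRETE = 0.18         # 1/cm, approximate linear attenuation coefficient at 662 keV
                           # for ordinary concrete (density ~2.3 g/cm^3)

def dose_rate_mSv_per_h(activity_GBq, distance_m, wall_cm, buildup=1.0):
    """Point-source dose rate behind a concrete slab.

    Inverse-square law times exponential attenuation; `buildup` crudely accounts
    for scattered photons (set > 1 for thick walls).
    """
    unshielded = GAMMA_CONST_CS137 * activity_GBq / distance_m**2
    return unshielded * math.exp(-MU_CONCRETE * wall_cm) * buildup

if __name__ == "__main__":
    # 100 GBq source stuck 3 m behind a 20 cm concrete wall
    print(f"{dose_rate_mSv_per_h(100, 3.0, 20.0, buildup=2.0):.3f} mSv/h")
```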

  8. Onboard Autonomous Corrections for Accurate IRF Pointing.

    Science.gov (United States)

    Jorgensen, J. L.; Betto, M.; Denver, T.

    2002-05-01

    Over the past decade, the Noise Equivalent Angle (NEA) of onboard attitude reference instruments has decreased from tens of arcseconds to the sub-arcsecond level. This improved performance is partly due to improved sensor technology with enhanced signal-to-noise ratios, and partly due to improved processing electronics which allow for more sophisticated and faster signal processing. However, the main reason for the increased precision is the application of onboard autonomy, which apart from simple outlier rejection also allows for removal of "false positive" answers and other "unexpected" noise sources that would otherwise degrade the quality of the measurements (e.g. discrimination between signals caused by starlight and by ionizing radiation). The utilization of autonomous signal processing has also provided the means for another onboard processing step, namely autonomous recovery from lost-in-space, where the attitude instrument, without a priori knowledge, derives the absolute attitude, i.e. in IRF coordinates, within fractions of a second. Combined with precise orbital state or position data, the absolute attitude information opens up multiple ways to improve mission performance, whether by reducing operations costs, increasing pointing accuracy, reducing mission expendables, or providing backup decision information in case of anomalies. The Advanced Stellar Compass (ASC) is a miniature, high-accuracy attitude instrument which features fully autonomous operations. The autonomy encompasses all direct steps from automatic health checkout at power-on, through fully automatic SEU and SEL handling and proton-induced sparkle removal, to recovery from lost-in-space, and optical disturbance detection and handling. But apart from these more obvious autonomy functions, the ASC also features functions to handle and remove the aforementioned residuals. These functions encompass diverse operators such as a full orbital state vector model with automatic cloud

  9. Accurate blood pressure recording: is it difficult?

    Science.gov (United States)

    Bhalla, A; Singh, R; D'cruz, S; Lehl, S S; Sachdev, A

    2005-11-01

    Blood pressure (BP) measurement is a routine procedure but errors are frequently committed during BP recording. AIMS AND SETTINGS: The aim of the study was to look at the prevalent practices in the institute regarding BP recording. The study was conducted in the Medicine Department at Government Medical College, Chandigarh, a teaching institute for MBBS students. A prospective, observational study was performed amongst 80 doctors in a tertiary care hospital. All of them were observed by a single observer during the act of BP recording. The observer was well versed with the guidelines issued by the British Hypertension Society (BHS), and deviations from the standard set of BHS guidelines were noted. Errors were defined as deviations from these guidelines. The results were recorded as the percentage of doctors committing these errors. In our study, 90% used a mercury-type sphygmomanometer. Zero error of the apparatus and hand dominance were not noted by anyone. Everyone used the standard BP cuff for recording BP. 70% did not let the patient rest before recording BP. 80% did not remove clothing from the arm. None of them recorded BP in both arms. In the outpatient setting, 80% recorded blood pressure in the sitting position and 14% in the supine position. In all patients where BP was recorded in the sitting position, the BP apparatus was below the level of the heart, and 20% did not have the arm supported. 60% did not use the palpatory method to estimate systolic BP and 70% did not raise the pressure 30-40 mm Hg above the systolic level before checking the BP by auscultation. 80% lowered the pressure at a rate of more than 2 mm/s and 60% rounded off the BP to the nearest 5-10 mm Hg. 70% recorded BP only once, and 90% of the rest reinflated the cuff without completely deflating it and allowing rest before a second reading was obtained. The practice of recording BP in our hospital varies from the standard guidelines issued by the BHS.

  10. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    Science.gov (United States)

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

    We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.
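
    For readers unfamiliar with the accuracy-critical step mentioned above, the following is a minimal, unoptimized Smith-Waterman local alignment in Python. It only returns the best local alignment score; it is not the HPG Aligner SA implementation (which combines a suffix array, seeding, and a tuned parallel kernel), and the scoring parameters are illustrative.

```python
# Minimal Smith-Waterman local alignment (score only), for illustration.
# Scoring parameters are illustrative, not those used by HPG Aligner SA.
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            up = H[i - 1][j] + gap
            left = H[i][j - 1] + gap
            H[i][j] = max(0, diag, up, left)   # local alignment: scores never go negative
            best = max(best, H[i][j])
    return best

if __name__ == "__main__":
    print(smith_waterman("ACACACTA", "AGCACACA"))  # small example
```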

  11. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
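
    The sketch below illustrates the ratio-method idea on a toy quadratic fluence-optimization problem: a cheap "PB-like" dose model is used inside the inner optimization loop, and an occasionally recomputed voxel-by-voxel ratio against an accurate "SC-like" model corrects it. The dose models here are random matrices standing in for real dose engines, so this is only a structural illustration of the algorithm, not a treatment-planning implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_beamlets = 200, 30

# Stand-ins for dose engines: D = A @ w maps beamlet weights w to voxel doses.
A_sc = rng.uniform(0.5, 1.0, (n_voxels, n_beamlets))          # "accurate" (SC-like)
A_pb = A_sc * rng.uniform(0.9, 1.1, (n_voxels, n_beamlets))   # "fast" (PB-like), slightly off
d_target = np.full(n_voxels, 60.0)                            # prescribed voxel doses

def optimize_weights(D, w0, iters=300, lr=1e-4):
    """Inner loop: projected gradient descent on || D @ w - target ||^2 with w >= 0."""
    w = w0.copy()
    for _ in range(iters):
        grad = 2 * D.T @ (D @ w - d_target)
        w = np.maximum(w - lr * grad, 0.0)
    return w

w = np.full(n_beamlets, 1.0)
R = np.ones(n_voxels)                           # voxel-by-voxel ratio D_SC / D_PB
for outer in range(5):                          # only a few expensive "SC" evaluations
    D_corrected = R[:, None] * A_pb             # cheap model corrected voxel-by-voxel
    w = optimize_weights(D_corrected, w)
    d_pb, d_sc = A_pb @ w, A_sc @ w             # one accurate computation per outer iteration
    R = d_sc / np.maximum(d_pb, 1e-9)           # update the correction ratio
    print(f"outer {outer}: rms error vs SC = {np.sqrt(np.mean((d_sc - d_target)**2)):.3f}")
```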

  12. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Science.gov (United States)

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy observed in the GS 20 plastid
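
    Since nearly all of the reported errors are tied to homopolymer runs, the sketch below shows one simple way to classify error positions by the length of the homopolymer run they fall in. It is only an illustration of that kind of error breakdown, not the authors' analysis pipeline, and the example sequence and error positions are made up.

```python
# Illustrative only: classify sequencing-error positions by the length of the
# homopolymer run they fall in. Not the authors' analysis pipeline.
def homopolymer_length_at(seq, pos):
    """Length of the homopolymer run in `seq` that contains index `pos`."""
    base = seq[pos]
    left = pos
    while left > 0 and seq[left - 1] == base:
        left -= 1
    right = pos
    while right + 1 < len(seq) and seq[right + 1] == base:
        right += 1
    return right - left + 1

def summarize_errors(reference, error_positions, long_run=5):
    runs = [homopolymer_length_at(reference, p) for p in error_positions]
    in_long_runs = sum(1 for r in runs if r >= long_run)
    return {
        "total_errors": len(runs),
        "errors_in_runs_of_%d_or_more" % long_run: in_long_runs,
        "fraction_in_long_runs": in_long_runs / len(runs) if runs else 0.0,
    }

if __name__ == "__main__":
    ref = "ACGTTTTTTGACCCAAAAAG"                # made-up reference fragment
    print(summarize_errors(ref, error_positions=[5, 12, 16]))
```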

  13. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  14. Radiation Source Mapping with Bayesian Inverse Methods

    Science.gov (United States)

    Hykes, Joshua Michael

    We present a method to map the spectral and spatial distributions of radioactive sources using a small number of detectors. Locating and identifying radioactive materials is important for border monitoring, accounting for special nuclear material in processing facilities, and in clean-up operations. Most methods to analyze these problems make restrictive assumptions about the distribution of the source. In contrast, the source-mapping method presented here allows an arbitrary three-dimensional distribution in space and a flexible group and gamma peak distribution in energy. To apply the method, the system's geometry and materials must be known. A probabilistic Bayesian approach is used to solve the resulting inverse problem (IP) since the system of equations is ill-posed. The probabilistic approach also provides estimates of the confidence in the final source map prediction. A set of adjoint flux, discrete ordinates solutions, obtained in this work by the Denovo code, are required to efficiently compute detector responses from a candidate source distribution. These adjoint fluxes are then used to form the linear model to map the state space to the response space. The test for the method is simultaneously locating a set of 137Cs and 60Co gamma sources in an empty room. This test problem is solved using synthetic measurements generated by a Monte Carlo (MCNP) model and using experimental measurements that we collected for this purpose. With the synthetic data, the predicted source distributions identified the locations of the sources to within tens of centimeters, in a room with an approximately four-by-four meter floor plan. Most of the predicted source intensities were within a factor of ten of their true value. The chi-square value of the predicted source was within a factor of five from the expected value based on the number of measurements employed. With a favorable uniform initial guess, the predicted source map was nearly identical to the true distribution
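
    The linear-model-plus-prior structure described above can be illustrated with a small Gaussian (MAP) inversion: detector responses d are modeled as d = A s, where each row of A would come from an adjoint flux calculation, and a prior on the source vector s regularizes the ill-posed inversion. The response matrix and noise level below are synthetic placeholders; the actual work uses adjoint discrete-ordinates solutions from Denovo and a more elaborate statistical model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_detectors, n_source_voxels = 12, 100

# Synthetic response matrix standing in for adjoint-flux-derived detector responses.
A = rng.exponential(scale=1.0, size=(n_detectors, n_source_voxels))

# True source: two "hot" voxels (e.g. a 137Cs-like and a 60Co-like location).
s_true = np.zeros(n_source_voxels)
s_true[[23, 71]] = [5.0, 3.0]

sigma = 0.05                                    # measurement noise standard deviation
d = A @ s_true + rng.normal(0.0, sigma, n_detectors)

# Gaussian MAP estimate: minimize ||d - A s||^2 / sigma^2 + ||s||^2 / tau^2
tau = 1.0                                       # prior std on source intensities
lhs = A.T @ A / sigma**2 + np.eye(n_source_voxels) / tau**2
rhs = A.T @ d / sigma**2
s_map = np.linalg.solve(lhs, rhs)

# Posterior covariance gives a confidence estimate for each voxel intensity.
posterior_std = np.sqrt(np.diag(np.linalg.inv(lhs)))
top = np.argsort(s_map)[-3:][::-1]
for v in top:
    print(f"voxel {v}: {s_map[v]:.2f} +/- {posterior_std[v]:.2f} (true {s_true[v]:.2f})")
```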

  15. Obtaining shale oil suitable for lighting

    Energy Technology Data Exchange (ETDEWEB)

    Giraudel, M

    1851-11-12

    Treats with sulphuric acid and then with soda, obtaining 57 per cent of products suitable for lighting in place of the usual 35 to 40 per cent as obtained by present processes. The product has a less disagreeable odor.

  16. On canonical cylinder sections for accurate determination of contact angle in microgravity

    Science.gov (United States)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  17. Fast and accurate exercise policies for Bermudan swaptions in the LIBOR market model

    NARCIS (Netherlands)

    P.K. Karlsson (Patrik); S. Jain (Shashi); C.W. Oosterlee (Kees)

    2016-01-01

    This paper describes an American Monte Carlo approach for obtaining fast and accurate exercise policies for pricing of callable LIBOR Exotics (e.g., Bermudan swaptions) in the LIBOR market model using the Stochastic Grid Bundling Method (SGBM). SGBM is a bundling and regression based
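
    As a much-simplified illustration of a regression-based exercise policy (in the spirit of, but not identical to, SGBM, which additionally bundles paths and regresses bundle-wise), the sketch below prices a Bermudan put on a single lognormal asset with the classic least-squares Monte Carlo approach. All model parameters are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (not from the paper): Bermudan put on one lognormal asset.
S0, K, r, vol, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_paths, n_exercise_dates = 50_000, 10
dt = T / n_exercise_dates
disc = np.exp(-r * dt)

# Simulate paths at the exercise dates.
z = rng.standard_normal((n_paths, n_exercise_dates))
S = S0 * np.exp(np.cumsum((r - 0.5 * vol**2) * dt + vol * np.sqrt(dt) * z, axis=1))

payoff = lambda s: np.maximum(K - s, 0.0)
cash = payoff(S[:, -1])                     # value if held to the last exercise date

# Backward induction: regress discounted continuation value on a polynomial basis.
for t in range(n_exercise_dates - 2, -1, -1):
    cash *= disc
    itm = payoff(S[:, t]) > 0               # regress on in-the-money paths only
    basis = np.vander(S[itm, t], 4)         # S^3, S^2, S, 1 basis
    coeff, *_ = np.linalg.lstsq(basis, cash[itm], rcond=None)
    continuation = basis @ coeff
    exercise_now = payoff(S[itm, t]) > continuation
    idx = np.where(itm)[0][exercise_now]
    cash[idx] = payoff(S[idx, t])           # exercise: replace with immediate payoff

price = disc * cash.mean()                  # discount from the first exercise date
print(f"Bermudan put (least-squares Monte Carlo estimate): {price:.3f}")
```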

  18. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS

    NARCIS (Netherlands)

    Warris, S.; Yalcin, F.; Jackson, K.J.; Nap, J.P.H.

    2015-01-01

    Motivation To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate

  19. Polarized source upgrading

    International Nuclear Information System (INIS)

    Clegg, T.B.; Rummel, R.L.; Carter, E.P.; Westerfeldt, C.R.; Lovette, A.W.; Edwards, S.E.

    1985-01-01

    The decision was made this past year to move the Lamb-shift polarized ion source which was first installed in the laboratory in 1970. The motivation was the need to improve the flexibility of spin-axis orientation by installing the ion source with a new Wien-filter spin precessor which is capable of rotating physically about the beam axis. The move of the polarized source was accomplished in approximately two months, with the accelerator being turned off for experiments during approximately four weeks of this time. The occasion of the move provided the opportunity to rewire completely the entire polarized ion source frame and to rebuild approximately half of the electronic chassis on the source. The result is an ion source which is now logically wired and carefully documented. Beams obtained from the source are much more stable than those previously available

  20. Review of current GPS methodologies for producing accurate time series and their error sources

    Science.gov (United States)

    He, Xiaoxing; Montillet, Jean-Philippe; Fernandes, Rui; Bos, Machiel; Yu, Kegen; Hua, Xianghong; Jiang, Weiping

    2017-05-01

    The Global Positioning System (GPS) is an important tool to observe and model geodynamic processes such as plate tectonics and post-glacial rebound. In the last three decades, GPS has seen tremendous advances in the precision of the measurements, which allow researchers to study geophysical signals through a careful analysis of daily time series of GPS receiver coordinates. However, the GPS observations contain errors and the time series can be described as the sum of a real signal and noise. The signal itself can again be divided into station displacements due to geophysical causes and to disturbing factors. Examples of the latter are errors in the realization and stability of the reference frame and corrections due to ionospheric and tropospheric delays and GPS satellite orbit errors. There is an increasing demand for detecting millimeter to sub-millimeter level ground displacement signals in order to further understand regional scale geodetic phenomena, hence requiring further improvements in the sensitivity of the GPS solutions. This paper provides a review spanning over 25 years of advances in processing strategies, error mitigation methods and noise modeling for the processing and analysis of GPS daily position time series. The processing of the observations is described step-by-step and mainly with three different strategies in order to explain the weaknesses and strengths of the existing methodologies. In particular, we focus on the choice of the stochastic model in the GPS time series, which directly affects the estimation of the functional model including, for example, tectonic rates, seasonal signals and co-seismic offsets. Moreover, the geodetic community continues to develop computational methods to fully automate all phases of the analysis of GPS time series. This idea is greatly motivated by the large number of GPS receivers installed around the world for diverse applications ranging from surveying small deformations of civil engineering structures (e.g., subsidence of a highway bridge) to the detection of particular geophysical signals.
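
    To make the functional-model estimation concrete, the sketch below fits the usual linear trend plus annual and semi-annual terms to a synthetic daily position series by ordinary least squares. Real analyses, as the review stresses, must also choose an appropriate stochastic model (e.g., white plus power-law noise); assuming purely white noise as done here tends to give over-optimistic rate uncertainties.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic daily "north" coordinate time series (mm): trend + seasonal + white noise.
days = np.arange(0, 8 * 365.25)                       # ~8 years of daily solutions
t_yr = days / 365.25
true_rate = 12.0                                      # mm/yr (e.g. a tectonic rate)
y = (true_rate * t_yr
     + 3.0 * np.sin(2 * np.pi * t_yr) + 1.5 * np.cos(2 * np.pi * t_yr)   # annual
     + 0.8 * np.sin(4 * np.pi * t_yr)                                     # semi-annual
     + rng.normal(0.0, 2.0, days.size))                                   # white noise only

# Functional model: offset, rate, annual and semi-annual sine/cosine terms.
A = np.column_stack([
    np.ones_like(t_yr), t_yr,
    np.sin(2 * np.pi * t_yr), np.cos(2 * np.pi * t_yr),
    np.sin(4 * np.pi * t_yr), np.cos(4 * np.pi * t_yr),
])
x, res, *_ = np.linalg.lstsq(A, y, rcond=None)

# Formal rate uncertainty under a white-noise assumption (optimistic for real GPS data).
dof = y.size - A.shape[1]
sigma2 = res[0] / dof
cov = sigma2 * np.linalg.inv(A.T @ A)
print(f"estimated rate: {x[1]:.2f} +/- {np.sqrt(cov[1, 1]):.2f} mm/yr (true {true_rate})")
```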

  1. Radiological and chemical source terms for Solid Waste Operations Complex

    International Nuclear Information System (INIS)

    Boothe, G.F.

    1994-01-01

    The purpose of this document is to describe the radiological and chemical source terms for the major projects of the Solid Waste Operations Complex (SWOC), including Project W-112, Project W-133 and Project W-100 (WRAP 2A). For purposes of this document, the term "source term" means the design basis inventory. All of the SWOC source terms involve the estimation of the radiological and chemical contents of various waste packages from different waste streams, and the inventories of these packages within facilities or within a scope of operations. The composition of some of the waste is not known precisely; consequently, conservative assumptions were made to ensure that the source term represents a bounding case (i.e., it is expected that the source term would not be exceeded). As better information is obtained on the radiological and chemical contents of waste packages and more accurate facility specific models are developed, this document should be revised as appropriate. Radiological source terms are needed to perform shielding and external dose calculations, to estimate routine airborne releases, to perform release calculations and dose estimates for safety documentation, to calculate the maximum possible fire loss and specific source terms for individual fire areas, etc. Chemical source terms (i.e., inventories of combustible, flammable, explosive or hazardous chemicals) are used to determine combustible loading, fire protection requirements, personnel exposures to hazardous chemicals from routine and accident conditions, and a wide variety of other safety and environmental requirements

  2. Determination of accurate metal silicide layer thickness by RBS

    International Nuclear Information System (INIS)

    Kirchhoff, J.F.; Baumann, S.M.; Evans, C.; Ward, I.; Coveney, P.

    1995-01-01

    Rutherford Backscattering Spectrometry (RBS) is a proven, useful analytical tool for determining compositional information for a wide variety of materials. One of the most widely utilized applications of RBS is the study of the composition of metal silicides (MSi_x), also referred to as polycides. A key quantity obtained from an analysis of a metal silicide is the ratio of silicon to metal (Si/M). Although compositional information is very reliable in these applications, determination of metal silicide layer thickness by RBS techniques can differ from true layer thicknesses by more than 40%. The cause of these differences lies in how the densities utilized in the RBS analysis are calculated. The standard RBS analysis software packages calculate layer densities by weighting each element's bulk density by its fractional atomic presence. This calculation causes large discrepancies in metal silicide thicknesses because most films form into crystal structures with distinct densities. Assuming a constant layer density for a full spectrum of Si/M values for metal silicide samples improves layer thickness determination but ignores the underlying physics of the films. We will present results of RBS determination of the thickness of various metal silicide films with a range of Si/M values using a physically accurate model for the calculation of layer densities. The thicknesses are compared to scanning electron microscopy (SEM) cross-section micrographs. We have also developed supporting software that incorporates these calculations into routine analyses. (orig.)
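
    The density issue can be made concrete with a small calculation: RBS yields an areal density (atoms/cm²), and converting it to a physical thickness requires an atomic number density, which differs depending on whether one uses the true compound density or a weighted average of the elemental bulk densities. The numbers below (TiSi2 as an example) are approximate handbook-style values used only to show the size of the effect; they are not taken from this work.

```python
# Convert an RBS areal density to thickness for a TiSi2 layer, comparing the
# compound density with a density obtained by weighting elemental bulk densities.
# Densities and masses are approximate reference values, for illustration only.
N_A = 6.022e23

M_TI, M_SI = 47.87, 28.09          # g/mol
RHO_TI, RHO_SI = 4.51, 2.33        # g/cm^3, elemental bulk densities
RHO_TISI2 = 4.04                   # g/cm^3, approximate density of C54 TiSi2

def thickness_nm(areal_density_at_cm2, density_g_cm3, atoms_per_formula, molar_mass):
    """t = (areal density) / (atomic number density)."""
    n_atoms_cm3 = density_g_cm3 * N_A * atoms_per_formula / molar_mass
    return areal_density_at_cm2 / n_atoms_cm3 * 1e7   # cm -> nm

areal = 5.0e17                      # atoms/cm^2, e.g. from an RBS fit (hypothetical)
m_formula = M_TI + 2 * M_SI         # TiSi2, 3 atoms per formula unit

# Weighted-average elemental density (the approach criticized above): 1/3 Ti, 2/3 Si.
rho_weighted = (RHO_TI + 2 * RHO_SI) / 3

t_compound = thickness_nm(areal, RHO_TISI2, 3, m_formula)
t_weighted = thickness_nm(areal, rho_weighted, 3, m_formula)
print(f"compound density: {t_compound:.0f} nm, weighted elemental: {t_weighted:.0f} nm "
      f"({100 * (t_weighted / t_compound - 1):.0f}% difference)")
```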

  3. Accurate measurement of RF exposure from emerging wireless communication systems

    International Nuclear Information System (INIS)

    Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass

    2013-01-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are submitted to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WIMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  4. Accurate prediction of the enthalpies of formation for xanthophylls.

    Science.gov (United States)

    Lii, Jenn-Huei; Liao, Fu-Xing; Hu, Ching-Han

    2011-11-30

    This study investigates the applications of computational approaches in the prediction of enthalpies of formation (ΔH(f)) for C-, H-, and O-containing compounds. The molecular mechanics (MM4) method, density functional theory (DFT) combined with the atomic equivalent (AE) and group equivalent (GE) schemes, and DFT-based correlation-corrected atomization (CCAZ) were used. We emphasized the application to xanthophylls, C-, H-, and O-containing carotenoids which consist of ~100 atoms and extended π-delocalization systems. Within the training set, MM4 predictions are more accurate than those obtained using AE and GE; however a systematic underestimation was observed in the extended systems. ΔH(f) for the training set molecules predicted by CCAZ combined with DFT are in very good agreement with the G3 results. The average absolute deviations (AADs) of CCAZ combined with B3LYP and MPWB1K are 0.38 and 0.53 kcal/mol compared with the G3 data, and are 0.74 and 0.69 kcal/mol compared with the available experimental data, respectively. Consistency of the CCAZ approach for the selected xanthophylls is revealed by the AAD of 2.68 kcal/mol between B3LYP-CCAZ and MPWB1K-CCAZ. Copyright © 2011 Wiley Periodicals, Inc.

  5. Accurate ab initio vibrational energies of methyl chloride

    International Nuclear Information System (INIS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-01-01

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH3^35Cl and CH3^37Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm^-1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.
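
    A common way to perform the CBS-limit step mentioned above is a two-point inverse-cubic extrapolation of the correlation energy over consecutive basis set cardinal numbers; the sketch below implements that textbook formula with made-up energies. The actual surfaces in this work use explicitly correlated coupled cluster methods and a specific extrapolation recipe that may differ from this simple form.

```python
# Two-point inverse-cubic CBS extrapolation of correlation energies:
#   E_corr(X) = E_corr(CBS) + A / X^3  =>  E_CBS = (Y^3 E_Y - X^3 E_X) / (Y^3 - X^3)
# The energies below are made-up numbers purely to exercise the formula.
def cbs_two_point(e_small, x_small, e_large, x_large):
    x3, y3 = x_small**3, x_large**3
    return (y3 * e_large - x3 * e_small) / (y3 - x3)

if __name__ == "__main__":
    # Hypothetical correlation energies (hartree) with triple- and quadruple-zeta bases.
    e_tz, e_qz = -0.512345, -0.524567     # X = 3, 4
    print(f"E_corr(CBS) ~= {cbs_two_point(e_tz, 3, e_qz, 4):.6f} hartree")
```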

  6. Accurate ab initio vibrational energies of methyl chloride

    Energy Technology Data Exchange (ETDEWEB)

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH3^35Cl and CH3^37Cl. The respective PESs, CBS-35^HL and CBS-37^HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35^HL and CBS-37^HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm^-1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.

  7. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    Science.gov (United States)

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh, are 20 pairs of facial muscles which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips, eyelids, and between superficial soft tissue continuum and deep rigid skeletal bones were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  8. New method in obtaining correction factor of power confirming

    International Nuclear Information System (INIS)

    Deng Yongjun; Li Rundong; Liu Yongkang; Zhou Wei

    2010-01-01

    The Westcott theory is the most widely used method in reactor power calibration and is particularly suited to research reactors. However, this method is cumbersome because it requires many correction parameters that rely on empirical formulas specific to the reactor type. In the present work, the coefficient relating foil activity to reactor power was obtained by Monte Carlo calculation, carried out with a precise description of the reactor core and of the foil positions in the MCNP input card. The reactor power is then determined from the core neutron fluence profile and the activity of the foil placed at the position used for normalization. This new method is simpler, more flexible and more accurate than the Westcott theory. In this paper, the theoretical results obtained for SPRR-300 with the new method were compared with the experimental results, which verified the feasibility of this new method. (authors)

  9. Theoretical evaluation of accuracy in position and size of brain activity obtained by near-infrared topography

    International Nuclear Information System (INIS)

    Kawaguchi, Hiroshi; Hayashi, Toshiyuki; Kato, Toshinori; Okada, Eiji

    2004-01-01

    Near-infrared (NIR) topography can obtain a topographical distribution of the activated region in the brain cortex. Near-infrared light is strongly scattered in the head, and the volume of tissue sampled by a source-detector pair on the head surface is broadly distributed in the brain. This scattering effect results in poor resolution and contrast in the topographic image of the brain activity. In this study, a one-dimensional distribution of absorption change in a head model is calculated by mapping and reconstruction methods to evaluate the effect of the image reconstruction algorithm and the interval of measurement points for topographic imaging on the accuracy of the topographic image. The light propagation in the head model is predicted by Monte Carlo simulation to obtain the spatial sensitivity profile for a source-detector pair. The measurement points are one-dimensionally arranged on the surface of the model, and the distance between adjacent measurement points is varied from 4 mm to 28 mm. Small intervals of the measurement points improve the topographic image calculated by both the mapping and reconstruction methods. In the conventional mapping method, the limit of the spatial resolution depends upon the interval of the measurement points and spatial sensitivity profile for source-detector pairs. The reconstruction method has advantages over the mapping method which improve the results of one-dimensional analysis when the interval of measurement points is less than 12 mm. The effect of overlapping of spatial sensitivity profiles indicates that the reconstruction method may be effective to improve the spatial resolution of a two-dimensional reconstruction of topographic image obtained with larger interval of measurement points. Near-infrared topography with the reconstruction method potentially obtains an accurate distribution of absorption change in the brain even if the size of absorption change is less than 10 mm
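
    The mapping-versus-reconstruction distinction above can be illustrated with a toy one-dimensional example: the measured absorbance changes y are modeled as y = S x, where each row of S is a (broad) spatial sensitivity profile of one source-detector pair, and a regularized inverse recovers the absorption-change distribution x. The Gaussian sensitivity profiles and the Tikhonov regularization used here are simple placeholders for the Monte Carlo-derived profiles and the reconstruction scheme in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# 1-D scalp/cortex coordinate (mm) and a "true" 10 mm-wide absorption change.
x_mm = np.arange(0, 100, 1.0)
x_true = np.where(np.abs(x_mm - 55) <= 5, 1.0, 0.0)

# Broad Gaussian sensitivity profiles for source-detector pairs spaced every 12 mm
# (placeholders for Monte Carlo spatial sensitivity profiles).
centers = np.arange(6, 100, 12.0)
S = np.exp(-0.5 * ((x_mm[None, :] - centers[:, None]) / 15.0) ** 2)

y = S @ x_true + rng.normal(0.0, 0.02, centers.size)   # noisy measurements

# "Mapping": assign each measurement back to its pair position (coarse, blurred).
mapped = np.interp(x_mm, centers, y)

# "Reconstruction": Tikhonov-regularized least squares, x = (S^T S + lam I)^-1 S^T y.
lam = 1.0
x_rec = np.linalg.solve(S.T @ S + lam * np.eye(x_mm.size), S.T @ y)

peak_map = x_mm[np.argmax(mapped)]
peak_rec = x_mm[np.argmax(x_rec)]
print(f"true center: 55 mm, mapping peak: {peak_map:.0f} mm, reconstruction peak: {peak_rec:.0f} mm")
```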

  10. A new method to estimate heat source parameters in gas metal arc welding simulation process

    International Nuclear Information System (INIS)

    Jia, Xiaolei; Xu, Jie; Liu, Zhaoheng; Huang, Shaojie; Fan, Yu; Sun, Zhi

    2014-01-01

    Highlights: •A new method for accurate simulation of heat source parameters was presented. •The partial least-squares regression analysis was recommended in the method. •The welding experiment results verified the accuracy of the proposed method. -- Abstract: Heat source parameters are usually chosen from experience in the welding simulation process, which introduces errors in the simulation results (e.g. temperature distribution and residual stress). In this paper, a new method was developed to accurately estimate heat source parameters in welding simulation. In order to reduce the simulation complexity, a sensitivity analysis of heat source parameters was carried out. The relationships between heat source parameters and welding pool characteristics (fusion width (W), penetration depth (D) and peak temperature (T_p)) were obtained with both the multiple regression analysis (MRA) and the partial least-squares regression analysis (PLSRA). Different regression models were employed in each regression method. Comparisons of both methods were performed. A welding experiment was carried out to verify the method. The results showed that both the MRA and the PLSRA were feasible and accurate for prediction of heat source parameters in welding simulation. However, the PLSRA was recommended for its advantages of requiring less simulation data
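
    A minimal sketch of the partial least-squares step, assuming one wants to predict heat source parameters from simulated weld pool characteristics (W, D, T_p): the data below are random stand-ins for a simulation campaign, and scikit-learn's PLSRegression is used in place of whatever statistics package the authors employed.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(5)

# Stand-in "simulation campaign": weld pool characteristics (W, D, Tp) produced by
# two hypothetical heat source parameters (e.g. source width and depth), plus noise.
n_runs = 40
params = rng.uniform([2.0, 1.0], [6.0, 4.0], size=(n_runs, 2))      # [source width, depth]
pool = np.column_stack([
    1.8 * params[:, 0] + 0.3 * params[:, 1],        # fusion width W (mm)
    0.4 * params[:, 0] + 1.5 * params[:, 1],        # penetration depth D (mm)
    1500 + 80 * params[:, 0] + 120 * params[:, 1],  # peak temperature Tp (deg C)
]) + rng.normal(0, 0.05, (n_runs, 3)) * [1, 1, 20]

# PLS regression from pool characteristics back to heat source parameters.
pls = PLSRegression(n_components=2)
pls.fit(pool, params)

# Given a measured weld pool (W, D, Tp), estimate the heat source parameters to use.
measured = np.array([[9.5, 6.0, 2300.0]])
print("estimated heat source parameters:", pls.predict(measured)[0])
```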

  11. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers and they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we use a multi-proxy approach we applied to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine effects of i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion), with ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  12. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Full Text Available Important features of the electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide the ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to the experiment for a few sources. Changes in the simulated extracted ion currents are obtained with varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  13. Influential Factors for Accurate Load Prediction in a Demand Response Context

    DEFF Research Database (Denmark)

    Wollsen, Morten Gill; Kjærgaard, Mikkel Baun; Jørgensen, Bo Nørregaard

    2016-01-01

    Accurate prediction of a building's electricity load is crucial to respond to Demand Response events with an assessable load change. However, previous work on load prediction fails to consider a wider set of possible data sources. In this paper we study different data scenarios to map the influence of the different data sources. Next, the time of day that is being predicted greatly influences the prediction, which is related to the weather pattern. By presenting these results we hope to improve the modeling of building loads and algorithms for Demand Response planning.

  14. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant player in the smartphone operating system market, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal app detection through dynamic analysis, and a signature detection engine performing known malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the Cloud. The app store markets and ordinary users can access our detection system for malware detection through a cloud service.

  15. Global inventory of NOx sources

    International Nuclear Information System (INIS)

    Delmas, R.; Serca, D.; Jambert, C.

    1997-01-01

    Nitrogen oxides are key compounds for the oxidation capacity of the troposphere. Their concentration depends on the proximity of sources because of their short atmospheric lifetime. An accurate knowledge of the distribution of their sources and sinks is therefore crucial. At the global scale, the dominant sources of nitrogen oxides - combustion of fossil fuel (about 50%) and biomass burning (about 20%) - are basically anthropogenic. Natural sources, including lightning and microbial activity in soils, represent therefore less than 30% of total emissions. Fertilizer use in agriculture constitutes an anthropogenic perturbation to the microbial source. The methods to estimate the magnitude and distribution of these dominant sources of nitrogen oxides are discussed. Some minor sources which may play a specific role in tropospheric chemistry, such as NOx emission from aircraft in the upper troposphere or input from production in the stratosphere from N2O photodissociation, are also considered

  16. Obtaining of potassium dicyan-argentate

    International Nuclear Information System (INIS)

    Sattarova, M.A.; Solojenkin, P.M.

    1997-01-01

    This work is devoted to the preparation of potassium dicyanoargentate. Potassium dicyanoargentate was synthesized by means of an exchange reaction between silver nitrate and potassium cyanide. The obtained samples were analysed by titration and potentiometry.

  17. Treating shale oil to obtain sulfonates

    Energy Technology Data Exchange (ETDEWEB)

    Schaeffer, H

    1921-01-21

    The principal features of the process are: (1) treating the oil with chlorosulfonic acid at a temperature of about 100°C; (2) transforming the sulfonic acids obtained into salts; (3) as new industrial products, the sulfonates obtained and their industrial application as disinfectants for hides and wood.

  18. Strategies for obtaining unpublished drug trial data

    DEFF Research Database (Denmark)

    Wolfe, Nicole; Gøtzsche, Peter C.; Bero, Lisa Anne

    2013-01-01

    Authors of systematic reviews have difficulty obtaining unpublished data for their reviews. This project aimed to provide an in-depth description of the experiences of authors in searching for and gaining access to unpublished data for their systematic reviews, and to give guidance on best practices for identifying, obtaining and using unpublished data.

  19. 38 CFR 21.5725 - Obtaining benefits.

    Science.gov (United States)

    2010-07-01

    ... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Obtaining benefits. 21... benefits. (a) Actions required of the individual. In order to obtain benefits under the educational assistance and subsistence allowance program, an individual must— (1) File a claim for benefits with VA, and...

  20. Accurate formulas for the penalty caused by interferometric crosstalk

    DEFF Research Database (Denmark)

    Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle

    2000-01-01

    New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.

  1. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  2. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination

  3. Accurate Compton scattering measurements for N2 molecules

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)

    2011-06-14

    The accurate Compton profiles of N2 gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement shows that the CI (configuration interaction) calculation agrees better with the data than the Hartree-Fock calculation, and suggests the importance of multi-excitation in the CI calculations for the accuracy of ground-state wavefunctions.

  4. A Novel Multimode Waveguide Coupler for Accurate Power Measurement of Traveling Wave Tube Harmonic Frequencies

    Science.gov (United States)

    Wintucky, Edwin G.; Simons, Rainee N.

    2014-01-01

    This paper presents the design, fabrication and test results for a novel waveguide multimode directional coupler (MDC). The coupler, fabricated from two dissimilar waveguides, is capable of isolating the power at the second harmonic frequency from the fundamental power at the output port of a traveling-wave tube (TWT). In addition to accurate power measurements at harmonic frequencies, a potential application of the MDC is in the design of a beacon source for atmospheric propagation studies at millimeter-wave frequencies.

  5. A method for an accurate in-flight calibration of AVHRR data for vegetation index calculation

    OpenAIRE

    Asmami , Mbarek; Wald , Lucien

    1992-01-01

    International audience; A significant degradation in the Advanced Very High Resolution Radiometer (AVHRR) responsivity, on the NOAA satellite series, has occurred since the prelaunch calibration and with time since launch. This affects the vegetation index (NDVI), which is an important source of information for monitoring vegetation conditions on regional and global scales. Many studies have been carried out which use the Viewing Earth calibration approach in order to provide accurate calib...

  6. Planets as background noise sources in free space optical communications

    Science.gov (United States)

    Katz, J.

    1986-01-01

    Background noise generated by planets is the dominant noise source in most deep space direct detection optical communications systems. Earlier approximate analyses of this problem are based on simplified blackbody calculations and can yield results that may be inaccurate by up to an order of magnitude. Various other factors that need to be taken into consideration to obtain a more accurate estimate of the noise magnitude, such as the phase angle and the actual spectral dependence of the planet albedo, are examined.

  7. Source localization with an advanced gravitational wave detector network

    International Nuclear Information System (INIS)

    Fairhurst, Stephen

    2011-01-01

    We derive an expression for the accuracy with which sources can be localized using a network of gravitational wave detectors. The result is obtained via triangulation, using timing accuracies at each detector and is applicable to a network with any number of detectors. We use this result to investigate the ability of advanced gravitational wave detector networks to accurately localize signals from compact binary coalescences. We demonstrate that additional detectors can significantly improve localization results and illustrate our findings with networks comprised of the advanced LIGO, advanced Virgo and LCGT. In addition, we evaluate the benefits of relocating one of the advanced LIGO detectors to Australia.

  8. NINJA-OPS: Fast Accurate Marker Gene Alignment Using Concatenated Ribosomes.

    Directory of Open Access Journals (Sweden)

    Gabriel A Al-Ghalith

    2016-01-01

    Full Text Available The explosion of bioinformatics technologies in the form of next generation sequencing (NGS) has facilitated a massive influx of genomics data in the form of short reads. Short read mapping is therefore a fundamental component of next generation sequencing pipelines which routinely match these short reads against reference genomes for contig assembly. However, such techniques have seldom been applied to microbial marker gene sequencing studies, which have mostly relied on novel heuristic approaches. We propose NINJA Is Not Just Another OTU-Picking Solution (NINJA-OPS, or NINJA for short), a fast and highly accurate novel method enabling reference-based marker gene matching (picking Operational Taxonomic Units, or OTUs). NINJA takes advantage of the Burrows-Wheeler (BW) alignment using an artificial reference chromosome composed of concatenated reference sequences, the "concatesome," as the BW input. Other features include automatic support for paired-end reads with arbitrary insert sizes. NINJA is also free and open source and implements several pre-filtering methods that elicit substantial speedup when coupled with existing tools. We applied NINJA to several published microbiome studies, obtaining accuracy similar to or better than previous reference-based OTU-picking methods while achieving an order of magnitude or more speedup and using a fraction of the memory footprint. NINJA is a complete pipeline that takes a FASTA-formatted input file and outputs a QIIME-formatted taxonomy-annotated BIOM file for an entire MiSeq run of human gut microbiome 16S genes in under 10 minutes on a dual-core laptop.

  9. Source Water Protection Contaminant Sources

    Data.gov (United States)

    Iowa State University GIS Support and Research Facility — Simplified aggregation of potential contaminant sources used for Source Water Assessment and Protection. The data is derived from IDNR, IDALS, and US EPA program...

  10. How Do Qataris Source Health Information?

    Directory of Open Access Journals (Sweden)

    Sopna M Choudhury

    Full Text Available Qatar is experiencing rapid population expansion with increasing demands on healthcare services for both acute and chronic conditions. Sourcing accurate information about health conditions is crucial, yet the methods used for sourcing health information in Qatar are currently unknown. Gaining a better understanding of the sources the Qatari population use to recognize and manage health and/or disease will help to develop strategies to educate individuals about existing and emerging health problems. The objective was to investigate the methods used by the Qatari population to source health information. We hypothesized that the Internet would be a key service used to access health information by the Qatari population. A researcher-led questionnaire was used to collect information from Qatari adults, aged 18-85 years. Participants were approached in shopping centers and public places in Doha, the capital city of Qatar. The questionnaire was used to ascertain information concerning demographics, health status, and utilization of health care services during the past year, as well as sources of health information used. Data from a total of 394 eligible participants were included. The Internet was widely used for seeking health information among the Qatari population (71.1%). A greater proportion of Qatari females (78.7%) reported searching for health-related information using the Internet compared to Qatari males (60.8%). Other commonly used sources were family and friends (37.8%) and Primary Health Care Centers (31.2%). Google was the most commonly used search engine (94.8%). Gender, age and education levels were all significant predictors of Internet use for health information (P<0.001 for all predictors). Females were 2.9 times more likely than males (P<0.001), and people educated to university or college level were 3.03 times more likely (P<0.001), to use the Internet for health information. The Internet is a widely used source to obtain health-related information by the Qatari

  11. Irradiation device using radiation sources

    International Nuclear Information System (INIS)

    Perraudin, Claude; Amarge, Edmond; Guiho, J.-P.; Horiot, J.-C.; Taniel, Gerard; Viel, Georges; Brethon, J.-P.

    1981-01-01

    The invention refers to an irradiation appliance making use of radioactive sources such as cobalt 60. This invention concerns an irradiation appliance delivering an easily adjustable irradiation beam of accurately defined dimensions and enabling the radioactive sources to be changed, without intricate manipulations, at the very place where the appliance is used. This kind of appliance is employed in radiotherapy. [fr]

  12. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  13. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery
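
    The intrinsic detector response mentioned in the two records above was fitted with an asymmetric Gaussian distribution. As a hedged illustration of what such a fit can look like, the sketch below fits a two-sided Gaussian (different widths left and right of the peak) to a synthetic line profile with scipy; the exact functional form, data and parameters used by the authors are not reproduced here and may differ.

        # Illustrative fit of an asymmetric (two-sided) Gaussian to a measured detector
        # line response. The functional form is an assumption for illustration only.
        import numpy as np
        from scipy.optimize import curve_fit

        def asym_gauss(x, a, x0, sigma_l, sigma_r):
            """Gaussian with different widths on each side of the peak x0."""
            sigma = np.where(x < x0, sigma_l, sigma_r)
            return a * np.exp(-0.5 * ((x - x0) / sigma) ** 2)

        # Hypothetical measured profile (position in mm, normalized counts).
        x = np.linspace(-5, 5, 81)
        rng = np.random.default_rng(0)
        y = asym_gauss(x, 1.0, 0.2, 0.8, 1.3) + 0.02 * rng.normal(size=x.size)

        popt, pcov = curve_fit(asym_gauss, x, y, p0=(1.0, 0.0, 1.0, 1.0))
        print("amplitude, center, sigma_left, sigma_right =", np.round(popt, 3))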

  14. Accurate measurement of indoor radon concentration using a low-effective volume radon monitor

    International Nuclear Information System (INIS)

    Tanaka, Aya; Minami, Nodoka; Mukai, Takahiro; Yasuoka, Yumi; Iimoto, Takeshi; Omori, Yasutaka; Nagahama, Hiroyuki; Muto, Jun

    2017-01-01

    AlphaGUARD is a low-effective-volume detector and one of the most popular portable radon monitors currently available. This study investigated whether AlphaGUARD can accurately measure variable indoor radon levels. The consistency of the radon-concentration data obtained by AlphaGUARD was evaluated against simultaneous measurements by two other monitors (each ∼10 times more sensitive than AlphaGUARD). We found that accurate radon-concentration measurements with AlphaGUARD require net counts of at least 500, corresponding to a relative percent difference below 25%. AlphaGUARD can provide accurate measurements of radon concentration at the world average level (∼50 Bq m-3) and at the reference level for workplaces (1000 Bq m-3), using data integrated over at least 3 h and 10 min, respectively. (authors)
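
    The acceptance criterion quoted above (net counts of at least 500, relative percent difference below 25%) can be checked with a short calculation. The sketch below uses the usual two-sample definition of relative percent difference between a reading and a simultaneous reference reading; it is an illustration of the criterion with invented numbers, not the authors' processing code.

        # Relative percent difference (RPD) between an AlphaGUARD reading and a
        # reference monitor reading, plus the net-count acceptance check described above.
        def relative_percent_difference(a, b):
            """RPD = |a - b| / mean(a, b) * 100, the usual two-sample definition."""
            return abs(a - b) / ((a + b) / 2.0) * 100.0

        def measurement_acceptable(net_counts, alphaguard_bq_m3, reference_bq_m3,
                                   min_counts=500, max_rpd=25.0):
            rpd = relative_percent_difference(alphaguard_bq_m3, reference_bq_m3)
            return net_counts >= min_counts and rpd < max_rpd

        # Hypothetical example: 620 net counts, 52 Bq/m3 vs 47 Bq/m3 on the reference.
        print(measurement_acceptable(620, 52.0, 47.0))   # True (RPD is about 10%)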

  15. Accurate wavelength prediction of photonic crystal resonant reflection and applications in refractive index measurement

    DEFF Research Database (Denmark)

    Hermannsson, Pétur Gordon; Vannahme, Christoph; Smith, Cameron L. C.

    2014-01-01

    In the past decade, photonic crystal resonant reflectors have been increasingly used as the basis for label-free biochemical assays in lab-on-a-chip applications. In both designing and interpreting experimental results, an accurate model describing the optical behavior of such structures is essential. Here, an analytical method for precisely predicting the absolute positions of resonantly reflected wavelengths is presented. The model is experimentally verified to be highly accurate using nanoreplicated, polymer-based photonic crystal grating reflectors with varying grating periods and superstrate materials. The importance of accounting for material dispersion in order to obtain accurate simulation results is highlighted, and a method for doing so using an iterative approach is demonstrated. Furthermore, an application for the model is demonstrated, in which the material dispersion...

  16. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    Energy Technology Data Exchange (ETDEWEB)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

    2017-02-01

    The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  17. Radioactive source monitoring system based on RFID and GPRS

    International Nuclear Information System (INIS)

    He Haiyang; Zhou Hongliang; Zhang Hongjian; Zhang Sheng; Zhou Junru; Weng Guojie

    2011-01-01

    Nuclear radiation produced by radioactive sources is harmful to human health, and the loss or theft of a radioactive source can cause environmental pollution and social panic. In order to address abnormal leaks, accidental loss, theft and other problems involving radioactive sources, a radioactive source monitoring system based on RFID, GPS, GPRS and GSM technology is put forward. A radiation dose detector and a GPS wireless location module are used to obtain the radiation dose and location information respectively, and an RFID reader reads the status of a tag fixed to the bottom of the radioactive source. All information is transmitted to the remote monitoring center via GPRS wireless transmission. An audible and visual alarm is triggered when the radiation dose is out of limits or the state of the radioactive source is abnormal, and the monitoring center simultaneously sends alarm text messages to the managers through a GSM modem. Thus, the monitoring and alarming functions are achieved. The system has already been put into operation and is being kept in functional order. It can provide stable statistics as well as accurate alarms, effectively improving the supervision of radioactive sources. (authors)
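
    The alarm behaviour described above (alarm when the dose rate is out of limits or the tag status is abnormal, then notify the monitoring centre) amounts to a simple decision rule. The sketch below is a hypothetical illustration of that rule only; the field names, threshold value and reporting call are invented for the example and are not taken from the paper.

        # Hypothetical alarm rule for a radioactive-source monitoring node: flag an
        # audible/visual alarm and notify the centre when the dose rate exceeds its
        # limit or the RFID tag of the source is missing/abnormal.
        from dataclasses import dataclass

        @dataclass
        class SourceStatus:
            dose_rate_usv_h: float      # measured dose rate (microsievert per hour)
            tag_present: bool           # RFID tag detected on the source container
            lat: float
            lon: float

        def check_and_alarm(status: SourceStatus, dose_limit_usv_h: float = 10.0):
            reasons = []
            if status.dose_rate_usv_h > dose_limit_usv_h:
                reasons.append("dose rate out of limits")
            if not status.tag_present:
                reasons.append("source tag missing or abnormal")
            if reasons:
                # In the real system this would drive the audible/visual alarm and a
                # GSM text message; here we simply report the decision.
                print(f"ALARM at ({status.lat:.4f}, {status.lon:.4f}): " + "; ".join(reasons))
            return bool(reasons)

        check_and_alarm(SourceStatus(dose_rate_usv_h=15.2, tag_present=True, lat=30.25, lon=120.16))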

  18. Sources, instrumentation and detectors for protein crystallography

    CERN Document Server

    Nave, C

    2001-01-01

    Some of the requirements for protein crystallography experiments on a synchrotron are described. Although data from different types of crystal are often collected without changing the X-ray beam properties, there are benefits if the incident beam is matched to a particular crystal and its diffraction pattern. These benefits are described with some examples. Radiation damage and other effects impose limits on the dose and dose rate on a protein crystal if the maximum amount of data is to be obtained. These limitations have possible consequences for the X-ray source required. Presently available commercial detector systems provide excellent data for protein crystallography but do not quite reach the specifications of the 'ideal' detector. In order to collect the most accurate data (e.g. for very weak anomalous scattering applications) detectors that produce near photon counting statistics over a wide dynamic range are required. It is possible that developments in 'pixel' detectors will allow these demanding exp...

  19. Precise and accurate isotope ratio measurements by ICP-MS.

    Science.gov (United States)

    Becker, J S; Dietze, H J

    2000-09-01

    The precise and accurate determination of isotope ratios by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) is important for quite different application fields (e.g. for isotope ratio measurements of stable isotopes in nature, especially for the investigation of isotope variation in nature or age dating, for determining isotope ratios of radiogenic elements in the nuclear industry, quality assurance of fuel material, for reprocessing plants, nuclear material accounting and radioactive waste control, for tracer experiments using stable isotopes or long-lived radionuclides in biological or medical studies). Thermal ionization mass spectrometry (TIMS), which used to be the dominant analytical technique for precise isotope ratio measurements, is being increasingly replaced by ICP-MS due to its excellent sensitivity, precision and good accuracy. Instrumental progress in ICP-MS was achieved by the introduction of the collision cell interface in order to dissociate many disturbing argon-based molecular ions, thermalize the ions and neutralize the disturbing argon ions of the plasma gas (Ar+). The application of the collision cell in ICP-QMS results in a higher ion transmission, improved sensitivity and better precision of isotope ratio measurements compared to quadrupole ICP-MS without the collision cell [e.g., for 235U/238U ≈ 1 (10 µg/L uranium): 0.07% relative standard deviation (RSD) vs. 0.2% RSD in short-term measurements (n = 5)]. A significant instrumental improvement for ICP-MS is the multicollector device (MC-ICP-MS), introduced in order to obtain a better precision of isotope ratio measurements (with a precision of up to 0.002% RSD). CE- and HPLC-ICP-MS are used for the separation of isobaric interferences of long-lived radionuclides and stable isotopes, e.g. in the determination of spallation nuclide abundances in an irradiated tantalum target.
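
    The precision figures quoted above are relative standard deviations of repeated isotope-ratio measurements. As a worked illustration (with invented numbers, not data from the paper), the sketch below computes a 235U/238U ratio and its short-term RSD for n = 5 replicates.

        # Worked example: mean 235U/238U ratio and its relative standard deviation
        # (RSD) from five hypothetical replicate measurements.
        import numpy as np

        ratios_235_238 = np.array([1.0012, 0.9991, 1.0005, 0.9998, 1.0009])  # invented replicates

        mean_ratio = ratios_235_238.mean()
        rsd_percent = ratios_235_238.std(ddof=1) / mean_ratio * 100.0

        print(f"mean 235U/238U = {mean_ratio:.4f}, RSD = {rsd_percent:.3f} %")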

  20. Obtaining of polycrystalline silicon for semiconductor industry

    International Nuclear Information System (INIS)

    Mukashev, F.; Nauryzbaev, M.; Kolesnikov, B.; Ivanov, Y.

    1996-01-01

    The purpose of the project is to create pilot equipment and optimize the process of obtaining polycrystalline silicon at a semi-industrial level. In the past several decades, the historical experience of developing countries has shown that one of the most promising ways to improve the economy of a country is to establish a semiconductor industry. First of all, the results can help strengthen defense and national security and create industrial production. The silane method, which has traditionally been used for obtaining technical-grade and polycrystalline silicon, consists in obtaining and then pyrolyzing mono- and polysilanes. Although the traditional methods of obtaining silicon hydrides have specific advantages, such as utilizing by-products, they also have clear shortcomings, i.e. either a low yield of the final product (through hydrolysis of Mg2Si), a high content of by-products in it, or a high content of solvent vapors (through decomposing Mg2Si in non-aqueous solutions)

  1. Treatment of biomass to obtain ethanol

    Science.gov (United States)

    Dunson, Jr., James B.; Elander, Richard T [Evergreen, CO; Tucker, III, Melvin P.; Hennessey, Susan Marie [Avondale, PA

    2011-08-16

    Ethanol was produced using biocatalysts that are able to ferment sugars derived from treated biomass. Sugars were obtained by pretreating biomass under conditions of high solids and low ammonia concentration, followed by saccharification.

  2. Silicon dioxide obtained by Polymeric Precursor Method

    International Nuclear Information System (INIS)

    Oliveira, C.T.; Granado, S.R.; Lopes, S.A.; Cavalheiro, A.A.

    2011-01-01

    The Polymeric Precursor Method is suitable for obtaining several types of oxide materials with high surface area, even when obtained in particulate form. Several MO2 oxides, such as those of titanium, silicon and zirconium, can be obtained by this methodology. In this work, the synthesis of silicon oxide was monitored by thermal analysis, XRD and surface area analysis in order to demonstrate the influence of several synthesis and calcination parameters. Surface area values as high as 370 m2/g and an increase in micropore volume were obtained when the material was synthesized using ethylene glycol as the polymerizing agent. XRD analysis showed that the material is amorphous when calcined at 600°C regardless of the calcination time, but the material morphology is strongly influenced by the polymeric resin composition. Using glycerol as the polymerizing agent, the pore size increases and the surface area decreases with increasing decomposition time, when compared to ethylene glycol. (author)

  3. Process for obtaining cobalt and lanthanum nickelate

    International Nuclear Information System (INIS)

    Tapcov, V.; Samusi, N.; Gulea, A.; Horosun, I.; Stasiuc, V.; Petrenco, P.

    1999-01-01

    The invention relates to a process for obtaining polycrystalline ceramics of lanthanum cobaltite and lanthanum nickelate with the perovskite structure from heterometallic coordination compounds. The obtained products can be utilized in industry as catalysts. The essence of the invention consists in obtaining polycrystalline LaCoO3 and LaNiO3 ceramics with the perovskite structure by pyrolysis of the parent compounds, namely the heterometallic coordination compounds of lanthanum-cobalt or lanthanum-nickel. The pyrolysis of the parent compound runs for one hour at 800 C. The technical result of the invention consists in lowering the pyrolysis temperature of the parent compound, which contains the precise ratio of metals necessary for obtaining the ceramics

  4. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  5. Positron sources

    International Nuclear Information System (INIS)

    Chehab, R.

    1994-01-01

    A tentative survey of positron sources is given. Physical processes on which positron generation is based are indicated and analyzed. Explanation of the general features of electromagnetic interactions and nuclear β + decay makes it possible to predict the yield and emittance for a given optical matching system between the positron source and the accelerator. Some kinds of matching systems commonly used - mainly working with solenoidal field - are studied and the acceptance volume calculated. Such knowledge is helpful in comparing different matching systems. Since for large machines, a significant distance exists between the positron source and the experimental facility, positron emittance has to be preserved during beam transfer over large distances and methods used for that purpose are indicated. Comparison of existing positron sources leads to extrapolation to sources for future linear colliders. Some new ideas associated with these sources are also presented. (orig.)

  6. Fast and Accurate Icepak-PSpice Co-Simulation of IGBTs under Short-Circuit with an Advanced PSpice Model

    DEFF Research Database (Denmark)

    Wu, Rui; Iannuzzo, Francesco; Wang, Huai

    2014-01-01

    A basic problem in the IGBT short-circuit failure mechanism study is to obtain realistic temperature distribution inside the chip, which demands accurate electrical simulation to obtain power loss distribution as well as detailed IGBT geometry and material information. This paper describes an unp...

  7. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and operation free of radiation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to a high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work. It proposes optimization-based and data-driven approaches for estimating this low-dimensional information. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, type of cost function and search algorithms. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as number of electrodes and signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as number of electrodes and SNR, than data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
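
    To make the optimization-based approach above concrete, the sketch below estimates a circular anomaly's position by minimizing a weighted squared error between measured and model-predicted boundary voltages with a derivative-free method. The forward model here is a crude synthetic stand-in (a real EIT forward solver is far beyond a few lines), and the function names, weights and geometry are illustrative assumptions, not the authors' implementation.

        # Optimization-based position estimation in the spirit of the record above:
        # minimize a weighted squared error between measured and predicted voltages
        # with a derivative-free optimizer. The "forward model" is a synthetic stand-in.
        import numpy as np
        from scipy.optimize import minimize

        N_ELECTRODES = 16
        ANGLES = 2 * np.pi * np.arange(N_ELECTRODES) / N_ELECTRODES

        def forward_model(xy):
            """Stand-in forward model: a perturbation at each electrode that decays
            with distance from the anomaly at position xy (unit-disc geometry)."""
            ex, ey = np.cos(ANGLES), np.sin(ANGLES)
            d2 = (ex - xy[0]) ** 2 + (ey - xy[1]) ** 2
            return 1.0 / (0.2 + d2)

        true_position = np.array([0.35, -0.20])
        rng = np.random.default_rng(1)
        measured = forward_model(true_position) + 0.01 * rng.normal(size=N_ELECTRODES)
        weights = np.ones(N_ELECTRODES)          # could down-weight noisy electrodes

        def cost(xy):
            residual = forward_model(xy) - measured
            return float(np.sum(weights * residual ** 2))

        result = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
        print("estimated position:", np.round(result.x, 3), " true:", true_position)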

  8. Open source posturography.

    Science.gov (United States)

    Rey-Martinez, Jorge; Pérez-Fernández, Nicolás

    2016-12-01

    The proposed validation goal of 0.9 for the intra-class correlation coefficient was reached with the results of this study. With the obtained results we consider the developed software (RombergLab) to be a validated balance assessment software. The reliability of this software depends on the technical specifications of the force platform used. The aim was to develop and validate a posturography software and share its source code in open source terms. Prospective non-randomized validation study: 20 consecutive adults underwent two balance assessment tests; six-condition posturography was performed using a clinically approved software and force platform, and the same conditions were measured using the newly developed open source software with a low-cost force platform. The intra-class correlation index of the sway area obtained from the center of pressure variations in both devices for the six conditions was the main variable used for validation. Excellent concordance between RombergLab and the clinically approved force platform was obtained (intra-class correlation coefficient = 0.94). A Bland and Altman graphic concordance plot was also obtained. The source code used to develop RombergLab was published in open source terms.
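
    The main validation variable above is the sway area derived from centre-of-pressure (COP) variations. One common definition is the area of the 95% confidence ellipse of the COP trace; the sketch below computes it from the COP covariance matrix. This is a generic illustration of that quantity and may not match the exact definition implemented in RombergLab; the recording parameters are invented.

        # Sway area as the 95% confidence ellipse of a centre-of-pressure (COP) trace.
        # Generic definition for illustration; RombergLab's exact computation may differ.
        import numpy as np

        def sway_area_95(cop_x, cop_y):
            """Area of the 95% confidence ellipse fitted to the COP samples (same units^2)."""
            cov = np.cov(np.vstack([cop_x, cop_y]))
            eigvals = np.linalg.eigvalsh(cov)            # principal variances
            chi2_95_2dof = 5.991                         # 95% quantile of chi-square, 2 dof
            return np.pi * chi2_95_2dof * np.sqrt(eigvals[0] * eigvals[1])

        # Hypothetical 30 s recording at 100 Hz with a few millimetres of sway.
        rng = np.random.default_rng(42)
        x = rng.normal(0.0, 3.0, 3000)   # mm, medio-lateral
        y = rng.normal(0.0, 5.0, 3000)   # mm, antero-posterior
        print(f"sway area = {sway_area_95(x, y):.1f} mm^2")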

  9. Comparative Studies on Pectinases obtained from Aspergillus ...

    African Journals Online (AJOL)

    Prof. Ogunji

    Abstract. Pectinase was produced from Aspergillus species (A. fumigatus, and A. niger) in a submerged fermentation system after 4 and 5 days of fermentation, respectively using pectin extracted from different agro-wastes (mango, orange and pineapple peels) as the carbon sources. The pectin was extracted from mango, ...

  10. Sources management

    International Nuclear Information System (INIS)

    Mansoux, H.; Gourmelon; Scanff, P.; Fournet, F.; Murith, Ch.; Saint-Paul, N.; Colson, P.; Jouve, A.; Feron, F.; Haranger, D.; Mathieu, P.; Paycha, F.; Israel, S.; Auboiroux, B.; Chartier, P.

    2005-01-01

    Organized by the technical protection section of the French society of radiation protection (S.F.R.P.), these two days aimed to review the evolution of the regulations relative to sources of ionising radiation (sealed and unsealed radioactive sources, electrical generators). They brought together all the actors concerned by the implementation of the new regulatory system in the different sectors of activity (research, medicine and industry): authorities, manufacturers and suppliers of sources, holders and users, bodies involved in the approval of sources, and carriers. (N.C.)

  11. Accurately bearing measurement in non-cooperative passive location system

    International Nuclear Information System (INIS)

    Liu Zhiqiang; Ma Hongguang; Yang Lifeng

    2007-01-01

    A non-cooperative passive location system based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous and accurate bearing measurement, beam-width and multiple-constraint adaptive beamforming techniques are used to resolve the azimuth ambiguity. Theory and simulation results show this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)

  12. Hydrogen atoms can be located accurately and precisely by x-ray crystallography.

    Science.gov (United States)

    Woińska, Magdalena; Grabowsky, Simon; Dominiak, Paulina M; Woźniak, Krzysztof; Jayatilaka, Dylan

    2016-05-01

    Precise and accurate structural information on hydrogen atoms is crucial to the study of energies of interactions important for crystal engineering, materials science, medicine, and pharmacy, and to the estimation of physical and chemical properties in solids. However, hydrogen atoms only scatter x-radiation weakly, so x-rays have not been used routinely to locate them accurately. Textbooks and teaching classes still emphasize that hydrogen atoms cannot be located with x-rays close to heavy elements; instead, neutron diffraction is needed. We show that, contrary to widespread expectation, hydrogen atoms can be located very accurately using x-ray diffraction, yielding bond lengths involving hydrogen atoms (A-H) that are in agreement with results from neutron diffraction mostly within a single standard deviation. The precision of the determination is also comparable between x-ray and neutron diffraction results. This has been achieved at resolutions as low as 0.8 Å using Hirshfeld atom refinement (HAR). We have applied HAR to 81 crystal structures of organic molecules and compared the A-H bond lengths with those from neutron measurements for A-H bonds sorted into bonds of the same class. We further show in a selection of inorganic compounds that hydrogen atoms can be located in bridging positions and close to heavy transition metals accurately and precisely. We anticipate that, in the future, conventional x-radiation sources at in-house diffractometers can be used routinely for locating hydrogen atoms in small molecules accurately instead of large-scale facilities such as spallation sources or nuclear reactors.

  13. Climate Feedback: a worldwide network of scientists collaborating to peer-review the media and foster more accurate climate coverage

    Science.gov (United States)

    Vincent, E. M.

    2016-12-01

    The public remains largely unaware of the pervasive impacts of climate change and this has been commonly attributed to the often inaccurate or misleading reporting of climate issues by mainstream media. Given the large influence of the media, using scientists' outreach time to try to improve the accuracy of climate news offers impactful leverage towards supporting science-based policies about climate change. Climate Feedback is a worldwide network of scientists who are working with journalists and editors to improve the accuracy of climate reporting. When a breaking climate news story is published, Climate Feedback invites scientists to collectively review the scientific credibility of the story using a method based on critical thinking theory that measures its accuracy, reasoning and objectivity. The use of web annotation allows scientists with complementary expertise to collectively review the article and allows readers and authors to see precisely where and why the coverage is -or is not- based on science. Building on these reviews, we highlight best practices to help journalists and editors create more accurate content and share pedagogical resources to help readers identify claims that are consistent with current scientific knowledge and find the most reliable sources of information. In this talk, we will present the results we have obtained so far, which include 1) identifying the most common pitfalls scientists have reported in climate coverage and 2) identifying the first trends and impacts of our actions. Beyond the publication of simply inaccurate information, we identified more subtle issues such as misrepresenting sources (either scientists or studies), lack of context or understanding of scientific concepts, logical flaws, over-hyping results/exaggeration... Our results increasingly allow us to highlight that certain news sources (outlets, journalists, editors) are generally more trustworthy than others, and we will show how some news outlets now take

  14. The Data Evaluation for Obtaining Accuracy and Reliability

    International Nuclear Information System (INIS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-01-01

    Numerous scientific measurement results are pouring out of papers, data books, etc. with the fast growth of the internet. We encounter many different measurement results for the same measurand, and at that moment we are faced with choosing the most reliable one among them. But it is not as easy to choose and use accurate and reliable data as it is to pick a flavor at an ice cream parlor. Even expert users find it difficult to distinguish accurate and reliable scientific data from the huge amount of measurement results. For this reason, data evaluation is becoming more important with the fast growth of the internet and globalization. Furthermore, the expressions of measurement results are not standardized. In response to these needs, international efforts have been strengthened. As a first step, the global harmonization of the terminology used in metrology and of the expression of uncertainty in measurement was published by ISO. These methods have spread widely to many areas of science to obtain accuracy and reliability in measurement. In this paper, the GUM, SRD and the data evaluation of atomic collisions are introduced.

  15. Vacuum Arc Ion Sources

    CERN Document Server

    Brown, I.

    2013-12-16

    The vacuum arc ion source has evolved into a more or less standard laboratory tool for the production of high-current beams of metal ions, and is now used in a number of different embodiments at many laboratories around the world. Applications include primarily ion implantation for material surface modification research, and good performance has been obtained for the injection of high-current beams of heavy-metal ions, in particular uranium, into particle accelerators. As the use of the source has grown, so also have the operational characteristics been improved in a variety of different ways. Here we review the principles, design, and performance of vacuum arc ion sources.

  16. Polarized electron sources

    International Nuclear Information System (INIS)

    Prepost, R.

    1994-01-01

    The fundamentals of polarized electron sources are described with particular application to the Stanford Linear Accelerator Center. The SLAC polarized electron source is based on the principle of polarized photoemission from Gallium Arsenide. Recent developments using epitaxially grown, strained Gallium Arsenide cathodes have made it possible to obtain electron polarization significantly in excess of the conventional 50% polarization limit. The basic principles for Gallium Arsenide polarized photoemitters are reviewed, and the extension of the basic technique to strained cathode structures is described. Results from laboratory measurements of strained photocathodes as well as operational results from the SLAC polarized source are presented

  17. Polarized electron sources

    Energy Technology Data Exchange (ETDEWEB)

    Prepost, R. [Univ. of Wisconsin, Madison, WI (United States)

    1994-12-01

    The fundamentals of polarized electron sources are described with particular application to the Stanford Linear Accelerator Center. The SLAC polarized electron source is based on the principle of polarized photoemission from Gallium Arsenide. Recent developments using epitaxially grown, strained Gallium Arsenide cathodes have made it possible to obtain electron polarization significantly in excess of the conventional 50% polarization limit. The basic principles for Gallium Arsenide polarized photoemitters are reviewed, and the extension of the basic technique to strained cathode structures is described. Results from laboratory measurements of strained photocathodes as well as operational results from the SLAC polarized source are presented.

  18. Addition of Adapted Optics towards obtaining a quantitative detection of diabetic retinopathy

    Science.gov (United States)

    Yust, Brian; Obregon, Isidro; Tsin, Andrew; Sardar, Dhiraj

    2009-04-01

    An adaptive optics system was assembled for correcting the aberrated wavefront of light reflected from the retina. The adaptive optics setup includes a superluminous diode light source, Hartmann-Shack wavefront sensor, deformable mirror, and imaging CCD camera. Aberrations found in the reflected wavefront are caused by changes in the index of refraction along the light path as the beam travels through the cornea, lens, and vitreous humour. The Hartmann-Shack sensor allows for detection of aberrations in the wavefront, which may then be corrected with the deformable mirror. It has been shown that there is a change in the polarization of light reflected from neovascularizations in the retina due to certain diseases, such as diabetic retinopathy. The adaptive optics system was assembled towards the goal of obtaining a quantitative measure of onset and progression of this ailment, as one does not currently exist. The study was done to show that the addition of adaptive optics results in a more accurate detection of neovascularization in the retina by measuring the expected changes in polarization of the corrected wavefront of reflected light.

  19. The results obtained by INR-Pitesti to an international in-situ intercomparison exercise

    International Nuclear Information System (INIS)

    Dobrin, Relu; Dulama, Cristian; Toma, Alexandru

    2008-01-01

    Full text: The determination of soil contamination, dose rate measurements and in-situ gamma spectrometry are well established and widely used measurement procedures, especially after large scale nuclear incidents. To support decision makers and first responders with a more comprehensive and accurate overview immediately after a large scale nuclear or radiological emergency, such as releases from a nuclear power plant or a terrorist attack with a dirty bomb, a fast and clear presentation of measurement data is indispensable. In 2007, the Austrian Research Centers GmbH - ARC (Seibersdorf), in cooperation with IAEA and the Austrian NBC Defense School, organized an international measurement campaign, the 'In-Situ Intercomparison Scenario' ISIS 2007, with the focus on In-Situ Gamma Spectrometry and Dose Rate Measurements in Emergency Situations. 56 teams from 25 countries worldwide took part in this exercise. The only Romanian team was 'CROWN'. The CROWN Laboratory is part of the Radiation Protection Laboratory of the Institute for Nuclear Research - Pitesti, certified for characterization of radioactive wastes and nuclear materials. The paper presents results obtained during different tasks of the exercise: 'Dose rate mapping'; 'Localization - Drive by'; 'Simulation of a contamination'; 'Identification Shielded'; 'Identification/Quantification'; 'Buried sources'; 'Environmental measurement'. (authors)

  20. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm. There the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated
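
    The kernel extraction step described above is a deconvolution of the measured dose by the known fluence of the 50 mm circular field; for radially symmetric functions this can be done with a Hankel transform or, equivalently, with a two-dimensional Fourier transform. The sketch below shows the FFT route on synthetic data as a generic illustration of the idea; the regularization, field shape and the output-factor correction used by the authors are not reproduced.

        # Generic illustration of kernel recovery by deconvolution: dose = fluence (*) kernel,
        # so kernel_hat = FFT(dose) / FFT(fluence), here with a Wiener-style regularization.
        # Synthetic data only; the paper's Hankel-transform implementation is not reproduced.
        import numpy as np

        n, pixel_mm = 256, 0.5
        x = (np.arange(n) - n // 2) * pixel_mm
        X, Y = np.meshgrid(x, x)
        R = np.hypot(X, Y)

        fluence = (R <= 25.0).astype(float)                 # 50 mm diameter circular field
        true_kernel = np.exp(-R / 3.0)
        true_kernel /= true_kernel.sum()

        # Simulated dose = circular convolution of fluence with the (centered) kernel.
        dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(np.fft.ifftshift(true_kernel))))

        F, D = np.fft.fft2(fluence), np.fft.fft2(dose)
        eps = 1e-3 * np.max(np.abs(F)) ** 2                 # regularization against division by ~0
        kernel_rec = np.fft.fftshift(np.real(np.fft.ifft2(D * np.conj(F) / (np.abs(F) ** 2 + eps))))

        print("peak of recovered kernel at pixel:", np.unravel_index(np.argmax(kernel_rec), kernel_rec.shape))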

  1. Diagnostic value of sectional images obtained by emission tomography

    International Nuclear Information System (INIS)

    Roucayrol, J.C.

    1981-01-01

    It is now possible to obtain clear images of the various planes in and around a structure with ultrasound (echotomography), X-rays (computerized tomography) and, recently, gamma-rays from radioactive substances (emission tomography). Axial transverse tomography, which is described here, is to conventional scintigraphy what the CT scan is to radiography. It provides images of any structure capable of sufficiently concentrating a radioactive substance administered intravenously. These images are perpendicular to the longitudinal axis of the body. As shown by examples in the liver, lungs and myocardium, lesions which had passed unnoticed with other exploratory techniques can now be demonstrated, and the location, shape and extension of known lesions can be more accurately assessed. Emission tomography already has its place in modern diagnostic procedures side by side with echotomography and the CT scan. [fr]

  2. JCZS: An Intermolecular Potential Database for Performing Accurate Detonation and Expansion Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Baer, M.R.; Hobbs, M.L.; McGee, B.C.

    1998-11-03

    Exponential-13,6 (EXP-13,6) potential parameters for 750 gases composed of 48 elements were determined and assembled in a database, referred to as the JCZS database, for use with the Jacobs Cowperthwaite Zwisler equation of state (JCZ3-EOS). The EXP-13,6 force constants were obtained by using literature values of Lennard-Jones (LJ) potential functions, by using corresponding states (CS) theory, by matching pure liquid shock Hugoniot data, and by using molecular volume to determine the approach radii with the well depth estimated from high-pressure isentropes. The JCZS database was used to accurately predict detonation velocity, pressure, and temperature for 50 different explosives with initial densities ranging from 0.25 g/cm3 to 1.97 g/cm3. Accurate predictions were also obtained for pure liquid shock Hugoniots, static properties of nitrogen, and gas detonations at high initial pressures.

  3. Kinetic determinations of accurate relative oxidation potentials of amines with reactive radical cations.

    Science.gov (United States)

    Gould, Ian R; Wosinska, Zofia M; Farid, Samir

    2006-01-01

    Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.
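
    The redox equilibration idea above rests on a standard thermodynamic relation: for an electron-transfer equilibrium A•+ + B = A + B•+, the equilibrium constant fixes the difference in oxidation potentials through delta E_ox = (RT/F) ln K. The short sketch below evaluates that textbook relation for a few invented equilibrium constants, just to show the scale of the numbers; it is not the authors' kinetic analysis.

        # Difference in oxidation potentials from a measured electron-transfer
        # equilibrium constant: delta_E = (R*T/F) * ln(K). Numbers are illustrative.
        import math

        R = 8.314462      # J mol^-1 K^-1
        F = 96485.332     # C mol^-1
        T = 298.15        # K

        def delta_Eox_volts(K_eq):
            return (R * T / F) * math.log(K_eq)

        for K in (2.0, 10.0, 100.0):
            print(f"K = {K:6.1f}  ->  delta E_ox = {1000 * delta_Eox_volts(K):6.1f} mV")
        # K = 10 corresponds to roughly 59 mV at room temperature.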

  4. Accurate characterization of organic thin film transistors in the presence of gate leakage current

    Directory of Open Access Journals (Sweden)

    Vinay K. Singh

    2011-12-01

    Full Text Available The presence of gate leakage through the polymer dielectric in organic thin film transistors (OTFTs) prevents accurate estimation of transistor characteristics, especially in the subthreshold regime. To mitigate the impact of gate leakage on transfer characteristics and allow accurate estimation of mobility, subthreshold slope and on/off current ratio, a measurement technique involving a simultaneous sweep of both gate and drain voltages is proposed. Two-dimensional numerical device simulation is used to illustrate the validity of the proposed technique. Experimental results obtained with a Pentacene/PMMA OTFT with significant gate leakage show a low on/off current ratio of ∼10^2 and a subthreshold slope of 10 V/decade when obtained using the conventional measurement technique. The proposed technique reveals that the channel on/off current ratio is more than two orders of magnitude higher, at ∼10^4, and that the subthreshold slope is 4.5 V/decade.

  5. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.

  6. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  7. Importance of molecular diagnosis in the accurate diagnosis of ...

    Indian Academy of Sciences (India)

    1Department of Health and Environmental Sciences, Kyoto University Graduate School of Medicine, Yoshida Konoecho, ... of molecular diagnosis in the accurate diagnosis of systemic carnitine deficiency. .... 'affecting protein function' by SIFT.

  8. Kinetic parameters for source driven systems

    International Nuclear Information System (INIS)

    Dulla, S.; Ravetto, P.; Carta, M.; D'Angelo, A.

    2006-01-01

    The definition of the characteristic kinetic parameters of a subcritical source-driven system constitutes an interesting problem in reactor physics with important consequences for practical applications. Consistent and physically meaningful values of the parameters allow one to obtain accurate results from kinetic simulation tools and to correctly interpret kinetic experiments. For subcritical systems a preliminary problem arises concerning the adoption of a suitable weighting function to be used in the projection procedure to derive a point model. The present work illustrates a consistent factorization-projection procedure which leads to the definition of the kinetic parameters in a straightforward manner. The reactivity term is introduced coherently with the generalized perturbation theory applied to the source multiplication factor k_s, which is thus given a physical role in the kinetic model. The effective prompt lifetime is introduced on the assumption that a neutron generation can be initiated by both the fission process and the source emission. Results are presented for simplified configurations to fully comprehend the physical features and for a more complicated highly decoupled system treated in transport theory. (authors)
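
    For readers who want to see what a source-driven point-kinetics model looks like numerically, the sketch below integrates the standard one-delayed-group equations dP/dt = ((rho - beta)/Lambda) P + lambda C + Q and dC/dt = (beta/Lambda) P - lambda C. This is the conventional textbook form, not the particular factorization-projection formulation (or the k_s-based reactivity definition) developed in the paper; all parameter values are illustrative.

        # Conventional source-driven point kinetics with one delayed-neutron group:
        #   dP/dt = ((rho - beta)/Lambda) * P + lam * C + Q
        #   dC/dt = (beta/Lambda) * P - lam * C
        # Illustrative parameters only; not the paper's generalized formulation.
        import numpy as np
        from scipy.integrate import solve_ivp

        rho, beta, Lambda, lam, Q = -0.005, 0.0065, 1e-4, 0.08, 1e3   # subcritical, constant source

        def rhs(t, y):
            P, C = y
            return [((rho - beta) / Lambda) * P + lam * C + Q,
                    (beta / Lambda) * P - lam * C]

        # Start from the steady subcritical state sustained by the source: P0 = -Q*Lambda/rho.
        P0 = -Q * Lambda / rho
        C0 = beta * P0 / (lam * Lambda)
        sol = solve_ivp(rhs, (0.0, 50.0), [P0, C0], max_step=0.01)
        print(f"steady-state power level sustained by the source: P = {sol.y[0, -1]:.2f}")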

  9. Source location in plates based on the multiple sensors array method and wavelet analysis

    International Nuclear Information System (INIS)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon

    2014-01-01

    A new method for impact source localization in a plate is proposed based on the multiple signal classification (MUSIC) method and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.

  10. Source location in plates based on the multiple sensors array method and wavelet analysis

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hong Jun; Shin, Tae Jin; Lee, Sang Kwon [Inha University, Incheon (Korea, Republic of)

    2014-01-15

    A new method for impact source localization in a plate is proposed based on the multiple signal classification (MUSIC) method and wavelet analysis. For source localization, the direction of arrival of the wave caused by an impact on a plate and the distance between the impact position and the sensor should be estimated. The direction of arrival can be estimated accurately using the MUSIC method. The distance can be obtained by using the time delay of arrival and the group velocity of the Lamb wave in a plate. The time delay is experimentally estimated using the continuous wavelet transform of the wave. Elastodynamic theory is used for the group velocity estimation.
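
    The direction-of-arrival step in the two records above uses the MUSIC algorithm. The sketch below is a generic narrowband MUSIC implementation for a uniform linear sensor array on synthetic data, included only to illustrate the noise-subspace idea; the plate-wave specifics (Lamb-wave dispersion, wavelet-based time delays, group-velocity ranging) are not covered.

        # Generic narrowband MUSIC direction-of-arrival estimate for a uniform linear
        # array, on synthetic data. Illustrates the noise-subspace idea only.
        import numpy as np

        M, d_over_lambda, snapshots = 8, 0.5, 400     # sensors, spacing/wavelength, samples
        true_doa_deg = 27.0
        rng = np.random.default_rng(3)

        def steering(theta_deg):
            k = 2 * np.pi * d_over_lambda * np.sin(np.deg2rad(theta_deg))
            return np.exp(1j * k * np.arange(M))

        # Simulated array data: one narrowband source plus noise.
        s = rng.normal(size=snapshots) + 1j * rng.normal(size=snapshots)
        X = np.outer(steering(true_doa_deg), s) + 0.3 * (rng.normal(size=(M, snapshots))
                                                         + 1j * rng.normal(size=(M, snapshots)))

        Rxx = X @ X.conj().T / snapshots
        eigvals, eigvecs = np.linalg.eigh(Rxx)
        En = eigvecs[:, :-1]                          # noise subspace (one source assumed)

        angles = np.arange(-90.0, 90.1, 0.1)
        spectrum = np.array([1.0 / np.linalg.norm(En.conj().T @ steering(a)) ** 2 for a in angles])
        print(f"estimated DOA: {angles[np.argmax(spectrum)]:.1f} deg (true {true_doa_deg} deg)")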

  11. Sourcing Excellence

    DEFF Research Database (Denmark)

    Adeyemi, Oluseyi

    2011-01-01

    Sourcing Excellence is one of the key performance indicators (KPIs) in this world of ever changing sourcing strategies. Manufacturing companies need to access and diagnose the reliability and competencies of existing suppliers in order to coordinate and develop them. This would help in managing...

  12. Obtaining of ceramics biphasic dense and porous

    International Nuclear Information System (INIS)

    Pallone, E.M.J.A.; Rigo, E.C.S.; Fraga, A.F.

    2010-01-01

    Among bioceramics, hydroxyapatite (HAP) and beta-tricalcium phosphate (beta-TCP) are materials commonly used in the biomedical field. Their combined properties result in a material that is resorbable and at the same time has a bioactive surface. Called biphasic ceramics, such materials respond more quickly when exposed to the physiological environment. In this work, HAP/beta-TCP powders were obtained by chemical precipitation. After obtaining the powders, aqueous solutions of corn starch were added at proportions of 0, 15 and 30 wt% in order to obtain porous bodies. After mixing, the resulting mixtures were dried, shaped in tablet form and sintered at 1300 deg C. The starting powder was characterized by X-ray diffraction with Rietveld refinement to quantify the phases present. The test specimens were characterized by measurement of bulk density, X-ray diffraction (XRD), scanning electron microscopy and diametral compression. (author)

  13. Obtaining organoclays from sodium clays

    International Nuclear Information System (INIS)

    Silva, M.M. da; Mota, M.F.; Oliveira, G.C. de; Rodrigues, M.G.F.

    2012-01-01

    Clays have several applications in many fields of technology; however, modification of these materials with organic compounds can be performed to obtain more hydrophobic materials for application in the adsorption of organic pollutants. This study aimed to analyze the effects of modifying two sodium clays with quaternary ammonium surfactants through an ion exchange reaction process to obtain organoclays. The sodium clay and organoclay samples were characterized by X-ray diffraction (XRD), infrared spectroscopy (IR), differential thermal and gravimetric analysis (DTA/TG) and organic adsorption tests. The results show that the process of obtaining organoclays is efficient, and the materials have potential for future applications in removing organic contaminants. (author)

  14. Obtain of uranium concentrates from fertil liquids

    International Nuclear Information System (INIS)

    Narvaez Castillo, W.A.

    1992-01-01

    This research sought the best way to remove uranium from the rock, using different processes such as leaching, extraction, concentration and precipitation. Basic leaching with a sodium carbonate-bicarbonate mixture was chosen to leach the mineral, this method being better suited to the basic nature of the mineral. For extraction, uranium-specific ion exchange was used, choosing a tertiary amine such as Alamine 336. The concentration phase is closely tied to the ion-exchange extraction, given the capability of the resin to yield concentrated liquors. Once liquors with a high uranium concentration were obtained, they were simultaneously purified and then precipitated, employing precipitating agents such as sodium hydroxide, ammonium hydroxide, magnesium hydroxide, hydrogen peroxide and phosphates. From all the concentrates the YELLOW CAKE was obtained

  15. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  16. An accurate metric for the spacetime around neutron stars

    OpenAIRE

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to inf...

  17. Accurate forced-choice recognition without awareness of memory retrieval

    OpenAIRE

    Voss, Joel L.; Baym, Carol L.; Paller, Ken A.

    2008-01-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit ...

  18. Accurate radiotherapy positioning system investigation based on video

    International Nuclear Information System (INIS)

    Tao Shengxiang; Wu Yican

    2006-01-01

    This paper introduces the latest research results on patient positioning methods for accurate radiotherapy produced by the Accurate Radiotherapy Treating System (ARTS) research team of the Institute of Plasma Physics, Chinese Academy of Sciences, such as a positioning system based on binocular vision, a position-measuring system based on contour matching and a breath-gating control system for positioning. Their basic principles, fields of application and prospects are briefly described. (authors)

  19. Positron sources

    International Nuclear Information System (INIS)

    Chehab, R.

    1989-01-01

    A tentative survey of positron sources is given. Physical processes on which positron generation is based are indicated and analyzed. Explanation of the general features of electromagnetic interactions and nuclear β + decay makes it possible to predict the yield and emittance for a given optical matching system between the positron source and the accelerator. Some kinds of matching systems commonly used - mainly working with solenoidal fields - are studied and the acceptance volume calculated. Such knowledge is helpful in comparing different matching systems. Since for large machines, a significant distance exists between the positron source and the experimental facility, positron emittance has to be preserved during beam transfer over large distances and methods used for that purpose are indicated. Comparison of existing positron sources leads to extrapolation to sources for future linear colliders

  20. Experiments for obtaining field influence mass particles.

    CERN Document Server

    Yahalomi, E

    2010-01-01

    Analyzing time dilation experiments, the existence of a universal field interacting with moving mass particles is obtained. It is found that a mass particle changes its properties depending on its velocity relative to this universal scalar field and not on its velocity relative to the laboratory. High energy proton momentum, energy and mass were calculated, obtaining new results. Experiments in high energy accelerators are suggested as additional proofs of the existence of this universal field. This universal field may explain some results of other high energy experiments.

  1. Accurate mode characterization of two-mode optical fibers by in-fiber acousto-optics.

    Science.gov (United States)

    Alcusa-Sáez, E; Díez, A; Andrés, M V

    2016-03-07

    Acousto-optic interaction in optical fibers is exploited for the accurate and broadband characterization of two-mode optical fibers. Coupling between the LP01 and LP1m modes is produced over a broad wavelength range. Differences in effective indices, group indices, and chromatic dispersions between the guided modes are obtained from experimental measurements. Additionally, we show that the technique is suitable for investigating the fine mode structure of LP modes, and some other intriguing features related to the modes' cut-off.

  2. A new model for the accurate calculation of natural gas viscosity

    OpenAIRE

    Xiaohong Yang; Shunxi Zhang; Weiling Zhu

    2017-01-01

    Viscosity of natural gas is a basic and important parameter, of theoretical and practical significance in the domain of natural gas recovery, transmission and processing. In order to obtain accurate viscosity data efficiently and at low cost, a new model and its corresponding functional relation are derived on the basis of the relationship among viscosity, temperature and density given by the kinetic theory of gases. After the model parameters were optimized using a large set of experimental ...
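
    The abstract does not reproduce the final functional form, so the sketch below only illustrates the general workflow it describes: fitting the parameters of a viscosity model μ(T, ρ) to experimental data. The model form and the data points are hypothetical placeholders, not values from the paper.

    ```python
    # Minimal sketch: fitting a *hypothetical* viscosity model mu(T, rho) to data.
    # The functional form is an illustrative placeholder (dilute-gas sqrt(T) term
    # plus density corrections), not the relation derived in the paper.
    import numpy as np
    from scipy.optimize import curve_fit

    def viscosity_model(X, a, b, c):
        T, rho = X
        return a * np.sqrt(T) + b * rho + c * rho**2

    # Hypothetical measurements: temperature (K), density (kg/m^3), viscosity (uPa s)
    T   = np.array([280.0, 300.0, 320.0, 340.0, 300.0, 300.0])
    rho = np.array([60.0, 55.0, 50.0, 46.0, 120.0, 180.0])
    mu  = np.array([10.9, 11.3, 11.7, 12.1, 13.0, 15.2])

    params, _ = curve_fit(viscosity_model, (T, rho), mu, p0=[0.5, 0.01, 1e-4])
    print("fitted parameters:", params)
    print("mu at T=310 K, rho=100 kg/m^3:", viscosity_model((310.0, 100.0), *params))
    ```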

  3. Accurate collision integrals for the attractive static screened Coulomb potential with application to electrical conductivity

    International Nuclear Information System (INIS)

    Macdonald, J.

    1991-01-01

    The results of accurate calculations of collision integrals for the attractive static screened Coulomb potential are presented. To obtain high accuracy with minimal computational cost, the integrals are evaluated by a quadrature method based on the Whittaker cardinal function. The collision integrals for the attractive potential are needed for calculation of the electrical conductivity of a dense fully or partially ionized plasma, and the results presented here are appropriate for the conditions in the nondegenerate envelopes of white dwarf stars. 25 refs
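
    The quadrature the abstract refers to is based on the Whittaker cardinal (sinc) function, whose simplest form on the real line is the trapezoidal sum h Σ f(kh), which converges exponentially for analytic, rapidly decaying integrands. The snippet below is only a generic illustration of that quadrature idea, not the collision-integral code used in the paper.

    ```python
    # Sinc (Whittaker cardinal function) quadrature on the real line:
    #   Integral f(x) dx  ~=  h * sum_{k=-N}^{N} f(k*h),
    # exponentially accurate for analytic, rapidly decaying integrands.
    import numpy as np

    def sinc_quadrature(f, h=0.25, N=60):
        k = np.arange(-N, N + 1)
        return h * np.sum(f(k * h))

    # Check on a Gaussian, whose exact integral is sqrt(pi).
    approx = sinc_quadrature(lambda x: np.exp(-x**2))
    print(approx, np.sqrt(np.pi), abs(approx - np.sqrt(np.pi)))
    ```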

  4. Obtaining the Andersen's chart, triangulation algorithm

    DEFF Research Database (Denmark)

    Sabaliauskas, Tomas; Ibsen, Lars Bo

    Andersen’s chart (Andersen & Berre, 1999) is a graphical method of observing cyclic soil response. It shows how soil responds to various stress amplitudes, which can lead to liquefaction, excess plastic deformation or a stabilizing soil response. The process of obtaining the original chart has...

  5. Purification of alcohol obtained from molasses

    Energy Technology Data Exchange (ETDEWEB)

    Visnevskaya, G L; Egorov, A S; Sokol' skaya, E V

    1960-01-01

    A study of the composition of the alcohol liquids on different plates of an indirect-action fractionation column, during the purification of alcohol obtained from normal and defective molasses and from starch raw material, showed two local strength minima in the lower part of the column and on the plates (adjacent and feed). Aldehydes behaved as a typical head impurity; a noticeable increase in their concentration occurred only on the highest plates of the fractionation column. In the zone of the column containing liquids with a strength of 86 to 94% alcohol by weight, a sharply pronounced local maximum of ester accumulation was observed, provisionally designated as intermediate, whose presence is apparently one of the causes of the specific sharp taste of alcohol obtained from molasses. These esters hinder the production of high-grade alcohols that meet standards for ester content and the oxidizability test. Reduction with 0.05 N KMnO4 occurs most rapidly with alcohol liquids in the zone of ester accumulation; purification of alcohols obtained from grain and potato raw material produced no zones of ester accumulation in the column.

  6. Obtaining shale distillate free from sulphur

    Energy Technology Data Exchange (ETDEWEB)

    Heyl, G E

    1917-09-14

    A process whereby, from sulfur-containing shale, products free from sulfur may be obtained, consisting of mixing with the finely ground shale a portion of iron salts containing sufficient metal to unite with all the sulfur in the shale and form sulfide therewith, grinding the mixture to a fine state of subdivision and subsequently subjecting it to destructive distillation.

  7. Obtaining a minimal set of rewrite rules

    CSIR Research Space (South Africa)

    Davel, M

    2005-11-01

    Full Text Available In this paper the authors describe a new approach to rewrite rule extraction and analysis, using Minimal Representation Graphs. This approach provides a mechanism for obtaining the smallest possible rule set – within a context-dependent rewrite rule...

  8. Obtainment of tantalum oxide from national ores

    International Nuclear Information System (INIS)

    Pinatti, D.G.; Ribeiro, S.; Martins, A.H.

    1988-01-01

    The experimental results of obtaining tantalum oxide (Ta2O5) from the Brazilian ores tantalite and columbite are described. This study is part of a scientific and technological research program on refractory metals (Ti, Zr, Hf, V, Nb, Ta, Cr, Mo and W) and related ceramics. (C.G.C.) [pt

  9. Isolation and characterization of microcrystalline cellulose obtained ...

    African Journals Online (AJOL)

    In this study, microcrystalline cellulose, coded MCC-PNF, was obtained from palm nut (Elaeis guineensis) fibres. MCC-PNF was examined for its physicochemical and powder properties. The powder properties of MCC-PNF were compared to those of the best commercial microcrystalline cellulose grade, Avicel PH 101.

  10. 47 CFR 54.615 - Obtaining services.

    Science.gov (United States)

    2010-10-01

    ... provided under § 54.621, that the requester cannot obtain toll-free access to an Internet service provider... thing of value; (6) If the service or services are being purchased as part of an aggregated purchase... submitted and select the most cost-effective alternative. (b) Receiving supported rate. Except with regard...

  11. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 KVp Electronic Brachytherapy Source

    International Nuclear Information System (INIS)

    Hiatt, JR; Rivard, MJ

    2014-01-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to position the anode more accurately. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose-rate distribution on the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm of the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source

  12. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin

    2012-08-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.
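
    For readers unfamiliar with the PE(CE)^m notation (Predict, Evaluate, then m Correct-Evaluate passes per time step), the sketch below shows the generic structure of such a step for an ODE system du/dt = f(t, u). It uses the classical Adams-Bashforth/Adams-Moulton coefficient pair as a stand-in; the paper's numerically optimized coefficients are not reproduced here.

    ```python
    # Generic PE(CE)^m predictor-corrector step for du/dt = f(t, u), using the
    # classical AB2 predictor and AM2 (trapezoidal) corrector as placeholders.
    import math

    def pece_m_step(f, t, u_prev, u_curr, dt, m=2):
        f_prev, f_curr = f(t - dt, u_prev), f(t, u_curr)
        u_next = u_curr + dt * (1.5 * f_curr - 0.5 * f_prev)   # P: AB2 predictor
        f_next = f(t + dt, u_next)                             # E: evaluate RHS
        for _ in range(m):                                     # (CE)^m passes
            u_next = u_curr + 0.5 * dt * (f_next + f_curr)     # C: AM2 corrector
            f_next = f(t + dt, u_next)                         # E: re-evaluate
        return u_next

    # Example: du/dt = -u, exact solution exp(-t).
    f = lambda t, u: -u
    dt, steps = 0.1, 50
    u = [1.0, math.exp(-dt)]            # bootstrap the second starting value
    for n in range(1, steps):
        u.append(pece_m_step(f, n * dt, u[n - 1], u[n], dt))
    print(u[-1], math.exp(-steps * dt))
    ```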

  13. An efficient discontinuous Galerkin finite element method for highly accurate solution of maxwell equations

    KAUST Repository

    Liu, Meilin; Sirenko, Kostyantyn; Bagci, Hakan

    2012-01-01

    A discontinuous Galerkin finite element method (DG-FEM) with a highly accurate time integration scheme for solving Maxwell equations is presented. The new time integration scheme is in the form of traditional predictor-corrector algorithms, PE(CE)^m, but it uses coefficients that are obtained using a numerical scheme with fully controllable accuracy. Numerical results demonstrate that the proposed DG-FEM uses larger time steps than DG-FEM with classical PE(CE)^m schemes when high accuracy, which could be obtained using high-order spatial discretization, is required. © 1963-2012 IEEE.

  14. Boson-mediated interactions between static sources

    International Nuclear Information System (INIS)

    Bolsterli, M.

    1983-01-01

    The techniques are now available for performing accurate computations of static potentials arising from the exchange of virtual mesons. Such computations must take account of the fact that different approximation methods must be used in the regions where R is large and where R is small. In the asymptotic region, the distorted-field approximation provides an appropriate starting point, but it must be improved before trustworthy results are obtained for all but the largest values of R. In the region of small R, accurate strong-coupling methods are based on the use of states with coherent meson pairs. For small R, it is also important to take account of the possibility of meson emission or near-emission. Current work is aimed at applying the techniques described to the case of static sources interacting via the pion field. In particular, it will be interesting to see how sensitive the potential is to the value of the cutoff Λ. Other areas of application are the study of the effects of nonlinearity and models of quark-quark and quark-antiquark potentials. 17 references

  15. Dosimetry of linear sources

    International Nuclear Information System (INIS)

    Mafra Neto, F.

    1992-01-01

    The dose of gamma radiation from a linear cesium-137 source is obtained, which presents two difficulties: the oblique filtration of the radiation as it crosses the platinum wall in different directions, and the dose correction due to scattering by the propagation medium. (C.G.C.)

  16. Negative ion sources

    International Nuclear Information System (INIS)

    Ishikawa, Junzo; Takagi, Toshinori

    1983-01-01

    Negative ion sources were originally developed at the request of tandem electrostatic accelerators, and negative ion currents from hundreds of nA to several μA have so far been obtained for various elements. Recently, the development of large-current hydrogen negative ion sources has been demanded for neutral-particle-beam injection heating in nuclear fusion reactors. On the other hand, the physical properties of negative ions are of interest for thin-film formation using ions. Because the history of negative ion sources is short, the mechanism of negative ion production has not been investigated as thoroughly as that of positive ions. This report describes the many mechanisms proposed so far for the generation of negative ions, covering the generation mechanism itself, negative ion source plasmas, and negative ion generation on metal surfaces. Negative ion sources are roughly divided into two schemes, plasma extraction and secondary ion extraction; the former is further classified into the PIG ion source and its variants and the duoplasmatron and its variants, while the latter is divided into reflecting and sputtering types. In the second half of the report, practical negative ion sources of each scheme are described. If the mechanism of negative ion generation is investigated in more detail and development continues with unified know-how, negative ion sources delivering large currents for any element can be expected in the future. (Wakatsuki, Y.)

  17. Improved management of radiotherapy departments through accurate cost data

    International Nuclear Information System (INIS)

    Kesteloot, K.; Lievens, Y.; Schueren, E. van der

    2000-01-01

    Escalating health care expenses urge governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated based on the accreditation norms for radiotherapy departments set forth in Belgian legislation. The major cost components of radiotherapy are the costs of buildings and facilities, equipment, medical and non-medical staff, materials and overhead; they represent around 3, 30, 50, 4 and 13% of the total costs, respectively, irrespective of department size. The average cost per patient decreases with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations in patient load do not affect the cost picture significantly, because variable costs have a small impact. With larger increases in patient load, however, additional equipment and/or staff become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a 12-13% decrease in total costs can be obtained by assuming 20% less than full-time availability of personnel; that, owing to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; and that changing the clinical lifetime of buildings and equipment, with the interest rate unchanged, yields a 5% reduction in total costs and cost per patient. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on Belgian legislation, are not excessive is shown by replacing them with the USA Blue Book recommendations. Depending on the department size, costs in
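
    To make the stepwise cost behaviour described above concrete, the sketch below computes an illustrative cost per patient as a function of patient load, with equipment and staff treated as semi-fixed costs that jump whenever an extra treatment unit is needed. All figures are invented placeholders, not the Belgian data from the study.

    ```python
    # Illustrative cost-per-patient curve with semi-fixed (stepwise) costs.
    # All numbers are placeholders chosen only to mimic the qualitative
    # behaviour described in the abstract.
    def annual_cost(patients, patients_per_unit=450):
        units     = -(-patients // patients_per_unit)   # ceil: semi-fixed step
        equipment = units * 1_500_000 / 10              # annualized over 10 years
        staff     = units * 8 * 60_000                  # 8 FTE per treatment unit
        buildings = 200_000                             # fixed
        materials = patients * 150                      # variable
        overhead  = 0.13 * (equipment + staff + buildings + materials)
        return equipment + staff + buildings + materials + overhead

    for n in (400, 450, 500, 900, 1000):
        print(n, "patients -> cost/patient =", round(annual_cost(n) / n))
    ```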

  18. Polymeric materials obtained by electron beam irradiation

    International Nuclear Information System (INIS)

    Dragusin, M.; Moraru, R.; Martin, D.; Radoiu, M.; Marghitu, S.; Oproiu, C.

    1995-01-01

    Research activities in the field of electron beam irradiation of aqueous monomer solutions to produce polymeric materials used for waste water treatment, agriculture and medicine are presented. The technologies and special features of these polymeric materials are also described. The influence of the chemical composition of the solution to be irradiated, of the absorbed dose and of the absorbed dose rate is discussed. Two kinds of polyelectrolytes (PA and PV types) and three kinds of hydrogels (pAAm, pAAmNa and pNaAc types) are described; their production was first developed with an IETI-10000 Co-60 source and then adapted to the linacs built in the Accelerator Laboratory. (author)

  19. FY1995 study of the development of high resolution sub-surface fluid monitoring system using accurately controlled routine operated seismic system; 1995 nendo seimitsu seigyo shingen ni yoru chika ryutai koseido monitoring no kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-03-01

    The aim is the development of a new seismic sounding system based on the concept of ACROSS (Accurately Controlled Routine-Operated Signal System). The system includes not only new seismic sources but also analysis software specialized for monitoring changes in the subsurface velocity structure, especially in areas of fluid resources. Powerful sources with good portability are strongly required for practical data acquisition, and portable ACROSS sources (HIT) were developed. The system is mainly used to obtain a high-resolution structure over a relatively short penetration distance. The principal specifications are as follows: (1) maximum frequency of 100 Hz; (2) a linearly oscillating single force, generated by two combined rotators moving in opposite directions; (3) variable force with little work; (4) a very simple source-ground coupler that can be placed even on soft ground. The system was operated at the Yamagawa geothermal plant for two months. The results of the experiments are: (1) the stability of the source was confirmed over a wide frequency range up to 100 Hz; (2) it was confirmed that the amplitude and phase of the ACROSS signal can be obtained very precisely; (3) very small changes of the signal arising from subsurface velocity changes were detected. This indicates that the system can detect the slight velocity changes due to variations in the subsurface fluid system. (NEDO)

  20. Superconductive ceramics obtained with sol gel method

    International Nuclear Information System (INIS)

    Arcangeli, A.; Mosci, A.; Nardi, A.; Vatteroni, R.; Zondini, C.

    1988-01-01

    Several sol gel routes have been considered, studied and developed to produce large quantities of granulates which can be processed to obtain ceramics having good superconducting characteristics. In the process considered, a mixture of commercial nitrates is atomized, at room temperature, into a 1:1 solution of Primene JMT and benzene, and a pale blue gel of the starting elements is formed immediately. The granulates obtained are free flowing, very reactive and well suited for pressing. Owing to their intrinsic characteristics they could be very good precursors for the production of large quantities of superconductive ceramics in different forms. The precipitated gel is dried, calcined and pressed into cylindrical pellets which are sintered at up to 960 degrees C. No grinding or further thermal treatments are needed. The sintered material has a low electrical resistance, shows a clear Meissner effect and has a transition temperature of between 91 and 95 K

  1. Acoustic barriers obtained from industrial wastes.

    Science.gov (United States)

    Garcia-Valles, M; Avila, G; Martinez, S; Terradas, R; Nogués, J M

    2008-07-01

    Acoustic pollution is an environmental problem that is becoming increasingly important in our society. Likewise, the accumulation of generated waste and the need for waste management are becoming more and more pressing. In this study we describe a new material--called PROUSO--obtained from industrial wastes. PROUSO has a variety of commercial and engineering, as well as building, applications. The main raw materials used for this environmentally friendly material come from slag from the aluminium recycling process, dust from the marble industry, foundry sands, and recycled expanded polystyrene from recycled packaging. Some natural materials, such as plastic clays, are also used. To obtain PROUSO we used a conventional ceramic process, forming new mineral phases and incorporating the polluting elements into the structure. Its physical properties make PROUSO an excellent acoustic and thermal insulation material. It absorbs 95% of the sound in the 500 Hz frequency band. Its compressive strength makes it ideal for use in ceramic wall building.

  2. Processing of hydroxyapatite obtained by combustion synthesis

    International Nuclear Information System (INIS)

    Canillas, M.; Rivero, R.; García-Carrodeguas, R.; Barba, F.; Rodríguez, M.A.

    2017-01-01

    One of the reasons for implant failure is the stress arising at the material-tissue interface due to the differences between their mechanical properties. For this reason, mechanical properties similar to those of the surrounding tissue are desirable. The synthesis of hydroxyapatite by the solution combustion method and its processing have been studied in order to obtain fully dense ceramic bodies with improved mechanical strength. Combustion synthesis provides nanostructured powders characterized by a high surface area, which facilitates the subsequent sintering. Moreover, the synthesis was conducted in aqueous and oxidizing media. Oxidizing media improve homogenization and increase the energy released during combustion, giving rise to particles whose morphology and size suggest lower surface energies compared with aqueous media. The obtained powders were sintered using a controlled sintering-rate schedule. Lower surface energies minimize the shrinkage during sintering, and relative density measurements and diametral compression tests confirm improved densification and consequently improved mechanical properties. [es

  3. Process for obtaining luminescent glass layers

    International Nuclear Information System (INIS)

    Heindi, R.; Robert, A.

    1984-01-01

    Process for obtaining luminescent glass layers, with application to the production of devices provided with said layers and to the construction of photoscintillators. The process comprises projecting onto a support, by cathodic sputtering, the material of at least one target, each target including silica and at least one chemical compound able to give luminescent centers, such as a cerium oxide, so as to form at least one luminescent glass layer on the said support. The layer or layers formed preferably undergo a heat treatment, such as annealing, in order to increase their luminous efficiency. It is in this way possible to form a scintillating glass layer on the previously frosted entrance window of a photomultiplier in order to obtain an integrated photoscintillator

  4. Processing of hydroxyapatite obtained by combustion synthesis

    Directory of Open Access Journals (Sweden)

    M. Canillas

    2017-09-01

    Full Text Available One of the reasons for implant failure is the stress arising at the material-tissue interface due to the differences between their mechanical properties. For this reason, mechanical properties similar to those of the surrounding tissue are desirable. The synthesis of hydroxyapatite by the solution combustion method and its processing have been studied in order to obtain fully dense ceramic bodies with improved mechanical strength. Combustion synthesis provides nanostructured powders characterized by a high surface area, which facilitates the subsequent sintering. Moreover, the synthesis was conducted in aqueous and oxidizing media. Oxidizing media improve homogenization and increase the energy released during combustion, giving rise to particles whose morphology and size suggest lower surface energies compared with aqueous media. The obtained powders were sintered using a controlled sintering-rate schedule. Lower surface energies minimize the shrinkage during sintering, and relative density measurements and diametral compression tests confirm improved densification and consequently improved mechanical properties.

  5. Process for obtaining ammonium uranyl tri carbonate

    International Nuclear Information System (INIS)

    Santos, L.R. dos; Riella, H.G.

    1992-01-01

    The procedure adopted for obtaining ammonium uranyl carbonate (AUC) from uranium hexafluoride (UF6) in aqueous solutions of ammonium hydrogen carbonate is described in this work. The precipitation is carried out under controlled temperature and pH. The process consists of three steps: evaporation of UF6, AUC precipitation and filtration of the AUC slurry. An attempt is made to correlate the parameters involved in the AUC precipitation process with the characteristics of the AUC and of the resulting UO2. (author)

  6. Superconducting materials fabrication process and materials obtained

    International Nuclear Information System (INIS)

    Lafon, M.O.; Magnier, C.

    1989-01-01

    The process for preparing a fine, easily sintered powder of YBaCuO-type superconductors comprises: mixing, in the presence of alcohol, an aqueous solution of a rare earth nitrate or acetate, an alkaline earth nitrate or acetate and copper nitrate or acetate with an oxalic acid solution; the pH value of the mixture is between 2 and 4; the precipitate obtained is separated, dried, calcined and optionally crushed [fr

  7. PULP OBTAINING METHOD FOR PACKAGE PRODUCTION

    Directory of Open Access Journals (Sweden)

    V. V. Kuzmich

    2015-01-01

    Full Text Available The paper presents a new method for obtaining pulp, used for the production of cardboard, paper and packaging, based on neutral-sulfite cooking of shives with carbon dioxide and hydrazine hydrate. The increased yield of the desired product is explained by the reduced destruction of the carbohydrates of the plant raw material during cooking. The improved quality of the desired product (better bleaching and yield) is attributed to the fact that the use of carbon dioxide and hydrazine helps make the polysaccharide chains resistant to destruction, owing to the presence of end links with the structure of metasaccharinic and aldonic acids. The author has developed the new pulping method on the basis of the investigations performed and of literature data; CO2 and hydrazine hydrate are used for obtaining the pulp. The invention concerns pulp production and can be used for the manufacture of paper and cardboard packaging in the pulp and paper industry. The method is carried out as follows: the pulp-containing plant raw material is loaded into an autoclave, and an aqueous solution of sodium monosulfite containing hydrazine hydrate, at 4-5% of the mass of absolutely dry pulp-containing raw material and with a liquid module of 1:6-1:8, is fed into the autoclave. The autoclave is closed for operation under pressure and the solution is carbonated under pressure at 5-8% of the absolutely dry plant raw material (shives). The temperature is subsequently raised to 180 °C over 2 hours and cooking is carried out for 4 hours. Use of the proposed method for cooking shives makes it possible to shorten the monosulfite cooking process and to improve the qualitative characteristics and the yield of the desired product, including better bleaching and final product output.

  8. Lifetime obtained by ion beam assisted deposition

    Energy Technology Data Exchange (ETDEWEB)

    Chakaroun, M. [XLIM-MINACOM-UMR 6172, Faculte des Sciences et Techniques, 123 av. Albert Thomas, 87060 Limoges cedex (France); Antony, R. [XLIM-MINACOM-UMR 6172, Faculte des Sciences et Techniques, 123 av. Albert Thomas, 87060 Limoges cedex (France)], E-mail: remi.antony@unilim.fr; Taillepierre, P.; Moliton, A. [XLIM-MINACOM-UMR 6172, Faculte des Sciences et Techniques, 123 av. Albert Thomas, 87060 Limoges cedex (France)

    2007-09-15

    We have fabricated green organic light-emitting diodes based on tris-(8-hydroxyquinoline)aluminium (Alq3) thin films. In order to favor charge-carrier transport from the anode, we deposited a N,N'-diphenyl-N,N'-bis(3-methylphenyl)-1,1'-diphenyl-4,4'-diamine (TPD) hole-transport layer on an ITO anode. The cathode is obtained with a calcium layer covered with a silver layer. This silver layer is used to protect the other layers against oxygen during OLED operation. All the depositions are performed under vacuum and the devices are not exposed to air during their fabrication. In order to improve the silver layer characteristics, we realized this layer with the ion beam assisted deposition process. The aim of this process is to densify the layer and thereby reduce the permeation of H2O and O2. We used argon ions to assist the silver deposition. All the OLED optoelectronic characterizations (I = f(V), L = f(V)) are performed in ambient air. We compare the results obtained with the assisted layer with those obtained with a classical cathode realized by unassisted thermal evaporation. We performed lifetime measurements in ambient air and we discuss the influence of the assisted layer on OLED performance.

  9. An accurate model for numerical prediction of piezoelectric energy harvesting from fluid structure interaction problems

    International Nuclear Information System (INIS)

    Amini, Y; Emdad, H; Farid, M

    2014-01-01

    Piezoelectric energy harvesting (PEH) from ambient energy sources, particularly vibrations, has attracted considerable interest throughout the last decade. Since fluid flow has a high energy density, it is one of the best candidates for PEH. Indeed, piezoelectric energy harvesting from fluid flow takes the form of a natural three-way coupling of the turbulent fluid flow, the electromechanical effect of the piezoelectric material and the electrical circuit. There are some experimental and numerical studies on piezoelectric energy harvesting from fluid flow in the literature. Nevertheless, accurate modeling for predicting the characteristics of this three-way coupling has not yet been developed. In the present study, an accurate model for this triple coupling is developed and validated against experimental results. A new code based on this model is developed on the openFOAM platform. (paper)

  10. Obtaining and Estimating Low Noise Floors in Vibration Sensors

    DEFF Research Database (Denmark)

    Brincker, Rune; Larsen, Jesper Abildgaard

    2007-01-01

    For some applications, like seismic applications and measuring ambient vibrations in structures, it is essential that the noise floors of the sensors and other system components are low and known to the user. Some of the most important noise sources are reviewed and it is discussed how the sensor... can be designed in order to obtain a low noise floor. Techniques to estimate the noise floors of sensors are reviewed and are demonstrated on a commonly used commercial sensor for vibration testing. It is illustrated how the noise floor can be calculated using the coherence between simultaneous...
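
    A common way to realize this in practice is the two-sensor method: two sensors record the same ambient signal simultaneously, their self-noises are assumed uncorrelated, and the noise power spectral density of one sensor is then estimated from its auto-spectrum minus the magnitude of the cross-spectrum. The sketch below illustrates the idea on synthetic data; it is only a simplified stand-in for the coherence-based procedure described in the paper.

    ```python
    # Two-sensor noise-floor estimate on synthetic data:
    #   x = common + n1,  y = common + n2,  n1 and n2 uncorrelated
    #   => P_noise(f) ~= P_xx(f) - |P_xy(f)|
    import numpy as np
    from scipy.signal import welch, csd

    fs, n = 1000.0, 2**18
    rng = np.random.default_rng(0)
    common = np.cumsum(rng.normal(size=n)) * 0.01      # broadband "ambient" signal
    x = common + 0.05 * rng.normal(size=n)             # sensor 1 = signal + own noise
    y = common + 0.05 * rng.normal(size=n)             # sensor 2 = signal + own noise

    f, pxx = welch(x, fs=fs, nperseg=4096)
    _, pxy = csd(x, y, fs=fs, nperseg=4096)
    noise_psd = np.clip(pxx - np.abs(pxy), 0.0, None)  # estimated self-noise of sensor 1

    print("median estimated noise PSD:", np.median(noise_psd))
    print("expected white-noise PSD  :", 2 * 0.05**2 / fs)
    ```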

  11. Accurate localization of intracavitary brachytherapy applicators from 3D CT imaging studies

    International Nuclear Information System (INIS)

    Lerma, F.A.; Williamson, J.F.

    2002-01-01

    Purpose: To present an accurate method to identify the positions and orientations of intracavitary (ICT) brachytherapy applicators imaged in 3D CT scans, in support of Monte Carlo photon-transport simulations, enabling accurate dose modeling in the presence of applicator shielding and interapplicator attenuation. Materials and methods: The method consists of finding the transformation that maximizes the coincidence between the known 3D shapes of each applicator component (colpostats and tandem) and the volume defined by contours of the corresponding surface on each CT slice. We use this technique to localize Fletcher-Suit CT-compatible applicators for three cervix cancer patients using post-implant CT examinations (3 mm slice thickness and separation). Dose distributions in 1-to-1 registration with the underlying CT anatomy are derived from 3D Monte Carlo photon-transport simulations incorporating each applicator's internal geometry (source encapsulation, high-density shields, and applicator body) oriented in relation to the dose matrix according to the measured localization transformations. The precision and accuracy of our localization method are assessed using CT scans, in which the positions and orientations of dense rods and spheres (in a precision-machined phantom) were measured at various orientations relative to the gantry. Results: Using this method, we register 3D Monte Carlo dose calculations directly onto post-insertion patient CT studies. Using CT studies of a precisely machined phantom, the absolute accuracy of the method was found to be ±0.2 mm in plane and ±0.3 mm in the axial direction, while its precision was ±0.2 mm in plane and ±0.2 mm axially. Conclusion: We have developed a novel and accurate technique to localize intracavitary brachytherapy applicators in 3D CT imaging studies, which supports 3D dose planning involving detailed 3D Monte Carlo dose calculations, modeling source positions, applicator shielding and interapplicator shielding

  12. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξ_ℓ^ν(r), or spherical harmonic space, C_ℓ(χ, χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
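
    For orientation, the monopole of the projection integral that 2-FAST accelerates is ξ_0(r) = (1/2π²) ∫ dk k² P(k) j_0(kr). The sketch below evaluates it by brute-force quadrature on a toy power spectrum; it is only a slow reference for cross-checking, not the FFTLog-based 2-FAST algorithm itself (see the linked repository for that).

    ```python
    # Direct-quadrature reference for xi_l(r) = (1/2 pi^2) * Int dk k^2 P(k) j_l(k r).
    # Slow and only for checking; the toy P(k) below is a placeholder.
    import numpy as np
    from scipy.integrate import trapezoid
    from scipy.special import spherical_jn

    def xi_multipole(r, ell, k, pk):
        integrand = k[None, :]**2 * pk[None, :] * spherical_jn(ell, k[None, :] * r[:, None])
        return trapezoid(integrand, k, axis=1) / (2.0 * np.pi**2)

    k = np.logspace(-4, 2, 20000)            # 1/Mpc, dense enough for the oscillations
    pk = k / (1.0 + (k / 0.02)**3)           # toy power spectrum with a turnover
    r = np.linspace(10.0, 150.0, 15)         # separations in Mpc
    print(xi_multipole(r, ell=0, k=k, pk=pk))
    ```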

  13. Treatment planning source assessment

    International Nuclear Information System (INIS)

    Calzetta Larrieu, O.; Blaumann, H.; Longhino, J.

    2000-01-01

    The reactor RA-6 NCT system was improved during the last year, mainly in two aspects: the facility itself, obtaining lower contamination factors, and the use of better measurement techniques to obtain lower uncertainties in its characterization. In this work we show the different steps taken to obtain the source that represents the NCT facility in the treatment planning code. The first step was to compare the dosimetry in a water phantom between a calculation using the entire facility, including core, filter and shields, and one using a surface source at the end of the beam. The second was to transform this particle-by-particle source into a distribution source, determining the minimum spatial, energy and angular resolution needed to obtain similar results. Finally, we compare calculated and experimental values, with and without the water phantom, to adjust the distribution source. The results are discussed. (author)

  14. Physicochemical characteristics of ozonated sunflower oils obtained by different procedures

    Energy Technology Data Exchange (ETDEWEB)

    Diaz, M. F.; Sanchez, Y.; Gomez, M.; Hernandez, F.; Veloso, M. C.; Pereira, P. A.; Mangrich, A. S.; Andrade, J. B.

    2012-07-01

    Two ozonation procedures for sunflower oils, at different applied ozone dosages, were carried out. Ozone was obtained from medicinal oxygen and from air. Peroxide, acidity and iodine indexes, along with density, viscosity and antimicrobial activity, were determined. The fatty acid compositions of the samples were analyzed using GC. The oxygen content was determined using elemental analysis. Electron paramagnetic resonance was used to measure the organic free radicals. The reactions were carried out up to peroxide index values of 658 and 675 mmol-equiv kg-1 using medicinal oxygen and air, for 5 and 8 hours, respectively. The samples of ozonized sunflower oil did not present organic free radicals, which is a very important issue if these oils are to be used as drugs. The ozonation reaction is faster with medicinal oxygen (5 hours) than with air (8 hours). Ozonized sunflower oil with oxygen as the ozone source was obtained with a high potential for antimicrobial activity. (Author) 34 refs.

  15. Accurate monitoring developed by EDF for FA-3-EPRTM and UK-EPRTM: chemistry-radiochemistry design and procedures

    International Nuclear Information System (INIS)

    Tigeras, Arancha; Bouhrizi, Sofia; Pierre, Marine; L'Orphelin, Jean-Matthieu

    2012-09-01

    The monitoring of chemistry and radiochemistry parameters is a fundamental need in nuclear power plants in order to ensure: the reactivity control in real time; the barrier integrity surveillance, by means of fuel cladding failure detection and primary pressure-boundary component control; the water quality, to limit radiation build-up and material corrosion and so prepare the maintenance, radioprotection and waste operations; and the efficiency of the treatment systems, and hence the minimization of discharges of chemical and radiochemical substances. The relevant chemistry and radiochemistry parameters to be monitored are selected depending on the chemistry conditioning of the systems, the source term evaluations, the corrosion mechanisms and the radioactivity consequences. In spite of the difficulties of obtaining representative samples under all circumstances, the EPR™ design provides the appropriate provisions and analytical procedures for ensuring reliable and accurate monitoring of the parameters in compliance with the specification requirements. The design solutions adopted for Flamanville 3 EPR™ and UK EPR™, concerning the sampling conditions and locations, the on-line and analytical equipment, the procedures and the transmission of results to the control room and the chemistry laboratory, are supported by ALARP considerations, international experience and research concerning the behavior of nuclides (corrosion product and actinide solubility, fission product degassing, and impurity and additive reactions). This paper details the means developed by EDF for making successful and meaningful sampling and measurements to achieve the essential objectives associated with the monitoring. (authors)

  16. Media and Information Literacy (MIL) in journalistic learning: strategies for accurately engaging with information and reporting news

    Science.gov (United States)

    Inayatillah, F.

    2018-01-01

    In the era of digital technology, there is abundant information from various sources. This ease of access needs to be accompanied by the ability to engage with the information wisely. Thus, information and media literacy is required. From preliminary observations, it was found that students of Universitas Negeri Surabaya majoring in Indonesian Literature who take the journalism course lack media and information literacy (MIL) skills. Therefore, they need to be equipped with MIL. The method used is descriptive qualitative, which includes data collection, data analysis, and presentation of the data analysis. Observation and documentation techniques were used to obtain data on MIL's impact on journalistic learning for students. This study aims at describing the important role of MIL for journalism students and its impact on journalistic learning for Indonesian Literature students of the 2014 cohort. The results of this research indicate that journalism is an essential discipline for students because it affects how a person perceives news reports. Through the reinforcement of the course, students can avoid hoaxes. MIL-based journalistic learning makes students more skillful at absorbing, processing, and presenting information accurately. The subject influences how students engage with information so that they can report news credibly.

  17. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    Science.gov (United States)

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on the validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  18. Characterization of Wastewaters obtained from Hatay Tanneries

    Directory of Open Access Journals (Sweden)

    Şana Sungur

    2017-06-01

    Full Text Available The leather tanning industry is one of the most significant sources of pollution in terms of both conventional and toxic parameters. On the other hand, the leather industry has an important economic role both in Turkey and in the world. In this study, wastewater samples were taken from 15 different tanneries in the Hatay region. Wastewaters obtained from the liming process and the chromium tanning process were analyzed. Sulfide, chromium(III), chromium(VI), oil and grease, total suspended solids (TSS), organic matter, biochemical oxygen demand (BOD), chemical oxygen demand (COD), pH and alkalinity were determined according to Turkish Standard Methods. The average values determined for wastewaters from the liming process were as follows: pH 11.71; COD 16821 mg L-1; BOD 4357 mg L-1; TSS 39023 mg L-1; oil and grease 364 mg L-1; S2- concentration 802 mg L-1; alkalinity 2115 mg L-1. The average values determined for wastewaters from the chromium tanning process were as follows: pH 4.23; COD 6740 mg L-1; BOD 377 mg L-1; Cr3+ concentration 372 mg L-1; Cr6+ concentration 127 mg L-1; TSS 14553 mg L-1; oil and grease 343 mg L-1. The results of all analyses were higher than the wastewater discharge standards. As a result, it is necessary to use more effective treatments in order to reduce the negative impacts of the leather tanning industry on the environment, natural water resources and, ultimately, human health and welfare.

  19. Propensity for obtaining alcohol through shoulder tapping.

    Science.gov (United States)

    Toomey, Traci L; Fabian, Lindsey E A; Erickson, Darin J; Lenk, Kathleen M

    2007-07-01

    Underage youth often obtain alcohol from adults who illegally provide the alcohol. One method for obtaining alcohol from adults is shoulder tapping, where youth approach an adult outside an alcohol establishment and ask the adult to purchase alcohol for them. The goal of this study was to assess what percentage of the general and youth-targeted adult population approached outside of a convenience/liquor store will agree to purchase and then provide alcohol to individuals who appear under age 21. We conducted 2 waves of pseudo-underage shoulder tap request attempts, using requesters who were age 21 or older but appeared 18 to 20 years old. In both waves, requests were conducted at randomly selected liquor and convenience stores, requesters explained that the reason they were asking the adult was because they did not have their identification with them, and requesters asked the adults to purchase a 6-pack of beer. During wave 1, we conducted 102 attempts, with the requester approaching the first adult entering the store alone. During wave 2, we conducted 102 attempts where the requester approached the first casually dressed male entering the store alone who appeared to be 21 to 30 years old. During wave 1, 8% of the general sample of approached adults provided alcohol to the pseudo-underage requesters. The odds of adults providing alcohol in urban areas were 9.4 times greater than in suburban areas. During wave 2, 19% of the approached young men provided alcohol to the requesters. No requester, request attempt, establishment, or community characteristics were associated with request attempt outcomes during wave 2. A small percentage of the general population of adults will agree to provide alcohol to underage youth when approached outside an alcohol establishment. The likelihood of underage youth obtaining alcohol through shoulder tapping increases substantially if the youth approach young men.

  20. A general scheme for obtaining graviton spectrums

    International Nuclear Information System (INIS)

    GarcIa-Cuadrado, G

    2006-01-01

    The aim of this contribution is to present a general scheme for obtaining graviton spectra from modified gravity theories, based on a theory developed by Grishchuk in the mid 1970s. We try to be pedagogical, putting in order some basic ideas in a compact procedure and also giving a review of the current trends in this arena. With the aim of filling a gap at the interface between quantum field theorists and observational cosmologists in this matter, we highlight two interesting applications to cosmology: clues as to the nature of dark energy, and the possibility of reconstructing the scalar potential in scalar-tensor gravity theories

  1. Obtaining high purity silica from rice hulls

    Directory of Open Access Journals (Sweden)

    José da Silva Júnior

    2010-01-01

    Full Text Available Many routes for extracting silica from rice hulls are based on direct calcining. These methods, though, often produce silica contaminated with inorganic impurities. This work presents the study of a strategy for obtaining silica from rice hulls with a purity level adequate for applications in electronics. The technique is based on two leaching steps, using respectively aqua regia and Piranha solutions, which extract the organic matrix and inorganic impurities. The material was characterized by Fourier-transform infrared spectroscopy (FTIR), powder x-ray diffraction (XRD), x-ray fluorescence (XRF), scanning electron microscopy (SEM), particle size analysis by laser diffraction (LPSA) and thermal analysis.

  2. Diaphragms obtained by radiochemical grafting in PTFE

    International Nuclear Information System (INIS)

    Nenner, T.; Fahrasmane, A.

    1984-01-01

    Diaphragms for alkaline water electrolysis are prepared by radiochemical grafting of PTFE fabric with styrene, which is subsequently sulfonated, or with acrylic acid. The diaphragms obtained are mechanically resistant to potash at temperatures up to 200 °C, but show some degrafting, which limits the lifetime. The sulfonated styrene group has been found to be more stable in electrolysis than the acrylic acid. In both cases, the incorporation of a cross-linking agent like divinyl benzene improves the lifetime of the diaphragms. Electrolysis for 500 hours at 120 °C and 10 kA/m2 could be performed. (author)

  3. Method of obtaining an anode mass for primary chemical current sources

    Energy Technology Data Exchange (ETDEWEB)

    Cyrankowska, M.; Kwasnik, J.; Sobkowiak, J.

    1981-12-31

    The Zn powder is mixed with a thickener protecting the Zn from the oxidizing effect of the air during subsequent amalgamation. An alkaline electrolyte, which governs the dissolution of the ZnO film formed on the Zn grains, is added to the dry mixture. The mixture is mixed until a uniform plastic mass is formed, after which metallic mercury is added to it. The method makes it possible to reduce the corrosion of Zn both during preparation of the active mass and during assembly of the electrode.

  4. The profile of Brazilian agriculture as source of raw material to obtain organic cosmetics

    Directory of Open Access Journals (Sweden)

    Neila de Paula Pereira

    2017-05-01

    Full Text Available With one of the most notable floras in the world for sustainable research, the Brazilian Amazon region currently counts on financial incentives from the Brazilian Government for private national and foreign businesses. The ongoing implementation of a Biocosmetics Research and Development Network (REDEBIO) aims to stimulate research involving natural resources from the Brazilian states that make up the zone defined as “Amazônia Legal”. The objective in this region, still under development in Brazil, is principally to add value to products manufactured in small local industries through the use of the sustainable technology currently being established. Certain certified raw materials already included in the country’s sustainability program have also begun to be cultivated according to the requirements of organic cultivation (Neves, 2009). The majority are species of Amazonian vegetation: Euterpe oleracea (Açai), Orbignya martiana (Babaçu), Theobroma grandiflorum (Cupuaçu), Carapas guianensis (Andiroba), Pentaclethra macroloba (Pracaxi), Copaifera landesdorffi (Copaiba), Platonia insignis (Bacuri), Theobroma cacao (Cacao), Virola surinamensis (Ucuuba) and Bertholletia excelsa (Brazil nut). These generate phytopreparations, such as oils, extracts and dyes, that are widely used in the manufacture of Brazilian organic cosmetics with scientifically proven topical and capillary benefits. In the final balance, Brazilian organic cosmetics should continue to gain strength over the next few years, especially with the regulation of the organic cosmetics market that is being drafted by the Brazilian Ministry of Agriculture. Moreover, lines of ecologically aware products that provide quality of life both for rural and metropolitan communities show a tendency to occupy greater space in the market.

  5. An Improved Cambridge Filter Pad Extraction Methodology to Obtain More Accurate Water and “Tar” Values: In Situ Cambridge Filter Pad Extraction Methodology

    OpenAIRE

    Ghosh David; Jeannet Cyril

    2014-01-01

    Previous investigations by others and internal investigations at Philip Morris International (PMI) have shown that the standard trapping and extraction procedure used for conventional cigarettes, defined in the International Standard ISO 4387 (Cigarettes -- Determination of total and nicotine-free dry particulate matter using a routine analytical smoking machine), is not suitable for high-water content aerosols. Errors occur because of water losses during the opening of the Cambridge filter p...

  6. Radiology clinical synopsis: a simple solution for obtaining an adequate clinical history for the accurate reporting of imaging studies on patients in intensive care units

    International Nuclear Information System (INIS)

    Cohen, Mervyn D.; Alam, Khurshaid

    2005-01-01

    Lack of clinical history on radiology requisitions is a universal problem. We describe a simple Web-based system that readily provides radiology-relevant clinical history to the radiologist reading radiographs of intensive care unit (ICU) patients. Along with the relevant history, which includes primary and secondary diagnoses, disease progression and complications, the system provides the patient's name, record number and hospital location. This information is immediately available to reporting radiologists. New clinical information is immediately entered on-line by the radiologists as they are reviewing images. After patient discharge, the data are stored and immediately available if the patient is readmitted. The system has been in routine clinical use in our hospital for nearly 2 years. (orig.)

  7. Performance improvement of optical semiconductor sources

    International Nuclear Information System (INIS)

    El Tokhy, M.E.M.E

    2009-01-01

    This thesis has been concerned with a detailed study of nano-technology quantum sources. Of these sources, quantum cascade lasers (QCLs) and quantum dot lasers (QDs) are studied theoretically. Block-diagram models based on the VisSim environment, in conjunction with mathematical models, are developed to analyze these kinds of optical sources. The mathematical model is derived to express the performance of the device explicitly, while the block-diagram model implemented describes the same device implicitly. By using these mathematical models, new expressions are obtained. Accurate and efficient modeling of these sources is becoming increasingly important in the design and optimization of optical integrated circuits and circuit components. In the case of QCLs, the diagram is used to calculate characteristics such as potential voltage, output optical power, current, threshold current density, slope efficiency, differential efficiency and optical gain. Furthermore, the effects of each QCL parameter, such as the number of periods N_p, operating temperature T, waveguide losses α_w and mirror losses α_m, on its performance are discussed in detail. To demonstrate these effects further, the changes are plotted in three dimensions. In other words, improving the lasing properties of QCLs through both the block-diagram and mathematical models is the main scope of this thesis. In order to enhance the performance of the underlying device, the mathematical model parameters are tuned to obtain the optimum behavior. Additionally, it is important to model and analyze the effects of these physical parameters on the performance of QCLs, since they play the central role in specifying the optical characteristics of the considered laser source. A relation linking the emitted power with the QCL parameters is proposed. Moreover, it is important to have a large amount of radiated power, where increasing the amount of radiated power represents the main

  8. Accurate isotope ratio mass spectrometry. Some problems and possibilities

    International Nuclear Information System (INIS)

    Bievre, P. de

    1978-01-01

    The review includes reference to 190 papers, mainly published during the last 10 years. It covers the following: important factors in accurate isotope ratio measurements (precision and accuracy of isotope ratio measurements - exemplified by determinations of 235U/238U and of other elements including 239Pu/240Pu; isotope fractionation - exemplified by curves for Rb, U); applications (atomic weights); the Oklo natural nuclear reactor (discovered by UF6 mass spectrometry at Pierrelatte); nuclear and other constants; isotope ratio measurements in nuclear geology and isotope cosmology - accurate age determination; isotope ratio measurements on very small samples - archaeometry; isotope dilution; miscellaneous applications; and future prospects. (U.K.)

  9. ROLAIDS-CPM: A code for accurate resonance absorption calculations

    International Nuclear Information System (INIS)

    Kruijf, W.J.M. de.

    1993-08-01

    ROLAIDS is used to calculate group-averaged cross sections for specific zones in a one-dimensional geometry. This report describes ROLAIDS-CPM which is an extended version of ROLAIDS. The main extension in ROLAIDS-CPM is the possibility to use the collision probability method for a slab- or cylinder-geometry instead of the less accurate interface-currents method. In this way accurate resonance absorption calculations can be performed with ROLAIDS-CPM. ROLAIDS-CPM has been developed at ECN. (orig.)

  10. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    Science.gov (United States)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions, compatible with the QBFs, for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error in computing the exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  11. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    Science.gov (United States)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makato Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Mikato Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  12. Carbon nanofibers obtained from electrospinning process

    Science.gov (United States)

    Bovi de Oliveira, Juliana; Müller Guerrini, Lília; Sizuka Oishi, Silvia; Rogerio de Oliveira Hein, Luis; dos Santos Conejo, Luíza; Cerqueira Rezende, Mirabel; Cocchieri Botelho, Edson

    2018-02-01

    In recent years, reinforcements consisting of carbon nanostructures, such as carbon nanotubes, fullerenes, graphenes, and carbon nanofibers, have received significant attention due mainly to their chemical inertness and good mechanical, electrical and thermal properties. Since carbon nanofibers comprise a continuous reinforcement with a high specific surface area and can be obtained at low cost and in large quantities, they have proven advantageous compared to traditional carbon nanotubes. The main objective of this work is the processing of carbon nanofibers, using polyacrylonitrile (PAN) as a precursor, obtained by the electrospinning process via polymer solution, with subsequent use as reinforcement in polymer composites for aerospace applications. In this work, PAN nanofibers were first produced by electrospinning from a dimethylformamide solution, with diameters in the range of (375 ± 85) nm. Using a furnace, the PAN nanofibers were converted into carbon nanofibers. The morphologies and structures of the PAN and carbon nanofibers were investigated by scanning electron microscopy, Raman spectroscopy, thermogravimetric analysis and differential scanning calorimetry. The residual weight after carbonization was approximately 38 wt%, with a diameter reduction of 50% and a carbon yield of 25%. From the analysis of the crystalline structure of the carbonized material, it was found that the material presented a disordered structure.

  13. Experimental methodology for obtaining sound absorption coefficients

    Directory of Open Access Journals (Sweden)

    Carlos A. Macía M

    2011-07-01

    Full Text Available Objective: the authors propose a new methodology for estimating sound absorption coefficients using genetic algorithms. Methodology: sound waves are generated and conducted along a rectangular silencer. The waves are then attenuated by the absorbing material covering the silencer’s walls. The attenuated sound pressure level is used in a genetic algorithm-based search to find the parameters of the proposed attenuation expressions that include geometric factors, the wavelength and the absorption coefficient. Results: a variety of adjusted mathematical models were found that make it possible to estimate the absorption coefficients based on the characteristics of a rectangular silencer used for measuring the attenuation of the noise that passes through it. Conclusions: this methodology makes it possible to obtain the absorption coefficients of new materials in a cheap and simple manner. Although these coefficients might be slightly different from those obtained through other methodologies, they provide solutions within the engineering accuracy ranges that are used for designing noise control systems.
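    The record does not give the attenuation expressions themselves, so the sketch below fits a purely hypothetical model, attenuation = c·α·L/λ, to synthetic data; only the genetic-algorithm machinery (tournament selection, blend crossover, Gaussian mutation, elitism) reflects the kind of search described.

```python
# Sketch of a genetic-algorithm fit of an attenuation model.
# The model attn = c * alpha * length / wavelength is HYPOTHETICAL; the
# record does not give the actual attenuation expressions. The GA finds
# the parameters (alpha = absorption coefficient, c = geometric factor)
# that minimize the squared error against measured attenuation data.
import numpy as np

rng = np.random.default_rng(0)

def model(wavelength, alpha, c, length=1.2):
    return c * alpha * length / wavelength          # hypothetical expression

# synthetic "measurements" with a known ground truth (alpha=0.35, c=2.0)
wl = np.linspace(0.1, 1.0, 20)
measured = model(wl, 0.35, 2.0) + rng.normal(0, 0.05, wl.size)

def fitness(ind):
    alpha, c = ind
    return -np.mean((model(wl, alpha, c) - measured) ** 2)   # higher is better

def ga(pop_size=60, generations=200, bounds=((0.0, 1.0), (0.0, 5.0))):
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    pop = rng.uniform(lo, hi, size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # tournament selection of parents
        parents = np.array([pop[max(rng.choice(pop_size, 3), key=lambda i: scores[i])]
                            for _ in range(pop_size)])
        # blend crossover + Gaussian mutation
        partners = parents[rng.permutation(pop_size)]
        w = rng.uniform(size=(pop_size, 1))
        children = w * parents + (1 - w) * partners
        children += rng.normal(0, 0.02, children.shape)
        children = np.clip(children, lo, hi)
        # elitism: keep the best individual found so far
        children[0] = pop[np.argmax(scores)]
        pop = children
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]

alpha_hat, c_hat = ga()
print(f"estimated absorption coefficient = {alpha_hat:.3f}, geometric factor = {c_hat:.3f}")
```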

  14. Alcoholic Beverages Obtained from Black Mulberry

    Directory of Open Access Journals (Sweden)

    Jacinto Darias-Martín

    2003-01-01

    Full Text Available Black mulberry (Morus nigra) is a fruit known not only for its nutritional qualities and its flavour, but also for its traditional use in natural medicine, as it has a high content of active therapeutic compounds. However, this fruit is not widely produced in Spain, although some trees are still found growing in the Canary Islands, particularly on the edges of ravines. The inhabitants of these islands (Tenerife, La Gomera, La Palma, El Hierro and Lanzarote) collect the fruit and prepare homemade beverages for medicinal purposes. Numerous authors have reported that type II diabetes mellitus can be controlled by taking a mixture containing black mulberry and water. Apart from that, this fruit has been used for the treatment of mouth, tongue and throat inflammations. In this study we present some characteristics of black mulberry juice (TSS, pH, titratable acidity, citric acid, lactic acid, polyphenols, anthocyanins, potassium, etc.) and of the alcoholic beverages (alcoholic strength, pH, total acidity, volatile acidity, tannins, phenols, etc.) obtained from black mulberry. Moreover, we have studied the quality of liquors obtained from black mulberry in the Canary Islands.

  15. Cotton nanofibers obtained by different acid conditions

    International Nuclear Information System (INIS)

    Teixeira, Eliangela de M.; Oliveira, Caue Ribeiro de; Mattoso, Luiz H.C.; Correa, Ana Carolina; Palladin, Priscila

    2009-01-01

    The thermal stability of cellulose nanofibers is related to their applications, especially in polymer processing, where processing temperatures are around 200 °C. In this work, nanofibers of commercial cotton were obtained by acid hydrolysis employing different acids: sulfuric, hydrochloric, and a 2:1 mixture of sulfuric and hydrochloric acid. The morphology of the nanofibers was characterized by transmission electron microscopy (TEM), the crystallinity by X-ray diffraction (XRD), and the thermal stability in air atmosphere by thermogravimetric analysis (TGA). The results indicated a very similar morphology and crystallinity among them. The main differences were related to the aggregation state and the thermal stability. The aggregation of the suspensions decreases in the order HCl > H2SO4:HCl > H2SO4. The hydrolysis with the mixture of HCl and H2SO4 resulted in cellulose nanofibers with higher thermal stability than those hydrolyzed with H2SO4 alone. The suspensions of nanofibers obtained with the acid mixture also showed better dispersion than those obtained by hydrolysis with HCl alone. (author)

  16. Neutron source

    International Nuclear Information System (INIS)

    Cason, J.L. Jr.; Shaw, C.B.

    1975-01-01

    A neutron source which is particularly useful for neutron radiography consists of a vessel containing a moderating medium of relatively low moderating ratio, a flux trap including a moderating medium of relatively high moderating ratio at the center of the vessel, a shell of depleted uranium dioxide surrounding the moderating medium of relatively high moderating ratio, a plurality of guide tubes each containing a movable source of neutrons surrounding the flux trap, a neutron shield surrounding one part of each guide tube, and at least one collimator extending from the flux trap to the exterior of the neutron source. The shell of depleted uranium dioxide has a window provided with depleted uranium dioxide shutters for each collimator. Reflectors are provided above and below the flux trap and on the guide tubes away from the flux trap

  17. Crowd Sourcing.

    Science.gov (United States)

    Baum, Neil

    2016-01-01

    The Internet has contributed new words and slang to our daily vernacular. A few terms, such as tweeting, texting, sexting, blogging, and googling, have become common in most vocabularies and in many languages, and are now included in the dictionary. A new buzzword making the rounds in industry is crowd sourcing, which involves outsourcing an activity, task, or problem by sending it to people or groups outside a business or a practice. Crowd sourcing allows doctors and practices to tap the wisdom of many instead of relying only on the few members of their close-knit group. This article defines "crowd sourcing," offers examples, and explains how to get started with this approach that can increase your ability to finish a task or solve problems that you don't have the time or expertise to accomplish.

  18. Energy sources

    International Nuclear Information System (INIS)

    Vajda, Gy.

    1998-01-01

    A comprehensive review of the available sources of energy in the world is presented. About 80 percent of primary energy utilization is based on fossil fuels, and their dominant role is not expected to change in the foreseeable future. Data are given on petroleum, natural gas and coal based power production. The role and economic aspects of nuclear power are analyzed. A brief summary of renewable energy sources is presented. The future prospects of the world's energy resources are discussed, and the special position of Hungary regarding fossil, nuclear and renewable energy and the country's energy potential is evaluated. (R.P.)

  19. A radio/optical reference frame. 5: Additional source positions in the mid-latitude southern hemisphere

    Science.gov (United States)

    Russell, J. L.; Reynolds, J. E.; Jauncey, D. L.; de Vegt, C.; Zacharias, N.; Ma, C.; Fey, A. L.; Johnston, K. J.; Hindsley, R.; Hughes, J. A.; Malin, D. F.; White, G. L.; Kawaguchi, N.; Takahashi, Y.

    1994-01-01

    We report new accurate radio position measurements for 30 sources, preliminary positions for two sources, improved radio positions for nine additional sources which had limited previous observations, and optical positions and optical-radio differences for six of the radio sources. The Very Long Baseline Interferometry (VLBI) observations are part of the continuing effort to establish a global radio reference frame of about 400 compact, flat-spectrum sources which are evenly distributed across the sky. The observations were made using the Mark III data format in four separate sessions in 1988-89 with radio telescopes at Tidbinbilla, Australia, Kauai, USA, and Kashima, Japan. We observed a total of 54 sources, including ten calibrators and three which were undetected. The 32 new source positions bring the total number in the radio reference frame catalog to 319 (172 northern and 147 southern) and fill in the zone -25 deg > delta > -45 deg which, prior to this list, had the lowest source density. The VLBI positions have an average formal precision of less than 1 mas, although unknown radio structure effects of about 1-2 mas may be present. The six new optical position measurements are part of the program to obtain positions of the optical counterparts of the radio reference frame sources and to map the optical frame accurately onto the radio reference frame. The optical measurements were obtained from United States Naval Observatory (USNO) Black Birch astrograph plates and source plates from the AAT, the Kitt Peak National Observatory (KPNO) 4 m, and the European Southern Observatory (ESO) Schmidt. The optical positions have an average precision of 0.07 arcsec, mostly due to the zero-point error when adjusted to the FK5 optical frame using the IRS catalog. To date we have measured optical positions for 46 sources.

  20. Prevalence of accurate nursing documentation in patient records

    NARCIS (Netherlands)

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees

    2010-01-01

    AIM: This paper is a report of a study conducted to describe the accuracy of nursing documentation in patient records in hospitals. Background.  Accurate nursing documentation enables nurses to systematically review the nursing process and to evaluate the quality of care. Assessing nurses' reports

  1. Using an eye tracker for accurate eye movement artifact correction

    NARCIS (Netherlands)

    Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.

    2007-01-01

    We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which
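    The record is truncated before the filter details, so the following is a heavily simplified single-channel sketch, not the authors' algorithm: a scalar Kalman filter tracks the coefficient with which the eye-tracker-derived eye-movement signal leaks into one EEG channel, and that contribution is subtracted.

```python
# Simplified single-channel sketch: track the coefficient b with which an eye
# movement signal (from the eye tracker) propagates into an EEG channel, using
# a scalar Kalman filter with a random-walk state model, then subtract b*eye
# from the EEG. This illustrates the idea only; the published method is richer.
import numpy as np

def kalman_eog_correction(eeg, eye, q=1e-5, r=1.0):
    """eeg, eye: 1-D arrays of equal length. Returns corrected EEG and b-trace."""
    b, p = 0.0, 1.0                      # state estimate and its variance
    corrected = np.empty_like(eeg)
    b_trace = np.empty_like(eeg)
    for k in range(eeg.size):
        p += q                           # predict (random-walk state)
        h = eye[k]                       # observation model: eeg ~ b*eye + noise
        k_gain = p * h / (h * p * h + r)
        b += k_gain * (eeg[k] - h * b)   # update
        p *= (1.0 - k_gain * h)
        corrected[k] = eeg[k] - b * eye[k]
        b_trace[k] = b
    return corrected, b_trace

# synthetic demo: brain signal plus a scaled eye-movement artifact
rng = np.random.default_rng(3)
n = 2000
eye = np.cumsum(rng.normal(0, 1, n))          # slow eye-movement-like signal
brain = rng.normal(0, 1, n)
eeg = brain + 0.4 * eye
clean, b_trace = kalman_eog_correction(eeg, eye)
print(f"final propagation estimate b = {b_trace[-1]:.3f} (true value 0.4)")
```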

  2. Feedforward signal prediction for accurate motion systems using digital filters

    NARCIS (Netherlands)

    Butler, H.

    2012-01-01

    A positioning system that needs to accurately track a reference can benefit greatly from using feedforward. When using a force actuator, the feedforward needs to generate a force proportional to the reference acceleration, which can be measured by means of an accelerometer or can be created by
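    The record is cut off, but the stated idea (a feedforward force proportional to the reference acceleration, with the acceleration produced by a digital filter) can be sketched as follows; the second-difference filter, smoothing length, mass and setpoint profile are illustrative assumptions, not the author's design.

```python
# Sketch: acceleration feedforward from a sampled position reference.
# The reference acceleration is obtained with a digital second-difference
# filter and lightly smoothed; the feedforward force is F = m * a_ref.
# Filter choice and smoothing length are illustrative assumptions.
import numpy as np

def acceleration_feedforward(r, dt, mass, smooth=5):
    """r: sampled position reference [m], dt: sample time [s], mass: moving mass [kg]."""
    a = np.gradient(np.gradient(r, dt), dt)        # discrete second derivative
    kernel = np.ones(smooth) / smooth              # simple moving-average smoothing
    a_smooth = np.convolve(a, kernel, mode="same")
    return mass * a_smooth                         # feedforward force [N]

# example: minimum-jerk setpoint from rest to 10 mm in 0.1 s (illustrative)
dt = 1e-4
t = np.arange(0, 0.1, dt)
tau = t / 0.1
r = 0.01 * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
F_ff = acceleration_feedforward(r, dt, mass=25.0)
print(f"peak feedforward force: {F_ff.max():.1f} N")
```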

  3. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village-dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  4. Laser guided automated calibrating system for accurate bracket ...

    African Journals Online (AJOL)

    It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...

  5. Foresight begins with FMEA. Delivering accurate risk assessments.

    Science.gov (United States)

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  6. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  7. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire, sensors are needed to measure the environmental parameters and algorithms are required to decide about the occurrence of fire. Recently, wireless

  8. Dense and accurate whole-chromosome haplotyping of individual genomes

    NARCIS (Netherlands)

    Porubsky, David; Garg, Shilpa; Sanders, Ashley D.; Korbel, Jan O.; Guryev, Victor; Lansdorp, Peter M.; Marschall, Tobias

    2017-01-01

    The diploid nature of the human genome is neglected in many analyses done today, where a genome is perceived as a set of unphased variants with respect to a reference genome. This lack of haplotype-level analyses can be explained by a lack of methods that can produce dense and accurate

  9. Accurate automatic tuning circuit for bipolar integrated filters

    NARCIS (Netherlands)

    de Heij, Wim J.A.; de Heij, W.J.A.; Hoen, Klaas; Hoen, Klaas; Seevinck, Evert; Seevinck, E.

    1990-01-01

    An accurate automatic tuning circuit for tuning the cutoff frequency and Q-factor of high-frequency bipolar filters is presented. The circuit is based on a voltage controlled quadrature oscillator (VCO). The frequency and the RMS (root mean square) amplitude of the oscillator output signal are

  10. Laser Guided Automated Calibrating System for Accurate Bracket ...

    African Journals Online (AJOL)

    Background: The basic premise of preadjusted bracket system is accurate bracket positioning. ... using MATLAB ver. 7 software (The MathWorks Inc.). These images are in the form of matrices of size 640 × 480. 650 nm (red light) type III diode laser is used as ... motion control and Pitch, Yaw, Roll degrees of freedom (DOF).

  11. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  12. How Accurate are Government Forecast of Economic Fundamentals?

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)

    2009-01-01

    textabstractA government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among others. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an

  13. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  14. General approach for accurate resonance analysis in transformer windings

    NARCIS (Netherlands)

    Popov, M.

    2018-01-01

    In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical

  15. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  16. Planimetric volumetry of the prostate: how accurate is it?

    NARCIS (Netherlands)

    Aarnink, R. G.; Giesen, R. J.; de la Rosette, J. J.; Huynen, A. L.; Debruyne, F. M.; Wijkstra, H.

    1995-01-01

    Planimetric volumetry is used in clinical practice when accurate volume determination of the prostate is needed. The prostate volume is determined by discretization of the 3D prostate shape. The area of the prostate is calculated in consecutive ultrasonographic cross-sections. This area is multiplied
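    The description breaks off mid-sentence, but the discretization it refers to, summing the planimetered cross-sectional areas and multiplying by the slice spacing, can be written down directly; the areas and spacing below are made-up illustrative values.

```python
# Sketch of planimetric volumetry: the prostate volume is approximated by
# summing the planimetered cross-sectional areas of consecutive ultrasound
# slices, each multiplied by the slice spacing (step-section method).
# The areas and spacing below are illustrative values only.
import numpy as np

areas_cm2 = np.array([1.2, 2.8, 4.1, 4.9, 4.6, 3.3, 1.7])  # one value per slice
slice_spacing_cm = 0.5                                      # distance between slices

volume_cm3 = np.sum(areas_cm2) * slice_spacing_cm
print(f"planimetric volume = {volume_cm3:.1f} cm^3")        # 11.3 cm^3 for these values
```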

  17. Accurate conjugate gradient methods for families of shifted systems

    NARCIS (Netherlands)

    Eshof, J. van den; Sleijpen, G.L.G.

    We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The

  18. Accurate 3D Mapping Algorithm for Flexible Antennas

    Directory of Open Access Journals (Sweden)

    Saed Asaly

    2018-01-01

    Full Text Available This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna; even a submillimeter change in the antenna surface may lead to a considerable loss in the antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by the novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small-sized monocamera and known patterns on the antenna surface. The experimental result shows that the presented mapping method can detect changes up to 0.1-millimeter accuracy, while the camera is located 1 meter away from the dish, allowing an RF antenna optimization for Ka and Ku frequencies. Such optimization process can improve the gain of the flexible antennas and allow an adaptive beam shaping. The presented method is currently being implemented on a nanosatellite which is scheduled to be launched at the end of 2018.

  19. Device accurately measures and records low gas-flow rates

    Science.gov (United States)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  20. Polypropylene obtained through zeolite supported catalysts

    International Nuclear Information System (INIS)

    Bastos, Queli C.; Marques, Maria de Fatima V.

    2004-01-01

    Propylene polymerizations were carried out with φ2C(Flu)(Cp)ZrCl2 and SiMe2(Ind)2ZrCl2 catalysts supported on silica, zeolite sodic mordenite (NaM) and acid mordenite (HM). The polymerizations were performed at different temperatures and varying aluminium/zirconium molar ratios ([Al]/[Zr]). The effect of these reaction parameters on the catalyst activity was investigated using a proposed statistical experimental planning. In the case of φ2C(Flu)(Cp)ZrCl2, SiO2 and NaM were used as support and the catalyst performance was evaluated using toluene and pentane as polymerization solvent. The molecular weight, molecular weight distribution, melting point and crystallinity of the polymers were examined. The results indicate very high activities for the syndiospecific heterogeneous system. Also, the polymers obtained had superior Mw and stereoregularity. (author)

  1. Polypropylene obtained through zeolite supported catalysts

    Directory of Open Access Journals (Sweden)

    Queli C. Bastos

    2004-01-01

    Full Text Available Propylene polymerizations were carried out with f2C(Flu)(Cp)ZrCl2 and SiMe2(Ind)2ZrCl2 catalysts supported on silica, zeolite sodic mordenite (NaM) and acid mordenite (HM). The polymerizations were performed at different temperatures and varying aluminium/zirconium molar ratios ([Al]/[Zr]). The effect of these reaction parameters on the catalyst activity was investigated using a proposed statistical experimental planning. In the case of f2C(Flu)(Cp)ZrCl2, SiO2 and NaM were used as support and the catalyst performance evaluated using toluene and pentane as polymerization solvent. The molecular weight, molecular weight distribution, melting point and crystallinity of the polymers were examined. The results indicate very high activities for the syndiospecific heterogeneous system. Also, the polymers obtained had superior Mw and stereoregularity.

  2. Process to Obtain Quick Counts from PREP

    Directory of Open Access Journals (Sweden)

    Martínez–Cruz M.Á.

    2011-10-01

    Full Text Available Considering the Preliminary Electoral Results Program (PREP) as a database of the federal elections for president of the Mexican Republic, a methodology was developed to find representative samples of the ballot boxes installed on election day (quick count) at different hours. Because of the way its information is gathered, the PREP forms a non-representative sample of the data during the first hours. In particular, for the election of July 2, 2006, it was observed that after 3 hours of operation of the PREP the accuracy of the quick-count process was better than that obtained by the IFE. Among other things, this allows the cost to be lowered, the confidentiality of the ballot boxes used in the sampling to be increased, and the winning candidate to be distinguished at a precise moment long before the PREP finishes.

  3. The method of obtaining of decorative varnish

    International Nuclear Information System (INIS)

    Salidzhanova, N.S.; Tashbekova, D.M.

    1997-01-01

    A method of obtaining a decorative varnish that removes the inhibiting action of atmospheric oxygen and improves the varnish hardness is described. It includes impregnating texture paper with a mixture of PE-284 type polyester lacquer, based on an unsaturated oligo(ethylene glycol fumarate) resin, and a cation-type salt, placing it on wooden or asbestos-cement slabs, and further hardening it by pulsed beams of accelerated electrons on a moving belt. The radiation dose per pulse is 1x10^-2 - 9x10^-3 MGy, the number of pulses is 180-250, the pulse duration is 2.3 ms, and the pulse frequency is 50 kHz. The chloride, bromide, benzyl bromide or iodide of N,N-dialkylaminoethyl (benzyl) (meth)acrylate is used as the cation-type salt. (author)

  4. Obtaining the electrostatic screening from first principles

    International Nuclear Information System (INIS)

    Shaviv, N.J.; Shaviv, G.

    2003-01-01

    We derive the electrostatic screening effect from first principles and show the basic properties of the screening process. In particular, we show that under the conditions prevailing in the Sun the number of particles in the Debye sphere is of the order of unity. Consequently, fluctuations play a dominant role in the screening process. The fluctuations lead to an effective time-dependent potential. Particles with low kinetic energy lose, on average, energy to the plasma, and vice versa for particles with high energy. We derive general conditions on the screening energy and show under what conditions the Salpeter approximation is obtained. The connection between the screening and relaxation processes in the plasma is presented.

  5. ORIENTATION OF ENTERPRISES TOWARD OBTAINING COMPETITIVE

    Directory of Open Access Journals (Sweden)

    PAUL BOGDAN ZAMFIR

    2015-10-01

    Full Text Available In this paper I propose to emphasize the importance for companies of obtaining competitive advantage on the EU internal market. The huge EU market offers participating companies the possibility of achieving significant economies of scale, as well as numerous niches (market segments) which can be covered with large quantities of goods, on condition that the niches are discovered in time and that the firms are able to adapt promptly to their needs. Thus, the most important positive effect derives from the fact that companies have at their disposal a vast market consisting of approximately 500 million consumers, free of customs duties and other restrictions hindering the movement of goods. Against this background, companies can achieve large-series production and thereby reduce their cost of production and increase their competitiveness. In this context, companies must meet the standards of the European Union if they really want to gain competitive advantage on the EU market.

  6. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    We have characterized the relationship between accurate phylogenetic reconstruction and sequence similarity, testing whether high levels of sequence similarity can consistently produce accurate evolutionary trees. We generated protein families with known phylogenies using a modified version of the PAML/EVOLVER program that produces insertions and deletions as well as substitutions. Protein families were evolved over a range of 100-400 point accepted mutations; at these distances 63% of the families shared significant sequence similarity. Protein families were evolved using balanced and unbalanced trees, with ancient or recent radiations. In families sharing statistically significant similarity, about 60% of multiple sequence alignments were 95% identical to true alignments. To compare recovered topologies with true topologies, we used a score that reflects the fraction of clades that were correctly clustered. As expected, the accuracy of the phylogenies was greatest in the least divergent families. About 88% of phylogenies clustered over 80% of clades in families that shared significant sequence similarity, using Bayesian, parsimony, distance, and maximum likelihood methods. However, for protein families with short ancient branches (ancient radiation), only 30% of the most divergent (but statistically significant) families produced accurate phylogenies, and only about 70% of the second most highly conserved families, with median expectation values better than 10^-60, produced accurate trees. These values represent upper bounds on expected tree accuracy for sequences with a simple divergence history; proteins from 700 Giardia families, with a similar range of sequence similarities but considerably more gaps, produced much less accurate trees. For our simulated insertions and deletions, correct multiple sequence alignments did not perform much better than those produced by T-COFFEE, and including sequences with expressed sequence tag-like sequencing errors did not

  7. Avulsion research using flume experiments and highly accurate and temporal-rich SfM datasets

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Vitti, A.

    2017-12-01

    SfM's ability to produce high-quality, large-scale digital elevation models (DEMs) of complicated and rapidly evolving systems has made it a valuable technique for low-budget researchers and practitioners. While SfM has provided valuable datasets that capture single-flood event DEMs, there is an increasing scientific need to capture higher temporal resolution datasets that can quantify the evolutionary processes instead of pre- and post-flood snapshots. However, flood events' dangerous field conditions and image matching challenges (e.g. wind, rain) prevent quality SfM-image acquisition. Conversely, flume experiments offer opportunities to document flood events, but achieving consistent and accurate DEMs to detect subtle changes in dry and inundated areas remains a challenge for SfM (e.g. parabolic error signatures).This research aimed at investigating the impact of naturally occurring and manipulated avulsions on braided river morphology and on the encroachment of floodplain vegetation, using laboratory experiments. This required DEMs with millimeter accuracy and precision and at a temporal resolution to capture the processes. SfM was chosen as it offered the most practical method. Through redundant local network design and a meticulous ground control point (GCP) survey with a Leica Total Station in red laser configuration (reported 2 mm accuracy), the SfM residual errors compared to separate ground truthing data produced mean errors of 1.5 mm (accuracy) and standard deviations of 1.4 mm (precision) without parabolic error signatures. Lighting conditions in the flume were limited to uniform, oblique, and filtered LED strips, which removed glint and thus improved bed elevation mean errors to 4 mm, but errors were further reduced by means of an open source software for refraction correction. The obtained datasets have provided the ability to quantify how small flood events with avulsion can have similar morphologic and vegetation impacts as large flood events

  8. Ion source

    International Nuclear Information System (INIS)

    1977-01-01

    The specifications are given of a set of point-shaped electrodes of non-corrodible material that can hold a film of liquid material of uniform thickness. Contained in a jacket, this set forms an ion source. The electrode is made of tungsten with a glassy carbon layer for insulation and an outer layer of aluminium-oxide ceramic material

  9. Accurate Medium-Term Wind Power Forecasting in a Censored Classification Framework

    DEFF Research Database (Denmark)

    Dahl, Christian M.; Croonenbroeck, Carsten

    2014-01-01

    We provide a wind power forecasting methodology that exploits many of the actual data's statistical features, in particular both-sided censoring. While other tools ignore many of the important “stylized facts” or provide forecasts for short-term horizons only, our approach focuses on medium-term forecasts, which are especially necessary for practitioners in the forward electricity markets of many power trading places; for example, NASDAQ OMX Commodities (formerly Nord Pool OMX Commodities) in northern Europe. We show that our model produces turbine-specific forecasts that are significantly more accurate in comparison to established benchmark models and present an application that illustrates the financial impact of more accurate forecasts obtained using our methodology.
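    The paper's forecasting model is not reproduced in the record; the sketch below only illustrates the "both-sided censoring" feature it highlights, using a generic two-limit (Tobit-type) likelihood in which observed power is censored at zero and at rated capacity. The covariate, limits and data are synthetic placeholders.

```python
# Sketch: maximum-likelihood estimation of a two-limit (both-sided) censored
# regression, the key statistical feature mentioned in the abstract. Observed
# power is clipped to [0, capacity]; the latent model is y* = b0 + b1*wind + e.
# This is a generic Tobit-type sketch, not the paper's forecasting model.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(1)
capacity = 2000.0                                   # rated power [kW], illustrative
wind = rng.uniform(0, 25, 500)                      # wind speed [m/s]
latent = -400.0 + 120.0 * wind + rng.normal(0, 150, wind.size)
power = np.clip(latent, 0.0, capacity)              # both-sided censoring

def neg_loglik(theta):
    b0, b1, log_sigma = theta
    sigma = np.exp(log_sigma)
    mu = b0 + b1 * wind
    lower = power <= 0.0
    upper = power >= capacity
    mid = ~lower & ~upper
    ll = np.sum(stats.norm.logcdf((0.0 - mu[lower]) / sigma))          # censored at 0
    ll += np.sum(stats.norm.logcdf((mu[upper] - capacity) / sigma))    # censored at capacity
    ll += np.sum(stats.norm.logpdf((power[mid] - mu[mid]) / sigma) - log_sigma)
    return -ll

res = optimize.minimize(neg_loglik, x0=np.array([0.0, 100.0, np.log(200.0)]),
                        method="Nelder-Mead")
b0, b1, log_sigma = res.x
print(f"b0 = {b0:.1f}, b1 = {b1:.1f}, sigma = {np.exp(log_sigma):.1f}")
```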

  10. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)

    2016-04-15

    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.

  11. A hybrid method for accurate star tracking using star sensor and gyros.

    Science.gov (United States)

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
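    As a heavily simplified, single-star sketch of the "vector difference" idea (a real tracker fuses many star vectors and calibrated gyros), the following estimates an angular velocity from two consecutive unit star vectors and uses it to predict the next star direction via the Rodrigues rotation formula; it assumes the rotation axis is perpendicular to the star direction.

```python
# Simplified sketch of the star-vector-difference idea: estimate an angular
# velocity from two consecutive unit star vectors and use it to predict the
# star direction at the next frame (Rodrigues rotation). A real tracker fuses
# many stars and calibrated gyros; this single-star version assumes the
# rotation axis is perpendicular to the star direction.
import numpy as np

def rodrigues(v, axis, angle):
    """Rotate vector v by `angle` radians about `axis`."""
    axis = axis / np.linalg.norm(axis)
    return (v * np.cos(angle)
            + np.cross(axis, v) * np.sin(angle)
            + axis * np.dot(axis, v) * (1.0 - np.cos(angle)))

def estimate_omega(v1, v2, dt):
    """Angular-velocity vector taking unit vector v1 to v2 over dt seconds."""
    axis = np.cross(v1, v2)
    angle = np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0))
    return axis / np.linalg.norm(axis) * (angle / dt)

# two measured star vectors 0.1 s apart (synthetic example)
v1 = np.array([0.0, 0.0, 1.0])
v2 = rodrigues(v1, np.array([1.0, 0.0, 0.0]), np.deg2rad(0.5))
omega = estimate_omega(v1, v2, dt=0.1)

# predict the star direction another 0.1 s ahead
angle = np.linalg.norm(omega) * 0.1
v3_pred = rodrigues(v2, omega, angle)
print("estimated |omega| [deg/s]:", np.rad2deg(np.linalg.norm(omega)))
print("predicted star vector:", v3_pred)
```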

  12. Biodegradable Polyelectrolyte Obtained by Radiation Polymerization

    International Nuclear Information System (INIS)

    Craciun, G.; Martin, D.; Manaila, E.; Nemtanu, M.; Brasoveanu, M.; Ighigeanu, D.

    2009-01-01

    Polyelectrolytes are water-soluble polymers carrying ionic charge along the polymer chain. Depending upon the charge, these polymers are anionic or cationic. Their inherent solid-liquid separating efficiency makes these polyelectrolytes a unique class of polymers which find extensive application in potable water, industrial raw and process water, municipal sewage treatment, mineral processing and metallurgy, oil drilling and recovery, etc. Also, due to their ability to produce advanced induced coagulation, a considerable amount of bacteria and viruses are precipitated together with the suspended solids. The acrylamide polymers in particular are very effective for water treatment, but acrylamide is a toxic monomer and therefore their use is governed by international standards that require the residual acrylamide monomer content (RAMC) in them to be less than 0.05%. Under these circumstances our attention was focused on the following research steps that are presented in this paper: 1) Preparation of a special class of polyelectrolytes, named Pn, with very low RAMC values, based on electron beam (EB), microwave (MW) and EB + MW induced co-polymerization of aqueous solutions containing appropriate mixtures of acrylamide (AMD) and acrylic acid (AA) monomers (AMD-AA co-polymers). The Pn were obtained by radiation technology with very low RAMC (under 0.01%) as well as in a wide range of molecular weights and charge densities. The very low AMD monomer content of the Pn is due to the major advantages of radiation-induced polymerization in aqueous solutions containing monomers. Due to the presence of water in the EB-irradiated system, radicals from the irradiated water facilitate the polymerization process and increase the rate and degree of monomer conversion into co-polymers. In addition, because water absorbs MW energy very strongly, the MW polymerization reaction rate is much enhanced, resulting in a reaction time about 50-100 times lower than with conventional heating. Also

  13. New biomaterials obtained with ionizing radiations

    International Nuclear Information System (INIS)

    Gaussens, G.

    1982-01-01

    In present-day surgery and medicine use is increasingly made of materials foreign to the organism in order to remedy a physiological defect either temporarily or permanently. These materials, known as ''biomaterials'', take widely varying forms: plastics, metals, cements, ceramics, etc. Biomaterials can be classified in accordance with their function: (a) Devices designed to be fully implanted in the human body in order to replace an anatomical structure, either temporarily or permanently, such as articular, vascular, mammary and osteosynthetic prostheses, etc.; (b) Devices having prolonged contact with mucous tissues, such as intra-uterine devices, contact lenses, etc.; (c) Extracorporeal devices designed to treat blood such as artificial kidneys, blood oxygenators, etc.; and (d) Biomaterials can also be taken to mean chemically inert, implantable materials designed to produce a continuous discharge of substances containing pharmacologically active molecules, such as contraceptive devices or ocular devices (for treating glaucoma). The two most important criteria for a biomaterial are those of biological compatibility and biological functionality. Techniques using ionizing radiation as an energy source provide an excellent tool for synthesizing or modifying the properties of plastics. The properties of polymers can be improved, new polymers can be synthesized without chemical additives (often the cause of incompatibility with tissue or blood) and without increased temperature, and polymerization can be induced in the solid state using deep-frozen monomers. Also, radiation-induced modifications in polymers can be applied to semi-finished or finished products. Examples are also given of marketed biomaterials that have been produced using radiation chemistry techniques

  14. Utilization of ion source 'SUPERSHYPIE' in the study of low energy ion-atom and ion-molecule collisions

    International Nuclear Information System (INIS)

    Bazin, V.; Boduch, P.; Chesnel, J.Y.; Fremont, F.; Lecler, D.; Pacquet, J. Y.; Gaubert, G.; Leroy, R.

    1999-01-01

    Modifications to the ECR 4M ion source are described which led to the realization of the advanced source 'SUPERSHYPIE'. Collisions of Ar8+ ions with Cs(6s,6p) were studied by photon spectroscopy at low energy, where the process is dominated by single electron capture. Results obtained with the 'SUPERSHYPIE' source are presented. The source was also used in ion-molecule collisions (CO, H2) to study the spectra of recoil ions, and to study the Auger electron spectra in Ar17+ + He collisions. The excellent performance of 'SUPERSHYPIE' in producing highly charged ions, and its accurate, fine control and stability, are illustrated and underlined in comparison with the ECR 4M source

  15. HOW TO OBTAIN BOOKS FOR YOUR GROUP

    CERN Multimedia

    Head Librarian

    2000-01-01

    The wide variety of scientific and technical activity engaged in by people working at CERN means that the Library cannot always provide a deep on-site coverage in areas which are outside the core subjects of particle physics and accelerators. As many of you have already experienced, one way of solving this is to borrow books from other libraries. Our Inter-Library Loan (ILL) service currently obtains about 1000 books on loan per year for readers at CERN. However, there may be books which groups need on a more permanent basis, in which case a loan from either our own collection or via ILL is not the appropriate solution. Instead, groups might prefer to purchase such books from their own budgets. To facilitate this, the CERN Library has set up a procedure with the SPL Division, by which you can submit your purchase request to us and be charged via a TID when you receive the book. In addition, via our database interface WebLib, we can provide you with a private virtual catalogue of your group's collection, which...

  16. Evaluation of biodiesel obtained from cottonseed oil

    Energy Technology Data Exchange (ETDEWEB)

    Rashid, Umer [Department of Chemistry and Biochemistry, University of Agriculture, Faisalabad-38040 (Pakistan); Department of Industrial Chemistry, Government College University, Faisalabad-38000 (Pakistan); Anwar, Farooq [Department of Chemistry and Biochemistry, University of Agriculture, Faisalabad-38040 (Pakistan); Knothe, Gerhard [United States Department of Agriculture, Agricultural Research Service, National Center for Agricultural Utilization Research, Peoria, IL 61604 (United States)

    2009-09-15

    Esters from vegetable oils have attracted a great deal of interest as substitutes for petrodiesel to reduce dependence on imported petroleum and provide a fuel with more benign environmental properties. In this work biodiesel was prepared from cottonseed oil by transesterification with methanol, using sodium hydroxide, potassium hydroxide, sodium methoxide and potassium methoxide as catalysts. A series of experiments were conducted in order to evaluate the effects of reaction variables such as methanol/oil molar ratio (3:1-15:1), catalyst concentration (0.25-1.50%), temperature (25-65 °C), and stirring intensity (180-600 rpm) to achieve the maximum yield and quality. The optimized variables of 6:1 methanol/oil molar ratio (mol/mol), 0.75% sodium methoxide concentration (wt.%), 65 °C reaction temperature, 600 rpm agitation speed and 90 min reaction time offered the maximum methyl ester yield (96.9%). The obtained fatty acid methyl esters (FAME) were analyzed by gas chromatography (GC) and 1H NMR spectroscopy. The fuel properties of cottonseed oil methyl esters (COME), cetane number, kinematic viscosity, oxidative stability, lubricity, cloud point, pour point, cold filter plugging point, flash point, ash content, sulfur content, acid value, copper strip corrosion value, density, higher heating value, methanol content, free and bound glycerol were determined and are discussed in the light of biodiesel standards such as ASTM D6751 and EN 14214. (author)

  17. Shielding design to obtain compact marine reactor

    International Nuclear Information System (INIS)

    Yamaji, Akio; Sako, Kiyoshi

    1994-01-01

    The marine reactors installed in previously constructed nuclear ships require a secondary shield installed outside the containment vessel. Most of the weight and volume of the reactor plants are occupied by this secondary shield. An advanced marine reactor called MRX (Marine Reactor X) has been designed to obtain a more compact and lightweight marine reactor with enhanced safety. The MRX is a new type of marine reactor: an integral PWR (the steam generator is installed in the pressure vessel) adopting a water-filled containment vessel and a new shielding design method that requires no secondary shield. As a result, the MRX is considerably lighter in weight and more compact in size compared with the reactors equipped in previously constructed nuclear ships. For instance, the plant weight and the volume of the containment vessel of the MRX are about 50% and 70% of those of the Nuclear Ship MUTSU, even though the power of the MRX is 2.8 times as large as that of the MUTSU reactor. The shielding design calculation was made using the ANISN, DOT3.5, QAD-CGGP2 and ORIGEN codes. The computational accuracy was confirmed by experimental analyses. (author)

  18. Obtaining of the antioxidants by supercritical fluid extraction

    Directory of Open Access Journals (Sweden)

    Babović Nada V.

    2011-01-01

    Full Text Available One of the important trends in the food industry today is the demand for natural antioxidants from plant material. Synthetic antioxidants such as butylated hydroxytoluene (BHT) and butylated hydroxyanisole (BHA) are now being replaced by natural antioxidants because of their possible toxicity and because they may act as promoters of carcinogens. The natural antioxidants may show antioxidant activity equivalent to or higher than that of the endogenous or synthetic antioxidants. Thus, great effort is being devoted to the search for alternative and cheap sources of natural antioxidants, as well as to the development of efficient and selective extraction techniques. Supercritical fluid extraction (SFE) with carbon dioxide is considered to be the most suitable method for producing natural antioxidants for use in the food industry. The supercritical extract does not contain residual organic solvents as in conventional extraction processes, which makes these products suitable for use in the food, cosmetic and pharmaceutical industries. The recovery of antioxidants from plant sources involves many problematic aspects: choice of an adequate source (in terms of availability, cost, and differences in phenolic content with variety and season); selection of the optimal recovery procedure (in terms of yield, simplicity, industrial application, cost); chemical analysis of the extracts (for optimization purposes a fast colorimetric method is preferable to a chromatographic one); and evaluation of the antioxidant power (preferably by different assay methods). The paper presents information about different operational methods for SFE of bioactive compounds from natural sources. It also includes various reports on the antioxidant activity of supercritical extracts from Lamiaceae herbs, in comparison with the activity of synthetic antioxidants and extracts from Lamiaceae herbs obtained by conventional methods.

  19. Optimal Design for Placements of Tsunami Observing Systems to Accurately Characterize the Inducing Earthquake

    Science.gov (United States)

    Mulia, Iyan E.; Gusman, Aditya Riadi; Satake, Kenji

    2017-12-01

    Recently, there are numerous tsunami observation networks deployed in several major tsunamigenic regions. However, guidance on where to optimally place the measurement devices is limited. This study presents a methodological approach to select strategic observation locations for the purpose of tsunami source characterizations, particularly in terms of the fault slip distribution. Initially, we identify favorable locations and determine the initial number of observations. These locations are selected based on extrema of empirical orthogonal function (EOF) spatial modes. To further improve the accuracy, we apply an optimization algorithm called a mesh adaptive direct search to remove redundant measurement locations from the EOF-generated points. We test the proposed approach using multiple hypothetical tsunami sources around the Nankai Trough, Japan. The results suggest that the optimized observation points can produce more accurate fault slip estimates with considerably less number of observations compared to the existing tsunami observation networks.
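    A minimal sketch of the first stage described (keeping the gauges located at the extrema of the leading EOF spatial modes as candidate observation points) is given below; the scenario matrix is random placeholder data and the subsequent mesh-adaptive-direct-search pruning is not reproduced.

```python
# Sketch of the EOF-based first stage: build a matrix of simulated tsunami
# responses (scenarios x candidate gauge locations), compute the EOF spatial
# modes via SVD, and keep the gauges at the extrema of the leading modes as
# candidate observation points. The scenario matrix here is random placeholder
# data; the optimization stage (mesh adaptive direct search) is not shown.
import numpy as np

rng = np.random.default_rng(42)
n_scenarios, n_gauges = 60, 200
response = rng.normal(size=(n_scenarios, n_gauges))   # e.g. max amplitude per gauge

anomaly = response - response.mean(axis=0)            # remove the scenario mean
_, _, vt = np.linalg.svd(anomaly, full_matrices=False)
spatial_modes = vt                                     # rows are EOF spatial modes

n_modes = 5
candidates = set()
for mode in spatial_modes[:n_modes]:
    candidates.add(int(np.argmax(mode)))               # positive extremum
    candidates.add(int(np.argmin(mode)))               # negative extremum

print("candidate gauge indices:", sorted(candidates))
```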

  20. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
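    The reported decision rule (assign the query to the top hit's genus when the percent identity exceeds 95, to its family when it is at least 91, and leave it unassigned otherwise) can be sketched directly; the function name and the example hits below are made up for illustration.

```python
# Sketch of the threshold rule reported in the record: assign the query to the
# top BLAST hit's genus when percent identity > 95, to its family when >= 91,
# and leave it unassigned otherwise. The example hit values are illustrative.
def assign_higher_taxon(top_hit_pident, top_hit_genus, top_hit_family):
    if top_hit_pident > 95.0:
        return ("genus", top_hit_genus)
    if top_hit_pident >= 91.0:
        return ("family", top_hit_family)
    return ("unassigned", None)

print(assign_higher_taxon(96.4, "Araneus", "Araneidae"))   # ('genus', 'Araneus')
print(assign_higher_taxon(92.0, "Araneus", "Araneidae"))   # ('family', 'Araneidae')
print(assign_higher_taxon(85.0, "Araneus", "Araneidae"))   # ('unassigned', None)
```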

  1. Dosimetry of industrial sources

    International Nuclear Information System (INIS)

    Vega C, H.R.; Rodriguez J, R.; Manzanares A, E.; Hernandez V, R.; Ramirez G, J.; Rivera M, T.

    2007-01-01

    Gamma rays are produced during the disintegration of atomic nuclei; their high energy allows them to cross thick materials. The capacity to attenuate a photon beam makes it possible to determine, in line, the density of materials of industrial interest such as mining products. By means of two active dosemeters and a group of TLDs (passive dosimetry), the dose rates of two sources of Cs-137 used for determining in line the density of mining materials were measured. With the dosemeters, the dose levels at diverse points inside the pit that houses the sources were determined, and the isodose curves were obtained by means of calculations. In the calculation phase it was assumed that both sources were point sources, and the isodose curves were calculated for two situations: bare sources and sources in their Pb packings. The dosimetry was carried out around the two Cs-137 sources. The measured values allowed the development of a calculation procedure to obtain the isodose curves in the pit where the sources are installed. (Author)
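    A minimal sketch of the point-source approximation used in the calculation phase is given below: the unshielded dose rate from each source falls off as the inverse square of distance, and an isodose radius follows by inverting that relation. The dose-rate constant and activities are placeholders; tabulated reference data for Cs-137 should be used in practice.

```python
# Sketch of the point-source approximation used for the isodose calculation:
# unshielded dose rate H(d) = Gamma * A / d**2 for each source, summed over
# sources, and the radius of an isodose level follows from inverting that
# relation for a single source. GAMMA_CS137 and the activities are
# placeholder values; use tabulated dose-rate-constant data in practice.
import numpy as np

GAMMA_CS137 = 0.08   # [uSv*m^2 / (MBq*h)] -- placeholder, check reference data
activities = np.array([500.0, 500.0])            # MBq, one entry per source
positions = np.array([[0.0, 0.0], [1.5, 0.0]])   # source positions [m]

def dose_rate(point, activities, positions, gamma=GAMMA_CS137):
    """Total unshielded dose rate [uSv/h] at `point` from all point sources."""
    d2 = np.sum((positions - np.asarray(point)) ** 2, axis=1)
    return np.sum(gamma * activities / d2)

def isodose_radius(level, activity, gamma=GAMMA_CS137):
    """Distance [m] from a single bare source at which the dose rate equals `level`."""
    return np.sqrt(gamma * activity / level)

print(f"dose rate at (0.75, 1.0) m: {dose_rate([0.75, 1.0], activities, positions):.1f} uSv/h")
print(f"25 uSv/h isodose radius for one source: {isodose_radius(25.0, 500.0):.2f} m")
```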

  2. Identification of southern radio sources

    International Nuclear Information System (INIS)

    Savage, A.; Bolton, J.G.; Wright, A.E.

    1976-01-01

    Identifications are suggested for 36 radio sources from the southern zones of the Parkes 2700 MHz survey, 28 with galaxies, six with confirmed and two with suggested quasi-stellar objects. The identifications were made from the ESO quick blue survey plates, the SRC IIIa-J deep survey plates and the Palomar sky survey prints. Accurate optical positions have also been measured for nine of the objects and for five previously suggested identifications. (author)

  3. More accurate fitting of 125I and 103Pd radial dose functions

    International Nuclear Information System (INIS)

    Taylor, R. E. P.; Rogers, D. W. O.

    2008-01-01

    In this study an improved functional form for fitting the radial dose functions, g(r), of 125I and 103Pd brachytherapy seeds is presented. The new function is capable of accurately fitting radial dose functions over ranges as large as 0.05 cm ≤ r ≤ 10 cm for 125I seeds and 0.10 cm ≤ r ≤ 10 cm for 103Pd seeds. The average discrepancies between fit and calculated data are less than 0.5% over the full range of fit and maximum discrepancies are 2% or less. The fitting function is also capable of accounting for the sharp increase in g(r) (upturn) seen for some sources at small r. Values of g(r) were calculated for 18 125I seeds and 9 103Pd seeds using the EGSnrc Monte Carlo user-code BrachyDose. Fitting coefficients of the new function are tabulated for all 27 seeds. Extrapolation characteristics of the function are also investigated. The new functional form is an improvement over currently used fitting functions, with its main strength being the ability to accurately fit the rapidly varying radial dose function at small distances. The new function is an excellent candidate for fitting the radial dose function of all 103Pd and 125I brachytherapy seeds and will increase the accuracy of dose distributions calculated around brachytherapy seeds using the TG-43 protocol over a wider range of data. More accurate values of g(r) for r < 0.5 cm may be particularly important in the treatment of ocular melanoma

  4. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions.

    Science.gov (United States)

    Kawai, Nobuyuki; He, Hongshen

    2016-01-01

    Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.

  5. Breaking Snake Camouflage: Humans Detect Snakes More Accurately than Other Animals under Less Discernible Visual Conditions.

    Directory of Open Access Journals (Sweden)

    Nobuyuki Kawai

    Full Text Available Humans and non-human primates are extremely sensitive to snakes as exemplified by their ability to detect pictures of snakes more quickly than those of other animals. These findings are consistent with the Snake Detection Theory, which hypothesizes that as predators, snakes were a major source of evolutionary selection that favored expansion of the visual system of primates for rapid snake detection. Many snakes use camouflage to conceal themselves from both prey and their own predators, making it very challenging to detect them. If snakes have acted as a selective pressure on primate visual systems, they should be more easily detected than other animals under difficult visual conditions. Here we tested whether humans discerned images of snakes more accurately than those of non-threatening animals (e.g., birds, cats, or fish) under conditions of less perceptual information by presenting a series of degraded images with the Random Image Structure Evolution technique (interpolation of random noise). We find that participants recognize mosaic images of snakes, which were regarded as functionally equivalent to camouflage, more accurately than those of other animals under dissolved conditions. The present study supports the Snake Detection Theory by showing that humans have a visual system that accurately recognizes snakes under less discernible visual conditions.

  6. Modulation of Current Source Inverter

    Directory of Open Access Journals (Sweden)

    Golam Reza Arab Markadeh

    2011-04-01

    Full Text Available Direct torque control with a Current Source Inverter (CSI) instead of a voltage source inverter is appropriate because the torque of the induction motor is determined by the machine current and air-gap flux. In addition, Space-Vector Modulation (SVM) is a more suitable method for CSI because of low-order harmonic reduction, lower switching frequency and easier implementation. This paper introduces the SVM method for CSI and uses the proposed inverter for vector control of an induction motor. The simulation results illustrate fast dynamic response and desirable torque and speed output. Fast and accurate response to changes of speed and load torque reference completely proves the prominence of this method.

  7. Orphan sources

    International Nuclear Information System (INIS)

    Pust, R.; Urbancik, L.

    2008-01-01

    The presentation describes how the stable detection systems (hereinafter referred to as SDS) have contributed to revealing uncontrolled sources of ionizing radiation on the territory of the State Office for Nuclear Safety (SONS) Brno Regional Centre (RC Brno). It also describes the emergencies which were handled by the workers from the Brno Regional Centre or in which they participated. The contribution is divided into the following chapters: A. SDS systems installed on the territory of SONS RC Brno; B. Selected unusual emergencies; C. Comments to individual emergencies; D. Aspects of SDS operation in terms of their users; E. Aspects of SDS operation and related activities in terms of radiation protection; F. Current state of orphan sources. (authors)

  8. Irradiation with protons in order to obtain new rice varieties

    International Nuclear Information System (INIS)

    Gonzalez, Maria C.; Cristo, Elizabeth; Fuentes, Jorge L.

    2001-01-01

    A Program of Genetic Improvement using Biotechnical and Nuclear Techniques was developed in the Laboratory of Genetics and Improvement of the National Institute of Agricultural Sciences in order to obtain new rice varieties with high yield potential under drought stress conditions. Different explant types were used, starting from seeds of the Cuban rice variety Amistad 82 irradiated with protons at doses of 10, 20, 30, 40, 50, 60, 70, 80, 90 and 100 Gy. The explants were cultivated in vitro in order to obtain callus and later regenerate plants. The plants selected in vitro were multiplied and several selection cycles were carried out under field conditions. A remarkable stimulation of plant regeneration was observed when using a dose of 20 Gy, and it was possible to select 4 promising lines that differ from the donor in cycle, plant architecture and drought tolerance. This result demonstrates the potential of this radiation source for obtaining new rice varieties.

  9. Synthesis and characterization of carbon fibers obtained through plasma techniques

    International Nuclear Information System (INIS)

    Valdivia B, M.

    2005-01-01

    The study of carbon, particularly carbon nanotechnology, is a recent field with important implications for the science of new materials. Its investigation is of great interest to industries producing ceramics, metallurgy, electronics, energy storage and biomedicine, among others. These diverse application fields are the reason why, at both national and international level, many works are focused on the production of carbon nanofibers. The Thermal Plasma Applications Laboratory (LAPT) of the National Institute of Nuclear Research (ININ) is carrying out work on carbon nanotechnology. The purpose of the present work is the synthesis and characterization of carbon nanofibers obtained by a high-frequency alternating current (AC) electric arc and by a non-transferred arc plasma gun, using hydrocarbons such as benzene, methane and acetylene as carbon sources and ferrocene, nickel, yttrium and cerium oxide as catalysts. For both techniques, a hydrocarbon-to-catalyst ratio that favors nanofiber production was sought. The product obtained from each experiment was analyzed by transmission electron microscopy (TEM), scanning electron microscopy (SEM) and X-ray diffraction (XRD); the resulting images and diffraction patterns were used to draw conclusions about the operating conditions and to characterize the carbon nanostructures formed in each test. (Author)

  10. Tritium sources

    International Nuclear Information System (INIS)

    Glodic, S.; Boreli, F.

    1993-01-01

    Tritium is the only radioactive isotope of hydrogen. It directly follows the metabolism of water and it can be bound into genetic material, so it is very important to control levels of contamination. In order to define the state of contamination it is necessary to establish 'zero level', i.e. actual global inventory. The importance of tritium contamination monitoring increases with the development of fusion power installations. Different sources of tritium are analyzed and summarized in this paper. (author)

  11. Radioactive source

    International Nuclear Information System (INIS)

    Drabkina, L.E.; Mazurek, V.; Myascedov, D.N.; Prokhorov, P.; Kachalov, V.A.; Ziv, D.M.

    1976-01-01

    A radioactive layer in a radioactive source is sealed by the application of a sealing layer on the radioactive layer. The sealing layer can consist of a film of oxide of titanium, tin, zirconium, aluminum, or chromium. Preferably, the sealing layer is pure titanium dioxide. The radioactive layer is embedded in a finish enamel which, in turn, is on a priming enamel which surrounds a substrate

  12. Reduced sensitivity RDX obtained from bachmann RDX

    Energy Technology Data Exchange (ETDEWEB)

    Spyckerelle, Christian; Eck, Genevieve [EURENCO France, Sorgues Plant 1928 route d' Avignon, BP 311, 84706 Sorgues Cedex (France); Sjoeberg, Per; Amneus, Anna-Maria [EURENCO Sweden, SE-69186 Karlskoga (Sweden)

    2008-02-15

    In recent years much interest has been generated in a quality of reduced sensitivity RDX (RS-RDX), like I-RDX®, which, when incorporated in cast-cure and even pressable plastic bonded explosives (PBX compositions), can confer reduced shock sensitivity as measured through the gap test. At the crystal level, a lot of work has been done to try to determine which property or properties may explain the behaviour of the corresponding cast PBX composition. But up to now, and despite an international inter-laboratory comparison (Round Robin) of seven lots of RDX from five different manufacturers conducted from 2003 to 2005, even though some techniques lead to interesting results, there is no dedicated specification to apply to RS-RDX. This quality (I-RDX®) has proved to retain its low sensitivity even after ageing, which does not seem to be the case for standard RDX produced by the Bachmann process (when re-crystallized under I-RDX conditions in order to obtain RS-RDX). It has been shown that the higher sensitivity of RDX produced by the Bachmann process, or the evolution of sensitivity after ageing of RS-RDX produced from Bachmann RDX, may be linked to the presence of octogen (HMX) during the crystallization process. In order to check this hypothesis, low-HMX-content RDX produced by the Bachmann process has been prepared and evaluated in a cast PBX composition (PBXN-109). Results of the characterization of this quality of RDX and its evaluation in the cast PBX composition, as well as its ageing behaviour, are presented and discussed; there are indications that removal of HMX from Bachmann RDX may lead to RS-RDX which retains its RS character even after ageing. (Abstract Copyright [2008], Wiley Periodicals, Inc.)

  13. Atom interferometry experiments with lithium. Accurate measurement of the electric polarizability; Experiences d'interferometrie atomique avec le lithium. Mesure de precision de la polarisabilite electrique

    Energy Technology Data Exchange (ETDEWEB)

    Miffre, A

    2005-06-15

    Atom interferometers are very sensitive tools for making precise measurements of physical quantities. This study presents a measurement of the static electric polarizability of lithium by atom interferometry. Our result, α = (24.33 ± 0.16) × 10^-30 m^3, improves on the most accurate previous measurements of this quantity by a factor of 3. This work describes the tuning and the operation of a Mach-Zehnder atom interferometer in detail. The two interfering arms are separated by the elastic diffraction of the atomic wave by a laser standing wave, almost resonant with the first resonance transition of the lithium atom. A set of experimental techniques, often complicated to implement, is necessary to build the experimental set-up. After a detailed study of the atom source (a supersonic beam of lithium seeded in argon), we present our experimental atom signals, which exhibit a very high fringe visibility, up to 84.5% for first-order diffraction. A wide variety of signals has been observed by diffraction of the bosonic isotope at higher diffraction orders and by diffraction of the less abundant fermionic isotope. The quality of these signals is then used to make very accurate phase measurements. A first experiment investigates how the atom interferometer signals are modified by a magnetic field gradient. An absolute measurement of the lithium atom electric polarizability is then achieved by applying a static electric field on one of the two interfering arms, separated by only 90 micrometers. The construction of such a capacitor, its alignment in the experimental set-up and its operation are fully detailed. We obtain a very accurate phase measurement of the induced Lo Surdo-Stark phase shift (0.07% precision). For this first measurement, the final uncertainty on the electric polarizability of lithium is only 0.66%, and is dominated by the uncertainty on the atom beam mean velocity, so that a further reduction of the uncertainty can be expected. (author)

  15. Where Do Chinese Adolescents Obtain Knowledge of Sex? Implications for Sex Education in China

    Science.gov (United States)

    Zhang, Liying; Li, Xiaoming; Shah, Iqbal H.

    2007-01-01

    Purpose: Sex education in China has been promoted for many years, but limited data are available regarding the sources from which adolescents receive sex-related knowledge. The present study was designed to examine the sources from which Chinese adolescents obtain their information on puberty, sexuality and STI/HIV/AIDS, and whether there are any…

  16. Muon sources

    International Nuclear Information System (INIS)

    Parsa, Z.

    2001-01-01

    A full high energy muon collider may take considerable time to realize. However, intermediate steps in its direction are possible and could help facilitate the process. Employing an intense muon source to carry out forefront low energy research, such as the search for muon-number non-conservation, represents one interesting possibility. For example, the MECO proposal at BNL aims for 2 × 10^-17 sensitivity in their search for coherent muon-electron conversion in the field of a nucleus. To reach that goal requires the production, capture and stopping of muons at an unprecedented 10^11 μ/sec. If successful, such an effort would significantly advance the state of muon technology. More ambitious ideas for utilizing high intensity muon sources are also being explored. Building a muon storage ring for the purpose of providing intense high energy neutrino beams is particularly exciting. We present an overview of muon sources and an example of a muon storage ring based Neutrino Factory at BNL with various detector location possibilities

  17. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    Science.gov (United States)

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The inclusion of a magnetometer system is expected to be more sensitive to brain stem sources compared with a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted in a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  18. How Accurately Can We Calculate Neutrons Slowing Down In Water ?

    International Nuclear Information System (INIS)

    Cullen, D E; Blomquist, R; Greene, M; Lent, E; MacFarlane, R; McKinley, S; Plechaty, E; Sublet, J C

    2006-01-01

    We have compared the results produced by a variety of currently available Monte Carlo neutron transport codes for the relatively simple problem of a fast source of neutrons slowing down and thermalizing in water. Initial comparisons showed rather large differences in the calculated flux; up to 80% differences. By working together we iterated to improve the results by: (1) ensuring that all codes were using the same data, (2) improving the models used by the codes, and (3) correcting errors in the codes; no code is perfect. Even after a number of iterations we still found differences, demonstrating that our Monte Carlo and supporting codes are far from perfect; in particular, we found that the often overlooked nuclear data processing codes can be the weakest link in our systems of codes. The results presented here represent today's state-of-the-art, in the sense that all of the Monte Carlo codes are modern, widely available and used codes. They all use the most up-to-date nuclear data, and the results are very recent, weeks or at most a few months old; these are the results that current users of these codes should expect to obtain from them. As such, the accuracy and limitations of the codes presented here should serve as guidelines to code users in interpreting their results for similar problems. We avoid crystal ball gazing, in the sense that we limit the scope of this report to what is available to code users today, and we avoid predicting future improvements that may or may not actually come to pass. One exception that we make is in presenting results for an improved thermal scattering model currently being tested using advanced versions of NJOY and MCNP that are not yet available to users, but are planned for release in the not too distant future. The other exception is to show comparisons between experimentally measured water cross sections and preliminary ENDF/B-VII thermal scattering law, S(α,β), data; although these data are strictly preliminary

  19. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function. However, it is difficult to decide on an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNN (Convolutional Neural Networks), which is one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show the CNN model can reduce CD prediction errors by 70% compared with the conventional model.
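
    The paper's network architecture and training data are not given in this abstract; purely as an illustration of the general idea, the sketch below trains a small convolutional regressor that maps an aerial-image patch to a single resist metric (e.g. a CD correction). The architecture, patch size and random data are assumptions for demonstration only.

```python
# Minimal sketch of a CNN-based compact resist model: a small convolutional
# regressor mapping an aerial-image patch to a predicted resist metric.
# Architecture, patch size and the random training data are illustrative
# assumptions; they are not the model described in the paper.
import torch
import torch.nn as nn

class ResistCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * 4 * 4, 64),
                                  nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.head(self.features(x))

model = ResistCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Dummy batch: 8 aerial-image patches of 32x32 pixels and their target values.
patches = torch.randn(8, 1, 32, 32)
targets = torch.randn(8, 1)
for _ in range(5):                      # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(patches), targets)
    loss.backward()
    optimizer.step()
```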

  20. Fast and accurate edge orientation processing during object manipulation

    Science.gov (United States)

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  1. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  2. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  3. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) by combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have received considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  4. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    Full Text Available FingerCode [1] uses circular tessellation of filtered fingerprint images centered at the reference point, which results in a circular region of interest (ROI)...

  5. D-BRAIN : Anatomically accurate simulated diffusion MRI brain data

    OpenAIRE

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume...

  6. Accurate Online Full Charge Capacity Modeling of Smartphone Batteries

    OpenAIRE

    Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu

    2016-01-01

    Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...

  7. Multigrid time-accurate integration of Navier-Stokes equations

    Science.gov (United States)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.
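
    As a rough illustration of the dual time stepping idea summarized above, the sketch below advances a toy 1D advection problem with an implicit physical-time discretization whose unsteady residual is driven to zero by a four-stage Runge-Kutta pseudo-time iteration. The model problem, stage coefficients and step sizes are assumptions, and residual smoothing and multigrid are omitted for brevity.

```python
# Minimal sketch of dual time stepping: a 4-stage Runge-Kutta scheme with
# pseudo-time steps drives the residual of an implicit physical-time
# discretization to zero.  The model problem (1D linear advection, first-order
# upwind) and the stage coefficients are illustrative assumptions; residual
# smoothing and multigrid from the paper are omitted.
import numpy as np

nx, a, dx, dt = 100, 1.0, 1.0 / 100, 0.05          # grid, wave speed, step sizes
alphas = (0.25, 1.0 / 3.0, 0.5, 1.0)               # common 4-stage coefficients
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-100.0 * (x - 0.5) ** 2)                # initial Gaussian pulse

def spatial_residual(u):
    """First-order upwind residual R(u) ~ a * du/dx (periodic domain)."""
    return a * (u - np.roll(u, 1)) / dx

for n in range(20):                                # physical time steps
    u_old = u.copy()
    dtau = 0.4 * dx / a                            # pseudo-time step (uniform here)
    for sweep in range(200):                       # pseudo-time iterations
        u_stage = u.copy()
        for alpha in alphas:                       # 4-stage Runge-Kutta relaxation
            unsteady_res = (u_stage - u_old) / dt + spatial_residual(u_stage)
            u_stage = u - alpha * dtau * unsteady_res
        u = u_stage
    final_res = np.abs((u - u_old) / dt + spatial_residual(u)).max()
    print(f"step {n}: max unsteady residual = {final_res:.2e}")
```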

  8. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. To design a memory architecture, the first steps commonly involve using trace-driven simulation. However, expanding the design space increases the evaluation time. A fast simulation can be achieved by reducing the trace size, but this reduces the simulation accuracy. Our approach can reduce the simulation time while maintaining the accuracy of the simulation results. In order to evaluate the validity of the proposed techniq...

  9. Total reference air kerma can accurately predict isodose surface volumes in cervix cancer brachytherapy. A multicenter study

    DEFF Research Database (Denmark)

    Nkiwane, Karen S; Andersen, Else; Champoudry, Jerome

    2017-01-01

    PURPOSE: To demonstrate that V60 Gy, V75 Gy, and V85 Gy isodose surface volumes can be accurately estimated from total reference air kerma (TRAK) in cervix cancer MRI-guided brachytherapy (BT). METHODS AND MATERIALS: 60 Gy, 75 Gy, and 85 Gy isodose surface volume levels were obtained from treatm...

  10. Obtaining Samples Representative of Contaminant Distribution in an Aquifer

    International Nuclear Information System (INIS)

    Schalla, Ronald; Spane, Frank A.; Narbutovskih, Susan M.; Conley, Scott F.; Webber, William D.

    2002-01-01

    Historically, groundwater samples collected from monitoring wells have been assumed to provide average indications of contaminant concentrations within the aquifer over the well-screen interval. In-well flow circulation, heterogeneity in the surrounding aquifer, and the sampling method utilized, however, can significantly impact the representativeness of samples as contaminant indicators of actual conditions within the surrounding aquifer. This paper identifies the need and approaches essential for providing cost-effective and technically meaningful groundwater-monitoring results. Proper design of the well screen interval is critical. An accurate understanding of ambient (non-pumping) flow conditions within the monitoring well is essential for determining the contaminant distribution within the aquifer. The ambient in-well flow velocity, flow direction and volumetric flux rate are key to this understanding. Not only do the ambient flow conditions need to be identified for preferential flow zones, but also the probable changes that will be imposed under dynamic conditions that occur during groundwater sampling. Once the in-well flow conditions are understood, effective sampling can be conducted to obtain representative samples for specific depth zones or zones of interest. The question of sample representativeness has become an important issue as waste minimization techniques such as low flow purging and sampling are implemented to combat the increasing cost of well purging and sampling at many hazardous waste sites. Several technical approaches (e.g., well tracer techniques and flowmeter surveys) can be used to determine in-well flow conditions, and these are discussed with respect to both their usefulness and limitations. Proper fluid extraction methods using minimal (low) volume and no-purge sampling methods that are used to obtain representative samples of aquifer conditions are presented.

  11. Accuracy of stone casts obtained by different impression materials

    Directory of Open Access Journals (Sweden)

    Adriana Cláudia Lapria Faria

    2008-12-01

    Full Text Available Several impression materials are available in the Brazilian marketplace to be used in oral rehabilitation. The aim of this study was to compare the accuracy of different impression materials used for fixed partial dentures following the manufacturers' instructions. A master model representing a partially edentulous mandibular right hemi-arch segment whose teeth were prepared to receive full crowns was used. Custom trays were prepared with auto-polymerizing acrylic resin and impressions were performed with a dental surveyor, standardizing the path of insertion and removal of the tray. Alginate and elastomeric materials were used and stone casts were obtained after the impressions. For the silicones, impression techniques were also compared. To determine the impression materials' accuracy, digital photographs of the master model and of the stone casts were taken and the discrepancies between them were measured. The data were subjected to analysis of variance and Duncan's complementary test. Polyether and addition silicone following the single-phase technique were statistically different from alginate, condensation silicone and addition silicone following the double-mix technique (p .05 to alginate and addition silicone following the double-mix technique, but different from polysulfide. The results led to the conclusion that different impression materials and techniques influenced the stone casts' accuracy in a way that polyether, polysulfide and addition silicone following the single-phase technique were more accurate than the other materials.

  12. Stutter seismic source

    Energy Technology Data Exchange (ETDEWEB)

    Gumma, W. H.; Hughes, D. R.; Zimmerman, N. S.

    1980-08-12

    An improved seismic prospecting system comprising the use of a closely spaced sequence of source initiations at essentially the same location to provide shorter objective-level wavelets than are obtainable with a single pulse. In a preferred form, three dynamite charges are detonated in the same or three closely spaced shot holes to generate a downward traveling wavelet having increased high frequency content and reduced content at a peak frequency determined by initial testing.

  13. PS proton source

    CERN Multimedia

    1959-01-01

    The first proton source used at CERN's Proton Synchrotron (PS), which started operation in 1959. This is CERN's oldest accelerator still functioning today (2018). It is part of the accelerator chain that supplies proton beams to the Large Hadron Collider. The source is of the Thonemann type. In order to extract and accelerate the protons to high energy, a high frequency electrical field (140 MHz) is used. The field is transmitted by a coil around a discharge tube in order to keep the hydrogen gas in an ionised state. An electrical field pulse, on the order of 15 kV, is then applied via an impulse transformer between the anode and cathode of the discharge tube. The electrons and protons of the plasma formed in the ionised gas in the tube are then separated. Currents on the order of 200 mA for 100 microseconds have been obtained with this type of source.

  14. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  15. An accurate determination of the flux within a slab

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Lapenta, G.

    1993-01-01

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available

  16. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    Directory of Open Access Journals (Sweden)

    Daniel Benjamin

    2017-06-01

    Full Text Available There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.

  17. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Percentage of body fat is strongly associated with the risk of several chronic diseases but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique to measure body composition. It measures body fat accurately in controlled clinical conditions but its performance in the field is inconsistent. In large epidemiologic studies simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for different groups participating in the study.

  18. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates generally are larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to neglect of tunnelling, TST however fails below 400 K if the mass of the transferred atom changes between the isotopic reactions
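
    For orientation, the quantities mentioned in this abstract are related by standard textbook expressions (not specific to this paper): the thermal rate constant follows from the cumulative reaction probability N(E), and conventional TST replaces the exact result with a partition-function ratio.

```latex
% Standard textbook relations (not specific to this paper):
% rate constant from the cumulative reaction probability N(E),
% and the conventional transition-state-theory estimate.
k(T) = \frac{1}{2\pi\hbar\, Q_r(T)} \int_0^{\infty} N(E)\, e^{-E/k_B T}\, \mathrm{d}E ,
\qquad
k_{\mathrm{TST}}(T) = \frac{k_B T}{h}\, \frac{Q^{\ddagger}(T)}{Q_r(T)}\, e^{-E_0/k_B T}
```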

  19. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  20. Indexed variation graphs for efficient and accurate resistome profiling.

    Science.gov (United States)

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
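
    GROOT itself is written in Go and classifies reads against indexed variation-graph traversals; purely to illustrate the general indexing/query idea, the Python sketch below builds an LSH Forest over MinHash k-mer sketches of reference genes using the third-party datasketch library. The "gene" and "read" sequences, k-mer size and permutation count are made-up assumptions.

```python
# Illustrative only: index MinHash sketches of reference gene sequences in an
# LSH Forest and classify a read by approximate similarity search.  This is
# not GROOT's implementation; sequences and parameters are invented.
from datasketch import MinHash, MinHashLSHForest

K, NUM_PERM = 7, 128

def minhash_of(seq):
    m = MinHash(num_perm=NUM_PERM)
    for i in range(len(seq) - K + 1):          # shingle the sequence into k-mers
        m.update(seq[i:i + K].encode("utf8"))
    return m

reference_genes = {                            # hypothetical resistance genes
    "blaTEM-like": "ATGAGTATTCAACATTTCCGTGTCGCCCTTATTCCCTTTTTTGCGGCATTTTGCCTTCCTG",
    "tetA-like":   "ATGAAACCCAACAGACCCCTGATCGTAATTCTGAGCACTGTCGCGCTCGACGCTGTCGGCA",
}

forest = MinHashLSHForest(num_perm=NUM_PERM)
for name, seq in reference_genes.items():
    forest.add(name, minhash_of(seq))
forest.index()                                 # must be called before querying

read = "TCAACATTTCCGTGTCGCCCTTATTCCCTTTT"       # hypothetical sequencing read
candidates = forest.query(minhash_of(read), 1) # top-1 approximate match
print("read classified against:", candidates)
```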

  1. Hubble Source Catalog

    Science.gov (United States)

    Lubow, S.; Budavári, T.

    2013-10-01

    We have created an initial catalog of objects observed by the WFPC2 and ACS instruments on the Hubble Space Telescope (HST). The catalog is based on observations taken on more than 6000 visits (telescope pointings) of ACS/WFC and more than 25000 visits of WFPC2. The catalog is obtained by cross matching by position in the sky all Hubble Legacy Archive (HLA) Source Extractor source lists for these instruments. The source lists describe properties of source detections within a visit. The calculations are performed on a SQL Server database system. First we collect overlapping images into groups, e.g., Eta Car, and determine nearby (approximately matching) pairs of sources from different images within each group. We then apply a novel algorithm for improving the cross matching of pairs of sources by adjusting the astrometry of the images. Next, we combine pairwise matches into maximal sets of possible multi-source matches. We apply a greedy Bayesian method to split the maximal matches into more reliable matches. We test the accuracy of the matches by comparing the fluxes of the matched sources. The result is a set of information that ties together multiple observations of the same object. A byproduct of the catalog is greatly improved relative astrometry for many of the HST images. We also provide information on nondetections that can be used to determine dropouts. With the catalog, for the first time, one can carry out time domain, multi-wavelength studies across a large set of HST data. The catalog is publicly available. Much more can be done to expand the catalog capabilities.
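
    The catalog's pairwise matching and astrometric correction run inside a SQL Server database with a Bayesian splitting step; purely as an illustration of the basic positional cross-matching idea, the sketch below matches two made-up source lists with astropy's match_to_catalog_sky and an assumed 0.3 arcsecond tolerance.

```python
# Illustrative positional cross-match between two source lists, in the spirit
# of the pairwise matching step described above.  The coordinates and the
# 0.3" tolerance are assumptions for demonstration, not catalog values.
import numpy as np
from astropy import units as u
from astropy.coordinates import SkyCoord

# Hypothetical detections from two overlapping visits (RA, Dec in degrees)
visit_a = SkyCoord(ra=[10.6842, 10.6851, 10.6900] * u.deg,
                   dec=[41.2689, 41.2692, 41.2700] * u.deg)
visit_b = SkyCoord(ra=[10.68421, 10.68995, 10.70000] * u.deg,
                   dec=[41.26891, 41.27002, 41.28000] * u.deg)

# Nearest neighbour in visit B for every source in visit A
idx, sep2d, _ = visit_a.match_to_catalog_sky(visit_b)
tolerance = 0.3 * u.arcsec
for i, (j, sep) in enumerate(zip(idx, sep2d)):
    if sep < tolerance:
        print(f"A[{i}] matches B[{j}] (separation {sep.to(u.arcsec):.3f})")
    else:
        print(f"A[{i}] has no counterpart within {tolerance}")
```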

  2. Component Repair Times Obtained from MSPI Data

    International Nuclear Information System (INIS)

    Eide, Steven A.; Cadwallader, Lee

    2015-01-01

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train UA and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a "white" (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant's insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs. Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of

  3. Component Repair Times Obtained from MSPI Data

    Energy Technology Data Exchange (ETDEWEB)

    Eide, Steven A. [Curtiss-Wright/Scietech, Ketchum, ID (United States); Cadwallader, Lee [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-05-01

    Information concerning times to repair or restore equipment to service given a failure is valuable to probabilistic risk assessments (PRAs). Examples of such uses in modern PRAs include estimation of the probability of failing to restore a failed component within a specified time period (typically tied to recovering a mitigating system before core damage occurs at nuclear power plants) and the determination of mission times for support system initiating event (SSIE) fault tree models. Information on equipment repair or restoration times applicable to PRA modeling is limited and dated for U.S. commercial nuclear power plants. However, the Mitigating Systems Performance Index (MSPI) program covering all U.S. commercial nuclear power plants provides up-to-date information on restoration times for a limited set of component types. This paper describes the MSPI program data available and analyzes the data to obtain median and mean component restoration times as well as non-restoration cumulative probability curves. The MSPI program provides guidance for monitoring both planned and unplanned outages of trains of selected mitigating systems deemed important to safety. For systems included within the MSPI program, plants monitor both train UA and component unreliability (UR) against baseline values. If the combined system UA and UR increases sufficiently above established baseline results (converted to an estimated change in core damage frequency or CDF), a “white” (or worse) indicator is generated for that system. That in turn results in increased oversight by the US Nuclear Regulatory Commission (NRC) and can impact a plant’s insurance rating. Therefore, there is pressure to return MSPI program components to service as soon as possible after a failure occurs. Three sets of unplanned outages might be used to determine the component repair durations desired in this article: all unplanned outages for the train type that includes the component of interest, only
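
    As a minimal sketch of the summary statistics described in these two records (median and mean restoration times, and a non-restoration cumulative probability curve), the snippet below computes them from a list of outage durations. The duration values are invented for illustration and are not MSPI data.

```python
# Sketch of deriving summary repair statistics and a non-restoration curve
# (probability that restoration takes longer than t) from outage durations.
# The duration values below are invented, not MSPI data.
import numpy as np

durations_hr = np.array([1.5, 2.0, 3.5, 4.0, 6.0, 8.0, 12.0, 18.0, 30.0, 72.0])

print(f"median restoration time: {np.median(durations_hr):.1f} h")
print(f"mean restoration time:   {np.mean(durations_hr):.1f} h")

# Empirical non-restoration cumulative probability P(T > t)
for t in (4, 8, 24, 72):
    p_not_restored = np.mean(durations_hr > t)
    print(f"P(not restored within {t:3d} h) = {p_not_restored:.2f}")
```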

  4. Rapid identification of sequences for orphan enzymes to power accurate protein annotation.

    Directory of Open Access Journals (Sweden)

    Kevin R Ramkissoon

    Full Text Available The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the "back catalog" of enzymology--"orphan enzymes," those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme "back catalog" is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology's "back catalog" another powerful tool to drive accurate genome annotation.

  5. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement

    Directory of Open Access Journals (Sweden)

    Suzhi Xiao

    2016-04-01

    Full Text Available In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.

  6. A Flexible Fringe Projection Vision System with Extended Mathematical Model for Accurate Three-Dimensional Measurement.

    Science.gov (United States)

    Xiao, Suzhi; Tao, Wei; Zhao, Hui

    2016-04-28

    In order to acquire an accurate three-dimensional (3D) measurement, the traditional fringe projection technique applies complex and laborious procedures to compensate for the errors that exist in the vision system. However, the error sources in the vision system are very complex, such as lens distortion, lens defocus, and fringe pattern nonsinusoidality. Some errors cannot even be explained or rendered with clear expressions and are difficult to compensate directly as a result. In this paper, an approach is proposed that avoids the complex and laborious compensation procedure for error sources but still promises accurate 3D measurement. It is realized by the mathematical model extension technique. The parameters of the extended mathematical model for the 'phase to 3D coordinates transformation' are derived using the least-squares parameter estimation algorithm. In addition, a phase-coding method based on a frequency analysis is proposed for the absolute phase map retrieval to spatially isolated objects. The results demonstrate the validity and the accuracy of the proposed flexible fringe projection vision system on spatially continuous and discontinuous objects for 3D measurement.
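
    The least-squares parameter estimation mentioned in the two records above is illustrated below with a drastically simplified stand-in: a per-pixel cubic polynomial mapping unwrapped phase to height, calibrated from synthetic data. The model form, coefficients and data are assumptions and are not the paper's extended phase-to-3D model.

```python
# Simplified illustration of least-squares calibration of a 'phase to height'
# mapping for fringe projection.  The per-pixel cubic polynomial and the
# synthetic calibration data are assumptions used only to show the idea.
import numpy as np

rng = np.random.default_rng(0)
phase = rng.uniform(0.0, 20.0, size=200)               # unwrapped phase at one pixel
true_coeffs = np.array([0.05, 1.8, -0.02, 0.001])      # "unknown" ground-truth model
height = np.polynomial.polynomial.polyval(phase, true_coeffs) + rng.normal(0, 0.01, 200)

# Design matrix [1, phi, phi^2, phi^3] and least-squares parameter estimate
A = np.vander(phase, 4, increasing=True)
coeffs, *_ = np.linalg.lstsq(A, height, rcond=None)
print("estimated coefficients:", np.round(coeffs, 4))

# Use the calibrated model to convert a new phase measurement to height
new_phase = np.vander([7.5], 4, increasing=True)
print("height at phase 7.5:", new_phase @ coeffs)
```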

  7. Rapid Identification of Sequences for Orphan Enzymes to Power Accurate Protein Annotation

    Science.gov (United States)

    Ojha, Sunil; Watson, Douglas S.; Bomar, Martha G.; Galande, Amit K.; Shearer, Alexander G.

    2013-01-01

    The power of genome sequencing depends on the ability to understand what those genes and their protein products actually do. The automated methods used to assign functions to putative proteins in newly sequenced organisms are limited by the size of our library of proteins with both known function and sequence. Unfortunately this library grows slowly, lagging well behind the rapid increase in novel protein sequences produced by modern genome sequencing methods. One potential source for rapidly expanding this functional library is the “back catalog” of enzymology – “orphan enzymes,” those enzymes that have been characterized and yet lack any associated sequence. There are hundreds of orphan enzymes in the Enzyme Commission (EC) database alone. In this study, we demonstrate how this orphan enzyme “back catalog” is a fertile source for rapidly advancing the state of protein annotation. Starting from three orphan enzyme samples, we applied mass-spectrometry based analysis and computational methods (including sequence similarity networks, sequence and structural alignments, and operon context analysis) to rapidly identify the specific sequence for each orphan while avoiding the most time- and labor-intensive aspects of typical sequence identifications. We then used these three new sequences to more accurately predict the catalytic function of 385 previously uncharacterized or misannotated proteins. We expect that this kind of rapid sequence identification could be efficiently applied on a larger scale to make enzymology’s “back catalog” another powerful tool to drive accurate genome annotation. PMID:24386392

  8. BEYOND ELLIPSE(S): ACCURATELY MODELING THE ISOPHOTAL STRUCTURE OF GALAXIES WITH ISOFIT AND CMODEL

    International Nuclear Information System (INIS)

    Ciambur, B. C.

    2015-01-01

    This work introduces a new fitting formalism for isophotes that enables more accurate modeling of galaxies with non-elliptical shapes, such as disk galaxies viewed edge-on or galaxies with X-shaped/peanut bulges. Within this scheme, the angular parameter that defines quasi-elliptical isophotes is transformed from the commonly used, but inappropriate, polar coordinate to the “eccentric anomaly.” This provides a superior description of deviations from ellipticity, better capturing the true isophotal shape. Furthermore, this makes it possible to accurately recover both the surface brightness profile, using the correct azimuthally averaged isophote, and the two-dimensional model of any galaxy: the hitherto ubiquitous, but artificial, cross-like features in residual images are completely removed. The formalism has been implemented into the Image Reduction and Analysis Facility tasks Ellipse and Bmodel to create the new tasks “Isofit” and “Cmodel.” The new tools are demonstrated here with application to five galaxies, chosen as representative case studies for several areas where this technique makes it possible to gain new scientific insight: properly quantifying boxy/disky isophotes via the fourth harmonic order in edge-on galaxies, quantifying X-shaped/peanut bulges, using higher-order Fourier moments for modeling bars in disks, and describing complex isophote shapes. Higher-order (n > 4) harmonics now become meaningful and may correlate with structural properties, as boxiness/diskiness is known to do. This work also illustrates how the accurate construction, and subtraction, of a model from a galaxy image facilitates the identification and recovery of overlapping sources such as globular clusters and the optical counterparts of X-ray sources.
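
    The following schematic sketch illustrates the idea of parametrizing a quasi-elliptical isophote by the eccentric anomaly and expressing radial deviations as higher-order Fourier harmonics. The harmonic amplitudes are invented for illustration, and the exact perturbation scheme used by Isofit is not reproduced here.

    # Minimal sketch: a quasi-elliptical isophote parametrized by the eccentric
    # anomaly E, with higher-order harmonics added as radial perturbations.
    # Amplitudes below are invented; n = 4 gives boxy/disky shapes, n = 6 can
    # capture X-shaped/peanut structure.
    import numpy as np

    a, b = 10.0, 4.0                         # semi-major and semi-minor axes
    E = np.linspace(0.0, 2.0 * np.pi, 361)   # eccentric anomaly, not the polar angle

    x_ell = a * np.cos(E)                    # pure ellipse traced in E
    y_ell = b * np.sin(E)
    r_ell = np.hypot(x_ell, y_ell)
    theta = np.arctan2(y_ell, x_ell)         # position angle of each isophote point

    harmonics = {4: 0.3, 6: 0.15}            # assumed harmonic amplitudes
    dr = sum(amp * np.cos(n * E) for n, amp in harmonics.items())

    x = (r_ell + dr) * np.cos(theta)         # perturbed isophote coordinates
    y = (r_ell + dr) * np.sin(theta)
    print(np.round(x[:3], 3), np.round(y[:3], 3))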

  9. A self-interaction-free local hybrid functional: Accurate binding energies vis-à-vis accurate ionization potentials from Kohn-Sham eigenvalues

    International Nuclear Information System (INIS)

    Schmidt, Tobias; Kümmel, Stephan; Kraisler, Eli; Makmal, Adi; Kronik, Leeor

    2014-01-01

    We present and test a new approximation for the exchange-correlation (xc) energy of Kohn-Sham density functional theory. It combines exact exchange with a compatible non-local correlation functional. The functional is by construction free of one-electron self-interaction, respects constraints derived from uniform coordinate scaling, and has the correct asymptotic behavior of the xc energy density. It contains one parameter that is not determined ab initio. We investigate whether it is possible to construct a functional that yields accurate binding energies and affords other advantages, specifically Kohn-Sham eigenvalues that reliably reflect ionization potentials. Tests for a set of atoms and small molecules show that within our local-hybrid form accurate binding energies can be achieved by proper optimization of the free parameter in our functional, along with an improvement in dissociation energy curves and in Kohn-Sham eigenvalues. However, the correspondence of the latter to experimental ionization potentials is not yet satisfactory, and if we choose to optimize their prediction, a rather different value of the functional's parameter is obtained. We put this finding in a larger context by discussing similar observations for other functionals and possible directions for further functional development that our findings suggest.

  10. Accurate disintegration-rate measurement of 55Fe by liquid scintillation counting

    International Nuclear Information System (INIS)

    Steyn, J.; Oberholzer, P.; Botha, S.M.

    1979-01-01

    A method involving liquid scintillation counting is described for the accurate measurement of the disintegration rate of 55Fe. The method is based on the use of calculated efficiency functions together with either of the nuclides 54Mn and 51Cr as internal standards for the measurement of counting efficiencies by coincidence counting. The method was used by the NAC during a recent international intercomparison of radioactivity measurements, and a summary of the results obtained by nine participating laboratories is presented. A spread in results of several percent is evident.

  11. Accurate e⁻-He cross sections below 19 eV

    Energy Technology Data Exchange (ETDEWEB)

    Nesbet, R K [International Business Machines Corp., San Jose, CA (USA). Research Lab.]

    1979-04-14

    Variational calculations of e⁻-He s- and p-wave phase shifts, together with the Born formula for higher partial waves, are used to give the scattering amplitude to within one percent estimated accuracy for energies below 19 eV. Coefficients are given for cubic spline fits to auxiliary functions that provide smooth interpolation of the estimated accurate phase shifts. The data given here make it possible to obtain the differential scattering cross section over the energy range considered from simple formulae.
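
    The sketch below shows, using assumed placeholder phase shifts rather than the tabulated coefficients of the paper, how cubic-spline interpolation of s- and p-wave phase shifts can be combined with a partial-wave sum to evaluate the differential cross section.

    # Minimal sketch: spline interpolation of tabulated phase shifts, then a
    # partial-wave sum for the differential cross section (atomic units).
    # Phase-shift values below are placeholders, not the data from the paper.
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.special import eval_legendre

    energies_eV = np.array([1.0, 5.0, 10.0, 15.0, 19.0])
    delta0_tab = np.array([2.95, 2.60, 2.30, 2.10, 1.95])   # s-wave (rad), assumed
    delta1_tab = np.array([0.03, 0.12, 0.22, 0.30, 0.35])   # p-wave (rad), assumed

    delta0 = CubicSpline(energies_eV, delta0_tab)
    delta1 = CubicSpline(energies_eV, delta1_tab)

    def dcs(E_eV, theta):
        """Differential cross section from l = 0, 1 partial waves only."""
        k = np.sqrt(2.0 * E_eV / 27.2114)        # electron momentum in a.u.
        f = 0j
        for l, delta in ((0, delta0(E_eV)), (1, delta1(E_eV))):
            f += (2 * l + 1) * np.exp(1j * delta) * np.sin(delta) \
                 * eval_legendre(l, np.cos(theta)) / k
        return abs(f) ** 2

    print(dcs(10.0, np.pi / 3))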

  12. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, we provide all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n=100.

  13. Toward an accurate description of solid-state properties of superheavy elements

    Directory of Open Access Journals (Sweden)

    Schwerdtfeger Peter

    2016-01-01

    Full Text Available In the last two decades, cold and hot fusion experiments have led to the production of new elements for the Periodic Table up to nuclear charge 118. Recent developments in relativistic quantum theory have made it possible to obtain accurate electronic properties for the trans-actinide elements with the aim of predicting their potential chemical and physical behaviour. Here we report first results of solid-state calculations for Og (element 118) to support future atom-at-a-time gas-phase adsorption experiments on surfaces such as gold or quartz.

  14. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, we provide all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n=100.

  15. Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging

    Science.gov (United States)

    Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing

    2015-01-01

    Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5–15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 ± 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic
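
    A minimal sketch of the analysis pipeline described in this record, principal components retaining 85% of the variance followed by ten-fold cross-validated classification, applied here to synthetic data. The logistic-regression classifier is an assumption, since the record does not state which model was used.

    # Minimal sketch: PCA keeping 85% of variance + 10-fold cross-validation.
    # Data are synthetic; the classifier choice is an assumption.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_subjects, n_measures = 120, 200            # thickness/area/volume measures
    X = rng.normal(size=(n_subjects, n_measures))
    y = rng.integers(0, 2, size=n_subjects)      # 0 = healthy control, 1 = migraine

    clf = make_pipeline(
        StandardScaler(),
        PCA(n_components=0.85),                  # components explaining 85% of variance
        LogisticRegression(max_iter=1000),
    )
    scores = cross_val_score(clf, X, y, cv=10)   # ten-fold cross validation
    print(f"mean 10-fold accuracy: {scores.mean():.3f}")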

  16. Integrated Characterization of DNAPL Source Zone Architecture in Clay Till and Limestone Bedrock

    DEFF Research Database (Denmark)

    Broholm, Mette Martina; Janniche, Gry Sander; Fjordbøge, Annika Sidelmann

    2014-01-01

    Background/Objectives. Characterization of dense non-aqueous phase liquid (DNAPL) source zone architecture is essential to develop accurate site specific conceptual models, delineate and quantify contaminant mass, perform risk assessment, and select and design remediation alternatives. The activities ... innovative investigation methods and characterize the source zone hydrogeology and contamination to obtain an improved conceptual understanding of DNAPL source zone architecture in clay till and bryozoan limestone bedrock. Approach/Activities. A wide range of innovative and current site investigative tools for direct and indirect documentation and/or evaluation of DNAPL presence were combined in a multiple lines of evidence approach. Results/Lessons Learned. Though no single technique was sufficient for characterization of DNAPL source zone architecture, the combined use of membrane interphase probing (MIP ...

  17. SU-E-T-284: Revisiting Reference Dosimetry for the Model S700 Axxent 50 kVp Electronic Brachytherapy Source

    Energy Technology Data Exchange (ETDEWEB)

    Hiatt, JR [Rhode Island Hospital, Providence, RI (United States); Rivard, MJ [Tufts University School of Medicine, Boston, MA (United States)

    2014-06-01

    Purpose: The model S700 Axxent electronic brachytherapy source by Xoft was characterized in 2006 by Rivard et al. The source design was modified in 2006 to include a plastic centering insert at the source tip to more accurately position the anode. The objectives of the current study were to establish an accurate Monte Carlo source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the source with and without the centering insert. Methods: Design information from dissected sources and vendor-supplied CAD drawings was used to devise the source model for radiation transport simulations of dose distributions in a water phantom. Collision kerma was estimated as a function of radial distance, r, and polar angle, θ, for determination of reference TG-43 dosimetry parameters. Simulations were run for 10^10 histories, resulting in statistical uncertainties on the transverse plane of 0.03% at r=1 cm and 0.08% at r=10 cm. Results: The dose rate distribution on the transverse plane did not change by more than 2% between the 2006 model and the current study. While differences exceeding 15% were observed near the source distal tip, these diminished to within 2% for r>1.5 cm. Differences exceeding a factor of two were observed near θ=150° and in contact with the source, but diminished to within 20% at r=10 cm. Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% over a third of the available solid angle external to the source. For clinical applications using balloons or applicators with tissue located within 5 cm from the source, dose differences exceeding 2% were observed only for θ>110°. This study carefully examined the current source geometry and presents a modern reference TG-43 dosimetry dataset for the model S700 source.
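
    For readers unfamiliar with TG-43 quantities, the following sketch computes a radial dose function g_L(r) from transverse-plane dose rates using the line-source geometry function. The active length and dose-rate values are placeholders, not data for the S700 source.

    # Minimal sketch: TG-43 radial dose function g_L(r) on the transverse plane,
    # using the line-source geometry function G_L(r, theta0) with theta0 = 90 deg.
    # Source length and dose rates are illustrative placeholders.
    import numpy as np

    L = 0.5                                      # assumed active length (cm)
    r = np.array([0.5, 1.0, 2.0, 5.0, 10.0])     # radial distances (cm)
    dose_rate = np.array([4.1, 1.0, 0.23, 0.030, 0.0058])  # relative values at 90 deg

    def G_L(r, L):
        """Line-source geometry function on the transverse plane."""
        beta = 2.0 * np.arctan(L / (2.0 * r))    # angle subtended by the active length
        return beta / (L * r)

    r0 = 1.0                                     # TG-43 reference distance (cm)
    D0 = dose_rate[np.where(r == r0)[0][0]]
    g_L = (dose_rate / D0) * (G_L(r0, L) / G_L(r, L))
    print(np.round(g_L, 4))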

  18. Monte Carlo modeling of 60Co HDR brachytherapy source in water and in different solid water phantom materials

    Directory of Open Access Journals (Sweden)

    Sahoo S

    2010-01-01

    Full Text Available The reference medium for brachytherapy dose measurements is water. The accuracy of dose measurements of brachytherapy sources is critically dependent on precise measurement of the source-detector distance. A solid phantom can be precisely machined, and hence source-detector distances can be accurately determined. In the present study, four different solid phantom materials, namely polymethylmethacrylate (PMMA), polystyrene, Solid Water, and RW1, are modeled using Monte Carlo methods to investigate the influence of the phantom material on the dose rate distributions of the new model of the BEBIG 60Co brachytherapy source. The calculated dose rate constant is 1.086 ± 0.06% cGy h−1 U−1 for water, PMMA, polystyrene, Solid Water, and RW1. The investigation suggests that the phantom materials RW1 and Solid Water are water-equivalent up to 20 cm from the source. PMMA and polystyrene are water-equivalent up to 10 cm and 15 cm from the source, respectively, as the dose data obtained in these phantom materials are not significantly different from the corresponding data obtained in a liquid water phantom. At a radial distance of 20 cm from the source, polystyrene overestimates the dose by 3% and PMMA underestimates it by about 8% when compared to the corresponding data obtained in a water phantom.

  19. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel; Yang, Hongchuan; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may
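
    A minimal sketch of opportunistic decode-and-forward relay selection consistent with the scheme described above: among relays that decode the source message, the one with the largest bottleneck SNR (the minimum of the source-relay and relay-destination SNRs) is selected. Channel realizations and the decoding threshold are illustrative assumptions.

    # Minimal sketch: opportunistic decode-and-forward relay selection.
    # Channel values and the decoding threshold are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    n_relays = 5
    snr_sr = rng.exponential(10.0, n_relays)   # Rayleigh fading -> exponential SNR
    snr_rd = rng.exponential(10.0, n_relays)
    decode_threshold = 3.0                     # minimum SNR for correct decoding

    decoding_set = np.where(snr_sr >= decode_threshold)[0]
    if decoding_set.size:
        bottleneck = np.minimum(snr_sr[decoding_set], snr_rd[decoding_set])
        best = decoding_set[np.argmax(bottleneck)]
        print(f"selected relay {best}, bottleneck SNR {bottleneck.max():.2f}")
    else:
        print("no relay decoded; fall back to the direct source-destination link")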

  20. Eddy covariance observations of methane and nitrous oxide emissions. Towards more accurate estimates from ecosystems

    International Nuclear Information System (INIS)

    Kroon, P.S.

    2010-09-01

    About 30% of the increased greenhouse gas (GHG) emissions of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) are related to land use changes and agricultural activities. In order to select effective measures, knowledge is required about GHG emissions from these ecosystems and how these emissions are influenced by management and meteorological conditions. Accurate emission values are therefore needed for all three GHGs to compile the full GHG balance. However, the current annual estimates of CH4 and N2O emissions from ecosystems have significant uncertainties, even larger than 50%. The present study showed that an advanced technique, the micrometeorological eddy covariance flux technique, can provide more accurate estimates, with uncertainties even smaller than 10%. The current regional and global trace gas flux estimates of CH4 and N2O are possibly seriously underestimated due to incorrect measurement procedures. Accurate measurements of both gases are important since together they can contribute more than two-thirds of the total GHG emission. For example, the total GHG emission of a dairy farm site was estimated at 16·10^3 kg ha^-1 yr^-1 in CO2-equivalents, of which 25% and 45% were contributed by CH4 and N2O, respectively. About 60% of the CH4 emission was emitted by ditches and their bordering edges. These emissions are not yet included in the national inventory reports. We recommend including these emissions in coming reports.
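
    A minimal sketch of the eddy covariance principle underlying the study: the flux is estimated as the covariance of vertical wind speed and gas mixing-ratio fluctuations over an averaging block. The 10 Hz synthetic series below stands in for real sonic anemometer and gas analyzer data.

    # Minimal sketch: eddy covariance flux as the covariance of vertical wind (w)
    # and gas mixing-ratio (c) fluctuations over a 30-minute block.
    # The synthetic 10 Hz series is an illustrative stand-in for real data.
    import numpy as np

    rng = np.random.default_rng(2)
    fs, minutes = 10, 30                           # 10 Hz sampling, 30-min block
    n = fs * 60 * minutes
    w = rng.normal(0.0, 0.3, n)                    # vertical wind speed (m s-1)
    c = 1.8 + 0.05 * w + rng.normal(0.0, 0.02, n)  # CH4 mixing ratio (ppm), correlated with w

    w_prime = w - w.mean()                         # Reynolds decomposition
    c_prime = c - c.mean()
    flux = np.mean(w_prime * c_prime)              # kinematic flux (ppm m s-1)
    print(f"kinematic CH4 flux: {flux:.4f} ppm m s-1")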