WorldWideScience

Sample records for high-resolution numerical methods

  1. Implementation and assessment of high-resolution numerical methods in TRACE

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dean, E-mail: wangda@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley RD 6167, Oak Ridge, TN 37831 (United States); Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G. [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)

    2013-10-15

Highlights: • Study and implement high-resolution numerical methods for two-phase flow. • They can achieve better numerical accuracy than the 1st-order upwind scheme. • They offer great numerical robustness and efficiency. • Great applicability for BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing numerical scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations for steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes such as nonlinear flux limiters have been developed and successfully applied in numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study on the implementation and assessment of six nonlinear flux limiters in TRACE: MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment focuses on the numerical stability, convergence, and accuracy of the flux limiters and their applicability to boiling water reactor (BWR) stability analysis. It is found that VA and MUSCL work best among the six flux limiters: both not only achieve better numerical accuracy than the 1st-order upwind scheme but also preserve great robustness and efficiency.
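The limiter functions named in the abstract have simple closed forms. A minimal sketch (not TRACE's actual implementation, which works on the two-phase transport equations) of a flux-limited TVD advection scheme using the Van Albada and MUSCL (monotonized-central) limiters might look like:

```python
import numpy as np

def van_albada(r):
    # Van Albada limiter, clipped at zero so the scheme stays TVD
    return np.maximum(0.0, (r * r + r) / (r * r + 1.0))

def muscl(r):
    # monotonized-central limiter, commonly used with MUSCL reconstruction
    return np.maximum(0.0, np.minimum.reduce([2.0 * r, 0.5 * (1.0 + r), np.full_like(r, 2.0)]))

def upwind(r):
    # phi = 0 recovers plain 1st-order upwinding
    return np.zeros_like(r)

def advect(u0, c, limiter, steps):
    """Linear advection u_t + a u_x = 0 (a > 0) on a periodic grid with a
    flux-limited 2nd-order TVD scheme; c is the Courant number a*dt/dx."""
    u = u0.copy()
    for _ in range(steps):
        du = np.roll(u, -1) - u                       # u[i+1] - u[i]
        safe = np.where(np.abs(du) > 1e-12, du, 1.0)
        r = np.where(np.abs(du) > 1e-12, (u - np.roll(u, 1)) / safe, 0.0)
        flux = u + 0.5 * limiter(r) * (1.0 - c) * du  # limited face value at i+1/2
        u = u - c * (flux - np.roll(flux, 1))
    return u
```

On a smooth profile the limited schemes are markedly more accurate than pure upwinding, while on a discontinuity the total variation does not grow, which is the "accuracy without spurious oscillations" trade-off the abstract describes.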

  2. Implementation and assessment of high-resolution numerical methods in TRACE

    International Nuclear Information System (INIS)

    Wang, Dean; Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G.

    2013-01-01

Highlights: • Study and implement high-resolution numerical methods for two-phase flow. • They can achieve better numerical accuracy than the 1st-order upwind scheme. • They offer great numerical robustness and efficiency. • Great applicability for BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing numerical scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations for steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes such as nonlinear flux limiters have been developed and successfully applied in numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study on the implementation and assessment of six nonlinear flux limiters in TRACE: MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment focuses on the numerical stability, convergence, and accuracy of the flux limiters and their applicability to boiling water reactor (BWR) stability analysis. It is found that VA and MUSCL work best among the six flux limiters: both not only achieve better numerical accuracy than the 1st-order upwind scheme but also preserve great robustness and efficiency.

  3. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but their dissipation overwhelms the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock fitting approach yields good results.
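The dilatation-based artificial bulk viscosity idea can be sketched in one dimension: activate a grid-dependent bulk viscosity only where the flow is compressing, so vortical and expanding motions are left nearly untouched. The scaling and the constant below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def artificial_bulk_viscosity(rho, u, dx, c_beta=1.75):
    """1-D sketch of a dilatation-based artificial bulk viscosity:
    beta* ~ c_beta * rho * dx^2 * |theta|, active only in compressions
    (theta < 0), where theta = du/dx is the dilatation in 1-D.
    c_beta is an illustrative model constant, not a recommended value."""
    theta = np.gradient(u, dx)                # dilatation
    beta = c_beta * rho * dx ** 2 * np.abs(theta)
    return np.where(theta < 0.0, beta, 0.0)   # switch off in expansions
```

A strain-rate-based sensor would, by contrast, also fire on shear and expansion regions, which is the over-damping of dilatational modes the abstract reports.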

  4. Waterspout Forecasting Method Over the Eastern Adriatic Using a High-Resolution Numerical Weather Model

    Science.gov (United States)

    Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor

    2018-03-01

In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia, and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using a high-resolution (500 m grid length) mesoscale numerical weather model, and model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, including the winter events. This indicates that Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.

  5. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

Existing methods of making support films for high resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam induced movement of monolamellar crystals of C44H90 paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well ordered monolamellar paraffin crystals on carbon films are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  6. Processing method for high resolution monochromator

    International Nuclear Information System (INIS)

    Kiriyama, Koji; Mitsui, Takaya

    2006-12-01

A processing method for a high resolution monochromator (HRM) has been developed at the Synchrotron Radiation Research Unit of the Japan Atomic Energy Agency, Quantum Beam Science Directorate, at SPring-8. For manufacturing an HRM, a sophisticated slicing machine and an X-ray diffractometer have been installed for shaping a crystal ingot and for precisely orienting the surface of a crystal ingot, respectively. The slicing machine has the following specification: the maximum diamond blade size is 350 mm in diameter, 38.1 mm in spindle diameter, and 2 mm in thickness. A large crystal, such as an ingot 100 mm in diameter and 200 mm in length, can be cut; thin crystal samples such as wafers can also be cut using a separate sample holder. The working distance of the main shaft, in the direction perpendicular to the working table, is 350 mm at maximum. The smallest resolution of the main shaft in the front-and-back and top-and-bottom directions is 0.001 mm, read by a digital encoder, and a feed rate of 2 mm/min can be set for cutting samples in the forward direction. For adjusting the crystal faces relative to the blade direction, a one-circle goniometer and a 2-circle segment are mounted on the working table; rotation and tilt of the stage are performed manually. The digital encoder on the turn stage has an angular resolution of better than 0.01 degrees. In addition, a hand drill is provided as a supporting device for detailed processing of crystals. An ideal crystal face can thus be cut from crystal samples to within an accuracy of about 0.01 degrees. With these devices in hand, a high energy resolution monochromator crystal for inelastic X-ray scattering and a beam collimator have been obtained and are expected to be used for nanotechnology studies. (author)

  7. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
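The basic interface-transport step underlying any level-set method can be sketched with first-order upwinding (the paper itself uses high-order schemes and regional level sets; this only illustrates the core idea of moving an implicit interface and reading off its zero crossing):

```python
import numpy as np

def advect_level_set(phi, vel, dx, dt, steps):
    """Minimal 1-D level-set transport phi_t + v*phi_x = 0 with 1st-order
    upwinding and one-sided differences extrapolated at the domain edges."""
    for _ in range(steps):
        d = np.diff(phi) / dx
        dm = np.concatenate(([d[0]], d))    # backward difference (left edge extrapolated)
        dp = np.concatenate((d, [d[-1]]))   # forward difference (right edge extrapolated)
        phi = phi - dt * np.where(vel > 0.0, vel * dm, vel * dp)
    return phi

def interface_location(phi, x):
    """Locate the first zero crossing of the level set by linear interpolation."""
    i = np.where(np.sign(phi[:-1]) != np.sign(phi[1:]))[0][0]
    t = phi[i] / (phi[i] - phi[i + 1])
    return x[i] + t * (x[i + 1] - x[i])
```

For a signed-distance function advected at constant speed, the recovered zero crossing moves with the prescribed velocity, which is the property the high-order regional formulation then extends to networks of many interfaces.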

  8. Climate change and high-resolution whole-building numerical modelling

    NARCIS (Netherlands)

    Blocken, B.J.E.; Briggen, P.M.; Schellen, H.L.; Hensen, J.L.M.

    2010-01-01

    This paper briefly discusses the need of high-resolution whole-building numerical modelling in the context of climate change. High-resolution whole-building numerical modelling can be used for detailed analysis of the potential consequences of climate change on buildings and to evaluate remedial

  9. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

There is an increasing demand in many applications for satellite remote sensing data with both high spatial and high temporal resolution, but it remains a challenge to improve spatial resolution and temporal frequency simultaneously due to the technical limits of current satellite observation systems. To this end, considerable R&D effort over the years has led to successes in roughly two areas. The first includes super-resolution, pan-sharpening and similar methods, which can effectively enhance spatial resolution and produce good visual effects but hardly preserve spectral signatures, limiting their analytical value. The second, time interpolation, is a straightforward way to increase temporal frequency but in fact adds little informative content. In this paper we present a novel method to simulate high resolution time series data by combining low resolution time series data with only a very small number of high resolution images. Our method starts with a pair of high and low resolution data sets, and a spatial registration is performed by introducing an LDA model to map high and low resolution pixels to one another. Temporal change information is then captured through a comparison of the low resolution time series data, projected onto the high resolution data plane, and assigned to each high resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high resolution data are generated. A preliminary experiment shows that our method can simulate high resolution data with reasonable accuracy. The contribution of our method is to enable timely monitoring of temporal changes through analysis of a time sequence of low resolution images only, so that the use of costly high resolution data can be reduced as much as possible; it presents a highly effective way to build an economically operational monitoring solution for agriculture, forest, and land use investigation.
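A drastically simplified additive-fusion sketch conveys the projection idea: detect change on the low-resolution grid and distribute it onto the high-resolution pixels. The paper's LDA-based registration and per-class change patterns are not reproduced here; this is the nearest-neighbour, one-class special case:

```python
import numpy as np

def downsample(h, scale):
    """Block-average a high-res image to the low-res grid (simulated coarse sensor)."""
    n, m = h.shape
    return h.reshape(n // scale, scale, m // scale, scale).mean(axis=(1, 3))

def fuse(h0, l0, lt, scale):
    """Predict a high-res image at time t from one high-res image at time 0
    plus the low-res change Lt - L0, nearest-neighbour upsampled.
    A simplified additive-fusion sketch, not the paper's algorithm."""
    change = lt - l0
    up = np.kron(change, np.ones((scale, scale)))  # upsample change to the high-res grid
    return h0 + up
```

When the true change happens to be constant within each low-resolution footprint, this reconstruction is exact; the paper's per-class change patterns are what relax that assumption.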

  10. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    Science.gov (United States)

    2009-01-30


  11. A new high-resolution electromagnetic method for subsurface imaging

    Science.gov (United States)

    Feng, Wanjie

For most electromagnetic (EM) geophysical systems, the contamination of the secondary fields by the primary field ultimately limits the capability of controlled-source EM methods. Null-coupling techniques were proposed to solve this problem, but small orientation errors in null-coupling systems greatly restrict their applications. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make a geophysical survey impossible to carry out. To solve these problems, the alternating target antenna coupling (ATAC) method was introduced, which largely removed the influence of the primary field and reduced surface interference, but limited the maximum transmitter moment that could be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments while maintaining the advantages of the ATAC method. In this dissertation, the theoretical DTAC calculations are first derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models are used to demonstrate that the DTAC method has no response to 1D and 2D structures. Analytical studies of a plate model in conductive and resistive backgrounds explain the physical phenomenology behind the DTAC method, namely that the magnetic fields of the subsurface targets must be frequency dependent. The advantages of the DTAC method, e.g., high resolution, reduced geologic noise, and insensitivity to surface interference, are then analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, the theoretical advantages, such as high resolution and insensitivity to surface interference, are verified by designing and developing a low-power (moment of 50 Am 2) vertical-array DTAC system and testing it on controlled targets and scaled target coils. At last, a

  12. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

We report a detailed study of Eulerian and Lagrangian statistics from high resolution Direct Numerical Simulations of isotropic weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data
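The Taylor-microscale Reynolds number quoted above follows from the standard isotropic-turbulence relation between the microscale, the rms velocity, the viscosity and the dissipation rate. A one-line sketch (the specific input numbers below are illustrative, not taken from the simulations):

```python
import numpy as np

def taylor_reynolds(u_rms, nu, eps):
    """Taylor-microscale Reynolds number for isotropic turbulence:
    lambda = u' * sqrt(15 * nu / eps),  Re_lambda = u' * lambda / nu."""
    lam = u_rms * np.sqrt(15.0 * nu / eps)
    return u_rms * lam / nu
```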

  13. Application of the Oslo method to high resolution gamma spectra

    Science.gov (United States)

    Simon, A.; Guttormsen, M.; Larsen, A. C.; Beausang, C. W.; Humby, P.

    2015-10-01

The Hauser-Feshbach (HF) statistical model is a widely used tool for calculating reaction cross sections, in particular for astrophysical processes. The HF model requires as input an optical potential, a gamma-strength function (GSF) and a level density (LD) to properly model the statistical properties of the nucleus. The Oslo method is a well-established technique for extracting GSFs and LDs from experimental data, typically gamma spectra obtained with scintillation detectors. Here, the first application of the Oslo method to high-resolution data obtained using the Ge detectors of the STARLITER setup at TAMU is discussed. The GSFs and LDs extracted from (p,d) and (p,t) reactions on 152,154Sm targets will be presented.

  14. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods

    International Nuclear Information System (INIS)

    Owyoung, A.; Esherick, P.

    1984-05-01

In recent years, the introduction of stimulated Raman methods has offered orders of magnitude improvement in spectral resolving power for gas phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. In this paper we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable uv source; and (3) collection of the ionized species at biased electrodes, where they are detected as current in an external circuit.

  15. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel sprays... For diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence, large density ratios

  16. The potential of high resolution ultrasonic in-situ methods

    International Nuclear Information System (INIS)

    Schuster, K.

    2010-01-01

Document available in extended abstract form only. In the framework of the geomechanical assessment of final repository underground openings, knowledge of geophysical rock parameters is important. Ultrasonic methods have proved to be good geophysical tools for providing appropriately high-resolution parameters for the characterisation of rock. In this context, the detection and characterisation of rock heterogeneities at different scales, including features of the Excavation Damaged/Disturbed Zone (EDZ/EdZ), play an important role. In particular, kinematic and dynamic parameters derived from ultrasonic measurements can be linked very closely to rock-mechanical investigations and interpretations. BGR uses high-resolution ultrasonic methods, starting at emitted frequencies of about 1 kHz (seismic) and going up to about 100 kHz. Method development is ongoing, and the corresponding research and investigations have been performed for many years at different European radioactive-waste-disposal-related underground research laboratories in different potential host rocks. The most frequented are: Mont Terri Rock Laboratory, Switzerland (Opalinus Clay, OPA); Underground Research Laboratory Meuse/Haute-Marne, France (Callovo-Oxfordian, COX); Underground Research Facility Mol, Belgium (Boom Clay, BC); Aespoe Hard Rock Laboratory, Sweden (granites); Rock Laboratory Grimsel, Switzerland (granites); and the Asse salt mine, Germany (rock salt). The methods can be grouped into borehole-based methods and noninvasive methods, such as refraction and reflection surveys, which are generally performed from the drift wall. Additionally, as a combination of these two approaches, a form of vertical seismic profiling (VSP) is applied. The best-qualified method, or combination of methods, has to be chosen according to the scientific questions and the local site conditions. The degree of spatial resolution of zones of interest, or of any kind of anomaly, depends strongly on the distance of these objects to the ultrasonic

  17. The effect of high-resolution orography on numerical modelling of atmospheric flow: a preliminary experiment

    International Nuclear Information System (INIS)

    Scarani, C.; Tampieri, F.; Tibaldi, S.

    1983-01-01

The effect of increasing the resolution of the topography in numerical weather prediction models is assessed. Different numerical experiments have been performed, referring to a case of cyclogenesis in the lee of the Alps. From the comparison, it appears that the lower atmospheric levels are better described by the model with higher-resolution topography; comparable horizontal resolution runs with smoother topography appear to be less satisfactory in this respect. It also turns out that the vertical propagation of the signal due to the front-mountain interaction is faster in the high-resolution experiment

  18. Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models

    Science.gov (United States)

    Witkop, R.; Becker, A.; Stempel, P.

    2017-12-01

High resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that incorporate individual geographic points into GSO's models and enable the models to accurately take in local scale, high resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.

  19. Analysis and Application of High Resolution Numerical Perturbation Algorithm for Convective-Diffusion Equation

    International Nuclear Information System (INIS)

    Gao Zhi; Shen Yi-Qing

    2012-01-01

The high resolution numerical perturbation (NP) algorithm is analyzed and tested using various convective-diffusion equations. The NP algorithm is constructed by splitting the second order central difference schemes of both the convective and diffusion terms of the convective-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using the power series of the grid interval and by eliminating the truncation errors of the modified differential equation. The important property, i.e. upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation-free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second order central scheme.
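The motivation for such schemes is the classical trade-off: central differencing of the convective term oscillates once the cell Péclet number exceeds 2, while first-order upwinding stays monotone at the price of numerical diffusion. This can be reproduced with a small steady convection-diffusion solve (a generic demonstration, not the NP algorithm itself):

```python
import numpy as np

def steady_convection_diffusion(pe_cell, n, central=True):
    """Solve u*phi_x = D*phi_xx on (0,1) with phi(0)=0, phi(1)=1 on n interior
    points; pe_cell = u*dx/D is the cell Peclet number. Central differencing of
    convection oscillates for pe_cell > 2; 1st-order upwinding is monotone."""
    if central:
        lo, di, hi = 1.0 + 0.5 * pe_cell, -2.0, 1.0 - 0.5 * pe_cell
    else:
        lo, di, hi = 1.0 + pe_cell, -(2.0 + pe_cell), 1.0
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n):
        A[i, i] = di
        if i > 0:
            A[i, i - 1] = lo
        if i < n - 1:
            A[i, i + 1] = hi
    rhs[-1] = -hi  # boundary value phi(1) = 1 folded into the right-hand side
    return np.linalg.solve(A, rhs)
```

At a cell Péclet number of 5 the central solution is visibly non-monotone while the upwind solution increases smoothly from 0 to 1, which is exactly the behaviour the upwind-dominance property of the NP schemes is designed to preserve without sacrificing second-order accuracy.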

  20. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower tropospheric flow over and downstream of mountainous islands using three terrain height data sets: the default global 30 s United States Geological Survey data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM), and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  1. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  2. Numerical tsunami hazard assessment of the submarine volcano Kick 'em Jenny in high-resolution areas

    Science.gov (United States)

    Dondin, Frédéric; Dorville, Jean-Francois Marc; Robertson, Richard E. A.

    2016-04-01

Landslide-generated tsunamis are infrequent phenomena that can be highly hazardous for populations located in the near-field domain of the source. The Lesser Antilles volcanic arc is a curved 800 km chain of volcanic islands. At least 53 flank-collapse episodes have been recognized along the arc, several of which have been associated with voluminous underwater deposits (volume > 1 km3). Due to their momentum, these events were likely capable of generating regional tsunamis. No clear field evidence of tsunamis associated with these voluminous events has been reported, but the occurrence of such an episode nowadays would certainly have catastrophic consequences. Kick 'em Jenny (KeJ) is the only active submarine volcano of the Lesser Antilles Arc (LAA), with a current edifice volume estimated at 1.5 km3. It is the southernmost edifice of the LAA with recognized associated volcanic landslide deposits, and it appears to have undergone three episodes of flank failure. Numerical simulations of one of these episodes, associated with a collapse volume of ca. 4.4 km3 and assuming a single-pulse collapse, revealed that it would have produced a regional tsunami with an amplitude of 30 m. In the present study we applied a detailed hazard assessment to the KeJ submarine volcano, from its collapse to the wave impact on high-resolution coastal areas of selected islands of the LAA, in order to highlight needs for improved alert systems and risk mitigation. We present the assessment of tsunami hazard in terms of shoreline surface elevation (i.e. run-up) and flood dynamics (i.e. duration, height, speed...) at the coasts of LAA islands for a potential flank-collapse scenario at KeJ. After quantification of potential initial volumes of collapse material using relative slope instability analysis (RSIA, VolcanoFit 2.0 & SSAP 4.5) based on seven geomechanical models, the tsunami source has been simulated by a St-Venant equations-based code

  3. Analytical method for high resolution liquid chromatography for quality control French Macaw

    International Nuclear Information System (INIS)

    Garcia Penna, Caridad M; Torres Amaro, Leonid; Menendez Castillo, Rosa; Sanchez, Esther; Martinez Espinosa, Vivian; Gonzalez, Maria Lidia; Rodriguez, Carlos

    2007-01-01

An analytical method based on high resolution liquid chromatography, applicable to the quality control of the dried drug of French Macaw (Senna alata L. Roxb.), was developed and validated, with ultraviolet detection at 340 nm. The high resolution liquid chromatography method used to quantify sennosides A and B, the main components, was validated and proved to be specific, linear, precise and accurate. (Author)
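The linearity part of such a validation is typically a least-squares regression of peak area on standard concentration, judged against a coefficient-of-determination threshold. A generic sketch (the threshold is a common convention, and any data fed to it would be hypothetical standards, not the paper's):

```python
import numpy as np

def linearity_check(conc, area, r2_min=0.99):
    """Fit a calibration line area = slope*conc + intercept and report the
    coefficient of determination; r2_min = 0.99 is a typical (illustrative)
    acceptance criterion in chromatographic method validation."""
    conc = np.asarray(conc, dtype=float)
    area = np.asarray(area, dtype=float)
    slope, intercept = np.polyfit(conc, area, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((area - pred) ** 2)
    ss_tot = np.sum((area - area.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    return slope, intercept, r2, r2 >= r2_min
```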

  4. Performance of the operational high-resolution numerical weather predictions of the Daphne project

    Science.gov (United States)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Karacostas, Theodore; Kartsios, Stergios; Kotsopoulos, Stelios; Bampzelis, Dimitrios

    2015-04-01

In the framework of the DAPHNE project, the Department of Meteorology and Climatology (http://meteo.geo.auth.gr) of the Aristotle University of Thessaloniki, Greece, utilizes the nonhydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW) in order to produce high-resolution weather forecasts over Thessaly in central Greece. The aim of the DAPHNE project is to tackle the problem of drought in this area by means of weather modification. Cloud seeding assists the convective clouds to produce rain more efficiently or reduce hailstone size in favour of raindrops. The most favourable conditions for such a weather modification program in Thessaly occur in the period from March to October, when convective clouds are triggered more frequently. Three model domains, using 2-way telescoping nesting, cover: i) Europe, the Mediterranean Sea and northern Africa (D01), ii) Greece (D02) and iii) the wider region of Thessaly (D03; at selected periods) at horizontal grid spacings of 15 km, 5 km and 1 km, respectively. This research work intends to describe the atmospheric model setup and analyse its performance during a selected period of the operational phase of the project. The statistical evaluation of the high-resolution operational forecasts is performed using surface observations, gridded fields and radar data. Well-established point verification methods, combined with novel object-based methods built upon them, provide an in-depth analysis of the model skill. Spatial characteristics are adequately captured, but a variable time lag between forecast and observation is noted.
Acknowledgments: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness
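
As an illustration of the point-verification scores such an evaluation typically relies on (a generic sketch, not code from the DAPHNE project; the function name and threshold convention are assumptions), the following computes bias, RMSE and the equitable threat score (ETS) from paired forecast/observation series:

```python
import math

def point_verification(forecast, observed, event_threshold):
    """Standard point-verification scores for paired forecast/observation
    series: bias, RMSE, and the equitable threat score (ETS) for the
    event "value >= event_threshold" (e.g. a precipitation amount)."""
    n = len(forecast)
    bias = sum(f - o for f, o in zip(forecast, observed)) / n
    rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / n)
    # 2x2 contingency table for the exceedance event
    hits = sum(1 for f, o in zip(forecast, observed)
               if f >= event_threshold and o >= event_threshold)
    misses = sum(1 for f, o in zip(forecast, observed)
                 if f < event_threshold and o >= event_threshold)
    false_alarms = sum(1 for f, o in zip(forecast, observed)
                       if f >= event_threshold and o < event_threshold)
    # ETS discounts the hits expected by random chance
    hits_random = (hits + misses) * (hits + false_alarms) / n
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return bias, rmse, ets
```

ETS rewards correct event forecasts beyond what a random forecast with the same event frequency would achieve; object-based verification builds on such scores by matching whole precipitation features instead of grid points.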

  5. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher dimensional flood modeling (2D+), as well as correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Similar to photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as Ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed in data collection and remote sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance. 
The ideal application of this high-resolution topography

  6. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  7. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
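
To make the analytic/iterative distinction concrete, here is a minimal sketch of one classical iterative technique, Kaczmarz's algebraic reconstruction technique (ART), applied to a toy 2×2-pixel "imaging geometry". It is an illustration of the iterative family, not the reconstruction code discussed in the paper:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=500, relax=1.0):
    """Kaczmarz ART: repeatedly project the image estimate onto the
    hyperplane of each projection equation A[i] @ x = b[i]."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for ai, bi in zip(A, b):
            x += relax * (bi - ai @ x) / (ai @ ai) * ai
    return x

# Toy geometry: 2x2 pixel grid, ray sums along rows, columns and a diagonal.
# True image [[1, 2], [3, 4]] is flattened to x_true = [1, 2, 3, 4].
A = np.array([[1., 1., 0., 0.],   # row 0 sum
              [0., 0., 1., 1.],   # row 1 sum
              [1., 0., 1., 0.],   # column 0 sum
              [0., 1., 0., 1.],   # column 1 sum
              [1., 0., 0., 1.]])  # diagonal ray (resolves remaining ambiguity)
x_true = np.array([1., 2., 3., 4.])
b = A @ x_true                    # simulated noise-free projections
x = art_reconstruct(A, b)
```

Because the iteration only needs the forward model `A @ x`, the same code structure accommodates arbitrary imaging geometries, which is exactly the flexibility the abstract attributes to iterative methods.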

  8. A Geographic Method for High Resolution Spatial Heat Planning

    DEFF Research Database (Denmark)

    Nielsen, Steffen

    2014-01-01

    In the present article, a method for assessing the costs of district heating (DH) expansions has been developed, using more detailed modelling that takes the geographic placement of buildings and the differences among DH systems into account. The method was applied in a geographic information system (GIS) model that consists of three parts and assesses the costs of heat production, distribution, and transmission. The model was also applied to an actual case in order to show how it can be used. The model shows many improvements in the method for the assessment of distribution costs and transmission costs. Most notable are considering distribution costs based on the geographic properties of each area and assessing transmission costs based on an iterative process that examines expansion potentials gradually. The GIS model is only applicable to a Danish context, but the method itself can be applied to other countries.

  9. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    High-resolution methods are applied in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference.

  10. Introduction to quantum calculation methods in high resolution NMR

    International Nuclear Information System (INIS)

    Goldman, M.

    1996-01-01

    New techniques such as polarization transfer, multiple-quantum coherence and double Fourier transformation appeared some fifteen years ago. These techniques constitute a considerable advance in NMR: they allow the study of more complex molecules than was previously possible. With these advances, however, the classical description of NMR is no longer sufficient to understand precisely the physical phenomena induced by these methods, and it becomes necessary to resort to quantum calculation methods. The aim of this work is to present these calculation methods. After some recalls of quantum mechanics, the author describes NMR with the density matrix, reviews the main methods of double Fourier transformation and then gives the principle of the relaxation-time calculation. (O.M.)
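
A minimal sketch of the density-matrix description mentioned above: free precession of a spin-1/2 under a Zeeman Hamiltonian, with the transverse magnetization recovered as a trace. The Larmor frequency and evolution time are illustrative assumptions:

```python
import numpy as np

# Pauli matrices; the density matrix of a spin-1/2 evolves under the
# Zeeman Hamiltonian H = -(omega/2) * sigma_z (hbar = 1): free precession.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def precess(rho0, omega, t):
    """Evolve rho(t) = U rho0 U^dagger with U = exp(-i H t).
    H is diagonal, so U is the diagonal phase matrix written out below."""
    U = np.diag([np.exp(1j * omega * t / 2), np.exp(-1j * omega * t / 2)])
    return U @ rho0 @ U.conj().T

rho0 = 0.5 * (np.eye(2) + sx)          # spin polarized along +x
omega, t = 2 * np.pi * 100.0, 1.0e-3   # assumed: 100 Hz precession, 1 ms
# transverse magnetization <sigma_x>(t) = Tr(rho(t) sigma_x) = cos(omega*t)
mx = np.trace(precess(rho0, omega, t) @ sx).real
```

The observable falls out as a trace over the density matrix, which is precisely the bookkeeping that becomes indispensable for multiple-quantum and two-dimensional experiments.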

  11. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

    This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation using the Dykstra-Parsons coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability values, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and the monotone upstream-centered scheme for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order scheme results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure, explicit saturation (IMPES) technique, which produced acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities with their corresponding ultimate recovery factors. The impact of autocorrelation lengths on immiscible fluid flow transport was also analyzed and quantified. 
A finite number of lines used in the TBM resulted in visual
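
As a sketch of how a flux limiter suppresses the numerical smearing described above (using scalar linear advection rather than the authors' two-phase equations), the following implements a SUPERBEE-limited TVD upwind scheme:

```python
import numpy as np

def superbee(r):
    """SUPERBEE limiter: phi(r) = max(0, min(2r, 1), min(r, 2))."""
    return np.maximum(0.0, np.maximum(np.minimum(2.0 * r, 1.0),
                                      np.minimum(r, 2.0)))

def advect(u0, c, steps):
    """Flux-limited upwind scheme for u_t + a u_x = 0 (a > 0, periodic
    boundaries); c = a*dt/dx is the Courant number, 0 < c <= 1."""
    u = u0.copy()
    for _ in range(steps):
        dp = np.roll(u, -1) - u                    # u_{i+1} - u_i
        dm = u - np.roll(u, 1)                     # u_i - u_{i-1}
        # smoothness ratio r_i = dm/dp, guarded against division by zero
        safe = np.where(np.abs(dp) > 1e-14, dp, 1e-14)
        r = np.where(np.abs(dp) > 1e-14, dm / safe, 0.0)
        # limited interface state at i+1/2 (upwind cell i); phi=0 gives
        # 1st-order upwind, phi=1 gives Lax-Wendroff
        f = u + 0.5 * superbee(r) * (1.0 - c) * dp
        u = u - c * (f - np.roll(f, 1))            # conservative update
    return u

# square wave advected 30 cells: sharp fronts, no spurious over/undershoots
u0 = np.where((np.arange(100) > 20) & (np.arange(100) < 40), 1.0, 0.0)
u = advect(u0, c=0.5, steps=60)
```

Running the same loop with `superbee(r)` replaced by `0.0` reproduces the heavily smeared first-order upwind profile that the abstract warns can be misread as a geological feature.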

  12. High-resolution methods for fluorescence retrieval from space

    NARCIS (Netherlands)

    Mazzoni, M.; Falorni, P.; Verhoef, W.

    2010-01-01

    The retrieval from space of a very weak fluorescence signal was studied in the O2A and O2B oxygen atmospheric absorption bands. The accuracy of the method was tested for the retrieval of the chlorophyll fluorescence and reflectance terms contributing to the sensor signal. The radiance at the top of

  13. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    Science.gov (United States)

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

    A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body when exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden, so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards runs about 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
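
The COCG iteration itself is compact; the following is a plain NumPy sketch for a small dense complex-symmetric system (the GPU-accelerated admittance solver in the paper is far more elaborate, and the toy matrix here is an assumption for illustration):

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=1000):
    """Conjugate Orthogonal Conjugate Gradient for complex *symmetric*
    (A == A.T, not Hermitian) systems. It is CG with the unconjugated
    bilinear form r.T @ r in place of the Hermitian inner product."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                        # unconjugated: sum(r_i^2)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        rho_new = r @ r
        p = r + (rho_new / rho) * p
        rho = rho_new
    return x

# Toy complex-symmetric, diagonally dominant "admittance" system.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50)) + 1j * rng.standard_normal((50, 50))
A = (M + M.T) / 2 + 50 * np.eye(50)   # enforce A == A.T, keep well-conditioned
b = rng.standard_normal(50) + 1j * rng.standard_normal(50)
x = cocg(A, b)
```

Exploiting symmetry this way needs only one matrix-vector product and three vectors per iteration, which is the storage saving over BiCGSTAB that the abstract quantifies.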

  14. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  15. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    Science.gov (United States)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

    The high proportion of total primary energy consumed by buildings has increased public interest in the optimisation of buildings' operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and - thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models - an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch), predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid point predictions for radiation, temperature and humidity from the high-resolution limited-area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists of minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and are therefore particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability, in part caused by small-scale and fast-changing cloud formation and dissolution processes that are only partially represented in the COSMO-7 grid point predictions. On the

  16. Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: a high-resolution numerical approach

    KAUST Repository

    Chertock, A.

    2012-02-02

    Aquatic bacteria like Bacillus subtilis are heavier than water yet they are able to swim up an oxygen gradient and concentrate in a layer below the water surface, which will undergo Rayleigh-Taylor-type instabilities for sufficiently high concentrations. In the literature, a simplified chemotaxis-fluid system has been proposed as a model for bio-convection in modestly diluted cell suspensions. It couples a convective chemotaxis system for the oxygen-consuming and oxytactic bacteria with the incompressible Navier-Stokes equations subject to a gravitational force proportional to the relative surplus of the cell density compared to the water density. In this paper, we derive a high-resolution vorticity-based hybrid finite-volume finite-difference scheme, which allows us to investigate the nonlinear dynamics of a two-dimensional chemotaxis-fluid system with boundary conditions matching an experiment of Hillesdon et al. (Bull. Math. Biol., vol. 57, 1995, pp. 299-344). We present selected numerical examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface-directed oxytaxis continuously feeds cells into a high-concentration layer near the surface, from where the fluid flow (recurring upwards in the space between the plumes) transports the cells into the plumes, where then gravity makes the cells sink and constitutes the driving force in maintaining the fluid convection and, thus, in shaping the plumes into (numerically) stable stationary states. Our numerical method is fully capable of solving the coupled chemotaxis-fluid system and enabling a full exploration of its dynamics, which cannot be done in a linearised framework. © 2012 Cambridge University Press.

  17. A METHOD TO CALIBRATE THE HIGH-RESOLUTION CATANIA ASTROPHYSICAL OBSERVATORY SPECTROPOLARIMETER

    Energy Technology Data Exchange (ETDEWEB)

    Leone, F.; Gangi, M.; Giarrusso, M.; Scalia, C. [Università di Catania, Dipartimento di Fisica e Astronomia, Sezione Astrofisica, Via S. Sofia 78, I-95123 Catania (Italy); Avila, G. [ESO, Karl-Schwarzschild-Straße 2, D-85748, Garching bei München (Germany); Bellassai, G.; Bruno, P.; Catalano, S.; Benedetto, R. Di; Stefano, A. Di; Greco, V.; Martinetti, E.; Miraglia, M.; Munari, M.; Pontoni, C.; Scuderi, S.; Spanó, P. [INAF—Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy)

    2016-05-01

    The Catania Astrophysical Observatory Spectropolarimeter (CAOS) is a white-pupil cross-dispersed échelle spectrograph with a spectral resolution of up to R = 55,000 in the 375–1100 nm range in a single exposure, with complete coverage up to 856 nm. CAOS is linked to the 36-inch telescope at Mount Etna Observatory with a pair of 100 μm optical fibers, and it achieves a signal-to-noise ratio better than 60 for a V = 10 mag star in one hour. CAOS is thermally stabilized to within 0.01 K rms, so that radial velocities are measured with a precision better than 100 m s⁻¹ from a single spectral line. Linear and circular spectropolarimetric observations are possible by means of a Savart plate working in series with a half-wave and a quarter-wave retarder plate in the 376–850 nm range. As is usual for high-resolution spectropolarimeters, CAOS is suitable for measuring all Stokes parameters across spectral lines but cannot measure the absolute degree of polarization. Observations of unpolarized standard stars show that the instrumental polarization is generally zero at 550 nm and can increase up to 3% at other wavelengths. Since polarized and unpolarized standard stars are therefore of no use for calibration, we suggest a method to calibrate a high-resolution spectropolarimeter on the basis of the polarimetric properties of spectral lines formed in the presence of a magnetic field. As applied to CAOS, observations of magnetic chemically peculiar stars of the main sequence show that the cross-talk from linear to circular polarization is smaller than 0.4% and that conversion from circular to linear is less than 2.7%. The strength and wavelength dependences of the cross-talk can be entirely ascribed, via numerical simulations, to the incorrect retardance of the achromatic wave plates.

  18. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Bin Hou

    2016-08-01

    Full Text Available Characterization of up-to-date information on the Earth's surface is an important application, providing insights for urban planning, resource monitoring and environmental studies. A large number of change detection (CD) methods have been developed to address this problem by utilizing remote sensing (RS) images. The advent of high-resolution (HR) remote sensing images further poses challenges to traditional CD methods and offers opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes with HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image-objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI), extracted on difference images, are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. The effectiveness of the method was checked using both visual and numerical evaluation.

  19. A NEW HIGH RESOLUTION OPTICAL METHOD FOR OBTAINING THE TOPOGRAPHY OF FRACTURE SURFACES IN ROCKS

    Directory of Open Access Journals (Sweden)

    Steven Ogilvie

    2011-05-01

    Full Text Available Surface roughness plays a major role in the movement of fluids through fracture systems. Fracture surface profiling is necessary to tune the properties of the numerical fractures required in fluid flow modelling to those of real rock fractures. This is achieved using a variety of (i) mechanical and (ii) optical techniques. Stylus profilometry is a popular mechanical method that can measure surface heights with high precision, but it only gives good horizontal resolution in one direction on the fracture plane. This method is also expensive, and simultaneous coverage of the surface is not possible. Here, we describe the development of an optical method which images cast copies of rough rock fractures using in-house developed hardware and image analysis software (OptiProf™) that incorporates image improvement and noise suppression features. The technique images at high resolutions, 15-200 μm for imaged areas of 10 × 7.5 mm and 100 × 133 mm, respectively, and a similar vertical resolution (15 μm for a maximum topography of 4 mm). It is cheap and non-destructive, providing continuous coverage of the fracture surface. The fracture models are covered with dye, and the fluid thicknesses above the rough surfaces are converted into topographies using the Lambert-Beer law. The dye is calibrated using two devices with accurately known thicknesses: (i) a polycarbonate tile with wells of different depths and (ii) a wedge-shaped vial made from silica glass. The data from the two surfaces can be combined to provide an aperture map of the fracture for the scenario where the surfaces touch at a single point, or for any greater mean aperture. The topography and aperture maps are used to provide data for the generation of synthetic fractures, tuned to the original fracture and used in numerical flow modelling.
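
The Lambert-Beer inversion used to turn dye intensity into fluid thickness can be sketched as follows; the calibration constant is a hypothetical value, not one from the study:

```python
import math

def dye_thickness(I, I0, eps_c):
    """Invert the Lambert-Beer law I = I0 * 10**(-eps*c*l) for the
    dye-layer thickness l, given transmitted intensity I, incident
    intensity I0 and the calibrated product eps*c (molar absorptivity
    times concentration, here per mm of path length)."""
    return -math.log10(I / I0) / eps_c

# Hypothetical calibration from a step wedge: eps*c = 0.30 per mm,
# so 50% transmission corresponds to roughly 1 mm of dye.
eps_c = 0.30
l = dye_thickness(I=50.0, I0=100.0, eps_c=eps_c)
```

In practice the wedge-shaped vial and stepped tile described in the abstract supply the (intensity, thickness) pairs from which `eps_c` is fitted before the inversion is applied pixel by pixel.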

  20. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code and/or as a three-dimensional component that can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. It is shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach for the validation and improvement of the physical models is still needed.

  1. Numerical methods

    CERN Document Server

    Dahlquist, Germund

    1974-01-01

    ""Substantial, detailed and rigorous . . . readers for whom the book is intended are admirably served."" - MathSciNet (Mathematical Reviews on the Web), American Mathematical Society.Practical text strikes fine balance between students' requirements for theoretical treatment and needs of practitioners, with best methods for large- and small-scale computing. Prerequisites are minimal (calculus, linear algebra, and preferably some acquaintance with computer programming). Text includes many worked examples, problems, and an extensive bibliography.

  2. Ultra high resolution tomography

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, W.S.

    1994-11-15

    Recent work and results on ultra-high-resolution three-dimensional imaging with soft x-rays will be presented. This work is aimed at determining the microscopic three-dimensional structure of biological and material specimens. Three-dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.

  3. High resolution numerical investigation on the effect of convective instability on long term CO2 storage in saline aquifers

    International Nuclear Information System (INIS)

    Lu, C; Lichtner, P C

    2007-01-01

    CO2 sequestration (capture, separation, and long-term storage) in various geologic media, including depleted oil reservoirs, saline aquifers, and oceanic sediments, is being considered as a possible solution to reduce greenhouse gas emissions. Dissolution of supercritical CO2 in formation brines is considered an important storage mechanism to prevent possible leakage, so accurate prediction of the plume dissolution rate and migration is essential. Analytical analysis and numerical experiments have demonstrated that convective instability (Rayleigh instability) has a crucial effect on the dissolution behavior and subsequent mineralization reactions. Global stability analysis indicates that a certain grid resolution is needed to capture the features of density-driven fingering phenomena. For 3-D field-scale simulations, high resolution leads to large numbers of grid nodes, infeasible for a single workstation. In this study, we investigate the effects of convective instability on geologic sequestration of CO2 by taking advantage of parallel computing using the code PFLOTRAN, a massively parallel 3-D reservoir simulator for modeling subsurface multiphase, multicomponent reactive flow and transport based on continuum-scale mass and energy conservation equations. The onset, development and long-term fate of a supercritical CO2 plume are resolved with high-resolution numerical simulations to investigate the rate of plume dissolution caused by fingering phenomena.
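
The onset condition referred to above is commonly phrased in terms of the porous-medium Rayleigh number; the following sketch evaluates it for illustrative (assumed) aquifer parameters, using the classical 4π² linear-stability threshold:

```python
import math

def rayleigh_number(delta_rho, g, k, H, phi, mu, D):
    """Porous-medium Rayleigh number for density-driven convection of
    dissolved CO2: Ra = delta_rho * g * k * H / (phi * mu * D)."""
    return delta_rho * g * k * H / (phi * mu * D)

# Illustrative (hypothetical) saline-aquifer values, not from the abstract:
Ra = rayleigh_number(
    delta_rho=10.0,    # kg/m^3, density increase of CO2-saturated brine
    g=9.81,            # m/s^2
    k=1e-13,           # m^2, permeability (~100 mD)
    H=100.0,           # m, aquifer thickness
    phi=0.2,           # porosity
    mu=5e-4,           # Pa*s, brine viscosity
    D=2e-9,            # m^2/s, diffusivity of dissolved CO2
)
# Classical onset criterion for convection in a porous layer: Ra > 4*pi^2
convectively_unstable = Ra > 4 * math.pi ** 2
```

With these values Ra is in the thousands, far above the ~39.5 threshold, which is why fingering develops and why the fine grids discussed in the abstract are needed to resolve it.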

  4. Pyrosequencing™: A one-step method for high resolution HLA typing

    Directory of Open Access Journals (Sweden)

    Marincola Francesco M

    2003-11-01

    Full Text Available Abstract. While the use of high-resolution molecular typing in routine matching of human leukocyte antigens (HLA) is expected to improve unrelated donor selection and transplant outcome, the genetic complexity of HLA still makes the current methodology limited and laborious. Pyrosequencing™ is a gel-free, sequencing-by-synthesis method. In a Pyrosequencing reaction, nucleotide incorporation proceeds sequentially along each DNA template at a given nucleotide dispensation order (NDO) that is programmed into a pyrosequencer. Here we describe the design of an NDO that generates a pyrogram unique for any given allele or combination of alleles. We present examples of unique pyrograms generated from each of two heterozygous HLA templates, which would otherwise remain cis/trans ambiguous using the standard sequencing-based typing (SBT) method. In addition, we display representative data that demonstrate long reads and linear signal generation. These features are prerequisites for high-resolution typing and automated data analysis. In conclusion, Pyrosequencing is a one-step method for high-resolution DNA typing.

  5. A high-resolution method for the localization of proanthocyanidins in plant tissues

    Directory of Open Access Journals (Sweden)

    Panter Stephen

    2011-05-01

    Full Text Available Abstract. Background: Histochemical staining of plant tissues with 4-dimethylaminocinnamaldehyde (DMACA) or vanillin-HCl is widely used to characterize spatial patterns of proanthocyanidin accumulation in plant tissues. These methods are limited in their ability to allow high-resolution imaging of proanthocyanidin deposits. Results: Tissue embedding techniques were used in combination with DMACA staining to analyze the accumulation of proanthocyanidins in Lotus corniculatus (L.) and Trifolium repens (L.) tissues. Embedding of plant tissues in LR White or paraffin matrices, with or without DMACA staining, preserved the physical integrity of the plant tissues, allowing high-resolution imaging that facilitated cell-specific localization of proanthocyanidins. A brown coloration was seen in proanthocyanidin-producing cells when plant tissues were embedded without DMACA staining; this was likely due to non-enzymatic oxidation of proanthocyanidins and the formation of colored semiquinones and quinones. Conclusions: This paper presents a simple, high-resolution method for the analysis of proanthocyanidin accumulation in organs, tissues and cells of two plant species with different patterns of proanthocyanidin accumulation, namely Lotus corniculatus (birdsfoot trefoil) and Trifolium repens (white clover). This technique was used to characterize cell type-specific patterns of proanthocyanidin accumulation in white clover flowers at different stages of development.

  6. Polycrystalline magma behaviour in dykes: Insights from high-resolution numerical models

    Science.gov (United States)

    Yamato, Philippe; Duretz, Thibault; Tartèse, Romain; May, Dave

    2013-04-01

    The presence of a crystalline load in magmas modifies their effective rheology and thus their flow behaviour. In dykes, for instance, the presence of crystals denser than the melt reduces the ascent velocity and modifies the shape of the velocity profile from a Newtonian Poiseuille flow to a Bingham-type flow. Nevertheless, several issues remain poorly understood and need to be quantified: (1) What are the mechanisms controlling crystal segregation during magma ascent in dykes? (2) How does crystal transport within a melt depend on crystal concentration, geometry, size and density? (3) Do crystals evolve in isolation from each other or as clusters? (4) What is the influence of melt inertia within the system? In this study, we present numerical models following the setup previously used in Yamato et al. (2012). Our model setup simulates an effective pressure gradient between the base and the top of a channel (representing a dyke) by pushing a rigid piston, perforated by a hole, into a magmatic mush composed of crystals and melt. The initial resolution of the models (401 × 1551 nodes) was doubled in order to ensure that the smallest crystalline fractions are sufficiently well resolved. Results show that the melt phase can be squeezed out of a crystal-rich magma when it is subjected to a pressure gradient within a given range, and that clustering of crystals may be an important parameter controlling their behaviour. This demonstrates that crystal-melt segregation in dykes during magma ascent constitutes a viable mechanism for the magmatic differentiation of residual melts. These results also explain how isolated crystal clusters and melt pockets with different chemistry can be formed. In addition, we discuss the impact of taking inertia into account in our models. Reference: Yamato, P., Tartèse, R., Duretz, T., May, D.A., 2012. Numerical modelling of magma transport in dykes. Tectonophysics 526-529, 97-109.
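
    The contrast between the Newtonian Poiseuille profile and the Bingham-type profile mentioned in this record can be sketched with the analytical solution for pressure-driven Bingham flow in a plane channel. The Python/NumPy example below uses made-up parameter values and is not taken from the models in this record.

    ```python
    import numpy as np

    def bingham_profile(y, h, G, mu, tau_y):
        """Velocity profile of pressure-driven Bingham flow in a plane channel
        of half-width h: pressure gradient G, plastic viscosity mu, yield
        stress tau_y. A rigid plug occupies |y| < y_p = tau_y / G."""
        y_p = min(tau_y / G, h)          # half-width of the unyielded plug
        ya = np.clip(np.abs(y), y_p, h)  # inside the plug, velocity = u(y_p)
        return G / (2.0 * mu) * (h**2 - ya**2) - tau_y / mu * (h - ya)

    y = np.linspace(-1.0, 1.0, 201)
    u_newtonian = bingham_profile(y, 1.0, 10.0, 1.0, 0.0)  # tau_y = 0: parabola
    u_bingham = bingham_profile(y, 1.0, 10.0, 1.0, 4.0)    # plug for |y| < 0.4
    ```

    With a nonzero yield stress, the profile flattens into a constant-velocity plug around the channel axis and the peak velocity drops, which is the slowed, blunted ascent described above.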

  7. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

    Highlights: • Using high-resolution spatial schemes in solving two-phase flow problems. • Fully implicit time integration schemes. • Jacobian-free Newton–Krylov method. • Analytical solution for the two-phase water faucet problem. - Abstract: The majority of existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce the numerical errors that arise from operator-splitting time integration. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection problem with phase appearance/disappearance, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and thereby improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
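
    A minimal sketch of the JFNK mechanics described in this record, applied to a toy 1-D reaction-diffusion problem rather than the paper's two-phase equations: Newton's method in which each linear solve is a matrix-free GMRES, with Jacobian-vector products approximated by finite differences of the residual, so the Jacobian matrix is never formed.

    ```python
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def residual(u):
        # Discretized -u'' + u^3 = 1 on (0, 1) with u(0) = u(1) = 0
        # (a toy stand-in for the nonlinear two-phase system)
        n = u.size
        h = 1.0 / (n + 1)
        lap = np.empty_like(u)
        lap[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2
        lap[0] = (u[1] - 2.0 * u[0]) / h**2
        lap[-1] = (u[-2] - 2.0 * u[-1]) / h**2
        return -lap + u**3 - 1.0

    def jfnk(F, u0, tol=1e-10, max_newton=20):
        """Newton's method with matrix-free GMRES: the product J(u) v is
        approximated by a finite difference of F, so J is never assembled."""
        u = u0.copy()
        for _ in range(max_newton):
            r = F(u)
            if np.linalg.norm(r) < tol:
                break
            eps = 1e-7 * (1.0 + np.linalg.norm(u))
            J = LinearOperator((u.size, u.size),
                               matvec=lambda v: (F(u + eps * v) - r) / eps)
            du, _ = gmres(J, -r)  # inexact Krylov solve of J du = -r
            u = u + du
        return u

    u = jfnk(residual, np.zeros(50))
    ```

    The same structure carries over to the two-phase system: only a residual evaluation is needed per Krylov iteration, which is what makes the approach attractive for large implicit discretizations.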

  8. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    Science.gov (United States)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and by near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales at which cold-region hydrological processes operate, to assess their spatial variability in complex terrain and to quantify uncertainties by comparison to field observations. In this research, three high-resolution numerical weather prediction models, namely the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models, are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering the high mountains and foothills of the Canadian Rockies was selected to assess and compare high-resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR, with its very low computational cost, was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show fair performance in simulating wind and humidity fields. The representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high-resolution cold-regions hydrological prediction in complex terrain, which is a key factor in estimating water security in western Canada.

  9. High-resolution numerical modeling of meteorological and hydrological conditions during May 2014 floods in Serbia

    Science.gov (United States)

    Vujadinovic, Mirjam; Vukovic, Ana; Cvetkovic, Bojan; Pejanovic, Goran; Nickovic, Slobodan; Djurdjevic, Vladimir; Rajkovic, Borivoj; Djordjevic, Marija

    2015-04-01

    In May 2014, the western Balkan region was affected by catastrophic floods in Serbia, Bosnia and Herzegovina, and eastern parts of Croatia. Observed precipitation amounts were extremely high, at many stations the largest ever recorded. In the period from 12 to 18 May, most of Serbia received between 50 and 100 mm of rainfall, while the western parts of the country, which were affected the most, had over 200 mm of rainfall, locally even more than 300 mm. This very intense precipitation came when the soil was already saturated after a very wet period during the second half of April and the beginning of May, when most of Serbia received between 120 and 170 mm of rainfall. New abundant precipitation on already saturated soil increased surface and underground water flow and caused floods, soil erosion and landslides. High water levels, most of them record-breaking, were measured on the Sava, Drina, Dunav, Kolubara, Ljig, Ub, Toplica, Tamnava, Jadar, Zapadna Morava, Velika Morava, Mlava and Pek rivers. Overall, two cities and 17 municipalities were severely affected by the floods; 32,000 people were evacuated from their homes, while 51 died. Material damage to infrastructure, the power system, crops, livestock and houses is estimated at more than 2 billion euros. Although the operational numerical weather forecast gave generally good precipitation predictions, flood forecasting in this case was done mainly through expert judgment rather than by relying on dynamic hydrological modeling. We applied an integrated atmospheric-hydrologic modelling system to some of the most impacted catchments in order to simulate the hydrological response in a timely manner and examine its potential as a flood warning system. The system is based on the Non-hydrostatic Multiscale Model NMMB, a numerical weather prediction model that can be used on a broad range of spatial and temporal scales. Its non-hydrostatic module allows high horizontal resolution and resolving cloud systems as well as large

  10. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high-quality, low-cost topographic data has never been easier, owing to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, capturing millimetre resolution and accuracy at close range, or covering large areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided-river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal coverage, combining high-resolution time series with long-term observation, significantly improved the calibration routines and refined the calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  11. Developing Local Scale, High Resolution, Data to Interface with Numerical Hurricane Models

    Science.gov (United States)

    Witkop, R.; Becker, A.

    2017-12-01

    In 2017, the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) developed hurricane models that specify wind speed, inundation, and erosion around Rhode Island with enough precision to incorporate impacts on individual facilities. At the same time, URI's Marine Affairs Visualization Lab (MAVL) developed a way to realistically visualize these impacts in 3-D. Since climate change visualizations and water resource simulations have been shown to promote resiliency action (Sheppard, 2015) and to increase credibility (White et al., 2010) when local knowledge is incorporated, URI's hurricane models and visualizations may also more effectively enable hurricane resilience actions if they include Facility Manager (FM) and Emergency Manager (EM) perceived hurricane impacts. This study determines how FMs and EMs perceive their assets as vulnerable to quantifiable hurricane-related forces at the individual facility scale, while exploring methods to elicit this information from FMs and EMs in a format usable for incorporation into URI GSO's hurricane models.

  12. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li Jianbao; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-05-15

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.

  13. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    International Nuclear Information System (INIS)

    Su Xiaoxing; Li Jianbao; Wang Yuesheng

    2010-01-01

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.
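
    The Yule-Walker step underlying this postprocessing can be sketched as follows: fit an autoregressive (AR) model to the time series by solving the autocorrelation (Toeplitz) system, then evaluate the AR power spectrum, whose sharp peaks estimate the dominant frequencies from far fewer samples than an FFT periodogram would need. The NumPy sketch below uses a synthetic AR(2) series in place of an FDTD signal and is illustrative only.

    ```python
    import numpy as np

    def yule_walker(x, order):
        """Fit an AR(order) model to x by solving the Yule-Walker equations
        built from biased autocorrelation estimates."""
        n = len(x)
        x = x - x.mean()
        r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(order + 1)])
        R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
        a = np.linalg.solve(R, r[1:])      # AR coefficients
        sigma2 = r[0] - np.dot(a, r[1:])   # driving-noise variance
        return a, sigma2

    def ar_psd(a, sigma2, freqs):
        # AR power spectrum: sigma2 / |1 - sum_k a_k e^{-2*pi*i*f*k}|^2
        k = np.arange(1, len(a) + 1)
        H = 1.0 - (a * np.exp(-2j * np.pi * np.outer(freqs, k))).sum(axis=1)
        return sigma2 / np.abs(H) ** 2

    # Synthetic AR(2) series standing in for an FDTD time signal
    rng = np.random.default_rng(0)
    e = rng.standard_normal(20000)
    x = np.zeros(20000)
    for t in range(2, 20000):
        x[t] = 0.75 * x[t - 1] - 0.5 * x[t - 2] + e[t]
    a, s2 = yule_walker(x, 2)
    ```

    The fitted coefficients reproduce the generating recursion, and the spectral peak sits near the resonance of the AR(2) poles.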

  14. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    Science.gov (United States)

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

    Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation of commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new-generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
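
    The idea of combining an IHS-type substitution with FFT filtering can be sketched as below, under two simplifying assumptions that are mine, not the authors': a "fast IHS" where intensity is the band mean, and an ideal frequency-domain low-pass filter. The new intensity keeps its own low frequencies (preserving spectral content) and takes its high frequencies from the panchromatic band (injecting spatial detail).

    ```python
    import numpy as np

    def lowpass_fft(img, cutoff):
        # Ideal low-pass filter in the frequency domain; cutoff in cycles/pixel
        F = np.fft.fft2(img)
        fy = np.fft.fftfreq(img.shape[0])[:, None]
        fx = np.fft.fftfreq(img.shape[1])[None, :]
        return np.real(np.fft.ifft2(F * ((fy**2 + fx**2) <= cutoff**2)))

    def fft_ihs_fuse(rgb, pan, cutoff=0.1):
        """Fast-IHS fusion with FFT filtering: intensity keeps its low
        frequencies and takes its high frequencies from the pan band."""
        intensity = rgb.mean(axis=2)
        i_new = lowpass_fft(intensity, cutoff) + (pan - lowpass_fft(pan, cutoff))
        return rgb + (i_new - intensity)[:, :, None]

    rng = np.random.default_rng(4)
    rgb = rng.random((64, 64, 3))  # stand-in multispectral image
    pan = rgb.mean(axis=2)         # stand-in panchromatic band
    fused = fft_ihs_fuse(rgb, pan)
    ```

    A useful sanity check of the design: when the pan band carries no detail beyond the intensity itself, the fusion leaves the multispectral image unchanged, i.e. no color distortion is introduced in that limit.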

  15. Potential of high resolution protein mapping as a method of monitoring the human immune system

    International Nuclear Information System (INIS)

    Anderson, N.L.; Anderson, N.G.

    1980-01-01

    Immunology traditionally deals with complex cellular systems and heterogeneous mixtures of effector molecules (primarily antibodies). Some sense has emerged from this chaos through the use of functional assays. Such an approach, however, naturally leaves a great deal undiscovered, since the assays are simple and the assayed objects are complex. In this chapter, some experimental approaches to immunological problems are described using high-resolution two-dimensional electrophoresis, a method that can resolve thousands of proteins and can thus begin to treat immunological entities at their appropriate level of complexity. In addition, the possible application of this work to the problem of monitoring events in the individual human immune system is discussed.

  16. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

    We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region of both images by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by the simple linear weighted fusion method or another method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
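
    The RANSAC matching step can be illustrated with a translation-only motion model on synthetic point matches. This is a deliberate simplification: a real mosaicker like the one in this record would typically estimate an affine or projective transform from the SIFT matches, but the hypothesize/score/refit loop is the same.

    ```python
    import numpy as np

    def ransac_translation(pts_ref, pts_mos, n_iter=200, thresh=2.0, seed=0):
        """RANSAC with a translation model: hypothesize the shift from a
        single putative match, count inliers, then refit on the best set."""
        rng = np.random.default_rng(seed)
        best_inliers = np.zeros(len(pts_ref), dtype=bool)
        for _ in range(n_iter):
            i = rng.integers(len(pts_ref))       # minimal sample: 1 match
            t = pts_ref[i] - pts_mos[i]
            resid = np.linalg.norm(pts_mos + t - pts_ref, axis=1)
            inliers = resid < thresh
            if inliers.sum() > best_inliers.sum():
                best_inliers = inliers
        # Least-squares refit on the consensus set
        t = (pts_ref[best_inliers] - pts_mos[best_inliers]).mean(axis=0)
        return t, best_inliers

    # Synthetic matches: 50 inliers shifted by (10, -4) plus 12 outliers
    rng = np.random.default_rng(1)
    ref_in = rng.uniform(0.0, 100.0, (50, 2))
    mos_in = ref_in - np.array([10.0, -4.0]) + 0.3 * rng.standard_normal((50, 2))
    pts_ref = np.vstack([ref_in, rng.uniform(0.0, 100.0, (12, 2))])
    pts_mos = np.vstack([mos_in, rng.uniform(0.0, 100.0, (12, 2))])
    t_est, inliers = ransac_translation(pts_ref, pts_mos)
    ```

    The refit over the consensus set averages out the localization noise of the individual feature matches while the outliers are excluded entirely.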

  17. Use of a New High Resolution Melting Method for Genotyping Pathogenic Leptospira spp.

    Directory of Open Access Journals (Sweden)

    Florence Naze

    Full Text Available Leptospirosis is a worldwide zoonosis that is endemic in tropical areas, such as Reunion Island. The species Leptospira interrogans is the primary agent of human infections, but other pathogenic species, such as L. kirschneri and L. borgpetersenii, are also associated with human leptospirosis. In this study, a melting curve analysis of the products that were amplified with the primer pairs lfb1 F/R and G1/G2 facilitated an accurate species classification of Leptospira reference strains. Next, we combined an unsupervised high resolution melting (HRM) method with a new statistical approach, using primers that amplify two variable-number tandem-repeat (VNTR) loci, for typing at the subspecies level. The HRM analysis, which was performed with ScreenClust software, enabled the identification of genotypes at the serovar level with high resolution power (Hunter-Gaston index 0.984). This method was also applied to Leptospira DNA from blood samples that were obtained on Reunion Island after 1998. We were able to identify a unique genotype that is identical to that of the L. interrogans serovars Copenhageni and Icterohaemorrhagiae, suggesting that this genotype is the major cause of leptospirosis on Reunion Island. Our simple, rapid, and robust genotyping method enables the identification of Leptospira strains at the species and subspecies levels and supports the direct genotyping of Leptospira in biological samples without requiring cultures.

  18. A method for geological hazard extraction using high-resolution remote sensing

    International Nuclear Information System (INIS)

    Wang, Q J; Chen, Y; Bi, J T; Lin, Q Z; Li, M X

    2014-01-01

    Taking Yingxiu, the epicentre of the Wenchuan earthquake, as the study area, a method for geological disaster extraction using high-resolution remote sensing imagery was proposed in this study. A high-resolution Digital Elevation Model (DEM) was used to create mask imagery to remove interfering factors such as buildings and water at low altitudes. Then, the mask imagery was diced into several small parts to reduce the inconsistency of the large image, and these parts were used as the sources to be classified. After that, vector conversion was performed on the classified imagery in ArcGIS. Finally, to ensure accuracy, other interfering factors, such as buildings at high altitudes, bare land, and land covered by sparse vegetation, were removed manually. Because the method can extract geological hazards in a short time, it is of great importance for decision-makers and rescuers who need to know the degree of damage in the disaster area, especially within 72 hours after an earthquake. Therefore, the method will play an important role in decision making, rescue, and disaster response planning.

  19. Extraction Method for Earthquake-Collapsed Building Information Based on High-Resolution Remote Sensing

    International Nuclear Information System (INIS)

    Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing

    2014-01-01

    At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise and efficient information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two kinds of rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on the collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably rapid computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and solves the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing.

  20. High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods

    Science.gov (United States)

    Yoon, Yeo-Sun; Amin, Moeness G.

    2008-04-01

    Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using a limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, are directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages: it improves the signal-to-noise ratio (SNR), and it can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data. It compares the performance of beamspace MUSIC and the Capon beamformer. The experimental data were collected at the test facility in the Radar Imaging Laboratory, Villanova University.
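
    A minimal element-space MUSIC sketch on a uniform linear array is shown below; the beamspace variant described in this record additionally projects the snapshots through a beamforming matrix before the eigendecomposition, which this sketch omits. Array geometry, source angles and SNR are made-up illustration values, not the paper's TWI setup.

    ```python
    import numpy as np

    def steering(m, theta):
        # Steering vectors of an m-element half-wavelength ULA; theta in radians
        return np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(theta)))

    def music_spectrum(X, n_src, grid):
        """MUSIC pseudospectrum: peaks where the steering vector is
        orthogonal to the noise subspace of the sample covariance."""
        R = X @ X.conj().T / X.shape[1]       # sample covariance matrix
        _, V = np.linalg.eigh(R)              # eigenvalues in ascending order
        En = V[:, : X.shape[0] - n_src]       # noise-subspace eigenvectors
        A = steering(X.shape[0], grid)
        return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)

    # Two unit-power sources at -20 and 15 degrees, 8 elements, 200 snapshots
    rng = np.random.default_rng(1)
    m, N = 8, 200
    angles = np.deg2rad(np.array([-20.0, 15.0]))
    S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
    noise = 0.05 * (rng.standard_normal((m, N)) + 1j * rng.standard_normal((m, N)))
    X = steering(m, angles) @ S + noise
    grid = np.deg2rad(np.linspace(-90.0, 90.0, 721))
    P = music_spectrum(X, 2, grid)
    ```

    The two pseudospectrum peaks land at the source bearings; in the paper's setting the same eigendecomposition is applied to beamformed data rather than raw element snapshots.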

  1. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2018-03-01

    Full Text Available Considering the classification of high spatial resolution remote sensing imagery, this paper presents a novel classification method for such imagery using deep neural networks. Deep learning methods, such as a fully convolutional network (FCN model, achieve state-of-the-art performance in natural image semantic segmentation when provided with large-scale datasets and respective labels. To use data efficiently in the training stage, we first pre-segment training images and their labels into small patches as supplements of training data using graph-based segmentation and the selective search method. Subsequently, FCN with atrous convolution is used to perform pixel-wise classification. In the testing stage, post-processing with fully connected conditional random fields (CRFs is used to refine results. Extensive experiments based on the Vaihingen dataset demonstrate that our method performs better than the reference state-of-the-art networks when applied to high-resolution remote sensing imagery classification.

  2. A high-resolution neutron spectra unfolding method using the Genetic Algorithm technique

    CERN Document Server

    Mukherjee, B

    2002-01-01

    The Bonner sphere spectrometers (BSS) are commonly used to determine the neutron spectra within various nuclear facilities. Sophisticated mathematical tools are used to unfold the neutron energy distribution from the output data of the BSS. This paper highlights a novel high-resolution neutron spectrum-unfolding method using the Genetic Algorithm (GA) technique. The GA imitates the biological evolution process prevailing in nature to solve complex optimisation problems. The GA method was utilised to evaluate the neutron energy distribution, average energy, fluence and equivalent dose rates at important work places of a DIDO-class research reactor and a high-energy superconducting heavy-ion cyclotron. The spectrometer was calibrated with a ²⁴¹Am/Be (α,n) standard neutron source. The results of the GA method agreed satisfactorily with the results obtained using the well-known BUNKI neutron spectrum unfolding code.
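
    The GA search can be illustrated on a toy unfolding problem: given a detector response matrix R and measured readings M, evolve non-negative candidate spectra φ to minimize ||Rφ − M||. The response matrix and true spectrum below are invented for illustration; they are not BSS response functions, and the operator choices (tournament selection, arithmetic crossover, Gaussian mutation, elitism) are generic GA components, not necessarily those of the paper.

    ```python
    import numpy as np

    def ga_unfold(R, M, pop=60, gens=300, sigma=0.05, seed=0):
        """Genetic-algorithm unfolding: evolve candidate spectra to
        minimize the residual ||R @ phi - M|| with phi >= 0."""
        rng = np.random.default_rng(seed)
        n = R.shape[1]

        def fitness(cands):
            return -np.linalg.norm(cands @ R.T - M, axis=1)

        P = rng.uniform(0.0, 2.0, (pop, n))  # random initial population
        f = fitness(P)
        for _ in range(gens):
            i, j = rng.integers(pop, size=(2, pop))
            parents = np.where((f[i] > f[j])[:, None], P[i], P[j])  # tournaments
            w = rng.uniform(size=(pop, 1))
            children = w * parents + (1.0 - w) * np.roll(parents, 1, axis=0)
            children += sigma * rng.standard_normal((pop, n))       # mutation
            children = np.clip(children, 0.0, None)                 # phi >= 0
            children[0] = P[np.argmax(f)]                           # elitism
            P, f = children, fitness(children)
        return P[np.argmax(f)]

    # Illustrative 5-detector response matrix and 3-bin "true" spectrum
    R = np.array([[1.0, 0.2, 0.1], [0.3, 1.0, 0.2], [0.1, 0.4, 1.0],
                  [0.5, 0.5, 0.1], [0.2, 0.3, 0.8]])
    phi_true = np.array([0.8, 0.5, 1.2])
    M = R @ phi_true
    phi = ga_unfold(R, M)
    ```

    Elitism guarantees the best residual never worsens between generations, which makes this stochastic search usable even on ill-conditioned unfolding problems.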

  3. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented on personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement in high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy, combined with dynamic data management, is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm, with the complete initial guess of the deformation vector accurately predicted from previously computed calculation points. Since only a limited number of slices of interest in the reference and deformed volume images, rather than the whole volumes, are required in memory, the DVC calculation can be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
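
    The correlation kernel at the core of DVC can be illustrated by recovering the integer-voxel displacement of a single subvolume with zero-normalized cross-correlation (ZNCC). The method in this record goes much further, adding reliability-guided propagation and subvoxel registration via the inverse-compositional Gauss-Newton algorithm, all of which this brute-force sketch omits.

    ```python
    import numpy as np

    def zncc(a, b):
        # Zero-normalized cross-correlation of two equally sized subvolumes
        a = a - a.mean()
        b = b - b.mean()
        return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

    def track_subvolume(ref, dfm, center, half=4, search=4):
        """Integer-voxel displacement of the subvolume centered at `center`,
        found by exhaustive ZNCC search over a (2*search+1)^3 window."""
        sub = ref[tuple(slice(c - half, c + half + 1) for c in center)]
        best_score, best_d = -2.0, (0, 0, 0)
        for dz in range(-search, search + 1):
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    d = (dz, dy, dx)
                    sl = tuple(slice(c + s - half, c + s + half + 1)
                               for c, s in zip(center, d))
                    score = zncc(sub, dfm[sl])
                    if score > best_score:
                        best_score, best_d = score, d
        return best_d, best_score

    rng = np.random.default_rng(2)
    ref = rng.standard_normal((32, 32, 32))
    dfm = np.roll(ref, shift=(2, -1, 3), axis=(0, 1, 2))  # rigid shift
    d, score = track_subvolume(ref, dfm, (16, 16, 16))
    ```

    In a full DVC code this integer search only seeds the subvoxel Gauss-Newton refinement, and neighboring points inherit the result as their initial guess instead of repeating the search.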

  4. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents methods of calibrating a high-resolution digital camera based on different configurations, comprising stereo and convergent setups. Both are performed in laboratory and in field calibration. Laboratory calibration is based on a 3D test field in which a calibration plate of dimension 0.4 m × 0.4 m, with a grid of targets at different heights, is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over a 9 m × 9 m area. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the internal digital camera parameters, such as focal length, principal point and others, remain the same or vary. In the laboratory, a scale bar was placed in the test field for scaling the images, and approximate coordinates were used for the calibration process. A similar method was utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For the field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations, and the accuracy of the results was evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results, and the best method of calibration depends on the type of application. Finally, for most applications the digital camera is calibrated on site; hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  5. Multi-group transport methods for high-resolution neutron activation analysis

    International Nuclear Information System (INIS)

    Burns, K. A.; Smith, L. E.; Gesh, C. J.; Shaver, M. W.

    2009-01-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for the nondestructive determination of the elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of multi-group deterministic methods for the simulation of neutron activation problems. Central to this work is the development of a method for generating multi-group neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions, so that the key signatures in neutron activation analysis (i.e., the characteristic line energies) are preserved. The mechanics of the cross-section preparation method are described and contrasted with standard neutron-gamma cross-section sets. These custom cross-sections are then applied to several benchmark problems. Multi-group results for neutron and photon flux are compared to MCNP results. Finally, calculated responses of high-resolution spectrometers are compared. Preliminary findings show promising results when compared to MCNP. A detailed discussion of the potential benefits and shortcomings of the multi-group-based approach, in terms of accuracy and computational efficiency, is provided. (authors)
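
    A central step in any multi-group cross-section preparation is the flux-weighted group collapse, which preserves reaction rates within each coarse group. The sketch below shows the standard collapse formula on invented numbers; it does not reproduce the authors' separate treatment of discrete-line photon emissions.

    ```python
    import numpy as np

    def collapse(sigma_fine, flux_fine, edges):
        """Collapse fine-group cross sections to coarse groups using the
        flux-weighted average sigma_G = sum_g(sigma_g*phi_g) / sum_g(phi_g),
        which preserves the reaction rate in every coarse group."""
        sigma_c, flux_c = [], []
        for lo, hi in zip(edges[:-1], edges[1:]):
            phi = flux_fine[lo:hi].sum()
            sigma_c.append(np.dot(sigma_fine[lo:hi], flux_fine[lo:hi]) / phi)
            flux_c.append(phi)
        return np.array(sigma_c), np.array(flux_c)

    rng = np.random.default_rng(3)
    sigma_fine = rng.uniform(0.1, 10.0, 12)  # 12 fine groups (illustrative)
    flux_fine = rng.uniform(0.1, 1.0, 12)    # fine-group weighting flux
    sigma_c, flux_c = collapse(sigma_fine, flux_fine, edges=[0, 4, 8, 12])
    ```

    Reaction-rate preservation under this weighting is exactly the property that lets a coarse multi-group calculation reproduce the total interaction rates of the fine-group model, provided the weighting flux resembles the true spectrum.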

  6. High resolution imaging of vadose zone transport using crosswell radar and seismic methods

    International Nuclear Information System (INIS)

    Majer, Ernest L.; Williams, Kenneth H.; Peterson, John E.; Daley, Thomas E.

    2001-01-01

    The summary and conclusions are that, overall, the radar and seismic results were excellent. At the time the experiments were designed we did not know how well these two methods could penetrate or resolve the moisture content and structure. It appears that the radar could easily span 5, even 10 meters between boreholes at 200 MHz, and even farther (up to 20 to 40 m) at 50 MHz. The seismic results indicate that, at several hundred hertz, propagation over 20 to 30 meters with high resolution is possible. One of the most important results, however, is that the seismic and radar methods are complementary in their property estimation: the radar is primarily sensitive to changes in moisture content, while the seismic is primarily sensitive to porosity. Used in a time-lapse sense, the radar can show moisture content changes at high resolution, with the seismic showing high-resolution lithology. The significant results for each method are: Radar: (1) Delineated geological layers 0.25 to 3.5 meters thick with 0.25 m resolution; (2) Delineated moisture movement and content with 0.25 m resolution; (3) Compared favorably with neutron probe measurements; and (4) Penetration up to 30 m. Radar results indicate that the transport of the river water is different from that of the heavier and more viscous sodium thiosulfate. It appears that the heavier fluids are not mixing readily with the in-situ fluids and that transport may be influenced by them. Seismic: (1) Delineated lithology at 0.25 m resolution; (2) Penetration over 20 meters, with a possibility of up to 30 or more meters; and (3) Mapped porosity and density differences of the sediments. Overall, the seismic data map the porosity and density distribution. The results are consistent with the flow field mapped by the radar: there is a change in flow properties at the 10 to 11 meter depth in the flow cell. There also appears to be breakthrough, judging from the radar data, with the denser sodium thiosulfate finally

  7. Visual quantification of diffuse emphysema with Sakai's method and high-resolution chest CT

    International Nuclear Information System (INIS)

    Feuerstein, I.M.; McElvaney, N.G.; Simon, T.R.; Hubbard, R.C.; Crystal, R.G.

    1990-01-01

    This paper determines the accuracy and efficacy of visual quantitation of a diffuse form of pulmonary emphysema with high-resolution CT (HRCT). Twenty-five adult patients with symptomatic emphysema due to α1-antitrypsin deficiency prospectively underwent HRCT with 1.5-mm sections, a high-spatial-resolution algorithm, and targeted reconstruction. Photography was performed with narrow lung windows to accentuate diffuse emphysema. Emphysema was then scored using a modification of Sakai's extent and severity scoring method. The scans were all scored by the same blinded observer. Pulmonary function testing (PFT), including diffusing capacity measurement, was performed in all patients. Results were correlated statistically using regression analysis.

  8. Reduced material model for closed cell metal foam infiltrated with phase change material based on high resolution numerical studies

    International Nuclear Information System (INIS)

    Ohsenbrügge, Christoph; Marth, Wieland; Navarro y de Sosa, Iñaki; Drossel, Welf-Guntram; Voigt, Axel

    2016-01-01

    Highlights: • Closed cell metal foam sandwich structures were investigated. • High resolution numerical studies were conducted using CT scan data. • A reduced model for use in commercial FE software reduces needed degrees of freedom. • Thermal inertia is increased by a factor of 4 to 5 in PCM-filled structures. • The reduced material model was verified using experimental data. - Abstract: The thermal behaviour of closed cell metal foam infiltrated with paraffin wax as latent heat storage for application in high precision tool machines was examined. Aluminium foam sandwiches with metallically bound cover layers were prepared in a powder metallurgical process and cross-sectional images of the structures were generated with X-ray computed tomography. Based on the image data, a three-dimensional, highly detailed model was derived and prepared for simulation with the adaptive FE-library AMDiS. The pores were assumed to be filled with paraffin wax. The thermal conductivity and the transient thermal behaviour in the phase-change region were investigated. Based on the results from the highly detailed simulations, a reduced model for use in commercial FE-software (ANSYS) was derived. It incorporates the properties of the matrix and the phase change material into a homogenized material. A sandwich structure with and without paraffin was investigated experimentally under constant thermal load. The results were used to verify the reduced material model in ANSYS.
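One common way to build such a homogenized material, sketched below with invented property values, is volume-weighted mixing of matrix and PCM properties, with the latent heat smeared over the melting interval as an "apparent" heat capacity of the PCM; the paper's actual reduced model may differ in detail:

```python
# Hypothetical homogenization sketch (illustrative numbers, not the paper's model).

def apparent_cp(T, cp_solid, cp_liquid, latent_heat, T_sol, T_liq):
    """Apparent specific heat of the PCM (J/kg·K): latent heat is
    released linearly between solidus T_sol and liquidus T_liq."""
    if T < T_sol:
        return cp_solid
    if T > T_liq:
        return cp_liquid
    return 0.5 * (cp_solid + cp_liquid) + latent_heat / (T_liq - T_sol)

def homogenized_cp(T, porosity, rho_m, cp_m, rho_pcm, cp_pcm_fn):
    """Mass-specific heat of the foam+PCM composite by volume averaging."""
    rho_mix = (1 - porosity) * rho_m + porosity * rho_pcm
    return ((1 - porosity) * rho_m * cp_m
            + porosity * rho_pcm * cp_pcm_fn(T)) / rho_mix

# Paraffin in aluminium foam (all values illustrative):
cp_pcm = lambda T: apparent_cp(T, 2000.0, 2200.0, 180e3, 55.0, 60.0)
cp_mid = homogenized_cp(57.5, 0.8, 2700.0, 900.0, 800.0, cp_pcm)
```

Inside the melting interval the apparent heat capacity dwarfs the sensible one, which is exactly the thermal-inertia boost the highlights report.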

  9. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

    The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem; however, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton–Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on two-phase flow problems in which phase appearance/disappearance phenomena arise, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. High-resolution spatial discretization and the second-order fully implicit method also demonstrated their capability to significantly reduce numerical errors.
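The core JFNK trick, never forming the Jacobian but only approximating its action on a vector by a residual finite difference, can be shown on a toy system. In production codes the inner linear solve is a Krylov method such as GMRES; in this 2-unknown sketch (not the authors' implementation) the Newton system is assembled from two matrix-free J·v probes and solved by Cramer's rule:

```python
# Toy Jacobian-free Newton solve:  J(u)·v ≈ (F(u + eps·v) - F(u)) / eps

def jfnk_solve(F, u, eps=1e-7, tol=1e-10, max_newton=20):
    """Newton's method for a 2-unknown residual F(u) = 0, using only
    matrix-free finite-difference probes of the Jacobian action."""
    n = len(u)
    for _ in range(max_newton):
        Fu = F(u)
        if max(abs(f) for f in Fu) < tol:
            break
        # Probe J's columns via J·e_j, never forming J analytically.
        cols = []
        for j in range(n):
            v = [eps if i == j else 0.0 for i in range(n)]
            Fv = F([ui + vi for ui, vi in zip(u, v)])
            cols.append([(a - b) / eps for a, b in zip(Fv, Fu)])
        # Solve the 2x2 Newton system J·du = -F by Cramer's rule.
        a, c = cols[0]
        b, d = cols[1]
        det = a * d - b * c
        du = [(-Fu[0] * d + Fu[1] * b) / det,
              (-Fu[1] * a + Fu[0] * c) / det]
        u = [ui + dui for ui, dui in zip(u, du)]
    return u

# Example: x^2 + y^2 = 4 and x*y = 1, root near (1.932, 0.518).
F = lambda u: [u[0] ** 2 + u[1] ** 2 - 4.0, u[0] * u[1] - 1.0]
root = jfnk_solve(F, [2.0, 1.0])
```

In a real code the probes feed GMRES directly (one J·v per Krylov iteration), which is what makes the approach scale to large discretized systems.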

  10. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads by spectral, texture, and linear features have certain limitations. Also, many methods need human intervention to obtain the road seeds (semi-automatic extraction), which makes them strongly human-dependent and inefficient. A road-extraction method that uses image segmentation based on the principle of local gray consistency and integrates shape features is proposed in this paper. Firstly, the image is segmented, and the linear and curved roads are obtained using several object shape features, rectifying methods that extract only linear roads. Secondly, road extraction is carried out based on region growing: the road seeds are automatically selected and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In experiments, images including roads with relatively uniform gray levels as well as poorly illuminated road surfaces were chosen, and the results show that the method of this study is promising.
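The region-growing step named above can be sketched generically (invented grid and tolerance, not the authors' code): a seed pixel absorbs 4-connected neighbours whose grey value stays within a tolerance of the running region mean, which is one simple reading of the "local gray consistency" principle:

```python
# Toy region growing on a grey-level grid stored as a list of rows.

def region_grow(img, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    grey value is within `tol` of the region's current mean."""
    region = {seed}
    total = float(img[seed[0]][seed[1]])   # running sum for the mean
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for q in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (q not in region and 0 <= q[0] < len(img)
                    and 0 <= q[1] < len(img[0])
                    and abs(img[q[0]][q[1]] - total / len(region)) <= tol):
                region.add(q)
                total += img[q[0]][q[1]]
                frontier.append(q)
    return region

# Bright road (values ~200) crossing a darker background (~50):
img = [[50, 200, 50],
       [60, 205, 55],
       [50, 198, 52]]
road = region_grow(img, (0, 1), tol=20)   # the bright column of road pixels
```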

  11. Method for local temperature measurement in a nanoreactor for in situ high-resolution electron microscopy.

    Science.gov (United States)

    Vendelbo, S B; Kooyman, P J; Creemer, J F; Morana, B; Mele, L; Dona, P; Nelissen, B J; Helveg, S

    2013-10-01

    In situ high-resolution transmission electron microscopy (TEM) of solids under reactive gas conditions can be facilitated by microelectromechanical system devices called nanoreactors. These nanoreactors are windowed cells containing nanoliter volumes of gas at ambient pressures and elevated temperatures. However, due to the high spatial confinement of the reaction environment, traditional methods for measuring process parameters, such as the local temperature, are difficult to apply. To address this issue, we devise an electron energy loss spectroscopy (EELS) method that probes the local temperature of the reaction volume under inspection by the electron beam. The local gas density, as measured using quantitative EELS, is combined with the inherent relation between gas density and temperature, as described by the ideal gas law, to obtain the local temperature. Using this method we determined the temperature gradient in a nanoreactor in situ, while the average, global temperature was monitored by a traditional measurement of the electrical resistivity of the heater. The local gas temperatures had a maximum of 56 °C deviation from the global heater values under the applied conditions. The local temperatures, obtained with the proposed method, are in good agreement with predictions from an analytical model. Copyright © 2013 Elsevier B.V. All rights reserved.
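The temperature retrieval reduces to a one-line application of the ideal gas law: at uniform pressure, gas density and absolute temperature are inversely proportional. A sketch with invented numbers (the paper's EELS densities and heater settings are not reproduced here):

```python
# At uniform pressure p = n k T, so  T_local / T_ref = rho_ref / rho_local.

def local_temperature(T_ref_K, rho_ref, rho_local):
    """Local gas temperature (K) from a reference temperature and the
    ratio of EELS-measured gas densities (any consistent units)."""
    return T_ref_K * (rho_ref / rho_local)

# A spot showing 10% lower gas density than a 500 °C (773.15 K) reference
# point is correspondingly hotter:
T = local_temperature(773.15, 1.0, 0.9)   # ≈ 859 K, i.e. ≈ 586 °C
```

This inverse-proportionality is all the "inherent relation between gas density and temperature" amounts to once the pressure is uniform across the nanoreactor window.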

  12. Analytical method by high resolution liquid chromatography for the determination of carbamazepine in human plasma

    International Nuclear Information System (INIS)

    Jimenez Aleman, Narda M; Calero Carbonell, Jorge E; Padron Yaquis, Alejandro S; Izquierdo Lozano, Julio C

    2007-01-01

    One of the requirements for conducting bioavailability and bioequivalence studies is to have validated analytical methodologies for working with samples in biological fluids. A high-resolution liquid chromatography method was developed for the determination of carbamazepine in human plasma. A mixture of sodium hydrogen phosphate buffer and acetonitrile (65:35), adjusted to pH 3.3 with phosphoric acid, was used as the mobile phase, at a flow rate of 1.2 mL/min and with ultraviolet detection at 210 nm. Propylparaben was used as an internal standard. According to the established regulations for the validation of methods in biological fluids, the following parameters were studied: stability of the samples, linearity, specificity, precision, accuracy, and limits of detection and quantification. The method proved to be specific and sensitive, with detection and quantification limits of 0.9 and 1.0 ng, respectively. The method was linear, precise, and accurate in the concentration range of 1.07 to 12.67 μg/mL. The mean recovery was not statistically different from 100.0%. The analyte remained stable in the proposed biological matrix over the period studied. The methodology described in this work is applied in our case to a study evaluating the bioavailability and bioequivalence of a Cuban formulation of carbamazepine in healthy volunteers. (Author)

  13. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical relativity is an alternative to analytical methods for obtaining solutions of Einstein's equations. Numerical methods are particularly useful for studying the generation of gravitational radiation by potentially strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques, and some of the difficulties involved in numerical relativity. (Auth.)

  14. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSIs) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserving relevant information and saliency, as well as rapidity, given the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non Local Bayes (NLB), which we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its favorable trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
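The patch-grouping principle behind NLB can be illustrated with a deliberately simplified 1-D toy (the real algorithm works on 2-D patch groups with full Gaussian priors and a second Wiener-filtering pass; this sketch is not the published method): patches with similar content are grouped, and each patch is estimated as the mean of its group.

```python
# Simplified non-local patch averaging on a 1-D signal.

def nl_denoise(signal, patch=3, k=4):
    """Replace each overlapping patch by the mean of its k most similar
    patches, then average overlapping estimates back into a signal."""
    n = len(signal)
    patches = [signal[i:i + patch] for i in range(n - patch + 1)]
    dist = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    out, weight = [0.0] * n, [0] * n
    for i, p in enumerate(patches):
        # Group the k most similar patches (including p itself) ...
        group = sorted(patches, key=lambda q: dist(p, q))[:k]
        # ... and estimate the clean patch as their element-wise mean.
        est = [sum(col) / k for col in zip(*group)]
        for j, v in enumerate(est):
            out[i + j] += v
            weight[i + j] += 1
    return [o / w for o, w in zip(out, weight)]
```

NLB refines this idea by modeling each group with a Gaussian prior and applying a Bayesian (Wiener-like) filter instead of a plain mean, which is where its accuracy advantage comes from.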

  15. A simple and rapid method for high-resolution visualization of single-ion tracks

    Directory of Open Access Journals (Sweden)

    Masaaki Omichi

    2014-11-01

    Full Text Available Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N,N′-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  16. A system and method for online high-resolution mapping of gastric slow-wave activity.

    Science.gov (United States)

    Bull, Simon H; O'Grady, Gregory; Du, Peng; Cheng, Leo K

    2014-11-01

    High-resolution (HR) mapping employs multielectrode arrays to achieve spatially detailed analyses of propagating bioelectrical events. A major current limitation is that spatial analyses must be performed "off-line" (after experiments), compromising timely recording feedback and restricting experimental interventions. These problems motivated the development of a system and method for "online" HR mapping. HR gastric recordings were acquired and streamed to a novel software client. Algorithms were devised to filter data, identify slow-wave events, eliminate corrupt channels, and cluster activation events. A graphical user interface animated data and plotted electrograms and maps. Results were compared against off-line methods. The online system analyzed 256-channel serosal recordings with no unexpected system terminations and a mean delay of 18 s. Activation time marking sensitivity was 0.92; the positive predictive value was 0.93. Abnormal slow-wave patterns including conduction blocks, ectopic pacemaking, and colliding wave fronts were reliably identified. Compared to traditional analysis methods, online mapping produced comparable results, with equivalent coverage of 90% of electrodes, average RMS errors of less than 1 s, and a correlation coefficient of 0.99 for activation maps. Accurate slow-wave mapping was achieved in near real-time, enabling monitoring of recording quality and experimental interventions targeted to dysrhythmic onset. This work also advances the translation of HR mapping toward real-time clinical application.
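One ingredient of such a pipeline, streaming detection of activation times, can be sketched minimally (hypothetical thresholds and sampling rate; the published system adds filtering, corrupt-channel rejection, and spatial clustering on top of this): an activation is marked where the signal's downstroke slope crosses a threshold, at most once per refractory window.

```python
# Threshold-crossing slow-wave detector for a single streamed channel.

def detect_activations(samples, fs_hz, slope_thresh, refractory_s=5.0):
    """Return activation times (s) where the first-difference slope drops
    below -slope_thresh, at most once per refractory window."""
    events, last = [], -refractory_s
    for i in range(1, len(samples)):
        slope = (samples[i] - samples[i - 1]) * fs_hz   # amplitude per second
        t = i / fs_hz
        if slope < -slope_thresh and t - last >= refractory_s:
            events.append(t)
            last = t
    return events

# Synthetic trace at 10 Hz: two sharp negative deflections ~10 s apart.
trace = [0.0] * 200
trace[50] = 1.0    # upstroke at sample 50, sharp fall at sample 51
trace[150] = 1.0
events = detect_activations(trace, fs_hz=10.0, slope_thresh=5.0)  # -> [5.1, 15.1]
```

The refractory window plays the same role as the clustering step in the paper: it prevents one slow wave from being marked twice.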

  17. A simple and rapid method for high-resolution visualization of single-ion tracks

    Energy Technology Data Exchange (ETDEWEB)

    Omichi, Masaaki [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017 (Japan); Choi, Wookjin; Sakamaki, Daisuke; Seki, Shu, E-mail: seki@chem.eng.osaka-u.ac.jp [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Tsukuda, Satoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, Miyagi 980-8577 (Japan); Sugimoto, Masaki [Japan Atomic Energy Agency, Takasaki Advanced Radiation Research Institute, Gunma, Gunma 370-1292 (Japan)

    2014-11-15

    Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  18. Production of solar radiation bankable datasets from high-resolution solar irradiance derived with dynamical downscaling Numerical Weather prediction model

    Directory of Open Access Journals (Sweden)

    Yassine Charabi

    2016-11-01

    Full Text Available A bankable solar radiation database is required for the financial viability of a solar energy project. Accurate estimation of solar energy resources in a country is very important for proper siting, sizing, and life-cycle cost analysis of solar energy systems. During the last decade, important progress has been made in developing multiple solar irradiance databases (Global Horizontal Irradiance (GHI) and Direct Normal Irradiance (DNI)) using satellites of different resolutions and sophisticated models. This paper assesses the performance of high-resolution solar irradiance derived with a dynamical-downscaling Numerical Weather Prediction model against a GIS topographical solar radiation model, satellite data, and ground measurements, for the production of bankable solar radiation datasets. For this investigation, the Consortium for Small-scale Modeling (COSMO) NWP model is used for the dynamical downscaling of solar radiation. The obtained results increase confidence in the solar radiation database obtained from the dynamically downscaled NWP model. The mean bias of the dynamically downscaled NWP model is small, on the order of a few percent for GHI, and it could be ranked as a bankable dataset. Fortunately, these data are usually archived by the meteorological department and give a good idea of the hourly, monthly, and annual incident energy. Such short time-interval data are valuable in designing and operating a solar energy facility. The advantage of the NWP model is that it can be used for solar radiation forecasting, since it can estimate the weather conditions over the next 72–120 hours. This gives a reasonable estimate of the solar radiation that, in turn, can be used to forecast the electric power generation of a solar power plant.
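The "mean bias of a few percent" judgement rests on simple aggregate error metrics computed against ground measurements. A sketch with invented hourly GHI values of the two most commonly reported metrics, mean bias error (MBE) and root-mean-square error (RMSE), expressed as percentages of the mean observation:

```python
import math

def mbe_rmse_percent(modeled, observed):
    """MBE and RMSE of modeled vs. observed series, as % of the mean observation."""
    n = len(observed)
    mean_obs = sum(observed) / n
    mbe = sum(m - o for m, o in zip(modeled, observed)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(modeled, observed)) / n)
    return 100 * mbe / mean_obs, 100 * rmse / mean_obs

# Hourly GHI in W/m^2 (illustrative): a small positive model bias.
obs = [0, 120, 340, 560, 610, 480, 250, 60]
mod = [0, 130, 355, 570, 600, 500, 265, 70]
mbe_pct, rmse_pct = mbe_rmse_percent(mod, obs)   # ≈ 2.9% bias, ≈ 4.1% RMSE
```

A low relative MBE is the usual first screen for "bankability"; RMSE then captures scatter that bias alone hides.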

  19. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the ends of 60-meter wire booms, to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we discuss the implementation of a high-resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and data processing algorithms subsequently yielded excellent spatial and temporal resolutions. This method was repeated in a parametric study for various lengths, root tensions, and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, coulomb, and combinations of them. Our data analysis and dynamics models have shown that the intrinsic damping of the baseline boom is insufficient, forcing project management to explore mitigation strategies.
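The simplest of the regression models mentioned, the linear-viscous case, fits an exponential envelope A(t) = A0·exp(-t/τ) to the measured peak amplitudes; a least-squares fit on log A then yields the damping time constant τ. A sketch on synthetic, noise-free data (not NASA's processing chain):

```python
import math

def fit_decay(times, peaks):
    """Fit A(t) = A0 * exp(-t/tau) to peak amplitudes by linear
    least-squares on log(A); returns tau in the units of `times`."""
    ys = [math.log(p) for p in peaks]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
             / sum((t - tbar) ** 2 for t in times))
    return -1.0 / slope

# Synthetic angular-position peaks decaying with tau = 40 s:
ts = [0, 10, 20, 30, 40]
amps = [5.0 * math.exp(-t / 40.0) for t in ts]
tau = fit_decay(ts, amps)   # ≈ 40.0 for noise-free data
```

The hysteretic and coulomb models in the paper replace the exponential envelope with frequency-independent and linear-in-time decay laws, respectively, but the regression machinery is the same.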

  20. High resolution melting analysis: a rapid and accurate method to detect CALR mutations.

    Directory of Open Access Journals (Sweden)

    Cristina Bilbao-Sieyro

    Full Text Available The recent discovery of CALR mutations in essential thrombocythemia (ET) and primary myelofibrosis (PMF) patients without JAK2/MPL mutations has emerged as a relevant finding for the molecular diagnosis of these myeloproliferative neoplasms (MPNs). We tested the feasibility of high-resolution melting (HRM) as a screening method for rapid detection of CALR mutations. CALR was studied in wild-type JAK2/MPL patients, including 34 with ET, 21 with persistent thrombocytosis suggestive of MPN, and 98 with suspected secondary thrombocytosis. CALR mutation analysis was performed through HRM and Sanger sequencing. We compared clinical features of CALR-mutated versus 45 JAK2/MPL-mutated subjects in ET. Nineteen samples showed HRM patterns distinct from wild-type. Of them, 18 were mutations and one was a polymorphism, as confirmed by direct sequencing. CALR mutations were present in 44% of ET (15/34), 14% of persistent thrombocytosis suggestive of MPN (3/21), and none of the secondary thrombocytosis cases (0/98). Of the 18 mutants, 9 were 52 bp deletions, 8 were 5 bp insertions, and the other was a complex mutation with insertion/deletion. No mutations were found after sequencing analysis of 45 samples displaying wild-type HRM curves. The HRM technique was reproducible, no false positives or negatives were detected, and the limit of detection was 3%. This study establishes a sensitive, reliable, and rapid HRM method to screen for the presence of CALR mutations.
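Numerically, HRM screening boils down to comparing normalized melting curves against a wild-type reference and flagging samples whose difference curve exceeds a threshold. A toy sketch with invented fluorescence values (real instruments add temperature shifting and curve smoothing before this comparison):

```python
# Minimal HRM difference-curve screen.

def normalize(curve):
    """Rescale a fluorescence-vs-temperature curve to the [0, 1] range."""
    lo, hi = min(curve), max(curve)
    return [(v - lo) / (hi - lo) for v in curve]

def is_variant(sample_curve, wt_curve, threshold=0.05):
    """Flag a sample whose normalized difference from the wild-type
    reference exceeds `threshold` at any temperature point."""
    s, w = normalize(sample_curve), normalize(wt_curve)
    return max(abs(a - b) for a, b in zip(s, w)) > threshold

# Invented fluorescence readings over a rising temperature ramp:
wt  = [100, 95, 80, 40, 10, 5]   # wild-type melt profile
mut = [100, 90, 60, 25, 8, 5]    # earlier melting -> shifted curve
```

Flagged samples then go to Sanger sequencing for confirmation, mirroring the two-step workflow in the abstract.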

  1. Testing methods for using high-resolution satellite imagery to monitor polar bear abundance and distribution

    Science.gov (United States)

    LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas

    2015-01-01

    High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop methods for satellite imagery by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via a supervised spectral classification and image differencing to expedite image review. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing—or subtracting one image from another—correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density (CV in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
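The image-differencing step can be sketched on toy data (invented pixel values, not the authors' pipeline): subtracting a coregistered bear-free image from the target image and thresholding the absolute difference yields candidate locations for manual review.

```python
# Pixelwise change detection between two coregistered image chips.

def difference_candidates(img_now, img_before, threshold):
    """Return (row, col) positions where the absolute difference between
    two coregistered images exceeds `threshold`."""
    return [(r, c)
            for r, row in enumerate(img_now)
            for c, v in enumerate(row)
            if abs(v - img_before[r][c]) > threshold]

# 4x4 panchromatic chips; one bright object appears at (1, 2).
before = [[10, 11, 10, 9], [10, 10, 12, 10], [9, 10, 10, 11], [10, 9, 10, 10]]
now    = [[10, 12, 10, 9], [10, 10, 60, 10], [9, 11, 10, 11], [10, 9, 10, 10]]
hits = difference_candidates(now, before, threshold=20)   # -> [(1, 2)]
```

The small one-count fluctuations elsewhere fall below the threshold, while imperfect coregistration or scene change in real imagery is what produces the false positives the study reports.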

  3. A Multi-stage Method to Extract Road from High Resolution Satellite Image

    International Nuclear Information System (INIS)

    Zhijian, Huang; Zhang, Jinfang; Xu, Fanjiang

    2014-01-01

    Extracting road information from high-resolution satellite images is complex and can hardly be achieved by exploiting only one or two modules. This paper presents a multi-stage method consisting of automatic information extraction and semi-automatic post-processing. A Multi-scale Enhancement algorithm increases the contrast of human-made structures against the background. Statistical Region Merging segments the image into regions, whose skeletons are extracted and pruned according to geometric shape information. Given the start and end skeleton points, the shortest skeleton path is constructed as a road centre line. A Bidirectional Adaptive Smoothing technique smooths the road centre line and adjusts it to the right position. From the smoothed line and its average width, a buffer algorithm easily reconstructs the road region. As the final results show, the proposed method eliminates redundant non-road regions, repairs incomplete occlusions, jumps over complete occlusions, and preserves accurate road centre lines and neat road regions. During the whole process, only a few interactions are needed.
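The centre-line step can be illustrated on a toy grid (not the authors' implementation): after segmentation and skeletonization, the road centre line between the two user-set seed points is the shortest path through the skeleton, here found by breadth-first search on an 8-connected binary grid.

```python
from collections import deque

def shortest_skeleton_path(skel, start, end):
    """BFS shortest path between two pixels of a binary skeleton grid
    (list of rows of 0/1), using 8-connectivity; None if unreachable."""
    prev, queue = {start: None}, deque([start])
    while queue:
        p = queue.popleft()
        if p == end:
            break
        r, c = p
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                q = (r + dr, c + dc)
                if (q not in prev and 0 <= q[0] < len(skel)
                        and 0 <= q[1] < len(skel[0]) and skel[q[0]][q[1]]):
                    prev[q] = p
                    queue.append(q)
    if end not in prev:
        return None
    path, p = [], end
    while p is not None:          # walk the predecessor chain backwards
        path.append(p)
        p = prev[p]
    return path[::-1]

# A one-pixel-wide skeleton with a bend:
skel = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 1, 1]]
path = shortest_skeleton_path(skel, (1, 0), (2, 3))
```

Real skeletons are noisier, which is why the paper prunes spurs first and then smooths the resulting line.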

  4. Evaluation of a high resolution genotyping method for Chlamydia trachomatis using routine clinical samples.

    Directory of Open Access Journals (Sweden)

    Yibing Wang

    2011-02-01

    Full Text Available Genital chlamydia infection is the most commonly diagnosed sexually transmitted infection in the UK. C. trachomatis genital infections are usually caused by strains which fall into two pathovars: lymphogranuloma venereum (LGV) and the genitourinary genotypes D-K. Although these genotypes can be discriminated by outer membrane protein gene (ompA) sequencing or multi-locus sequence typing (MLST), neither protocol affords the high-resolution genotyping required for local epidemiology and accurate contact-tracing. We evaluated variable number tandem repeat (VNTR) analysis combined with ompA sequencing (now called multi-locus VNTR analysis and ompA, or "MLVA-ompA") to study local epidemiology in Southampton over a period of six months. One hundred and fifty-seven endocervical swabs that tested positive for C. trachomatis, from both the Southampton genitourinary medicine (GUM) clinic and local GP surgeries, were tested by COBAS TaqMan 48 (Roche) PCR for the presence of C. trachomatis. Samples testing positive by the commercial NAAT were genotyped, where possible, by the MLVA-ompA sequencing technique. Attempts were made to isolate C. trachomatis from all 157 samples in cell culture, and 68 (43%) were successfully recovered by repeatable passage in culture. Of the 157 samples, 93 (59%) were fully genotyped by MLVA-ompA. Only one mixed infection (E and D) in a single sample was confirmed. There were two distinct D genotypes for the ompA gene. The most frequent ompA genotypes were D, E, and F, comprising 20%, 41%, and 16% of the typeable samples, respectively. Within all genotypes we detected numerous MLVA sub-types. Among the common genotypes there are a significant number of defined MLVA sub-types, which may reflect particular background demographics, including age group, geography, high-risk sexual behavior, and sexual networks.

  5. Comparison of online and offline based merging methods for high resolution rainfall intensities

    Science.gov (United States)

    Shehu, Bora; Haberlandt, Uwe

    2016-04-01

    Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products have proven adequate for estimating rainfall intensities; however, their application in forecasting is limited, as they were developed for offline mode. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high-resolution rainfall data. Radar bias corrections based on mean fields and on quantile mapping are analyzed individually and are also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are considered as references to check the benefit of the merged products. The methods are applied to several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations at 5 min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of the radar data, the choice of merging method, and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way to the application of merging products in forecasting.
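Of the two bias corrections compared, quantile mapping is the easier to sketch: each radar intensity is replaced by the gauge value at the same empirical quantile. The version below (invented values) uses plain rank matching; operational schemes interpolate between quantiles and treat zero rainfall separately.

```python
import bisect

def quantile_map(value, radar_sample, gauge_sample):
    """Map a radar intensity onto the gauge distribution by matching
    empirical quantiles (nearest-rank variant)."""
    radar_sorted = sorted(radar_sample)
    gauge_sorted = sorted(gauge_sample)
    # Rank of `value` within the radar sample, clipped to valid indices ...
    rank = bisect.bisect_right(radar_sorted, value) - 1
    rank = max(0, min(rank, len(radar_sorted) - 1))
    # ... converted to a quantile, then read off the gauge quantile function.
    q = rank / (len(radar_sorted) - 1)
    idx = round(q * (len(gauge_sorted) - 1))
    return gauge_sorted[idx]

# 5-min intensities (mm): radar systematically underestimates the gauges.
radar  = [0.1, 0.4, 0.8, 1.5, 2.0, 3.2]
gauges = [0.2, 0.6, 1.1, 2.0, 2.9, 4.5]
corrected = quantile_map(1.5, radar, gauges)   # -> 2.0
```

A mean-field bias correction would instead scale every radar pixel by one ratio of totals; quantile mapping additionally corrects the shape of the intensity distribution, which matters most for the extremes studied here.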

  6. Liquid chromatography-high resolution mass spectrometric methods for the surveillance monitoring of cyanotoxins in freshwaters.

    Science.gov (United States)

    Bogialli, Sara; Bortolini, Claudio; Di Gangi, Iole Maria; Di Gregorio, Federica Nigro; Lucentini, Luca; Favaro, Gabriella; Pastore, Paolo

    2017-08-01

    Comprehensive risk management of human exposure to cyanotoxins, whose production is essentially unpredictable, requires reliable analytical tools for monitoring as many toxic algal metabolites as possible. Two analytical approaches based on an LC-QTOF system for target analysis and suspect screening of cyanotoxins in freshwater are presented. A database of 369 cyanobacterial metabolites was developed and used for retrospective data analysis based on high-resolution mass spectrometry (HRMS). HRMS fragmentation of the suspect cyanotoxin precursor ions was subsequently performed to correctly identify the specific variants. Alternatively, an automatic tandem HRMS analysis tailored for cyanotoxins was performed in a single chromatographic run, using the developed database as a preferred precursor-ion list. Twenty-five extracts of surface and drinking waters contaminated by cyanobacteria were processed. The identification of seven uncommon microcystins (M(O)R, MC-FR, MSer7-YR, D-Asp3MSer7-LR, MSer7-LR, dmAdda-LR and dmAdda-YR) and six anabaenopeptins (A, B, F, MM850, MM864, oscillamide Y) was reported. Several isobaric variants, fully separated by chromatography, were pointed out. The developed methods are proposed for use by environmental and health agencies to strengthen the surveillance monitoring of cyanotoxins in water. Copyright © 2017 Elsevier B.V. All rights reserved.
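
    The core of suspect screening against such a database is an exact-mass match: every measured precursor m/z is compared against the theoretical m/z of each database entry within a tolerance of a few ppm. A minimal sketch (the dictionary layout and function name are ours; real pipelines also use retention time and isotope patterns):

```python
def match_suspects(measured_mz, database, tol_ppm=5.0):
    """Suspect screening by exact mass: flag every measured m/z lying
    within tol_ppm of a database entry (compound name -> theoretical m/z)."""
    hits = []
    for mz in measured_mz:
        for name, ref in database.items():
            # Mass error in parts per million relative to the theoretical value
            if abs(mz - ref) / ref * 1e6 <= tol_ppm:
                hits.append((mz, name))
    return hits
```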

  7. The development of high-resolution spectroscopic methods and their use in atomic structure studies

    International Nuclear Information System (INIS)

    Poulsen, O.

    1984-01-01

    This thesis discusses work performed during the last nine years in the field of atomic spectroscopy. Several high-resolution techniques, ranging from quantum beats, level crossings and rf-laser double resonances to nonlinear field-atom interactions, have been employed. In particular, these methods have been adapted and developed to deal with fast accelerated atomic or ionic beams, allowing studies of problems in atomic-structure theory. Fine- and hyperfine-structure determinations in the He I and Li I isoelectronic sequences, in ⁵¹V I, and in ²³⁵U I, II have permitted a detailed comparison with ab initio calculations, demonstrating how the problems change when going towards heavier elements or higher ionization stages. The last part of the thesis is concerned with the fundamental question of obtaining very high optical resolution in the interaction between a fast accelerated atom or ion beam and a laser field, this problem being the core of the continuing development of atomic spectroscopy needed to challenge the more precise and sophisticated theories advanced. (Auth.)

  8. High-resolution X-ray crystal structure of bovine H-protein using the high-pressure cryocooling method

    International Nuclear Information System (INIS)

    Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi

    2013-01-01

    Using the high-pressure cryocooling method, the high-resolution X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. This is the first ultra-high-resolution structure obtained from a high-pressure cryocooled crystal. Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.

  9. High Resolution DNS of Turbulent Flows using an Adaptive, Finite Volume Method

    Science.gov (United States)

    Trebotich, David

    2014-11-01

    We present a new computational capability for high resolution simulation of incompressible viscous flows. Our approach is based on cut-cell methods where an irregular geometry such as a bluff body is intersected with a rectangular Cartesian grid, resulting in cut cells near the boundary. In the cut cells we use a conservative discretization based on a discrete form of the divergence theorem to approximate fluxes for elliptic and hyperbolic terms in the Navier-Stokes equations. Away from the boundary the method reduces to a finite difference method. The algorithm is implemented in the Chombo software framework which supports adaptive mesh refinement and massively parallel computations. The code is scalable to 200,000+ processor cores on DOE supercomputers, resulting in DNS studies at unprecedented scale and resolution. For flow past a cylinder in transition (Re = 300) we observe a number of secondary structures in the far wake in 2D where the wake is over 120 cylinder diameters in length. These are compared with the more regularized wake structures in 3D at the same scale. For flow past a sphere (Re = 600) we resolve an arrowhead structure in the velocity in the near wake. The effectiveness of AMR is further highlighted in a simulation of turbulent flow (Re = 6000) in the contraction of an oil well blowout preventer. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract Number DE-AC02-05-CH11231.
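
    The conservative discretization described above rests on a discrete divergence theorem: the divergence in a cell is the net flux through its faces divided by the cell volume. A minimal uniform-grid (uncut) version of that operator, with our own array conventions, illustrates the idea:

```python
import numpy as np

def divergence_2d(fx, fy, dx, dy):
    """Cell-centred discrete divergence from face fluxes on a uniform
    2-D grid: net flux through each cell's faces divided by cell size.
    fx: x-face fluxes, shape (nx+1, ny); fy: y-face fluxes, shape (nx, ny+1)."""
    return (fx[1:, :] - fx[:-1, :]) / dx + (fy[:, 1:] - fy[:, :-1]) / dy
```

    In a cut-cell method the same face-flux balance is kept, but the partial face areas and cell volumes of the cut cells replace dx and dy, which is what preserves discrete conservation at the irregular boundary.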

  10. Accuracy assessment of high resolution satellite imagery orientation by leave-one-out method

    Science.gov (United States)

    Brovelli, Maria Antonia; Crespi, Mattia; Fratarcangeli, Francesca; Giannone, Francesca; Realini, Eugenio

    Interest in high-resolution satellite imagery (HRSI) is spreading in several application fields, at both scientific and commercial levels. Fundamental and critical goals for the geometric use of this kind of imagery are their orientation and orthorectification, processes able to georeference the imagery and correct the geometric deformations they undergo during acquisition. In order to exploit the actual potential of orthorectified imagery in Geomatics applications, the definition of a methodology to assess the spatial accuracy achievable from oriented imagery is a crucial topic. In this paper we propose a new method for accuracy assessment based on Leave-One-Out Cross-Validation (LOOCV), a model validation method already applied in different fields such as machine learning, bioinformatics and, generally, any other field requiring an evaluation of the performance of a learning algorithm (e.g. geostatistics), but never before applied to HRSI orientation accuracy assessment. The proposed method exhibits interesting features which overcome the most significant drawbacks of the commonly used method (Hold-Out Validation — HOV), based on partitioning the known ground points into two sets: the first is used in the orientation-orthorectification model (GCPs — Ground Control Points) and the second is used to validate the model itself (CPs — Check Points). In fact, the HOV is generally not reliable and is not applicable when a low number of ground points is available. To test the proposed method we implemented a new routine that performs the LOOCV in the software SISAR, developed by the Geodesy and Geomatics Team at the Sapienza University of Rome to perform the rigorous orientation of HRSI; this routine was tested on some EROS-A and QuickBird images. Moreover, these images were also oriented using the widely recognized commercial software OrthoEngine v. 10 (included in the Geomatica suite by PCI), manually performing the LOOCV
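
    The LOOCV scheme described above is independent of the orientation model itself: each known ground point is withheld in turn, the model is estimated from the remaining points, and the prediction error at the withheld point is accumulated. A minimal sketch, with a least-squares plane standing in for the rigorous orientation model (all names are ours):

```python
import numpy as np

def loocv_rmse(points, values, fit, predict):
    """Leave-one-out cross-validation: withhold each known ground point
    in turn, fit the model on the remaining points, and record the
    prediction error at the withheld point. Returns the overall RMSE."""
    errors = []
    for i in range(len(points)):
        mask = np.arange(len(points)) != i
        model = fit(points[mask], values[mask])
        errors.append(predict(model, points[i:i + 1])[0] - values[i])
    return float(np.sqrt(np.mean(np.square(errors))))

def fit_plane(xy, z):
    """Least-squares plane z = a*x + b*y + c, a toy stand-in for the
    orientation-orthorectification model of the paper."""
    A = np.c_[xy, np.ones(len(xy))]
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef

def predict_plane(coef, xy):
    return np.c_[xy, np.ones(len(xy))] @ coef
```

    Unlike hold-out validation, every known point contributes once as a check point, which is why the approach remains usable when only a handful of ground points is available.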

  11. Improving the singles rate method for modeling accidental coincidences in high-resolution PET

    International Nuclear Information System (INIS)

    Oliver, Josep F; Rafecas, Magdalena

    2010-01-01

    Random coincidences ('randoms') are one of the main sources of image degradation in PET imaging. In order to correct for this effect, an accurate method to estimate the contribution of random events is necessary. This aspect becomes especially relevant for high-resolution PET scanners, where the highest image quality is sought and accurate quantitative analysis is undertaken. One common approach to estimate randoms is the so-called singles rate method (SR), widely used because of its good statistical properties. SR is based on the measurement of the singles rate in each detector element. However, recent studies suggest that SR systematically overestimates the correct random rate. This overestimation can be particularly marked for low energy thresholds, below 250 keV, used in some applications, and could entail significant image degradation. In this work, we investigate the performance of SR as a function of the activity, the geometry of the source and the energy acceptance window used. We also investigate the performance of an alternative method, which we call 'singles trues' (ST), that improves SR by properly modeling the presence of true coincidences in the sample. Nevertheless, in any real data acquisition the knowledge of which singles are members of a true coincidence is lost. Therefore, we propose an iterative method, STi, that provides an estimation based on ST but requires only the knowledge of measurable quantities: prompts and singles. Due to inter-crystal scatter, for wide energy windows ST only partially corrects SR overestimations. While SR deviations are in the range 86-300% (depending on the source geometry), the ST deviations are systematically smaller, in the range 4-60%. STi fails to reproduce the ST results, although for not too high activities the deviation with respect to ST is only a few percent. For conventional energy windows, i.e. those without inter-crystal scatter, the ST method corrects the SR overestimations, and deviations from
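
    The abstract does not spell out the SR formula; the standard textbook form estimates the accidental rate for a detector pair from the product of the two singles rates and the coincidence time window, which can be sketched as:

```python
from itertools import combinations

def randoms_singles_rate(singles, tau):
    """Conventional singles-rate (SR) estimate of accidentals for each
    detector pair: R_ij = 2 * tau * S_i * S_j, with tau the coincidence
    time window and S_i the singles rate (counts/s) of detector i."""
    return {(i, j): 2.0 * tau * si * sj
            for (i, si), (j, sj) in combinations(enumerate(singles), 2)}
```

    The ST refinement discussed in the paper corrects the S_i terms for the singles that actually belong to true coincidences, which is what plain SR over-counts.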

  12. Gold finger formation studied by high-resolution mass spectrometry and in silico methods

    NARCIS (Netherlands)

    Laskay, Ü.A.; Garino, C.; Tsybin, Y.O.; Salassa, L.; Casini, A.

    2015-01-01

    High-resolution mass spectrometry and quantum mechanics/molecular mechanics studies were employed for characterizing the formation of two gold finger (GF) domains from the reaction of zinc fingers (ZF) with gold complexes. The influence of both the gold oxidation state and the ZF coordination sphere

  13. Novel method of simultaneous multiple immunogold localization on resin sections in high resolution scanning electron microscopy

    Czech Academy of Sciences Publication Activity Database

    Nebesářová, Jana; Wandrol, P.; Vancová, Marie

    2016-01-01

    Roč. 12, č. 1 (2016), s. 105-517 ISSN 1549-9634 R&D Projects: GA TA ČR(CZ) TE01020118 Institutional support: RVO:60077344 Keywords : multiple immunolabeling * gold nanoparticles * high resolution SEM * STEM imaging * BSE imaging Subject RIV: EA - Cell Biology Impact factor: 5.720, year: 2016

  14. Comparison of four machine learning methods for object-oriented change detection in high-resolution satellite imagery

    Science.gov (United States)

    Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan

    2018-03-01

    High resolution image change detection is one of the key technologies of remote sensing application, and is of great significance for resource survey, environmental monitoring, fine agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the applicability of different machine learning methods to change detection. In order to compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy than RF, Adaboost, and DBN at small sample sizes for binary and from-to change detection. With the increase in the number of samples, RF has higher overall accuracy than Adaboost, SVM and DBN.
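
    A comparison of this kind reduces to training each candidate model on a labelled sample set and scoring overall accuracy against reference change labels. A toy sketch of that harness with a nearest-centroid stand-in classifier (the study itself used RF, SVM, DBN and Adaboost; all names below are ours):

```python
import numpy as np

def overall_accuracy(y_true, y_pred):
    """Fraction of objects whose change label is predicted correctly."""
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def nearest_centroid(train_X, train_y, test_X):
    """Toy stand-in classifier: assign each test sample to the class
    with the nearest feature-space centroid of the training samples."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]
```

    Each real model would slot in behind the same fit/predict interface, and overall accuracy would then be tracked as the training sample size grows, mirroring the comparison in the paper.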

  15. The new high resolution method of Godunov's type for 3D viscous flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yershov, S.V.; Rusanov, A.V. [Ukrainian National Academy of Sciences, Kharkov (Ukraine)]

    1996-12-31

    A numerical method is suggested for the calculation of 3D viscous compressible flows described by the thin-layer Reynolds-averaged Navier-Stokes equations. The method is based on Godunov's finite-difference scheme and uses the ENO reconstruction suggested by Harten to achieve uniformly high-order accuracy. Computational efficiency is provided by a simplified multigrid approach and an implicit step written in δ-form. Turbulent effects are simulated with the Baldwin-Lomax turbulence model. The application package FlowER has been developed to calculate 3D turbulent flows within complex-shape channels. Numerical results for the 3D flow around a cylinder and through complex-shaped channels show the accuracy and reliability of the suggested method. (author)
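
    The flavour of a Godunov-type scheme with high-order limited reconstruction can be conveyed in 1D for linear advection; the sketch below uses a minmod limiter rather than the ENO reconstruction of the paper, and all names are ours:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: picks the smaller-magnitude slope, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_step(u, c):
    """One step of a second-order TVD scheme for u_t + a u_x = 0 (a > 0)
    on a periodic grid: upwind Godunov flux plus a minmod-limited
    anti-diffusive correction. c = a*dt/dx is the CFL number (0 < c <= 1)."""
    du_minus = u - np.roll(u, 1)       # backward differences
    du_plus = np.roll(u, -1) - u       # forward differences
    slope = minmod(du_minus, du_plus)  # limited slope in each cell
    flux = u + 0.5 * (1.0 - c) * slope # flux at i+1/2 (a folded into c)
    return u - c * (flux - np.roll(flux, 1))
```

    The limiter reverts to first-order upwinding at discontinuities, suppressing the spurious oscillations of unlimited second-order schemes, which is the same design goal the ENO reconstruction serves.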

  16. The new high resolution method of Godunov's type for 3D viscous flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yershov, S V; Rusanov, A V [Ukrainian National Academy of Sciences, Kharkov (Ukraine)]

    1997-12-31

    A numerical method is suggested for the calculation of 3D viscous compressible flows described by the thin-layer Reynolds-averaged Navier-Stokes equations. The method is based on Godunov's finite-difference scheme and uses the ENO reconstruction suggested by Harten to achieve uniformly high-order accuracy. Computational efficiency is provided by a simplified multigrid approach and an implicit step written in δ-form. Turbulent effects are simulated with the Baldwin-Lomax turbulence model. The application package FlowER has been developed to calculate 3D turbulent flows within complex-shape channels. Numerical results for the 3D flow around a cylinder and through complex-shaped channels show the accuracy and reliability of the suggested method. (author)

  17. Rapid multiplex high resolution melting method to analyze inflammatory related SNPs in preterm birth

    Directory of Open Access Journals (Sweden)

    Pereyra Silvana

    2012-01-01

    Full Text Available Abstract Background Complex traits like cancer, diabetes, obesity or schizophrenia arise from an intricate interaction between genetic and environmental factors. Complex disorders often cluster in families without a clear-cut pattern of inheritance. Genome-wide association studies focus on the detection of tens or hundreds of individual markers contributing to complex diseases. In order to test whether a subset of single nucleotide polymorphisms (SNPs) from candidate genes is associated with a condition of interest in a particular individual or group of people, new techniques are needed. High-resolution melting (HRM) analysis is a new method in which polymerase chain reaction (PCR) and mutation scanning are carried out simultaneously in a closed tube, making the procedure fast, inexpensive and easy. Preterm birth (PTB) is considered a complex disease, in which genetic and environmental factors interact to bring about the delivery of a newborn before 37 weeks of gestation. It is accepted that inflammation plays an important role in pregnancy and PTB. Methods Here, we used real-time PCR followed by HRM analysis to simultaneously identify several gene variations involved in inflammatory pathways in preterm labor. SNPs from the TLR4, IL6, IL1 beta and IL12RB genes were analyzed in a case-control study. The results were confirmed either by sequencing or by PCR followed by restriction fragment length polymorphism analysis. Results We were able to simultaneously recognize the variations of the four genes with similar accuracy to other methods. In order to obtain non-overlapping melting temperatures, the key step in this strategy was primer design. Genotypic frequencies found for each SNP are in concordance with those previously described in similar populations. None of the studied SNPs were associated with PTB. Conclusions Several gene variations related to the same inflammatory pathway were screened through a new flexible, fast and inexpensive method with the purpose of analyzing
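
    The multiplexing strategy hinges on amplicons having well-separated melting temperatures; a common way to extract Tm from a raw melting curve is the peak of the negative derivative of fluorescence with respect to temperature. A minimal sketch on synthetic data (function names are ours):

```python
import numpy as np

def melting_temperature(temps, fluor):
    """Estimate Tm from a melting curve as the temperature at which
    -dF/dT peaks, i.e. where fluorescence drops fastest."""
    temps = np.asarray(temps, dtype=float)
    fluor = np.asarray(fluor, dtype=float)
    dfdt = np.gradient(fluor, temps)
    return float(temps[np.argmax(-dfdt)])
```

    Primer design then only has to guarantee that the Tm peaks of the multiplexed amplicons do not overlap, which is the key step the authors emphasize.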

  18. A new method for high-resolution characterization of hydraulic conductivity

    Science.gov (United States)

    Liu, Gaisheng; Butler, J.J.; Bohling, Geoffrey C.; Reboulet, Ed; Knobbe, Steve; Hyndman, D.W.

    2009-01-01

    A new probe has been developed for high-resolution characterization of hydraulic conductivity (K) in shallow unconsolidated formations. The probe was recently applied at the Macrodispersion Experiment (MADE) site in Mississippi where K was rapidly characterized at a resolution as fine as 0.015 m, which has not previously been possible. Eleven profiles were obtained with K varying up to 7 orders of magnitude in individual profiles. Currently, high-resolution (0.015-m) profiling has an upper K limit of 10 m/d; a lower-resolution (~0.4-m) mode is used in more permeable zones pending modifications. The probe presents a new means to help address unresolved issues of solute transport in heterogeneous systems. Copyright 2009 by the American Geophysical Union.

  19. Numerical methods using Matlab

    CERN Document Server

    Lindfield, George

    2012-01-01

    Numerical Methods using MATLAB, 3e, is an extensive reference offering hundreds of useful and important numerical algorithms that can be implemented in MATLAB, with graphical interpretation to help researchers analyze particular outcomes. Many worked examples are given together with exercises and solutions to illustrate how numerical methods can be used to study problems that have applications in the biosciences, chaos, optimization, engineering and science across the board.

  20. A high resolution interferometric method to measure local swelling due to CO2 exposure in coal and shale

    NARCIS (Netherlands)

    Pluymakers, A.; Liu, J.; Kohler, F.; Renard, F.; Dysthe, D.

    2018-01-01

    We present an experimental method to study time-dependent, CO2-induced, local topography changes in mm-sized composite samples, plus results showing heterogeneous swelling of coal and shale on the nano- to micrometer scale. These results were obtained using high resolution interferometry

  1. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    This book features contributions focused on significant aspects of current numerical methods and computational mathematics. Its chapters present advanced methods and variations on known techniques that can solve difficult scientific problems efficiently.

  2. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Roč. 26, č. 4 (2017), s. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords : surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  3. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the
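
    The Dirichlet and Neumann conditions mentioned above are typically enforced at an immersed boundary through ghost-cell values. A minimal 1-D sketch (not the WRF implementation; names are ours) for a wall located halfway between a ghost cell and the first interior cell:

```python
def ghost_values(u_interior, bc_type, bc_value=0.0):
    """Ghost-cell value enforcing a boundary condition at a wall lying
    halfway between the ghost cell and the first interior cell.
    Dirichlet: linear extrapolation through the wall value (e.g. no-slip).
    Neumann (zero gradient): mirror the interior value."""
    if bc_type == "dirichlet":
        return 2.0 * bc_value - u_interior
    if bc_type == "neumann":
        return u_interior
    raise ValueError(bc_type)
```

    Averaging the ghost and first interior values then recovers the prescribed wall value, which is the property the no-slip and zero-gradient validation cases above exercise.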

  4. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  5. Fluorescence photooxidation with eosin: a method for high resolution immunolocalization and in situ hybridization detection for light and electron microscopy

    Science.gov (United States)

    1994-01-01

    A simple method is described for high-resolution light and electron microscopic immunolocalization of proteins in cells and tissues by immunofluorescence and subsequent photooxidation of diaminobenzidine tetrahydrochloride into an insoluble osmiophilic polymer. By using eosin as the fluorescent marker, a substantial improvement in sensitivity is achieved in the photooxidation process over other conventional fluorescent compounds. The technique allows for precise correlative immunolocalization studies on the same sample using fluorescence, transmitted light and electron microscopy. Furthermore, because eosin is smaller in size than other conventional markers, this method results in improved penetration of labeling reagents compared to gold or enzyme based procedures. The improved penetration allows for three-dimensional immunolocalization using high voltage electron microscopy. Fluorescence photooxidation can also be used for high resolution light and electron microscopic localization of specific nucleic acid sequences by in situ hybridization utilizing biotinylated probes followed by an eosin-streptavidin conjugate. PMID:7519623

  6. High-resolution X-ray crystal structure of bovine H-protein using the high-pressure cryocooling method.

    Science.gov (United States)

    Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi

    2013-11-01

    Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.

  7. High-resolution pyrimidine- and ribose-specific 4D HCCH-COSY spectra of RNA using the filter diagonalization method

    International Nuclear Information System (INIS)

    Douglas, Justin T.; Latham, Michael P.; Armstrong, Geoffrey S.; Bendiak, Brad; Pardi, Arthur

    2008-01-01

    The NMR spectra of nucleic acids suffer from severe peak overlap, which complicates resonance assignments. 4D NMR experiments can overcome much of the degeneracy in 2D and 3D spectra; however, the linear increase in acquisition time with each new dimension makes it impractical to acquire high-resolution 4D spectra using standard Fourier transform (FT) techniques. The filter diagonalization method (FDM) is a numerically efficient algorithm that fits the entire multi-dimensional time-domain data to a set of multi-dimensional oscillators. Selective 4D constant-time HCCH-COSY experiments that correlate the H5-C5-C6-H6 base spin systems of pyrimidines or the H1'-C1'-C2'-H2' spin systems of ribose sugars were acquired on the ¹³C-labeled iron responsive element (IRE) RNA. FDM processing of these 4D experiments, recorded with only 8 complex points in the indirect dimensions, gave superior spectral resolution compared with FT-processed spectra. Practical aspects of obtaining optimal FDM-processed spectra are discussed. The results here demonstrate that FDM processing can be used to obtain high-resolution 4D spectra on a medium-sized RNA in a fraction of the acquisition time normally required for high-resolution, high-dimensional spectra.
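
    FDM fits the time-domain signal to a sum of damped oscillators instead of Fourier-transforming it, which is why a handful of complex points can suffice. The closely related matrix-pencil approach conveys the idea in 1-D: mode frequencies fall out of a small eigenvalue problem built from shifted Hankel matrices of the signal. An illustrative sketch (not the FDM algorithm of the paper; names are ours):

```python
import numpy as np

def pencil_frequencies(signal, n_modes, dt):
    """Recover mode frequencies from a short 1-D time signal modelled as
    a sum of complex exponentials, via the matrix-pencil eigenvalue
    problem (a cousin of the filter diagonalization method)."""
    N = len(signal)
    L = N // 2
    # Hankel matrices built from the signal, shifted by one sample
    Y0 = np.array([signal[i:i + L] for i in range(N - L)])
    Y1 = np.array([signal[i + 1:i + 1 + L] for i in range(N - L)])
    # Solve Y1 v = z Y0 v in a least-squares sense via the pseudoinverse;
    # the nonzero eigenvalues are z_k = exp(2*pi*i*f_k*dt)
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)
    z = eigvals[np.argsort(-np.abs(eigvals))[:n_modes]]
    return np.sort(np.angle(z) / (2 * np.pi * dt))
```

    As with FDM, the parametric fit resolves frequencies far beyond the Fourier limit of so few points, at the cost of assuming the oscillator model holds.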

  8. Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: a high-resolution numerical approach

    KAUST Repository

    Chertock, A.; Fellner, K.; Kurganov, A.; Lorz, A.; Markowich, P. A.

    2012-01-01

    examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface

  9. High resolution electromagnetic methods and low frequency dispersion of rock conductivity

    Directory of Open Access Journals (Sweden)

    V. V. Ageev

    1999-06-01

    Full Text Available The influence of frequency dispersion of conductivity (induced polarization) of rocks on the results of electromagnetic (EM) sounding was studied on the basis of calculation of the electric field of a vertical magnetic dipole above horizontally layered polarizable sections. Frequency dispersion was approximated by the Debye formula. A polarizable homogeneous halfspace and two-, three- and multilayered sections were analyzed in the frequency and time domains. The calculations for different values of chargeability and time constants of polarization were performed. In the far zone of a source, the IP of rocks led to quasi-wave phenomena. They produced rapid fluctuations of frequency and transient sounding curves (interference phenomena, multireflections in polarizable layers). In the case of transient sounding in the near zone of a source, quasistatic distortions prevailed, caused by the counter electromotive force arising in polarizable layers, which may lead to strong changes in transient curves. In some cases quasi-wave and quasistatic phenomena made EM sounding curves non-interpretable in the class of quasistationary curves over non-dispersive sections. On the other hand, they could increase the resolution and depth of investigation of EM sounding. This was confirmed by experience with "high-resolution" electroprospecting in Russia. The problem of interpretation of EM sounding data in polarizable sections is non-unique. To achieve uniqueness it is probably necessary to complement them by soundings of other types.
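
    The Debye relaxation mentioned above is commonly written, in the Pelton-style resistivity form used for induced polarization, with a chargeability m and a time constant tau; this particular parameterization is our assumption, since the abstract does not spell it out:

```python
import numpy as np

def debye_resistivity(freq, rho0, m, tau):
    """Complex resistivity of a polarizable medium, Debye model
    (Pelton relaxation with exponent c = 1):
        rho(w) = rho0 * (1 - m * (1 - 1 / (1 + i*w*tau)))
    Limits: rho0 at w -> 0, and rho0 * (1 - m) at w -> infinity."""
    w = 2 * np.pi * np.asarray(freq, dtype=float)
    return rho0 * (1 - m * (1 - 1 / (1 + 1j * w * tau)))
```

    Sweeping freq over the sounding band shows the dispersion responsible for the quasi-wave and quasistatic distortions the study describes.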

  10. High resolution electromagnetic methods and low frequency dispersion of rock conductivity

    International Nuclear Information System (INIS)

    Svetov, B.S.; Ageev, V.V.

    1999-01-01

    The influence of frequency dispersion of conductivity (induced polarization) of rocks on the results of electromagnetic (EM) sounding was studied on the basis of calculation of the electric field of a vertical magnetic dipole above horizontally layered polarizable sections. Frequency dispersion was approximated by the Debye formula. A polarizable homogeneous halfspace and two-, three- and multilayered sections were analyzed in the frequency and time domains. The calculations for different values of chargeability and time constants of polarization were performed. In the far zone of a source, the IP of rocks led to quasi-wave phenomena. They produced rapid fluctuations of frequency and transient sounding curves (interference phenomena, multireflections in polarizable layers). In the case of transient sounding in the near zone of a source, quasistatic distortions prevailed, caused by the counter electromotive force arising in polarizable layers, which may lead to strong changes in transient curves. In some cases quasi-wave and quasistatic phenomena made EM sounding curves non-interpretable in the class of quasistationary curves over non-dispersive sections. On the other hand, they could increase the resolution and depth of investigation of EM sounding. This was confirmed by experience with 'high-resolution' electroprospecting in Russia. The problem of interpretation of EM sounding data in polarizable sections is non-unique. To achieve uniqueness it is probably necessary to complement them by soundings of other types.

  11. High resolution electromagnetic methods and low frequency dispersion of rock conductivity

    Energy Technology Data Exchange (ETDEWEB)

    Svetov, B.S.; Ageev, V.V. [Geoelectromagnetic Research Institute, Institute of Physics of the Earth, RAS, Moscow (Russian Federation)

    1999-08-01

    The influence of frequency dispersion of conductivity (induced polarization) of rocks on the results of electromagnetic (EM) sounding was studied on the basis of calculations of the electric field of a vertical magnetic dipole above horizontally layered polarizable sections. Frequency dispersion was approximated by the Debye formula. A polarizable homogeneous half-space and two-, three- and multilayered sections were analyzed in the frequency and time domains. The calculations were performed for different values of chargeability and time constants of polarization. In the far zone of a source, the IP of rocks led to quasi-wave phenomena. They produced rapid fluctuations of frequency and transient sounding curves (interference phenomena, multireflections in polarizable layers). In the case of transient sounding in the near zone of a source, quasistatic distortions prevailed, caused by the counter electromotive force arising in polarizable layers, which may lead to strong changes in transient curves. In some cases quasi-wave and quasistatic phenomena made EM sounding curves non-interpretable in the class of quasistationary curves over non-dispersive sections. On the other hand, they could increase the resolution and depth of investigation of EM sounding. This was confirmed by experience with 'high-resolution' electroprospecting in Russia. The problem of interpretation of EM sounding data in polarizable sections is non-unique. To achieve uniqueness it is probably necessary to complement them by soundings of another type.
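
    Both records above approximate the conductivity dispersion with the Debye formula. As a minimal illustration (not the authors' code; the parameter values below are arbitrary), the Debye complex-resistivity model of a polarizable medium can be evaluated as:

    ```python
    import numpy as np

    def debye_resistivity(omega, rho0, m, tau):
        """Complex resistivity of a polarizable medium under the Debye model
        (Cole-Cole with frequency exponent c = 1):
            rho(omega) = rho0 * (1 - m * (1 - 1 / (1 + 1j*omega*tau)))
        rho0: DC resistivity [ohm*m], m: chargeability (0..1),
        tau: time constant of polarization [s]."""
        return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + 1j * omega * tau)))

    # Evaluate over a broad band, as in a frequency sounding.
    omega = np.logspace(-2, 6, 9)
    rho = debye_resistivity(omega, rho0=100.0, m=0.2, tau=0.01)
    ```

    At zero frequency the medium shows its DC resistivity rho0; in the high-frequency limit the resistivity drops to rho0*(1 - m), which is what makes the chargeability m observable in EM sounding.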

  12. Effect of fluid elasticity on the numerical stability of high-resolution schemes for high shearing contraction flows using OpenFOAM

    Directory of Open Access Journals (Sweden)

    T. Chourushi

    2017-01-01

    Full Text Available Viscoelastic fluids, due to their non-linear nature, play an important role in process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though they look simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are considered far from accurate, as the first-order upwind differencing scheme (UDS) is often employed to improve the stability of the algorithm. To mitigate this effect, some works have been reported in the past where high-resolution schemes (HRS) were employed and the Deborah number was varied. However, these works are limited to creeping flows and do not detail any information on the numerical stability of HRS. Hence, this article presents a numerical study of high shearing contraction flows, where the stability of HRS is addressed in reference to fluid elasticity. Results suggest that all HRS show some order of undue oscillations in flow variable profiles, measured along vertical lines placed near the contraction region in the upstream section of the domain, at elasticity number E≈5. Furthermore, by varying E, a clear relationship between the numerical stability of HRS and E was obtained, which states that the order of undue oscillations in flow variable profiles is directly proportional to E.
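
    High-resolution schemes of the kind assessed here combine an upwind flux with a limited higher-order correction. A minimal numpy sketch of one common limiter (Van Leer) and a limited face reconstruction for 1D advection — an illustrative textbook form, not the OpenFOAM implementation:

    ```python
    import numpy as np

    def van_leer(r):
        """Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|).
        r is the ratio of consecutive solution gradients; phi = 0 reverts
        to 1st-order upwind, phi = 1 recovers a 2nd-order scheme."""
        return (r + np.abs(r)) / (1.0 + np.abs(r))

    def limited_face_values(u):
        """MUSCL-type reconstruction of face states u_{i+1/2} for a 1D
        field u on a uniform grid with positive advection speed:
            u_face = u[i] + 0.5 * phi(r_i) * (u[i+1] - u[i]),
            r_i    = (u[i] - u[i-1]) / (u[i+1] - u[i])."""
        du_dn = u[2:] - u[1:-1]                       # downwind difference
        du_up = u[1:-1] - u[:-2]                      # upwind difference
        safe = np.where(du_dn == 0, 1.0, du_dn)       # avoid divide-by-zero
        r = np.where(du_dn == 0, 0.0, du_up / safe)
        return u[1:-1] + 0.5 * van_leer(r) * du_dn
    ```

    On smooth monotone data r = 1 and phi = 1, recovering the 2nd-order midpoint value; at a local extremum r < 0 and phi = 0, falling back to 1st-order upwinding, which is what suppresses the spurious oscillations of unlimited 2nd-order schemes.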

  13. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  14. Abstracts of International Conference on Experimental and Computing Methods in High Resolution Diffraction Applied for Structure Characterization of Modern Materials - HREDAMM

    International Nuclear Information System (INIS)

    2004-01-01

    The conference addressed all aspects of high resolution diffraction. The topics of the meeting included advanced experimental diffraction methods and computer data analysis for characterization of modern materials, as well as progress and new achievements in high resolution diffraction (X-ray, electron, neutron). Applications of these methods for characterization of modern materials are widely presented among the invited, oral and poster contributions.

  15. A Residential Area Extraction Method for High Resolution Remote Sensing Imagery by Using Visual Saliency and Perceptual Organization

    Directory of Open Access Journals (Sweden)

    CHEN Yixiang

    2017-12-01

    Full Text Available Inspired by the human visual cognitive mechanism, a method of residential area extraction from high-resolution remote sensing images was proposed based on visual saliency and perceptual organization. Firstly, the data field theory of cognitive physics was introduced to model the visual saliency, and the candidate residential areas were produced by adaptive thresholding. Then, the exact residential areas were obtained and refined by perceptual organization based on the high-frequency features of the multi-scale wavelet transform. Finally, the validity of the proposed method was verified by experiments conducted on ZY-3 and Quickbird image data sets.

  16. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    Science.gov (United States)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  17. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    Science.gov (United States)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  18. High-resolution numerical model of the middle and inner ear for a detailed analysis of radio frequency absorption

    International Nuclear Information System (INIS)

    Schmid, Gernot; Ueberbacher, Richard; Samaras, Theodoros; Jappel, Alexandra; Baumgartner, Wolf-Dieter; Tschabitscher, Manfred; Mazal, Peter R

    2007-01-01

    In order to enable a detailed analysis of radio frequency (RF) absorption in the human middle and inner ear organs, a numerical model of these organs was developed at a spatial resolution of 0.1 mm, based on a real human tissue sample. The dielectric properties of the liquids (perilymph and endolymph) inside the bony labyrinth were measured on samples of ten freshly deceased humans. After inserting this model into a commercially available numerical head model, FDTD-based computations for exposure scenarios with generic models of handheld devices operated close to the head in the frequency range 400-3700 MHz were carried out. For typical output power values of real handheld mobile communication devices the obtained results showed only very small amounts of absorbed RF power in the middle and inner ear organs. Highest absorption in the middle and inner ear was found for the 400 MHz irradiation. In this case, the RF power absorbed inside the labyrinth and the vestibulocochlear nerve was as low as 166 μW and 12 μW, respectively, when considering a device of 500 mW output power operated close to the ear. For typical mobile phone frequencies (900 MHz and 1850 MHz) and output power values (250 mW and 125 mW) the corresponding values of absorbed RF power were found to be more than one order of magnitude lower than the values given above. These results indicate that temperature-related biologically relevant effects on the middle and inner ear, induced by the RF emissions of typical handheld mobile communication devices, are unlikely

  19. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  20. Essential numerical computer methods

    CERN Document Server

    Johnson, Michael L

    2010-01-01

    The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last 2 decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders of magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence data bases. While these are important applications they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...

  1. A New Method Based on Two-Stage Detection Mechanism for Detecting Ships in High-Resolution SAR Images

    Directory of Open Access Journals (Sweden)

    Xu Yongli

    2017-01-01

    Full Text Available Ship detection in synthetic aperture radar (SAR) remote sensing images, a fundamental but challenging problem in the field of satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. Aiming at the requirements of ship detection in high-resolution SAR images for accuracy, intelligence, real-time operation and processing efficiency, we analyzed the characteristics of the ocean background and ship targets in high-resolution SAR images and put forward a ship detection algorithm for such images. The algorithm consists of two detection stages. The first stage designs a pre-trained classifier based on an improved spectral residual visual model to quickly obtain the visually salient regions containing ship targets, achieving a rough detection of ships. In the second stage, based on the Bayesian theory of binary hypothesis detection, a local maximum a posteriori (MAP) classifier is designed for the classification of pixels. After parameter estimation and application of the judgment criterion, the pixels in the target areas are classified to separate the two types of pixels in the salient regions. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the detection methods. Compared with classical CFAR detection algorithms, experimental results show that the algorithm achieves better suppression of false alarms caused by speckle noise and the inhomogeneity of the ocean clutter background. At the same time, the detection speed is increased by 25% to 45%.
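
    The first detection stage builds on an improved spectral residual visual model. The baseline spectral residual saliency map (Hou and Zhang's formulation, sketched here with numpy purely as an illustration, without the paper's improvements) is computed from the log-amplitude spectrum of the image:

    ```python
    import numpy as np

    def spectral_residual_saliency(img):
        """Spectral-residual visual saliency for a 2D grayscale float
        array. The residual is the log-amplitude spectrum minus its local
        (3x3 box) average; saliency is the squared magnitude of the
        inverse transform of exp(residual + j*phase)."""
        F = np.fft.fft2(img)
        log_amp = np.log1p(np.abs(F))
        phase = np.angle(F)
        k = 3
        pad = np.pad(log_amp, k // 2, mode="edge")
        local_avg = sum(
            pad[i:i + log_amp.shape[0], j:j + log_amp.shape[1]]
            for i in range(k) for j in range(k)
        ) / (k * k)
        residual = log_amp - local_avg
        sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
        return sal / sal.max()                 # normalize to [0, 1]
    ```

    Thresholding the normalized map then yields the candidate salient regions that the second (MAP) stage classifies pixel by pixel.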

  2. Quantitative precipitation estimation based on high-resolution numerical weather prediction and data assimilation with WRF – a performance test

    Directory of Open Access Journals (Sweden)

    Hans-Stefan Bauer

    2015-04-01

    Full Text Available Quantitative precipitation estimation and forecasting (QPE and QPF are among the most challenging tasks in atmospheric sciences. In this work, QPE based on numerical modelling and data assimilation is investigated. Key components are the Weather Research and Forecasting (WRF model in combination with its 3D variational assimilation scheme, applied on the convection-permitting scale with sophisticated model physics over central Europe. The system is operated in a 1-hour rapid update cycle and processes a large set of in situ observations, data from French radar systems, the European GPS network and satellite sensors. Additionally, a free forecast driven by the ECMWF operational analysis is included as a reference run representing current operational precipitation forecasting. The verification is done both qualitatively and quantitatively by comparisons of reflectivity, accumulated precipitation fields and derived verification scores for a complex synoptic situation that developed on 26 and 27 September 2012. The investigation shows that even the downscaling from ECMWF represents the synoptic situation reasonably well. However, significant improvements are seen in the results of the WRF QPE setup, especially when the French radar data are assimilated. The frontal structure is more defined and the timing of the frontal movement is improved compared with observations. Even mesoscale band-like precipitation structures on the rear side of the cold front are reproduced, as seen by radar. The improvement in performance is also confirmed by a quantitative comparison of the 24-hourly accumulated precipitation over Germany. The mean correlation of the model simulations with observations improved from 0.2 in the downscaling experiment and 0.29 in the assimilation experiment without radar data to 0.56 in the WRF QPE experiment including the assimilation of French radar data.
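
    The quantitative comparison cited above is a mean correlation of 24-hourly accumulated precipitation over Germany. A minimal sketch of such a grid-point Pearson correlation score (illustrative only; operational verification uses dedicated tools):

    ```python
    import numpy as np

    def pearson_corr(forecast, observed):
        """Pearson correlation between a forecast and an observed
        accumulated-precipitation field, computed over all grid points.
        Both inputs are 2D arrays of the same shape."""
        f = np.asarray(forecast, dtype=float).ravel()
        o = np.asarray(observed, dtype=float).ravel()
        f = f - f.mean()                       # remove field means
        o = o - o.mean()
        return float((f @ o) / np.sqrt((f @ f) * (o @ o)))
    ```

    A score of 1 means the spatial precipitation pattern is reproduced exactly up to an affine scaling; the improvement from 0.2 to 0.56 reported above corresponds to a markedly better placement of the frontal precipitation.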

  3. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. The continuous imaging of TEM can be accomplished using the imaging methods of seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on the imaging of a fictitious wave field could be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses some wave field features, making it possible to apply a wave field interpretation method in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of a fictitious wave field. The equation in the wave field transformation belongs to the first-kind Fredholm integral equations, which are typically ill-posed. Additionally, TEM has a large dynamic time range, which further aggravates the ill-posedness of the problem. The wave field transformation is implemented by using a pre-conditioned regularized conjugate gradient method. The continuous imaging of a fictitious wave field is implemented by using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data by the method proposed in this paper, and obtained a satisfying interpretation result. (paper)
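
    The wave field transformation above is posed as a first-kind Fredholm equation and solved with a pre-conditioned regularized conjugate gradient method. A generic sketch of the regularized (Tikhonov) conjugate-gradient solve — without the paper's preconditioner, and for a dense discretized kernel A — might look like:

    ```python
    import numpy as np

    def regularized_cg(A, b, lam, iters=200, tol=1e-10):
        """Solve the ill-posed system A x = b (a discretized first-kind
        Fredholm equation) via Tikhonov regularization, i.e. the
        well-posed normal equations (A^T A + lam*I) x = A^T b, using
        plain conjugate gradients."""
        n = A.shape[1]
        def M(x):                         # apply the regularized operator
            return A.T @ (A @ x) + lam * x
        x = np.zeros(n)
        r = A.T @ b - M(x)                # initial residual
        p = r.copy()
        rs = r @ r
        if np.sqrt(rs) < tol:
            return x
        for _ in range(iters):
            Mp = M(p)
            alpha = rs / (p @ Mp)
            x += alpha * p
            r -= alpha * Mp
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x
    ```

    The regularization weight lam trades fidelity against stability; for the heavily ill-posed TEM kernel it must be chosen much larger than in this toy setting, which is where preconditioning and the large dynamic time range come into play.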

  4. High resolution 2D numerical models from rift to break-up: Crustal hyper-extension, Margin asymmetry, Sequential faulting

    Science.gov (United States)

    Brune, Sascha; Heine, Christian; Pérez-Gussinyé, Marta; Sobolev, Stephan

    2013-04-01

    Numerical modelling is a powerful tool to integrate a multitude of geological and geophysical data while addressing fundamental questions of passive margin formation such as the occurrence of crustal hyper-extension, (a-)symmetries between conjugate margin pairs, and the sometimes significant structural differences between adjacent margin segments. This study utilises knowledge gathered from two key examples of non-magmatic, asymmetric, conjugate margin pairs, i.e. Iberia-Newfoundland and Southern Africa-Brazil, where many published seismic lines provide solid knowledge of individual margin geometry. While both margins involve crustal hyper-extension, it is much more pronounced in the South Atlantic. We investigate the evolution of these two margin pairs by carefully constraining our models with detailed plate kinematic history, laboratory-based rheology, and melt fraction evaluation of mantle upwelling. Our experiments are consistent with observed fault patterns, crustal thickness, and basin stratigraphy. We conduct 2D thermomechanical rift models using the finite element code SLIM3D, which operates with nonlinear stress- and temperature-dependent elasto-visco-plastic rheology, with parameters provided by laboratory experiments on major crustal and upper mantle rocks. In our models we also calculate the melt fraction within the upwelling asthenosphere, which allows us to check whether the model indeed corresponds to the non-magmatic margin type. Our modelling highlights two processes as fundamental for the formation of hyper-extension and margin asymmetry at non-magmatic margins: (1) strain hardening in the rift center due to cooling of upwelling mantle material; (2) the formation of a weak crustal domain adjacent to the rift center caused by localized viscous strain softening and heat transfer from the mantle. Simultaneous activity of both processes promotes lateral rift migration in a continuous way that generates a wide layer of hyper-extended crust on

  5. High-resolution subject-specific mitral valve imaging and modeling: experimental and computational methods.

    Science.gov (United States)

    Toma, Milan; Bloodworth, Charles H; Einstein, Daniel R; Pierce, Eric L; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2016-12-01

    The diversity of mitral valve (MV) geometries and multitude of surgical options for correction of MV diseases necessitates the use of computational modeling. Numerical simulations of the MV would allow surgeons and engineers to evaluate repairs, devices, procedures, and concepts before performing them and before moving on to more costly testing modalities. Constructing, tuning, and validating these models rely upon extensive in vitro characterization of valve structure, function, and response to change due to diseases. Micro-computed tomography (μCT) allows for unmatched spatial resolution for soft tissue imaging. However, it is still technically challenging to obtain an accurate geometry of the diastolic MV. We discuss here the development of a novel technique for treating MV specimens with glutaraldehyde fixative in order to minimize geometric distortions in preparation for μCT scanning. The technique provides a resulting MV geometry which is significantly more detailed in chordal structure, accurate in leaflet shape, and closer to its physiological diastolic geometry. In this paper, computational fluid-structure interaction (FSI) simulations are used to show the importance of more detailed subject-specific MV geometry with 3D chordal structure to simulate a proper closure validated against μCT images of the closed valve. Two computational models, before and after use of the aforementioned technique, are used to simulate closure of the MV.

  6. High resolution CSMT method for shallow structure; Senjo ochi ni okeru kobunkaino denjiho tansa

    Energy Technology Data Exchange (ETDEWEB)

    Yamane, K; Takasugi, S [GERD Geothermal Energy Research and Development Co. Ltd., Tokyo (Japan); Inazaki, S [Geological Survey of Japan, Tsukuba (Japan); Sasaki, Y

    1997-10-22

    Electromagnetic exploration methods and seismic reflection methods were conducted to study the applicability of geophysical exploration to shallow-layer geological surveys. In the study, four north-south traverse lines were set in a 400 m × 200 m grassland. Tensor-type soundings were employed: a CSMT method using 96 kHz-1 kHz artificial magnetic fields and an MT method using 1 kHz-10 kHz natural magnetic fields, for the determination of the resistivity distribution in the ground. Distributed in the site are a surface layer composed of gravel-containing sand and silt, andesitic fractured lava, and massive andesite, and the exploration reaches several tens of meters below the surface. The results of the CSMT measurements are found to be in agreement with reflection profiles acquired simultaneously with the CSMT measurements. It is found, furthermore, that CSMT profiles help identify reflection waveforms in a domain where reflection is obscure. It is also found that electromagnetic methods are effective in fault logging because they are very sensitive to porosity, or the amount of pore water, which is higher in a domain with subsurface cracks than in the neighborhood without cracks. 1 ref., 4 figs.

  7. Hot gas in the cold dark matter scenario: X-ray clusters from a high-resolution numerical simulation

    Science.gov (United States)

    Kang, Hyesung; Cen, Renyue; Ostriker, Jeremiah P.; Ryu, Dongsu

    1994-01-01

    A new, three-dimensional, shock-capturing hydrodynamic code is utilized to determine the distribution of hot gas in a standard cold dark matter (CDM) model of the universe. Periodic boundary conditions are assumed: a box with size 85 h^-1 Mpc and cell size 0.31 h^-1 Mpc is followed in a simulation with 270^3 = 10^7.3 cells. Adopting standard parameters determined from COBE and light-element nucleosynthesis, sigma_8 = 1.05, Omega_b = 0.06, and assuming h = 0.5, we find the X-ray-emitting clusters and compute the luminosity function at several wavelengths, the temperature distribution, and estimated sizes, as well as the evolution of these quantities with redshift. We find that most of the total X-ray emissivity in our box originates in a relatively small number of identifiable clusters which occupy approximately 10^-3 of the box volume. This standard CDM model, normalized to COBE, produces approximately 5 times too much emission from clusters having L_x > 10^43 erg/s, a not-unexpected result. If all other parameters were unchanged, we would expect adequate agreement for sigma_8 = 0.6. This provides a new and independent argument for lower small-scale power than standard CDM at the 8 h^-1 Mpc scale. The background radiation field at 1 keV due to clusters in this model is approximately one-third of the observed background, which, after correction for numerical effects, again indicates approximately 5 times too much emission and the appropriateness of sigma_8 = 0.6. If we had used the observed ratio of gas to total mass in clusters, rather than basing the mean density on light-element nucleosynthesis, the computed luminosity of each cluster would have increased still further, by a factor of approximately 10. The number density of clusters increases to z approximately 1, but the luminosity per typical cluster decreases, with the result that evolution in the number density of bright

  8. EXTRACTION OF ROOF LINES FROM HIGH-RESOLUTION IMAGES BY A GROUPING METHOD

    Directory of Open Access Journals (Sweden)

    A. P. Dal Poz

    2016-06-01

    Full Text Available This paper proposes a method for extracting groups of straight lines that represent roof boundaries and roof ridgelines from high-resolution aerial images, using corresponding Airborne Laser Scanner (ALS) roof polyhedrons as initial approximations. The proposed method is based on two main steps. First, straight lines that are candidates to represent roof ridgelines and roof boundaries of a building are extracted from the aerial image. Second, a group of straight lines that represent roof boundaries and roof ridgelines of a selected building is obtained through the optimization of a Markov Random Field (MRF)-based energy function using the genetic algorithm optimization method. The formulation of this energy function considers several attributes, such as the proximity of the extracted straight lines to the corresponding projected ALS-derived roof polyhedron and rectangularity (extracted straight lines that intersect at nearly 90°). Experimental results are presented and discussed in this paper.

  9. An efficient cloud detection method for high resolution remote sensing panchromatic imagery

    Science.gov (United States)

    Li, Chaowei; Lin, Zaiping; Deng, Xinpu

    2018-04-01

    In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. This method includes three main steps. First, an adaptive intensity threshold value combined with a median filter is adopted to extract the coarse cloud regions. Second, a guided filtering process is conducted to strengthen the textural features difference and then we conduct the detection process of texture via gray-level co-occurrence matrix based on the acquired texture detail image. Finally, the candidate cloud regions are extracted by the intersection of two coarse cloud regions above and we further adopt an adaptive morphological dilation to refine them for thin clouds in boundaries. The experimental results demonstrate the effectiveness of the proposed method.
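
    The texture detection step above uses a gray-level co-occurrence matrix (GLCM). A minimal numpy sketch of a single-offset GLCM and one derived texture feature (contrast), with illustrative quantization and offset parameters, not the paper's configuration:

    ```python
    import numpy as np

    def glcm(img, levels, dx=1, dy=0):
        """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
        dx, dy >= 0. img: 2D integer array already quantized to `levels`
        gray levels. Returns a normalized (levels, levels) joint
        histogram of co-occurring gray-level pairs."""
        h, w = img.shape
        a = img[0:h - dy, 0:w - dx]        # reference pixels
        b = img[dy:h, dx:w]                # neighbor pixels at the offset
        M = np.zeros((levels, levels))
        np.add.at(M, (a.ravel(), b.ravel()), 1)
        return M / M.sum()

    def contrast(M):
        """GLCM contrast: sum over (i - j)^2 * M[i, j]; large when
        co-occurring gray levels differ strongly (rough texture)."""
        i, j = np.indices(M.shape)
        return float(np.sum(M * (i - j) ** 2))
    ```

    Cloud regions tend to be smooth (low contrast) while textured land or cloud edges score high, which is how GLCM features separate the two in the texture-detail image.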

  10. A frequency domain radar interferometric imaging (FII) technique based on high-resolution methods

    Science.gov (United States)

    Luce, H.; Yamamoto, M.; Fukao, S.; Helal, D.; Crochet, M.

    2001-01-01

    In the present work, we propose a frequency-domain interferometric imaging (FII) technique for a better knowledge of the vertical distribution of the atmospheric scatterers detected by MST radars. This is an extension of the dual frequency-domain interferometry (FDI) technique to multiple frequencies. Its objective is to reduce the ambiguity (resulting from the use of only two adjacent frequencies) inherent in the FDI technique. Different methods, commonly used in antenna array processing, are first described within the context of application to the FII technique. These methods are Fourier-based imaging, the Capon method, and the singular value decomposition method used with the MUSIC algorithm. Some preliminary simulations and tests performed on data collected with the middle and upper atmosphere (MU) radar (Shigaraki, Japan) are also presented. This work is a first step in the development of the FII technique, which seems very promising.
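
    Of the methods listed, the Capon (minimum-variance) estimator forms a range profile from the covariance matrix of the signals received at the N carrier frequencies. A hedged numpy sketch in generic textbook form with diagonal loading, not the radar-specific implementation:

    ```python
    import numpy as np

    def capon_profile(R, freqs, ranges, c=3e8):
        """Capon (minimum-variance) power profile for frequency-domain
        interferometric imaging. R: (N, N) covariance matrix of the N
        received frequencies; freqs: the N carrier frequencies [Hz];
        ranges: candidate scatterer ranges [m].
        Power at range z: P(z) = 1 / (a^H R^-1 a), with the two-way
        steering vector a_k = exp(-j * 4*pi * f_k * z / c)."""
        n = R.shape[0]
        # Diagonal loading stabilizes the inverse of a near-singular R.
        Rl = R + (1e-6 * np.real(np.trace(R)) / n) * np.eye(n)
        Rinv = np.linalg.inv(Rl)
        freqs = np.asarray(freqs, dtype=float)
        P = np.empty(len(ranges))
        for i, z in enumerate(ranges):
            a = np.exp(-4j * np.pi * freqs * z / c)
            P[i] = 1.0 / np.real(a.conj() @ Rinv @ a)
        return P / P.max()
    ```

    Unlike Fourier-based imaging, the Capon profile adapts the weights to the measured covariance, which sharpens thin scattering layers at the cost of requiring a well-estimated R.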

  11. Cloud detection method for Chinese moderate high resolution satellite imagery (Conference Presentation)

    Science.gov (United States)

    Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo

    2016-10-01

    Cloud detection in satellite imagery is very important for quantitative remote sensing research and remote sensing applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate-to-high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), only have four available bands (blue, green, red, and near infrared), which falls far short of the requirements of most cloud detection methods. In order to solve this problem, an improved and automated cloud detection method for Chinese satellite sensors called OCM (Object-oriented Cloud and cloud-shadow Matching method) is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to get an initial cloud map. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold produces a cloud map with high confidence but a large amount of cloud omission, while a loose threshold produces a cloud map with low confidence and a large amount of commission. Second, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Third, the cloud maps and the cloud-shadow map are converted into cloud objects and cloud-shadow objects. Cloud and cloud-shadow usually occur in pairs; consequently, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.

  12. Integration of high resolution geophysical methods. Detection of shallow depth bodies of archaeological interest

    Energy Technology Data Exchange (ETDEWEB)

    Cammarano, F.; Piro, S.; Rosso, F.; Versino, L. [Centro Nazionale delle Ricerche, Rome (Italy). Istituto per le Tecnologie Applicate ai Beni Culturali; Mauriello, P. [Naples, Univ. 'Federico II' (Italy). Dip. di Scienze Fisiche]

    1998-08-01

    A combined survey using ground penetrating radar, self-potential, geoelectrical and magnetic methods has been carried out to detect near-surface tombs in the archaeological test site of the Sabine Necropolis at Colle del Forno, Rome, Italy. A 2D data acquisition mode has been adopted to obtain a 3D image of the investigated volumes. The multi-methodological approach has not only demonstrated the reliability of each method in delineating the spatial behaviour of the governing parameter, but mainly helped to obtain a detailed physical image closely conforming to the target geometry through the whole set of parameters involved.

  13. Integration of high resolution geophysical methods. Detection of shallow depth bodies of archaeological interest

    Directory of Open Access Journals (Sweden)

    F. Rosso

    1998-06-01

    Full Text Available A combined survey using ground penetrating radar, self-potential, geoelectrical and magnetic methods has been carried out to detect near-surface tombs in the archaeological test site of the Sabine Necropolis at Colle del Forno, Rome, Italy. A 2D data acquisition mode has been adopted to obtain a 3D image of the investigated volumes. The multi-methodological approach has not only demonstrated the reliability of each method in delineating the spatial behaviour of the governing parameter, but mainly helped to obtain a detailed physical image closely conforming to the target geometry through the whole set of parameters involved.

  14. A novel typing method for Listeria monocytogenes using high-resolution melting analysis (HRMA) of tandem repeat regions.

    Science.gov (United States)

    Ohshima, Chihiro; Takahashi, Hajime; Iwakawa, Ai; Kuda, Takashi; Kimura, Bon

    2017-07-17

    Listeria monocytogenes, which is responsible for causing the food poisoning known as listeriosis, infects humans and animals. Widely distributed in the environment, this bacterium is known to contaminate food products after being transmitted to factories via raw materials. To minimize the contamination of products by food pathogens, it is critical to identify and eliminate factory entry routes and pathways for the causative bacteria. High resolution melting analysis (HRMA) is a method that takes advantage of differences in DNA sequences and PCR product lengths, which are reflected in the dissociation temperature. Through our research, we have developed a multiple locus variable-number tandem repeat analysis (MLVA) using HRMA as a simple and rapid method to differentiate L. monocytogenes isolates. In evaluating the developed method, MLVA-HRMA, MLVA using capillary electrophoresis, and multilocus sequence typing (MLST) were compared for their ability to discriminate between strains. The MLVA-HRMA method displayed greater discriminatory ability than MLST and MLVA using capillary electrophoresis, suggesting that the variation in the number of repeat units, along with mutations within the DNA sequence, was accurately reflected in the HRMA melting curve. Rather than relying on DNA sequence analysis or high-resolution electrophoresis, the MLVA-HRMA method employs the same process as PCR up to the analysis step, combining speed and simplicity. Results of the MLVA-HRMA method can be shared between different laboratories. There are high expectations that this method will be adopted for regular inspections at food processing facilities in the near future. Copyright © 2017. Published by Elsevier B.V.

  15. Methods to assess high-resolution subsurface gas concentrations and gas fluxes in wetland ecosystems

    DEFF Research Database (Denmark)

    Elberling, Bo; Kühl, Michael; Glud, Ronnie Nøhr

    2013-01-01

    The need for measurements of soil gas concentrations and surface fluxes of greenhouse gases at high temporal and spatial resolution in wetland ecosystems has led to the introduction of several new analytical techniques and methods. In addition to the automated flux chamber methodology for high-re...

  16. A high-resolution computational localization method for transcranial magnetic stimulation mapping.

    Science.gov (United States)

    Aonuma, Shinta; Gomez-Tames, Jose; Laakso, Ilkka; Hirata, Akimasa; Takakura, Tomokazu; Tamura, Manabu; Muragaki, Yoshihiro

    2018-05-15

    Transcranial magnetic stimulation (TMS) is used for the mapping of brain motor functions. The complexity of the brain makes it difficult to determine the exact localization of the stimulation site using simplified methods (e.g., the region below the center of the TMS coil) or conventional computational approaches. This study aimed to present a high-precision localization method for a specific motor area by synthesizing computed non-uniform current distributions in the brain for multiple sessions of TMS. Peritumoral mapping by TMS was conducted on patients who had intra-axial brain neoplasms located within or close to the motor speech area. The electric field induced by TMS was computed using realistic head models constructed from magnetic resonance images of patients. A post-processing method was implemented to determine a TMS hotspot by combining the computed electric fields for the coil orientations and positions that delivered high motor-evoked potentials during peritumoral mapping. The method was compared to the stimulation site localized via intraoperative direct brain stimulation and navigated TMS. Four main results were obtained: 1) the dependence of the computed hotspot area on the number of peritumoral measurements was evaluated; 2) the estimated localization of the hand motor area in eight non-affected hemispheres was in good agreement with the position of the so-called "hand-knob"; 3) the estimated hotspot areas were not sensitive to variations in tissue conductivity; and 4) the hand motor areas estimated by the proposed method and by direct electric stimulation (DES) were in good agreement in the ipsilateral hemisphere of four glioma patients. The TMS localization method was validated by the well-known position of the "hand-knob" in the non-affected hemisphere, and by a hotspot localized via DES during awake craniotomy for the tumor-containing hemisphere. Copyright © 2018 Elsevier Inc. All rights reserved.

  17. A method of incident angle estimation for high resolution spectral recovery in filter-array-based spectrometers

    Science.gov (United States)

    Kim, Cheolsun; Lee, Woong-Bi; Ju, Gun Wu; Cho, Jeonghoon; Kim, Seongmin; Oh, Jinkyung; Lim, Dongsung; Lee, Yong Tak; Lee, Heung-No

    2017-02-01

    In recent years, there has been increasing interest in miniature spectrometers for research and development. In particular, filter-array-based spectrometers have the advantages of low cost and portability, and can be applied in various fields such as biology, chemistry and the food industry. Miniaturization of optical filters causes degradation of spectral resolution due to limitations on spectral responses and the number of filters. Many studies have reported that filter-array-based spectrometers can achieve resolution improvements by using digital signal processing (DSP) techniques. The performance of DSP-based spectral recovery depends strongly on prior knowledge of the transmission functions (TFs) of the filters. The TFs vary with the incident angle of light onto the filter array. Conventionally, it is assumed that the incident angle of light on the filters is fixed and the TFs are known to the DSP. However, the incident angle varies with the environment and application, and thus the TFs also vary, which degrades spectral recovery. In this paper, we propose a method of incident angle estimation (IAE) for high resolution spectral recovery in filter-array-based spectrometers. By exploiting sparse signal reconstruction via L1-norm minimization, IAE selects, among all candidate incident angles, the one that minimizes the error of the reconstructed signal. Based on IAE, DSP effectively provides high resolution spectral recovery in filter-array-based spectrometers.
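
    As background for the L1-norm reconstruction step, the sketch below solves a toy sparse spectral-recovery problem with iterative soft-thresholding (ISTA). The filter matrix, dimensions, and regularization weight are invented for illustration and are not from the paper:

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=5000):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * (A.T @ (A @ x - y))    # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # shrink
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 120))            # 40 filter TFs, 120 spectral bins
x_true = np.zeros(120)
x_true[[10, 55, 90]] = [1.0, 0.7, 0.4]        # sparse spectrum with 3 peaks
y = A @ x_true                                # noiseless filter readings
x_hat = ista(A, y)
```

With 40 measurements of a 3-sparse spectrum, the L1 solution recovers the peak positions that a plain least-squares fit of an underdetermined system would smear out.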

  18. A multi-method high-resolution geophysical survey in the Machado de Castro museum, central Portugal

    International Nuclear Information System (INIS)

    Grangeia, Carlos; Matias, Manuel; Hermozilha, Hélder; Figueiredo, Fernando; Carvalho, Pedro; Silva, Ricardo

    2011-01-01

    Restoration of historical buildings is a delicate operation, as they are often built over more ancient and important structures. The Machado de Castro Museum, Coimbra, Central Portugal, has undergone several interventions in historical times and lies over the ancient Roman forum of Coimbra. The building went through a restoration project, and these works were preceded by an extensive geophysical survey that aimed at investigating the subsurface stratigraphy, including archeological remains, and the internal structure of the existing walls. Owing to the needs of the project, geophysical data interpretation required not only integration but also high resolution. The study consisted of data acquisition over perpendicular planes and at different levels, which required detailed survey planning and the integration of data from different locations that complement images of the surveyed area. Therefore, a high-resolution, multi-method geophysical survey combining resistivity imaging and 3D ground probing radar (GPR) was carried out inside the museum. Herein, radargrams are compared with the revealed stratigraphy so that signatures are interpreted, characterized and assigned to archeological structures. Although resistivity and GPR have different resolution capabilities, their data are overlapped and compared, bearing in mind the specific characteristics of this survey. With the combined use and spatial integration of the GPR and resistivity imaging data, it was also possible to unravel the inner structure of the existing walls, to establish connections between walls and foundations, and to find older remains.

  19. Method for high resolution magnetic resonance analysis using magic angle technique

    Science.gov (United States)

    Wind, Robert A.; Hu, Jian Zhi

    2003-12-30

    A method of performing a magnetic resonance analysis of a biological object that includes placing the object in a main magnetic field (that has a static field direction) and in a radio frequency field; rotating the object at a frequency of less than about 100 Hz around an axis positioned at an angle of about 54°44′ relative to the main magnetic static field direction; pulsing the radio frequency to provide a sequence that includes a phase-corrected magic angle turning pulse segment; and collecting data generated by the pulsed radio frequency. The object may be reoriented about the magic angle axis between three predetermined positions that are related to each other by 120°. The main magnetic field may be rotated mechanically or electronically. Methods for magnetic resonance imaging of the object are also described.

  20. An Object-Oriented Classification Method on High Resolution Satellite Data

    Science.gov (United States)

    2004-11-01

    Proceedings of the 25th Asian Conference on Remote Sensing (ACRS 2004), held in Chiang Mai, Thailand, 22-26 November 2004; session Data Processing B-4.6. Only fragments of the full text were extracted for this record; the legible content indicates a comparison of panchromatic (left) and multispectral (right) imagery.

  1. A FUZZY AUTOMATIC CAR DETECTION METHOD BASED ON HIGH RESOLUTION SATELLITE IMAGERY AND GEODESIC MORPHOLOGY

    Directory of Open Access Journals (Sweden)

    N. Zarrinpanjeh

    2017-09-01

    Full Text Available Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, and specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system for verification. The verification is performed by inspecting the major and minor axes of each region and the orientation of each car with respect to the road direction. The proposed method is implemented and tested using pan-sharpened GeoEye-1 imagery and achieves an overall accuracy of 83%. The results are, however, sensitive to the quality of the available vector map; to overcome this shortcoming, it is recommended to consider spectral information in the process of hypothesis verification.
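
    Geodesic (conditional) dilation, the key morphological operator here, can be sketched in a few lines of numpy/scipy. This is a generic binary version, not the authors' GeoEye-1 pipeline; the marker/mask arrays are toy data:

```python
import numpy as np
from scipy import ndimage

def geodesic_dilation(marker, mask):
    """Binary geodesic dilation: repeatedly dilate the marker and intersect
    with the mask until stable (binary morphological reconstruction)."""
    marker = marker & mask
    while True:
        grown = ndimage.binary_dilation(marker) & mask
        if np.array_equal(grown, marker):
            return grown
        marker = grown

# mask: two blobs; marker: a seed inside the first blob only
mask = np.zeros((7, 9), dtype=bool)
mask[1:4, 1:4] = True     # blob A
mask[1:4, 6:8] = True     # blob B
marker = np.zeros_like(mask)
marker[2, 2] = True       # seed in blob A
recon = geodesic_dilation(marker, mask)
```

The reconstruction recovers exactly the connected component of the mask that contains the seed (blob A), which is how seeded candidate regions can be grown without leaking into unrelated objects.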

  2. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the growing use of high-spatial-resolution remote sensing imagery, more and more studies have paid attention to object-oriented classification, both to image segmentation and to automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation is used to identify image objects, and the values of the attributes of image objects most suitable for classification are calculated. Then a certain number of samples from the image objects are selected as training data for the improved AdaTree algorithm to obtain classification rules. Finally, the image objects can be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain the classification rules. In an experiment with a WorldView-2 image, the AdaTree-based method showed a clear improvement in accuracy and efficiency compared with an SVM-based method, with the kappa coefficient reaching 0.9242.
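
    The AdaTree algorithm is a boosting variant; the paper's modification of the final hypothesis is not reproduced here, but the generic AdaBoost machinery it builds on can be sketched with decision stumps on toy two-feature "object attribute" data (pure numpy; all data and parameters invented for illustration):

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=5):
    """Minimal AdaBoost with decision stumps (labels y in {-1, +1}).
    Returns a list of weak learners (feature, threshold, polarity, alpha)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                     # sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                      # exhaustive stump search
            for t in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                    err = w[pred != y].sum()    # weighted error
                    if best is None or err < best[0]:
                        best = (err, j, t, pol, pred)
        err, j, t, pol, pred = best
        err = max(err, 1e-12)                   # avoid log(0) for perfect stumps
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        w *= np.exp(-alpha * y * pred)          # re-weight hard samples
        w /= w.sum()
        ensemble.append((j, t, pol, alpha))
    return ensemble

def predict(ensemble, X):
    score = np.zeros(len(X))
    for j, t, pol, alpha in ensemble:
        score += alpha * np.where(pol * (X[:, j] - t) >= 0, 1, -1)
    return np.where(score >= 0, 1, -1)

# toy data: class is decided by feature 0 (e.g., object brightness)
X = np.array([[0.1, 5.0], [0.2, 1.0], [0.8, 4.0],
              [0.9, 0.0], [0.4, 2.0], [0.7, 3.0]])
y = np.array([-1, -1, 1, 1, -1, 1])
model = adaboost_stumps(X, y)
```

The weighted vote of stumps is the "classification rules" stage; a tree-based variant replaces each stump with a deeper tree.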

  3. a Fuzzy Automatic CAR Detection Method Based on High Resolution Satellite Imagery and Geodesic Morphology

    Science.gov (United States)

    Zarrinpanjeh, N.; Dadrassjavan, F.

    2017-09-01

    Automatic car detection and recognition from aerial and satellite images is mostly practiced for the purpose of easy and fast traffic monitoring in cities and rural areas, where direct approaches have proved to be costly and inefficient. Towards the goal of automatic car detection, and in parallel with many other published solutions, in this paper morphological operators, and specifically geodesic dilation, are studied and applied to GeoEye-1 images to extract car items in accordance with available vector maps. The results of geodesic dilation are then segmented and labeled to generate primitive car items, which are introduced to a fuzzy decision-making system for verification. The verification is performed by inspecting the major and minor axes of each region and the orientation of each car with respect to the road direction. The proposed method is implemented and tested using pan-sharpened GeoEye-1 imagery and achieves an overall accuracy of 83%. The results are, however, sensitive to the quality of the available vector map; to overcome this shortcoming, it is recommended to consider spectral information in the process of hypothesis verification.

  4. New high resolution Random Telegraph Noise (RTN) characterization method for resistive RAM

    Science.gov (United States)

    Maestro, M.; Diaz, J.; Crespo-Yepes, A.; Gonzalez, M. B.; Martin-Martinez, J.; Rodriguez, R.; Nafria, M.; Campabadal, F.; Aymerich, X.

    2016-01-01

    Random Telegraph Noise (RTN) is one of the main reliability problems of resistive-switching-based memories. To understand the physics behind RTN, a complete and accurate RTN characterization is required. The standard equipment used to analyse RTN has a typical time resolution of ∼2 ms, which prevents the evaluation of fast phenomena. In this work, a new RTN measurement procedure, which increases the measurement time resolution to 2 μs, is proposed. The experimental set-up, together with the recently proposed Weighted Time Lag (W-LT) method for the analysis of RTN signals, allows obtaining more detailed and precise information about the RTN phenomenon.
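
    The time-lag representation underlying W-LT analysis plots each sample against the next one, so discrete RTN levels appear as high-density clusters on the diagonal. A toy sketch with a synthetic two-level RTN trace (switching rate and noise level invented for illustration; this is not the authors' weighted estimator):

```python
import numpy as np

def time_lag_histogram(signal, bins=40):
    """2D histogram of consecutive samples (x_i, x_{i+1}); RTN levels
    show up as high-density spots on the diagonal."""
    return np.histogram2d(signal[:-1], signal[1:], bins=bins)

# synthetic two-level RTN trace with Gaussian read noise
rng = np.random.default_rng(1)
n = 20000
state = np.zeros(n, dtype=int)
for i in range(1, n):                  # Markov switching between two trap states
    state[i] = state[i - 1] ^ int(rng.random() < 0.01)
trace = np.where(state == 1, 1.0, 0.0) + 0.05 * rng.standard_normal(n)
H, xedges, yedges = time_lag_histogram(trace)
peak_bin = np.unravel_index(np.argmax(H), H.shape)   # densest (x_i, x_{i+1}) cell
```

Because dwell times are long compared to the sampling interval, the densest cell sits on (or next to) the diagonal at one of the two current levels; off-diagonal counts correspond to switching events.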

  5. High Resolution Ultrasonic Method for 3D Fingerprint Representation in Biometrics

    Science.gov (United States)

    Maev, R. Gr.; Bakulin, E. Y.; Maeva, E. Y.; Severin, F. M.

    Biometrics is an important field which studies different possible ways of personal identification. Among a number of existing biometric techniques, fingerprint recognition stands alone, because a very large database of fingerprints has already been acquired. Also, fingerprints are important evidence that can be collected at a crime scene. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. The ultrasonic method of fingerprint imaging, originally introduced over a decade ago as the mapping of the reflection coefficient at the interface between the finger and a covering plate, has shown very good reliability and is free from the imperfections of the previous two methods. This work introduces a newer development of ultrasonic fingerprint imaging, focusing on the imaging of the internal structures of fingerprints (including sweat pores) with a raw acoustic resolution of about 500 dpi (0.05 mm), using a scanning acoustic microscope to obtain images and acoustic data in the form of a 3D data array. C-scans from different depths inside the fingerprint area of the fingers of several volunteers were obtained and showed good contrast of ridges-and-valleys patterns and practically exact correspondence to standard ink-and-paper prints of the same areas. An important feature revealed in the acoustic images was the clear appearance of the sweat pores, which could provide an additional means of identification.

  6. Solutions of the Taylor-Green Vortex Problem Using High-Resolution Explicit Finite Difference Methods

    Science.gov (United States)

    DeBonis, James R.

    2013-01-01

    A computational fluid dynamics code that solves the compressible Navier-Stokes equations was applied to the Taylor-Green vortex problem to examine the code's ability to accurately simulate the vortex decay and subsequent turbulence. The code, WRLES (Wave Resolving Large-Eddy Simulation), uses explicit central differencing to compute the spatial derivatives and explicit low-dispersion Runge-Kutta methods for the temporal discretization. The flow was first studied and characterized using Bogey & Bailley's 13-point dispersion relation preserving (DRP) scheme. The kinetic energy dissipation rate, computed both directly and from the enstrophy field, vorticity contours, and the energy spectra are examined. Results are in excellent agreement with a reference solution obtained using a spectral method and provide insight into computations of turbulent flows. In addition, the following studies were performed: a comparison of 4th-, 8th-, 12th-order and DRP spatial differencing schemes, the effect of solution filtering on the results, the effect of large-eddy simulation sub-grid scale models, and the effect of high-order discretization of the viscous terms.
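
    For reference, the standard Taylor-Green vortex initial condition used in such studies is u = sin x cos y cos z, v = -cos x sin y cos z, w = 0 on a (2π)³ periodic box; its volume-averaged kinetic energy is 1/8 and the field is divergence-free. A short numpy check of both properties (this is the textbook setup, not the WRLES code itself):

```python
import numpy as np

# Taylor-Green vortex initial condition on a [0, 2*pi)^3 periodic grid
N = 32
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
u = np.sin(X) * np.cos(Y) * np.cos(Z)
v = -np.cos(X) * np.sin(Y) * np.cos(Z)
w = np.zeros_like(u)

# volume-averaged kinetic energy; the analytic value is 1/8
ke = 0.5 * np.mean(u**2 + v**2 + w**2)

# 2nd-order central-difference divergence (periodic via np.roll);
# the u_x and v_y contributions cancel exactly, so div is ~machine zero
h = x[1] - x[0]
div = ((np.roll(u, -1, 0) - np.roll(u, 1, 0))
       + (np.roll(v, -1, 1) - np.roll(v, 1, 1))
       + (np.roll(w, -1, 2) - np.roll(w, 1, 2))) / (2.0 * h)
```

Tracking how this initial kinetic energy (0.125) decays in time is exactly the dissipation-rate diagnostic examined in the abstract.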

  7. High resolution mass spectrometry method and system for analysis of whole proteins and other large molecules

    Science.gov (United States)

    Reilly, Peter T. A. [Knoxville, TN]; Harris, William A. [Naperville, IL]

    2010-03-02

    A matrix assisted laser desorption/ionization (MALDI) method and related system for analyzing high molecular weight analytes includes the steps of providing at least one matrix-containing particle inside an ion trap, wherein at least one high molecular weight analyte molecule is provided within the matrix-containing particle, and performing MALDI on the particle while it is within the ion trap. The laser power used for ionization is sufficient to completely vaporize the particle and form at least one high molecular weight analyte ion, but is low enough to avoid fragmenting the high molecular weight analyte ion. The high molecular weight analyte ion is extracted from the ion trap and is then analyzed using a detector. The detector is preferably a pyrolyzing and ionizing detector.

  8. A new method to generate a high-resolution global distribution map of lake chlorophyll

    Science.gov (United States)

    Sayers, Michael J; Grimm, Amanda G.; Shuchman, Robert A.; Deines, Andrew M.; Bunnell, David B.; Raymer, Zachary B; Rogers, Mark W.; Woelmer, Whitney; Bennion, David; Brooks, Colin N.; Whitley, Matthew A.; Warner, David M.; Mychek-Londer, Justin G.

    2015-01-01

    A new method was developed, evaluated, and applied to generate a global dataset of growing-season chlorophyll-a (chl) concentrations in 2011 for freshwater lakes. Chl observations from freshwater lakes are valuable for estimating lake productivity as well as assessing the role that these lakes play in carbon budgets. The standard 4 km NASA OceanColor L3 chlorophyll concentration products generated from MODIS and MERIS sensor data are not sufficiently representative of global chl values because they can only resolve larger lakes, which generally have lower chl concentrations than lakes of smaller surface area. Our new methodology utilizes the 300 m-resolution MERIS full-resolution full-swath (FRS) global dataset as input and does not rely on the land mask used to generate standard NASA products, which masks many lakes that are otherwise resolvable in MERIS imagery. The new method produced chl concentration values for 78,938 and 1,074 lakes in the northern and southern hemispheres, respectively. The mean chl for lakes visible in the MERIS composite was 19.2 ± 19.2 mg m−3, the median was 13.3 mg m−3, and the interquartile range was 3.90–28.6 mg m−3. The accuracy of the MERIS-derived values was assessed by comparison with temporally near-coincident and globally distributed in situ measurements from the literature (n = 185, RMSE = 9.39, R2 = 0.72). This represents the first global-scale dataset of satellite-derived chl estimates for medium to large lakes.

  9. Gamma-line intensity difference method for ¹¹⁷ᵐSn at high resolution

    CERN Document Server

    Remeikis, V; Mazeika, K

    1998-01-01

    A method for detecting small differences in gamma-spectrum line intensity for a radionuclide in different environments has been developed for measurements at high resolution. The experiments were realized with a pure-germanium planar detector. Solution of the methodological problems allowed measuring the relative difference ΔIγ/Iγ = (3.4 ± 1.5) × 10⁻⁴ of the ¹¹⁷ᵐSn 156.02 keV gamma-line intensity for the radionuclide in SnO₂ with respect to SnS from the difference in the gamma spectra. The error of the result is caused mainly by the statistical accuracy, which is limited by the highest attainable counting rate at sufficiently high energy resolution and by the relatively short half-life of ¹¹⁷ᵐSn. (author)

  10. The digital structural analysis of cadmium selenide crystals by a method of ion beam thinning for high resolution electron microscopy

    International Nuclear Information System (INIS)

    Kanaya, Koichi; Baba, Norio; Naka, Michiaki; Kitagawa, Yukihisa; Suzuki, Kunio

    1986-01-01

    A digital processing method using a scanning densitometer system for structural analysis of electron micrographs was successfully applied to a study of cadmium selenide crystals, which were prepared by an argon-ion beam thinning method. Based on Fourier techniques for structural analysis from a computer-generated diffractogram, it was demonstrated that when cadmium selenide crystals were sufficiently thin to display the higher order diffraction spots at a resolution approaching the atomic level, they constituted an alternative hexagonal lattice of imperfect wurtzite phase, obtained from a superposition of individual harmonic images via the enhanced scattering amplitude and corrected phase. From the structural analysis data, a Fourier synthetic lattice image was reconstructed, representing the precise location and three-dimensional arrangement of each of the atoms in the unit cell. Extensively enhanced lattice defect images of dislocations and stacking faults were also derived and shown graphically. (author)

  11. Species identification in meat products: A new screening method based on high resolution melting analysis of cyt b gene.

    Science.gov (United States)

    Lopez-Oceja, A; Nuñez, C; Baeta, M; Gamarra, D; de Pancorbo, M M

    2017-12-15

    Meat adulteration by substitution with lower value products and/or mislabeling involves economic, health, quality and socio-religious issues. Therefore, identification and traceability of meat species has become an important subject for detecting possible fraudulent practices. In the present study, the development of a high resolution melting (HRM) screening method for the identification of eight common meat species is reported. Samples from Bos taurus, Ovis aries, Sus scrofa domestica, Equus caballus, Oryctolagus cuniculus, Gallus gallus domesticus, Meleagris gallopavo and Coturnix coturnix were analyzed through the amplification of a 148 bp fragment from the cyt b gene with a universal primer pair in HRM analyses. Melting profiles from each species, as well as from several DNA mixtures of these species and from blind samples, allowed successful species differentiation. The results demonstrated that the HRM method proposed here is a fast, reliable, and low-cost screening technique. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Analytical method by high resolution liquid chromatography for the stability study of cloratidine syrup 0.1 %

    International Nuclear Information System (INIS)

    Torres Amaro, Leonid; Garcia Penna, Caridad M; Pardo Ruiz, Zenia

    2007-01-01

    A high resolution liquid chromatography method was validated to study the stability of cloratidine syrup 0.1 %. The calibration curve in the range from 13.6 to 3.36 μg/mL was linear, with a coefficient of correlation equal to 0.99975. The intercept and slope statistical tests were not significant. The recovery obtained was 100.2 % in the concentration range studied, and the Cochran and Student (t) test results were not significant. The variation coefficient in the repeatability study was equal to 0.41 % for the 10 replications assayed, whereas in the reproducibility study the Fisher and Student tests were not significant. The method proved to be specific, linear, precise, and accurate. (Author)

  13. High Resolution Ultrasonic Method for 3D Fingerprint Recognizable Characteristics in Biometrics Identification

    Science.gov (United States)

    Maev, R. Gr.; Bakulin, E. Yu.; Maeva, A.; Severin, F.

    Biometrics is a rapidly evolving scientific and applied discipline that studies possible ways of personal identification by means of unique biological characteristics. Such identification is important in various situations requiring restricted access to certain areas, information and personal data, and for cases of medical emergencies. A number of automated biometric techniques have been developed, including fingerprint, hand shape, eye and facial recognition, thermographic imaging, etc. All these techniques differ in the recognizable parameters, usability, accuracy and cost. Among these, fingerprint recognition stands alone, since a very large database of fingerprints has already been acquired. Also, fingerprints are key evidence left at a crime scene and can be used to identify suspects. Therefore, of all automated biometric techniques, especially in the field of law enforcement, fingerprint identification seems to be the most promising. We introduce a newer development of ultrasonic fingerprint imaging. The proposed method obtains a scan only once and then varies the C-scan gate position and width to visualize acoustic reflections from any appropriate depth inside the skin. Also, B-scans and A-scans can be recreated from any position using such a data array, which gives control over the visualization options. By setting the C-scan gate deeper inside the skin, the distribution of the sweat pores (which are located along the ridges) can be easily visualized. This distribution should be unique for each individual, so it provides a means of personal identification which is not affected by any changes (accidental or intentional) of the fingers' surface conditions. This paper discusses different setups, acoustic parameters of the system, signal and image processing options, and possible ways of 3-dimensional visualization that could be used as a recognizable characteristic in biometric identification.

  14. The geometric phase analysis method based on the local high resolution discrete Fourier transform for deformation measurement

    International Nuclear Information System (INIS)

    Dai, Xianglu; Xie, Huimin; Wang, Huaixi; Li, Chuanwei; Wu, Lifu; Liu, Zhanwei

    2014-01-01

    The geometric phase analysis (GPA) method based on the local high resolution discrete Fourier transform (LHR-DFT) for deformation measurement, defined as LHR-DFT GPA, is proposed to improve the measurement accuracy. In the general GPA method, the fundamental frequency of the image plays a crucial role. However, the fast Fourier transform, which is generally employed in the general GPA method, could make it difficult to locate the fundamental frequency accurately when the fundamental frequency is not located at an integer pixel position in the Fourier spectrum. This study focuses on this issue and presents a LHR-DFT algorithm that can locate the fundamental frequency with sub-pixel precision in a specific frequency region for the GPA method. An error analysis is offered and simulation is conducted to verify the effectiveness of the proposed method; both results show that the LHR-DFT algorithm can accurately locate the fundamental frequency and improve the measurement accuracy of the GPA method. Furthermore, typical tensile and bending tests are carried out and the experimental results verify the effectiveness of the proposed method. (paper)
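
    The core idea, evaluating the DTFT on a fine grid around the coarse FFT peak so the fundamental frequency can be located between integer bins, can be sketched in 1D as follows (a toy signal, not the authors' GPA code; the test frequency and grid density are invented):

```python
import numpy as np

def refine_peak_frequency(signal, oversample=1000, halfwidth=1.0):
    """Locate a signal's fundamental frequency with sub-bin precision by
    evaluating the DTFT on a fine grid around the coarse FFT peak."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    k0 = np.argmax(np.abs(spectrum[1:])) + 1       # coarse peak bin (skip DC)
    k = np.linspace(k0 - halfwidth, k0 + halfwidth, oversample)  # fine grid, in bins
    t = np.arange(n)
    dtft = np.exp(-2j * np.pi * np.outer(k, t) / n) @ signal     # local DFT
    return k[np.argmax(np.abs(dtft))]

# fringe signal whose true frequency (23.37 bins) is not an integer
n = 256
k_true = 23.37
sig = np.cos(2 * np.pi * k_true * np.arange(n) / n)
k_est = refine_peak_frequency(sig)
```

A plain FFT would report bin 23; the locally oversampled DFT recovers the fractional bin position, which is what makes the subsequent phase analysis accurate.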

  15. Three-Dimensional Imaging and Numerical Reconstruction of Graphite/Epoxy Composite Microstructure Based on Ultra-High Resolution X-Ray Computed Tomography

    Science.gov (United States)

    Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.

    2014-01-01

    A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on a Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.
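
    A Kalman filter of the kind used in such tracking pipelines can be sketched in 1D with a constant-velocity model: it smooths noisy per-slice fiber-centroid positions into a continuous centerline. The synthetic fiber, noise levels and filter gains below are illustrative, not from the paper:

```python
import numpy as np

def kalman_track(measurements, q=1e-3, r=1e-2):
    """Constant-velocity Kalman filter along the slice axis: smooths noisy
    per-slice fiber-centroid positions (state = [position, velocity])."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition per slice
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance
    R = np.array([[r]])                      # measurement noise covariance
    x = np.array([measurements[0], 0.0])
    P = np.eye(2)
    track = []
    for z in measurements:
        x = F @ x                            # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                  # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
        x = x + K @ (np.array([z]) - H @ x)  # update with the measurement
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)

# synthetic slightly tilted fiber: centroid drifts 0.05 px per slice
rng = np.random.default_rng(2)
true_pos = 10.0 + 0.05 * np.arange(200)
noisy = true_pos + 0.1 * rng.standard_normal(200)
smooth = kalman_track(noisy)
```

In the full algorithm, the filter's one-slice-ahead prediction also supplies the gating used by Global Nearest Neighbor data association.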

  16. SU-E-I-40: New Method for Measurement of Task-Specific, High-Resolution Detector System Performance

    Energy Technology Data Exchange (ETDEWEB)

    Loughran, B; Singh, V; Jain, A; Bednarek, D; Rudin, S [University at Buffalo, Buffalo, NY (United States)

    2014-06-01

    Purpose: Although generalized linear system analytic metrics such as GMTF and GDQE can evaluate performance of the whole imaging system including detector, scatter and focal-spot, a simplified task-specific measured metric may help to better compare detector systems. Methods: Low quantum-noise images of a neuro-vascular stent with a modified ANSI head phantom were obtained from the average of many exposures taken with the high-resolution Micro-Angiographic Fluoroscope (MAF) and with a Flat Panel Detector (FPD). The square of the Fourier Transform of each averaged image, equivalent to the measured product of the system GMTF and the object function in spatial-frequency space, was then divided by the normalized noise power spectra (NNPS) for each respective system to obtain a task-specific generalized signal-to-noise ratio. A generalized measured relative object detectability (GM-ROD) was obtained by taking the ratio of the integrals of the resulting expressions for each detector system to give an overall metric that enables a realistic systems comparison for the given detection task. Results: The GM-ROD provides comparison of relative performance of detector systems from actual measurements of the object function as imaged by those detector systems. This metric includes noise correlations and spatial frequencies relevant to the specific object. Additionally, the integration bounds for the GM-ROD can be selected to emphasize the higher frequency band of each detector if high-resolution image details are to be evaluated. Examples of this new metric are discussed with a comparison of the MAF to the FPD for neuro-vascular interventional imaging. Conclusion: The GM-ROD is a new directly measured, task-specific metric that can provide clinically relevant comparison of the relative performance of imaging systems. Supported by NIH Grant: 2R01EB002873 and an equipment grant from Toshiba Medical Systems Corporation.
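
As described, the GM-ROD divides the integral of |FT(averaged image)|²/NNPS for one detector by the same quantity for another, over a selectable frequency band. A minimal sketch of that arithmetic follows, with entirely synthetic stand-ins for the stent image and the NNPS of the MAF and FPD (the names and values are illustrative assumptions, not measured data):

```python
import numpy as np

def generalized_snr(avg_image, nnps):
    """Task-specific integrand from the abstract: the squared Fourier
    transform of the averaged object image divided by the detector NNPS."""
    ft2 = np.abs(np.fft.fft2(avg_image)) ** 2
    return ft2 / nnps

def gm_rod(img_a, nnps_a, img_b, nnps_b, band=None):
    """GM-ROD: ratio of the integrated generalized SNR of detector A to that
    of detector B; `band` optionally restricts the spatial-frequency region
    (e.g. to emphasize the high-frequency band)."""
    snr_a = generalized_snr(img_a, nnps_a)
    snr_b = generalized_snr(img_b, nnps_b)
    if band is None:
        band = np.ones(snr_a.shape, dtype=bool)
    return snr_a[band].sum() / snr_b[band].sum()

# Entirely synthetic stand-ins: a crude stent-like object imaged by two
# detectors, with flat hypothetical NNPS values (detector A less noisy).
obj = np.zeros((64, 64))
obj[28:36, 20:44] = 1.0
nnps_maf = np.full((64, 64), 0.5)
nnps_fpd = np.full((64, 64), 1.0)
ratio = gm_rod(obj, nnps_maf, obj, nnps_fpd)   # 2.0 by construction here
```

With identical object images, the ratio reduces to the inverse NNPS ratio, which makes the flat-noise case a convenient sanity check.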

  17. FBG Interrogation Method with High Resolution and Response Speed Based on a Reflective-Matched FBG Scheme

    Science.gov (United States)

    Cui, Jiwen; Hu, Yang; Feng, Kunpeng; Li, Junying; Tan, Jiubin

    2015-01-01

    In this paper, a high resolution and response speed interrogation method based on a reflective-matched Fiber Bragg Grating (FBG) scheme is investigated in detail. The nonlinear problem of the reflective-matched FBG sensing interrogation scheme is solved by establishing and optimizing the mathematical model. A mechanical adjustment to optimize the interrogation method by tuning the central wavelength of the reference FBG to improve the stability and anti-temperature perturbation performance is investigated. To satisfy the measurement requirements of optical and electric signal processing, a well-designed acquisition circuit board is prepared, and experiments on the performance of the interrogation method are carried out. The experimental results indicate that the optical power resolution of the acquisition circuit board is better than 8 pW, and the stability of the interrogation method with the mechanical adjustment can reach 0.06%. Moreover, the nonlinearity of the interrogation method is 3.3% in the measurable range of 60 pm; the influence of temperature is significantly reduced to 9.5%; the wavelength resolution and response speed can achieve values of 0.3 pm and 500 kHz, respectively. PMID:26184195
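
The wavelength-to-intensity conversion behind a matched-FBG scheme, and the nonlinearity the abstract refers to, can be illustrated by modelling the reflected power as the overlap of two identical reflection spectra as their central wavelengths separate. The Gaussian spectral shape and the FWHM value below are assumptions made for this sketch, not parameters from the paper:

```python
import numpy as np

def overlap_power(delta_lambda_nm, fwhm_nm=0.2):
    """Reflected optical power of a matched-FBG pair, modelled as the overlap
    integral of two identical Gaussian reflection spectra whose centre
    wavelengths are separated by delta_lambda_nm. Illustrative model only."""
    sigma = fwhm_nm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    lam = np.linspace(-1.0, 1.0, 4001)          # wavelength offset grid (nm)
    r_sensor = np.exp(-lam**2 / (2 * sigma**2))
    r_reference = np.exp(-(lam - delta_lambda_nm)**2 / (2 * sigma**2))
    return np.sum(r_sensor * r_reference) * (lam[1] - lam[0])

# Power falls off monotonically (and nonlinearly) as the sensor FBG drifts
# away from the reference FBG, which is what converts a wavelength shift
# into a measurable intensity change.
shifts_pm = [0.0, 20.0, 40.0, 60.0]
powers = [overlap_power(s * 1e-3) for s in shifts_pm]   # pm -> nm
```

The Gaussian-overlap response is itself Gaussian in the separation, which is one simple way to see why the raw scheme is nonlinear and why the paper optimizes a mathematical model of it.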

  18. FBG Interrogation Method with High Resolution and Response Speed Based on a Reflective-Matched FBG Scheme.

    Science.gov (United States)

    Cui, Jiwen; Hu, Yang; Feng, Kunpeng; Li, Junying; Tan, Jiubin

    2015-07-08

    In this paper, a high resolution and response speed interrogation method based on a reflective-matched Fiber Bragg Grating (FBG) scheme is investigated in detail. The nonlinear problem of the reflective-matched FBG sensing interrogation scheme is solved by establishing and optimizing the mathematical model. A mechanical adjustment to optimize the interrogation method by tuning the central wavelength of the reference FBG to improve the stability and anti-temperature perturbation performance is investigated. To satisfy the measurement requirements of optical and electric signal processing, a well-designed acquisition circuit board is prepared, and experiments on the performance of the interrogation method are carried out. The experimental results indicate that the optical power resolution of the acquisition circuit board is better than 8 pW, and the stability of the interrogation method with the mechanical adjustment can reach 0.06%. Moreover, the nonlinearity of the interrogation method is 3.3% in the measurable range of 60 pm; the influence of temperature is significantly reduced to 9.5%; the wavelength resolution and response speed can achieve values of 0.3 pm and 500 kHz, respectively.

  19. Development of an improved high resolution mass spectrometry based multi-residue method for veterinary drugs in various food matrices.

    Science.gov (United States)

    Kaufmann, A; Butcher, P; Maden, K; Walker, S; Widmer, M

    2011-08-26

    Multi-residue methods for veterinary drugs or pesticides in food are increasingly based on ultra-performance liquid chromatography (UPLC) coupled to high-resolution mass spectrometry (HRMS). Previously available time-of-flight (TOF) technologies, with resolutions up to 15,000 full width at half maximum (FWHM), were not sufficiently selective for monitoring low residue concentrations in difficult matrices (e.g. hormones in tissue or antibiotics in honey). The approach proposed in this paper is based on a single-stage Orbitrap mass spectrometer operated at 50,000 FWHM. Extracts (liver and kidney) produced according to a validated multi-residue method (based on time-of-flight detection) could not be analyzed by Orbitrap because of extensive signal suppression. This required the improvement of established extraction and clean-up procedures. The introduced, more extensive deproteinization steps and dedicated instrumental settings successfully eliminated these detrimental suppression effects. The reported method, covering more than 100 different veterinary drugs, was validated according to EU Commission Decision 2002/657/EC. Validated matrices include muscle, kidney, liver, fish and honey. Significantly better performance parameters (e.g. linearity, reproducibility and detection limits) were obtained when comparing the new method with the older, TOF-based method. These improvements are attributed to the higher resolution (50,000 versus 12,000 FWHM) and the superior mass stability of the Orbitrap over the previously utilized TOF instrument. Copyright © 2010 Elsevier B.V. All rights reserved.

  20. Delineation of wetland areas from high resolution WorldView-2 data by object-based method

    International Nuclear Information System (INIS)

    Hassan, N; Hamid, J R A; Adnan, N A; Jaafar, M

    2014-01-01

    Various classification methods are available for delineating land cover types; the object-based method is one of them. This paper focuses on the digital image processing aspects of discriminating wetland areas via an object-based method using high-resolution multispectral WorldView-2 satellite image data taken over part of the Penang Island region. This research is an attempt to improve wetland area delineation with a range of classification techniques that can be applied to satellite data of high spatial and spectral resolution such as WorldView-2. The intent is to determine a suitable approach to delineate and map these wetland areas more appropriately. Two parameters pivotal in the object-based method are the spatial resolution and the range of spectral channels of the imaging sensor system. The preliminary results of the study showed that object-based analysis is capable of delineating the wetland region of interest with an accuracy acceptable to the required tolerance for land cover classification.

  1. An Effective Method for Detecting Potential Woodland Vernal Pools Using High-Resolution LiDAR Data and Aerial Imagery

    Directory of Open Access Journals (Sweden)

    Qiusheng Wu

    2014-11-01

    Full Text Available Effective conservation of woodland vernal pools—important components of regional amphibian diversity and ecosystem services—depends on locating and mapping these pools accurately. Current methods for identifying potential vernal pools are primarily based on visual interpretation and digitization of aerial photographs, with variable accuracy and low repeatability. In this paper, we present an effective and efficient method for detecting and mapping potential vernal pools using stochastic depression analysis with additional geospatial analysis. Our method was designed to take advantage of high-resolution light detection and ranging (LiDAR data, which are becoming increasingly available, though not yet frequently employed in vernal pool studies. We successfully detected more than 2000 potential vernal pools in a ~150 km2 study area in eastern Massachusetts. The accuracy assessment in our study indicated that the commission rates ranged from 2.5% to 6.0%, while the proxy omission rate was 8.2%, rates that are much lower than reported errors of previous vernal pool studies conducted in the northeastern United States. One significant advantage of our semi-automated approach for vernal pool identification is that it may reduce inconsistencies and alleviate repeatability concerns associated with manual photointerpretation methods. Another strength of our strategy is that, in addition to detecting the point-based vernal pool locations for the inventory, the boundaries of vernal pools can be extracted as polygon features to characterize their geometric properties, which are not available in the current statewide vernal pool databases in Massachusetts.

  2. Ability of High-Resolution Manometry to Determine Feeding Method and to Predict Aspiration Pneumonia in Patients With Dysphagia.

    Science.gov (United States)

    Park, Chul-Hyun; Lee, Yong-Taek; Yi, Youbin; Lee, Jung-Sang; Park, Jung Ho; Yoon, Kyung Jae

    2017-07-01

    The introduction of high-resolution manometry (HRM) offered an improved method to objectively analyze the status of the pharynx and esophagus. At present, HRM for patients with oropharyngeal dysphagia has been poorly studied. We aimed to determine the feeding method and predict the development of aspiration pneumonia in patients with oropharyngeal dysphagia using HRM. We recruited 120 patients with dysphagia who underwent both HRM and a videofluoroscopic swallow study. HRM was used to estimate pressure events from the velopharynx (VP) to the upper esophageal sphincter (UES). Feeding methods were classified as non-oral or oral according to dysphagia severity. We prospectively followed patients to assess the development of aspiration pneumonia. VP maximal pressure and UES relaxation duration were independently associated with non-oral feeding. Non-oral feeding was determined based on optimal cutoff values of 105.0 mm Hg for VP maximal pressure (95.0% sensitivity and 70.0% specificity) and 0.45 s for UES relaxation duration (76.3% sensitivity and 57.5% specificity). During a mean follow-up of 18.8 months, 15.8% of patients developed aspiration pneumonia. On multivariate Cox regression analysis, VP maximal pressure (Pdysphagia.

  3. Determination of 230Th/232Th and correction methods by High Resolution Inductively Coupled Plasma Mass Spectrometry

    International Nuclear Information System (INIS)

    Xie Shengkai; Guo Dongfa; Tan Jing; Zhang Yanhui; Huang Qiuhong; Gao Aiguo

    2013-01-01

    Rapid and reliable determination of 230Th/232Th is very important for thorium-230 dating. A method for measuring 230Th/232Th in natural samples by high resolution inductively coupled plasma mass spectrometry (HR-ICP-MS) was developed on the basis of our previous work. Precise and accurate measurement of natural 230Th in geological samples is challenging because of the peak tailing from the high-intensity neighboring peak at 232Th and the mass discrimination of the instrument. The peak tailing of 238U to 236U was used to correct for the peak tailing effect of 232Th on 230Th. The mass discrimination factor K between the true and measured isotope ratios was calculated after measuring solutions with different 230Th/232Th ratios. Laboratory standard samples were digested in a mixed acid of HNO3-HF-HCl-HClO4 and separated on Bio-Rad AG 1 × 8 Cl- resin. A blank-standard-blank-sample measurement sequence was used to determine the 230Th/232Th. The measured 230Th/232Th was (7.29 ± 0.34) × 10-6, which agrees with the reference value of (7.33 ± 0.17) × 10-6. (authors)
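
The two corrections described, tail subtraction at mass 230 and the mass discrimination factor K, amount to simple arithmetic. The sketch below uses invented intensities and an invented tail fraction purely for illustration; only the reference ratio 7.33e-6 comes from the abstract:

```python
def mass_bias_factor(r_true, r_measured):
    """Mass-discrimination factor K between the true and measured isotope
    ratios, determined on a standard solution of known 230Th/232Th."""
    return r_true / r_measured

def corrected_ratio(i230, i232, tail_fraction, k):
    """230Th/232Th after subtracting the 232Th tail contribution at mass 230
    (estimated in the abstract via the 238U -> 236U tail) and applying K."""
    return k * (i230 - tail_fraction * i232) / i232

# All numbers below are invented for illustration; only the reference value
# 7.33e-6 appears in the abstract.
k = mass_bias_factor(r_true=7.33e-6, r_measured=7.20e-6)
tail_fraction = 2.0e-7          # hypothetical 232Th tail fraction at mass 230
r_corrected = corrected_ratio(i230=7.6, i232=1.0e6,
                              tail_fraction=tail_fraction, k=k)
```

In practice K and the tail fraction would be re-measured within each analytical session, which is what the blank-standard-blank-sample sequence supports.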

  4. A sediment extraction and cleanup method for wide-scope multitarget screening by liquid chromatography-high-resolution mass spectrometry.

    Science.gov (United States)

    Massei, Riccardo; Byers, Harry; Beckers, Liza-Marie; Prothmann, Jens; Brack, Werner; Schulze, Tobias; Krauss, Martin

    2018-01-01

    Previous studies on organic sediment contaminants focused mainly on a limited number of highly hydrophobic micropollutants accessible to gas chromatography using nonpolar, aprotic extraction solvents. The development of liquid chromatography-high-resolution mass spectrometry (LC-HRMS) permits the spectrum of analysis to be expanded to a wider range of more polar and ionic compounds present in sediments and allows target, suspect, and nontarget screening to be conducted with high sensitivity and selectivity. In this study, we propose a comprehensive multitarget extraction and sample preparation method for characterization of sediment pollution covering a broad range of physicochemical properties that is suitable for LC-HRMS screening analysis. We optimized pressurized liquid extraction, cleanup, and sample dilution for a target list of 310 compounds. Finally, the method was tested on sediment samples from a small river and its tributaries. The results show that the combination of 100 °C for ethyl acetate-acetone (50:50, neutral extract) followed by 80 °C for acetone-formic acid (100:1, acidic extract) and methanol-10 mM sodium tetraborate in water (90:10, basic extract) offered the best extraction recoveries for 287 of the 310 compounds. At a spiking level of 1 μg mL-1, we obtained satisfactory cleanup recoveries of (93 ± 23)% for the neutral extract and (42 ± 16)% for the combined acidic/basic extracts after solvent exchange. Among the 69 compounds detected in environmental samples, we successfully quantified several pharmaceuticals and polar pesticides.

  5. A new method of measuring centre-of-mass velocities of radially pulsating stars from high-resolution spectroscopy

    Science.gov (United States)

    Britavskiy, N.; Pancino, E.; Tsymbal, V.; Romano, D.; Fossati, L.

    2018-03-01

    We present a radial velocity analysis of 20 solar neighbourhood RR Lyrae and three Population II Cepheid variables. We obtained high-resolution, moderate-to-high signal-to-noise ratio spectra for most stars; these spectra covered different pulsation phases for each star. To estimate the gamma (centre-of-mass) velocities of the programme stars, we use two independent methods. The first, `classic' method is based on RR Lyrae radial velocity curve templates. The second method is based on the analysis of absorption-line profile asymmetry to determine both the pulsational and gamma velocities; it applies the least-squares deconvolution (LSD) technique to analyse the line asymmetry that occurs in the spectra. We obtain measurements of the pulsation component of the radial velocity with an accuracy of ±3.5 km s-1. The gamma velocity was determined with an accuracy of ±10 km s-1, even for those stars having a small number of spectra. The main advantage of this method is the possibility of estimating the gamma velocity from even a single spectroscopic observation with uncertain pulsation phase. A detailed investigation of LSD profile asymmetry shows that the projection factor p varies as a function of the pulsation phase; this is a key parameter, which converts observed spectral-line radial velocity variations into photospheric pulsation velocities. As a by-product of our study, we present 41 densely spaced synthetic grids of LSD profile bisectors based on atmospheric models of RR Lyr covering all pulsation phases.

  6. High Resolution Melting Analysis Targeting hsp70 as a Fast and Efficient Method for the Discrimination of Leishmania Species.

    Science.gov (United States)

    Zampieri, Ricardo Andrade; Laranjeira-Silva, Maria Fernanda; Muxel, Sandra Marcia; Stocco de Lima, Ana Carolina; Shaw, Jeffrey Jon; Floeter-Winter, Lucile Maria

    2016-02-01

    Protozoan parasites of the genus Leishmania cause a large spectrum of clinical manifestations known as Leishmaniases. These diseases are increasingly important public health problems in many countries both within and outside endemic regions. Thus, an accurate differential diagnosis is extremely relevant for understanding epidemiological profiles and for the administration of the best therapeutic protocol. Exploring the High Resolution Melting (HRM) dissociation profiles of two amplicons using real time polymerase chain reaction (real-time PCR) targeting heat-shock protein 70 coding gene (hsp70) revealed differences that allowed the discrimination of genomic DNA samples of eight Leishmania species found in the Americas, including Leishmania (Leishmania) infantum chagasi, L. (L.) amazonensis, L. (L.) mexicana, L. (Viannia) lainsoni, L. (V.) braziliensis, L. (V.) guyanensis, L. (V.) naiffi and L. (V.) shawi, and three species found in Eurasia and Africa, including L. (L.) tropica, L. (L.) donovani and L. (L.) major. In addition, we tested DNA samples obtained from standard promastigote culture, naturally infected phlebotomines, experimentally infected mice and clinical human samples to validate the proposed protocol. HRM analysis of hsp70 amplicons is a fast and robust strategy that allowed for the detection and discrimination of all Leishmania species responsible for the Leishmaniases in Brazil and Eurasia/Africa with high sensitivity and accuracy. This method could detect less than one parasite per reaction, even in the presence of host DNA.

  7. High Resolution Melting Analysis Targeting hsp70 as a Fast and Efficient Method for the Discrimination of Leishmania Species.

    Directory of Open Access Journals (Sweden)

    Ricardo Andrade Zampieri

    2016-02-01

    Full Text Available Protozoan parasites of the genus Leishmania cause a large spectrum of clinical manifestations known as Leishmaniases. These diseases are increasingly important public health problems in many countries both within and outside endemic regions. Thus, an accurate differential diagnosis is extremely relevant for understanding epidemiological profiles and for the administration of the best therapeutic protocol. Exploring the High Resolution Melting (HRM) dissociation profiles of two amplicons using real time polymerase chain reaction (real-time PCR) targeting heat-shock protein 70 coding gene (hsp70) revealed differences that allowed the discrimination of genomic DNA samples of eight Leishmania species found in the Americas, including Leishmania (Leishmania) infantum chagasi, L. (L.) amazonensis, L. (L.) mexicana, L. (Viannia) lainsoni, L. (V.) braziliensis, L. (V.) guyanensis, L. (V.) naiffi and L. (V.) shawi, and three species found in Eurasia and Africa, including L. (L.) tropica, L. (L.) donovani and L. (L.) major. In addition, we tested DNA samples obtained from standard promastigote culture, naturally infected phlebotomines, experimentally infected mice and clinical human samples to validate the proposed protocol. HRM analysis of hsp70 amplicons is a fast and robust strategy that allowed for the detection and discrimination of all Leishmania species responsible for the Leishmaniases in Brazil and Eurasia/Africa with high sensitivity and accuracy. This method could detect less than one parasite per reaction, even in the presence of host DNA.

  8. Nontargeted Screening Method for Illegal Additives Based on Ultrahigh-Performance Liquid Chromatography-High-Resolution Mass Spectrometry.

    Science.gov (United States)

    Fu, Yanqing; Zhou, Zhihui; Kong, Hongwei; Lu, Xin; Zhao, Xinjie; Chen, Yihui; Chen, Jia; Wu, Zeming; Xu, Zhiliang; Zhao, Chunxia; Xu, Guowang

    2016-09-06

    Identification of illegal additives in complex matrixes is important in the food safety field. In this study, a nontargeted screening strategy was developed to find illegal additives based on ultrahigh-performance liquid chromatography-high-resolution mass spectrometry (UHPLC-HRMS). First, an analytical method for possible illegal additives in complex matrixes was established, including fast sample pretreatment, accurate UHPLC separation, and HRMS detection. Second, an efficient data processing and differential analysis workflow was suggested and applied to find potential risk compounds. Third, structure elucidation of risk compounds was performed by (1) searching online databases [Metlin and the Human Metabolome Database (HMDB)] and an in-house database, established under the above-defined UHPLC-HRMS conditions, that contains retention times, mass spectra (MS), and tandem mass spectra (MS/MS) of 475 illegal additives, (2) analyzing fragment ions, and (3) referring to fragmentation rules. Fish was taken as an example to show the usefulness of the nontargeted screening strategy, and six additives were found in suspected fish samples. Quantitative analysis was further carried out to determine the contents of these compounds. The satisfactory application of this strategy to fish samples means that it can also be used for screening illegal additives in other kinds of food samples.
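
One building block of such a screening workflow, matching measured accurate masses against an in-house database within a ppm tolerance, can be sketched as follows. The compound names and m/z values are hypothetical placeholders, not entries from the authors' 475-compound database:

```python
def match_mz(measured_mz, database, tol_ppm=5.0):
    """Return the database compounds whose reference m/z lies within
    tol_ppm of the measured accurate mass."""
    return [name for name, mz in database.items()
            if abs(measured_mz - mz) / mz * 1e6 <= tol_ppm]

# Hypothetical two-entry stand-in for an in-house accurate-mass database
in_house_db = {
    "compound A": 329.2012,
    "compound B": 372.2434,
}
hits = match_mz(329.2020, in_house_db)   # within ~2.4 ppm of compound A
```

A real implementation would additionally filter candidates by retention time and MS/MS fragment matches, as the abstract's three-step elucidation describes.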

  9. Image Quality Assessment of High-Resolution Satellite Images with Mtf-Based Fuzzy Comprehensive Evaluation Method

    Science.gov (United States)

    Wu, Z.; Luo, Z.; Zhang, Y.; Guo, F.; He, L.

    2018-04-01

    A Modulation Transfer Function (MTF)-based fuzzy comprehensive evaluation method is proposed in this paper for evaluating high-resolution satellite image quality. To establish the factor set, two MTF features and seven radiant features were extracted from the knife-edge region of each image patch: Nyquist frequency, MTF0.5, entropy, peak signal-to-noise ratio (PSNR), average difference, edge intensity, average gradient, contrast and ground spatial distance (GSD). After analyzing the statistical distributions of these features, a fuzzy evaluation threshold table and fuzzy evaluation membership functions were established. Experiments on comprehensive quality assessment of different natural and artificial objects were done with GF2 image patches. The results showed that the calibration field image has the highest quality score. The water image has the closest quality to the calibration field; the building image is somewhat poorer than the water image but much better than the farmland image. To test the influence of different features on quality evaluation, experiments with different weights were run on GF2 and SPOT7 images. The results showed that different weights lead to different evaluation outcomes. When the weights emphasize the edge features and GSD, the image quality of GF2 is better than that of SPOT7; however, when MTF and PSNR are set as the main factors, the image quality of SPOT7 is better than that of GF2.
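
The fuzzy comprehensive evaluation step itself is a small matrix computation: a weight vector over the factors is multiplied by a membership matrix over the quality grades, and the result is collapsed to a crisp score. Below is a generic sketch with invented weights, memberships and grade values (none taken from the paper):

```python
import numpy as np

def fuzzy_comprehensive_eval(weights, membership, grade_scores):
    """Classic weighted fuzzy comprehensive evaluation: B = W @ R, then a
    crisp score obtained by dotting B with the grade values."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise the factor weights
    b = w @ np.asarray(membership)       # aggregated grade membership
    b = b / b.sum()
    return float(b @ np.asarray(grade_scores))

# Three illustrative factors (e.g. an MTF feature, PSNR, average gradient)
# rated against four quality grades (excellent, good, fair, poor).
R = [[0.6, 0.3, 0.1, 0.0],
     [0.2, 0.5, 0.2, 0.1],
     [0.4, 0.4, 0.1, 0.1]]
score = fuzzy_comprehensive_eval([0.5, 0.3, 0.2], R, [4, 3, 2, 1])
```

Changing the weight vector shifts the aggregated score, which is the mechanism behind the paper's observation that different weight settings can reverse the GF2/SPOT7 ranking.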

  10. High-resolution hydrogen profiling in AlGaN/GaN heterostructures grown by different epitaxial methods

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez-Posada Flores, F; Redondo-Cubero, A; Bengoechea, A; Brana, A F; Munoz, E [Instituto de Sistemas Optoelectrónicos y Microtecnología (ISOM) and Dpto. Ingeniería Electrónica (DIE), ETSI de Telecomunicación, Universidad Politécnica de Madrid, E-28040 Madrid (Spain); Gago, R [Centro de Micro-Análisis de Materiales, Universidad Autónoma de Madrid, E-28049 Madrid (Spain); Jimenez, A [Dpto. Electrónica, Escuela Politécnica Superior, Universidad de Alcalá, E-28805 Alcalá de Henares, Madrid (Spain); Grambole, D, E-mail: fposada@die.upm.e [Institute of Ion Beam Physics and Materials Research, Forschungszentrum Dresden-Rossendorf, PF 51019, D-01314 Dresden (Germany)

    2009-03-07

    Hydrogen (H) incorporation into AlGaN/GaN heterostructures used in high electron mobility transistors, grown by different methods, is studied by high-resolution depth profiling. Samples grown on sapphire and Si(1 1 1) substrates by molecular-beam epitaxy and metal-organic vapour phase epitaxy, involving H-free and H-containing precursors, were analysed to evaluate the possible incorporation of H into the wafer. The amount of H was measured by means of nuclear reaction analysis (NRA) using the ¹H(¹⁵N,αγ)¹²C reaction up to a depth of ~110 nm into the heterostructures. Interestingly, the H profiles are similar in all the samples analysed, with an increasing H content towards the surface and a negligible H incorporation into the GaN layer (0.24 ± 0.08 at%) or at the AlGaN/GaN interface. Therefore, NRA shows that H uptake is not related to the growth process or technique employed and that H contamination may be due to external sources after growth. The possible correlation between topographical defects on the AlGaN surface and the H concentration is also discussed.

  11. High resolution melting curve analysis, a rapid and affordable method for mutation analysis in childhood acute myeloid leukemia

    Directory of Open Access Journals (Sweden)

    Yin Liu

    2014-09-01

    Full Text Available Background: Molecular genetic alterations with prognostic significance have been described in childhood acute myeloid leukemia (AML). The aim of this study was to establish cost-effective techniques to detect mutations of the FMS-like tyrosine kinase 3 (FLT3) and Nucleophosmin 1 (NPM1) genes and a partial tandem duplication within the mixed lineage leukemia gene (MLL-PTD) in childhood AML. Procedure: Ninety-nine children with newly diagnosed AML were included in this study. We developed a fluorescent dye SYTO-82 based high resolution melting curve (HRM) analysis to detect FLT3 internal tandem duplication (FLT3-ITD), FLT3 tyrosine kinase domain (FLT3-TKD) and NPM1 mutations. MLL-PTD was screened by real-time quantitative PCR. Results: The HRM methodology correlated well with gold-standard Sanger sequencing at lower cost. Among the 99 patients studied, the FLT3-ITD mutation was associated with significantly worse event-free survival (EFS). Patients with the NPM1 mutation had significantly better EFS and overall survival. However, HRM was not sensitive enough for minimal residual disease monitoring. Conclusions: HRM was a rapid and efficient method for screening of FLT3 and NPM1 gene mutations. It was both affordable and accurate, especially in resource-underprivileged regions. Our results indicated that HRM could be a useful clinical tool for rapid and cost-effective screening of the FLT3 and NPM1 mutations in AML patients.

  12. Object-oriented Method of Hierarchical Urban Building Extraction from High-resolution Remote-Sensing Imagery

    Directory of Open Access Journals (Sweden)

    TAO Chao

    2016-02-01

    Full Text Available An automatic urban building extraction method for high-resolution remote-sensing imagery, which combines building segmentation based on neighbor total variations with object-oriented analysis, is presented in this paper. Because buildings in the segmented image vary in extraction complexity, a hierarchical building extraction strategy with multi-feature fusion is adopted. First, we extract rectangular buildings that remain intact after segmentation through shape analysis. Second, to ensure that each candidate building target is independent, a multidirectional morphological road-filtering algorithm is designed to separate buildings from neighboring roads with a similar spectrum. Finally, we take the extracted buildings and the excluded non-buildings as samples to establish probability models, and a Bayesian discriminating classifier is used to judge the remaining candidate building objects and obtain the final extraction result. The experimental results show that the approach is able to detect buildings with different structural and spectral features in the same image. The performance evaluation also supports the robustness and precision of the approach developed.

  13. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions- Lebanon.

    Science.gov (United States)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated and fast versus diffuse and slow) and a duality of flow, which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws, so the assessment of their vulnerability proves challenging. Studies have shown that the vulnerability of aquifers is highly governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration in a karst system, thus greatly influencing the discharge rates observed at a karst spring and, consequently, the vulnerability of the spring. This heterogeneity can only be depicted using an integrated numerical model to quantify recharge spatially and assess the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The aim of the project is also to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been installed since July 2014 on two different pilot karst catchments (drained by the Qachqouch and Assal springs) to collect high-resolution data for an integrated catchment numerical model (MIKE SHE, DHI) including climate, the unsaturated zone, and the saturated zone. Catchment characterization essential for the model included geological mapping and a survey of karst features (e.g., dolines), as they contribute to fast flow. Tracer experiments were performed under different flow conditions (snowmelt and low flow) to delineate the catchment area and reveal groundwater velocities and the response to snowmelt events. An assessment of spring response after precipitation events allowed the estimation of the

  14. An Efficient Method for Mapping High-Resolution Global River Discharge Based on the Algorithms of Drainage Network Extraction

    Directory of Open Access Journals (Sweden)

    Jiaye Li

    2018-04-01

    Full Text Available River discharge, which represents the accumulation of surface water flowing into rivers and ultimately into the ocean or other water bodies, may have great impacts on water quality and the living organisms in rivers. However, global knowledge of river discharge is still poor and worth exploring. This study proposes an efficient method for mapping high-resolution global river discharge based on the algorithms of drainage network extraction. Using the existing global runoff map and digital elevation model (DEM) data as inputs, this method consists of three steps. First, the pixels of the runoff map and the DEM data are resampled to the same resolution (i.e., 0.01 degree). Second, the flow direction of each pixel of the DEM data (identified by the optimal flow path method used in drainage network extraction) is determined and then applied to the corresponding pixel of the runoff map. Third, the river discharge of each pixel of the runoff map is calculated by summing the runoffs of all the pixels upstream of this pixel, similar to the upslope area accumulation step in drainage network extraction. Finally, a 0.01-degree global map of the mean annual river discharge is obtained. Moreover, a 0.5-degree global map of the mean annual river discharge is produced to display the results with a more intuitive perception. Compared against the existing global river discharge databases, the 0.01-degree map is of generally high accuracy for the selected river basins, especially for the Amazon River basin, with the lowest relative error (RE) of 0.3%, and the Yangtze River basin, within the RE range of ±6.0%. However, the results for the Congo and Zambezi River basins are not satisfactory, with RE values over 90%, and it is inferred that there may be some accuracy problems with the runoff map in these river basins.
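
Steps two and three, assigning a steepest-descent (D8-style) flow direction from the DEM and accumulating runoff downstream, can be sketched on a tiny grid. This is a generic flow accumulation routine under the assumption of a depression-free DEM, not the authors' optimal flow path implementation:

```python
import numpy as np

def discharge_map(dem, runoff):
    """Accumulate runoff along steepest-descent (D8) flow directions: each
    cell's discharge is its own runoff plus everything routed to it from
    upstream. Processing cells from highest to lowest elevation gives a
    valid topological order on a depression-free DEM."""
    rows, cols = dem.shape
    acc = runoff.astype(float).copy()
    order = np.argsort(dem, axis=None)[::-1]          # highest cells first
    for idx in order:
        r, c = divmod(int(idx), cols)
        best_drop, target = 0.0, None
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < rows and 0 <= cc < cols:
                    drop = (dem[r, c] - dem[rr, cc]) / np.hypot(dr, dc)
                    if drop > best_drop:
                        best_drop, target = drop, (rr, cc)
        if target is not None:           # route downstream along steepest descent
            acc[target] += acc[r, c]
    return acc

# Tiny tilted DEM: everything drains toward the rightmost column
dem = np.array([[3., 2., 1.],
                [3., 2., 1.],
                [3., 2., 1.]])
runoff = np.ones_like(dem)               # uniform unit runoff per pixel
q = discharge_map(dem, runoff)
```

On this toy grid the discharge grows column by column toward the outlet, mirroring the upslope area accumulation step the abstract borrows from drainage network extraction.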

  15. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    Science.gov (United States)

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary, with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analyses based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), are proposed to differentiate the dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis kernels, random forest and rotation forest, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The largest feature reduction (66%) is achieved by GA for the EMBD and by BPSO for the BD-TFD. The combination of EMBD and polynomial-kernel SVM is found to be most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of EMBD and polynomial-kernel SVM could be used to
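Features of this kind are typically derived from the spectral content of the signal. One classic fatigue indicator (shown here purely as an illustrative example, not necessarily one of the paper's twelve features) is the median frequency, which shifts downward as a muscle fatigues:

```python
def median_frequency(freqs, power):
    """Return the frequency below which half of the total spectral power lies.

    freqs: frequency values (ascending); power: corresponding power estimates,
    e.g. one time slice of a time-frequency distribution.
    """
    total = sum(power)
    cumulative = 0.0
    for f, p in zip(freqs, power):
        cumulative += p
        if cumulative >= total / 2:
            return f
    return freqs[-1]
```

Tracking this statistic over successive time slices of a TFD gives a simple picture of the spectral compression that accompanies fatigue.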

  16. High resolution ultrasonic densitometer

    International Nuclear Information System (INIS)

    Dress, W.B.

    1983-01-01

    The velocity of torsional stress pulses in an ultrasonic waveguide of non-circular cross section is affected by the temperature and density of the surrounding medium. Measurement of the transit times of acoustic echoes from the ends of a sensor section are interpreted as level, density, and temperature of the fluid environment surrounding that section. This paper examines methods of making these measurements to obtain high resolution, temperature-corrected absolute and relative density and level determinations of the fluid. Possible applications include on-line process monitoring, a hand-held density probe for battery charge state indication, and precise inventory control for such diverse fluids as uranium salt solutions in accountability storage and gasoline in service station storage tanks

  17. Annotation of the human serum metabolome by coupling three liquid chromatography methods to high-resolution mass spectrometry.

    Science.gov (United States)

    Boudah, Samia; Olivier, Marie-Françoise; Aros-Calt, Sandrine; Oliveira, Lydie; Fenaille, François; Tabet, Jean-Claude; Junot, Christophe

    2014-09-01

    This work aims at evaluating the relevance and versatility of liquid chromatography coupled to high-resolution mass spectrometry (LC/HRMS) for performing a qualitative and comprehensive study of the human serum metabolome. To this end, three different chromatographic systems, based on a reversed phase (RP), hydrophilic interaction chromatography (HILIC) and a pentafluorophenylpropyl (PFPP) stationary phase, were used, with detection in both positive and negative electrospray modes. The LC/HRMS platforms were first assessed for their ability to detect, retain and separate 657 metabolite standards representative of the chemical families occurring in biological fluids. More than 75% were efficiently retained in at least one LC condition and less than 5% were exclusively retained by the RP column. These three LC/HRMS systems were then evaluated for their coverage of the serum metabolome. The combination of the RP, HILIC and PFPP based LC/HRMS methods resulted in the annotation of about 1328 features in the negative ionization mode and 1358 in the positive ionization mode, on the basis of their accurate mass and precise retention time in at least one chromatographic condition. Less than 12% of these annotations were shared by the three LC systems, which highlights their complementarity. The HILIC column ensured the greatest metabolome coverage in the negative ionization mode, whereas the PFPP column was the most effective in the positive ionization mode. Altogether, 192 annotations were confirmed using our spectral database and 74 others by performing MS/MS experiments. This resulted in the formal or putative identification of 266 metabolites, among which 59 are reported for the first time in human serum. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Validation of the Mass-Extraction-Window for Quantitative Methods Using Liquid Chromatography High Resolution Mass Spectrometry.

    Science.gov (United States)

    Glauser, Gaétan; Grund, Baptiste; Gassner, Anne-Laure; Menin, Laure; Henry, Hugues; Bromirski, Maciej; Schütz, Frédéric; McMullen, Justin; Rochat, Bertrand

    2016-03-15

    A paradigm shift is underway in the field of quantitative liquid chromatography-mass spectrometry (LC-MS) analysis thanks to the arrival of recent high-resolution mass spectrometers (HRMS). The capability of HRMS to perform sensitive and reliable quantifications of a large variety of analytes in HR-full scan mode is showing that it is now realistic to perform quantitative and qualitative analysis with the same instrument. Moreover, HR-full scan acquisition offers a global view of sample extracts and allows retrospective investigations as virtually all ionized compounds are detected with a high sensitivity. In time, the versatility of HRMS together with the increasing need for relative quantification of hundreds of endogenous metabolites should promote a shift from triple-quadrupole MS to HRMS. However, a current "pitfall" in quantitative LC-HRMS analysis is the lack of HRMS-specific guidance for validated quantitative analyses. Indeed, false positive and false negative HRMS detections are rare, albeit possible, if inadequate parameters are used. Here, we investigated two key parameters for the validation of LC-HRMS quantitative analyses: the mass accuracy (MA) and the mass-extraction-window (MEW) that is used to construct the extracted-ion-chromatograms. We propose MA-parameters, graphs, and equations to calculate rational MEW width for the validation of quantitative LC-HRMS methods. MA measurements were performed on four different LC-HRMS platforms. Experimentally determined MEW values ranged between 5.6 and 16.5 ppm and depended on the HRMS platform, its working environment, the calibration procedure, and the analyte considered. The proposed procedure provides a fit-for-purpose MEW determination and prevents false detections.
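The relationship between a ppm tolerance and the absolute m/z window used to build an extracted-ion chromatogram is simple and worth making explicit. A minimal helper (illustrative only, not the authors' validation code):

```python
def mass_extraction_window(mz, ppm):
    """Return the (low, high) m/z bounds for a centroid mass and a ±ppm tolerance.

    The half-width scales with the target m/z: window = mz * ppm / 1e6.
    """
    half_width = mz * ppm / 1e6
    return mz - half_width, mz + half_width
```

For example, a ±10 ppm tolerance around m/z 500 spans 499.995-500.005, i.e. 10 mDa total width; the same ppm tolerance at m/z 1000 is twice as wide in absolute terms, which is why the paper argues for a rational, platform-specific MEW rather than a fixed Da value.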

  19. High-resolution gas chromatography/mass spectrometry method for characterization and quantitative analysis of ginkgolic acids in Ginkgo biloba plants, extracts, and dietary supplements

    Science.gov (United States)

    A high resolution GC/MS with Selected Ion Monitor (SIM) method focusing on the characterization and quantitative analysis of ginkgolic acids (GAs) in Ginkgo biloba L. plant materials, extracts and commercial products was developed and validated. The method involved sample extraction with (1:1) meth...

  20. High resolution numerical simulation (WRF V3) of an extreme rainy event over the Guadeloupe archipelago: Case of 3-5 January 2011.

    Science.gov (United States)

    Bernard, Didier C.; Cécé, Raphaël; Dorville, Jean-François

    2013-04-01

    During the dry season, the Guadeloupe archipelago may be affected by extreme rainy disturbances which can induce floods in a very short time. C. Brévignon (2003) defined a heavy rain event in this tropical region as rainfall above 100 mm per day (outside mountainous areas). During a cold front passage (3-5 January 2011), torrential rainfall caused floods, major damage, landslides and five deaths. This phenomenon has called into question the current warning system based on large-scale numerical models. Such low-resolution forecasting (around 50-km scale) is unsuitable for a small tropical island like Guadeloupe (1600 km2). The most affected area was the middle of Grande-Terre island, the main flat island of the archipelago (area of 587 km2, peak at 136 m) and the most populated sector of Guadeloupe. In this area, observed rainfall reached 100-160 mm in 24 hours (an amount equivalent to two months of rain for January (C. Brévignon, 2003)); in less than 2 hours drainage systems were saturated, and five people died in a ravine. For two years, the atmospheric model WRF ARW V3 (Skamarock et al., 2008) has been used to model the meteorological variable fields observed over the Guadeloupe archipelago at a high-resolution 1-km scale (Cécé et al., 2011). The model error estimators show that meteorological variables seem to be properly simulated for standard types of weather: undisturbed, strong or weak trade winds. These simulations indicate that, for weak to moderate synoptic winds, a small island like Grande-Terre is able to generate inland convergence zones during daytime. In this presentation, we apply this high-resolution model to simulate the extreme rainy disturbance of 3-5 January 2011. The evolution of the modeled meteorological variable fields is analyzed in the most affected area of Grande-Terre (city of Les Abymes). The main goal is to examine local quasi-stationary updraft systems and highlight their convective mechanisms. The

  1. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. · Clearer, simpler descriptions and explanations of the various numerical methods · Two new types of numerical problems: accurately solving partial differential equations with the included software and computing line integrals in the complex plane.

  2. Quantitative analysis of multiple high-resolution mass spectrometry images using chemometric methods: quantitation of chlordecone in mouse liver.

    Science.gov (United States)

    Mohammadi, Saeedeh; Parastar, Hadi

    2018-05-15

    In this work, a chemometrics-based strategy is developed for quantitative mass spectrometry imaging (MSI). In this regard, quantification of chlordecone, a carcinogenic organochlorinated pesticide (C10Cl10O), in mouse liver using the matrix-assisted laser desorption ionization MSI (MALDI-MSI) method is used as a case study. The MSI datasets corresponded to 1, 5 and 10 days of mouse exposure to standard chlordecone in the quantity range of 0 to 450 μg g-1. A binning approach in the m/z direction is used to group high-resolution m/z values and to reduce the large data size. To assess the effect of bin size on the quality of the results, three different bin sizes of 0.25, 0.5 and 1.0 were chosen. Afterwards, three-way MSI data arrays (two spatial dimensions and one m/z dimension) for seven standards and four unknown samples were column-wise augmented with m/z values as the common mode. These datasets were then analyzed using multivariate curve resolution-alternating least squares (MCR-ALS) with proper constraints. The resolved mass spectra were used for identification of chlordecone in the presence of a complex background and interference. Additionally, the augmented spatial profiles were post-processed and 2D images for each component were obtained in the calibration and unknown samples. The sum of these profiles was used to build the calibration curve and to obtain the analytical figures of merit (AFOMs). Inspection of the results showed that the lowest bin size (i.e., 0.25) provides the most accurate results. Finally, the results obtained by MCR for the three datasets were compared with those of gas chromatography-mass spectrometry (GC-MS) and MALDI-MSI. The MCR-assisted method gives a higher amount of chlordecone than MALDI-MSI and a lower amount than GC-MS. It is concluded that combining chemometric methods with MSI can be considered an alternative way to perform MSI quantification.
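The m/z binning step can be sketched as follows; this is a simplified illustration under assumed names and data layout, since the paper does not describe its implementation at this level:

```python
def bin_mz(peaks, bin_size=0.25):
    """Sum intensities of (mz, intensity) peaks falling into fixed-width m/z bins.

    Returns a dict mapping each bin's lower m/z edge to the summed intensity,
    reducing many high-resolution m/z values to a manageable grid.
    """
    bins = {}
    for mz, intensity in peaks:
        edge = int(mz / bin_size) * bin_size  # lower edge of the bin containing mz
        bins[edge] = bins.get(edge, 0.0) + intensity
    return bins
```

Smaller bin sizes (e.g. 0.25 rather than 1.0) keep nearby high-resolution peaks in separate bins, which is consistent with the paper's finding that the smallest bin size gave the most accurate results.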

  3. High-resolution melting (HRM) analysis as a feasible method for detecting spinal muscular atrophy via dried blood spots.

    Science.gov (United States)

    Er, Tze-Kiong; Kan, Tzu-Min; Su, Yu-Fa; Liu, Ta-Chih; Chang, Jan-Gowth; Hung, Shih-Ya; Jong, Yuh-Jyh

    2012-11-12

    Spinal muscular atrophy (SMA) is a neurodegenerative disease and the leading genetic cause of infant mortality. More than 95% of patients with SMA have a homozygous disruption of the survival motor neuron 1 (SMN1) gene, caused by mutation, deletion, or rearrangement. Recent evidence suggests that treatment in the immediate postnatal period, prior to the development of weakness or very early in the course of the disease, may be effective. Therefore, our objective was to establish a feasible method for SMA screening. High-resolution melting (HRM) analysis is rapidly becoming one of the most important mutation-scanning methodologies, allowing mutation scanning and genotyping without the need for costly labeled oligonucleotides. In the current study, we aimed to develop a method for identifying the single-nucleotide substitution in SMN1 exon 7 (c.840C>T) by HRM analysis. Genomic DNA was extracted from peripheral blood samples and dried blood spots obtained from 30 patients with SMA and 30 normal individuals. All results were previously confirmed by denaturing high-performance liquid chromatography (DHPLC). A dedicated primer set was used for the HRM analysis. At first, we failed to identify the substitution because homozygous CC and homozygous TT cannot be distinguished directly by HRM analysis. Therefore, all samples were mixed with a sample of known SMN1/SMN2 copy number (SMN1/SMN2 = 0:3), which we call the driver. This strategy differentiates homozygous CC from homozygous TT: after mixing with the driver, the melting profile of homozygous CC becomes heteroduplex, whereas that of homozygous TT remains the same in the normalized and temperature-shifted difference plots. HRM analysis can thus be successfully applied to screen for SMA via DNA obtained from whole blood and dried blood spots.
We strongly believe that HRM analysis, a high-throughput method

  4. Combining the Pixel-based and Object-based Methods for Building Change Detection Using High-resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    ZHANG Zhiqiang

    2018-01-01

    Full Text Available Timely and accurate change detection of buildings provides important information for urban planning and management. Accompanying the rapid development of satellite remote sensing technology, detecting building changes from high-resolution remote sensing images has received wide attention. Given that pixel-based methods of change detection often lead to low accuracy while object-based methods are complicated to use, this research proposes a method that combines pixel-based and object-based methods for detecting building changes from high-resolution remote sensing images. First, based on multiple features extracted from the high-resolution images, a random forest classifier is applied to detect changed buildings at the pixel level. Then, a segmentation method is applied to segment the post-phase remote sensing image and obtain post-phase image objects. Finally, the pixel-level change results and the post-phase image objects are fused to recognize the changed building objects. Multi-temporal QuickBird images are used as experimental data for building change detection with high-resolution remote sensing images. The results indicate that the proposed method can reduce the influence of environmental differences, such as light intensity and view angle, on building change detection, and effectively improve the accuracy of building change detection.
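The final fusion step can be illustrated with a simple majority rule: an image object is flagged as a changed building when a sufficient fraction of its pixels were labeled as changed by the pixel-level classifier. The threshold and data layout below are assumptions for illustration, not the paper's exact fusion rule:

```python
def changed_objects(changed_pixels, objects, min_fraction=0.5):
    """Return ids of image objects whose changed-pixel fraction meets a threshold.

    changed_pixels: set of (row, col) pixels flagged by the pixel-level classifier
    objects:        dict mapping object id -> list of (row, col) pixels in that object
    """
    flagged = []
    for oid, pixels in objects.items():
        fraction = sum(1 for p in pixels if p in changed_pixels) / len(pixels)
        if fraction >= min_fraction:
            flagged.append(oid)
    return flagged
```

Voting at the object level is what suppresses isolated pixel-level false alarms caused by illumination or view-angle differences between the two acquisition dates.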

  5. High-resolution numerical modeling of tectonic underplating in circum-Pacific subduction zones: toward a better understanding of deformation in the episodic tremor and slip region?

    Science.gov (United States)

    Menant, A.; Angiboust, S.; Gerya, T.; Lacassin, R.; Simoes, M.; Grandin, R.

    2017-12-01

    Studies of now-exhumed ancient subduction systems have revealed km-scale tectonic units of marine sediments and oceanic crust that were tectonically underplated (i.e. basally accreted) from the downgoing plate to the overriding plate at more than 30 km depth. Such huge mass transfers must have a major impact, both in terms of long-term topographic variations and of seismic/aseismic deformation in subduction zones. However, the quantification of these responses to the underplating process remains poorly constrained. Using high-resolution visco-elasto-plastic thermo-mechanical models, we present in unprecedented detail the dynamics of formation and destruction of underplated complexes in subduction zones. Initial conditions in our experiments are defined to fit different subduction systems of the circum-Pacific region where the underplating process is strongly suspected (e.g. the Cascadia, SW-Japan, New Zealand, and Chilean subduction zones). It appears that whatever the subduction system considered, underplating of sediments and oceanic crust always occurs episodically, forming a coherent nappe stack at depths between 10 and 50 km. At greater depths, a tectonic mélange with a serpentinized mantle wedge matrix develops along the plate interface. The size of these underplated complexes changes according to the subduction system considered. For instance, a 15-km thick nappe stack is obtained for the N-Chilean subduction zone after a series of underplating events. Such an episodic event lasts 4-5 Myr and can be responsible for a 2-km high uplift in the forearc region. Subsequent basal erosion of these underplated complexes results in their only partial preservation at crustal and mantle depths, suggesting that, after exhumation, only a tiny section of the overall underplated material can be observed today in ancient subduction systems. Finally, tectonic underplating in our numerical models is systematically associated with (1) an increasing

  6. High Resolution Elevation Contours

    Data.gov (United States)

    Minnesota Department of Natural Resources — This dataset contains contours generated from high resolution data sources such as LiDAR. Generally speaking this data is 2 foot or less contour interval.

  7. Setting up of a liquid chromatography-high resolution tandem mass spectrometry method for the detection of caseins in food. A comparison with ELISA method

    Directory of Open Access Journals (Sweden)

    Daniela Gastaldi

    2013-06-01

    Full Text Available Determination of caseins in food matrices is usually performed using the competitive enzyme-linked immunosorbent assay (ELISA) technique. However, this technique suffers from a number of limitations: among these, applicability to a narrow concentration range, a non-linear (logarithmic) response, non-negligible cross-reactivity, and a high cost per kit. At the time of the completion of this study, reliable instrumental methods able to determine this class of substances both qualitatively and quantitatively in case of a positive ELISA result were scarcely available in the literature. In the present study, a liquid chromatography-high resolution tandem mass spectrometry (HPLC-HRMS/MS) instrumental method was developed on a high-resolution mass spectrometer (Orbitrap). Real samples of sausages in which caseins had been detected by the ELISA technique were analysed. A casein-free sample of ham was used as a blank. The analytical characteristics of the instrumental method were compared with those of a commercial ELISA test declared specific for α- and β-casein.

  8. Comparison of infrared spectroscopy techniques: developing an efficient method for high resolution analysis of sediment properties from long records

    Science.gov (United States)

    Hahn, Annette; Rosén, Peter; Kliem, Pierre; Ohlendorf, Christian; Persson, Per; Zolitschka, Bernd; Pasado Science Team

    2010-05-01

    The analysis of sediment samples in the visible to mid-infrared spectral range is ideal for high-resolution records. It requires only small amounts (0.01-0.1 g dry weight) of sample material and facilitates rapid and cost-efficient analysis of a wide variety of biogeochemical properties of minerogenic and organic substances (Kellner et al. 1998). One of these techniques, Diffuse Reflectance Fourier Transform Infrared Spectrometry (DRIFTS), has already been successfully applied to lake sediments from very different settings and has been shown to be a promising technique for high-resolution analyses of long sedimentary records on glacial-interglacial timescales (Rosén et al. 2009). However, the DRIFTS technique includes a time-consuming step in which sediment samples are mixed with KBr. To assess whether alternative and more rapid infrared (IR) techniques can be used, four different IR spectroscopy techniques are compared for core catcher sediment samples from Laguna Potrok Aike, an ICDP site located in southernmost South America. Partial least squares (PLS) calibration models were developed using the DRIFTS technique. The correlation coefficients (R) between DRIFTS-inferred and conventionally measured biogeochemical properties show values of 0.80 for biogenic silica (BSi), 0.95 for total organic carbon (TOC), 0.91 for total nitrogen (TN), and 0.92 for total inorganic carbon (TIC). Good statistical performance was also obtained using the Attenuated Total Reflectance Fourier Transform Infrared Spectroscopy (ATR-FTIRS) technique, which requires less sample preparation. Two devices were used, the full-sized Bruker Equinox 252 and the smaller and less expensive Bruker Alpha. R values for ATR-FTIRS-inferred and conventionally measured biogeochemical properties were 0.87 (BSi), 0.93 (TOC), 0.90 (TN), and 0.91 (TIC) for the Alpha, and 0.78 (TOC), 0.85 (TN), 0.79 (TIC) for the Equinox 252 device.
As the penetration depth of the IR beam is frequency dependent, a firm surface contact of
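The reported R values compare spectroscopy-inferred and conventionally measured concentrations; the statistic itself is the ordinary Pearson correlation coefficient, sketched here for reference:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
    sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
    return cov / (sd_x * sd_y)
```

An R of 0.95 for TOC, for example, means the PLS-inferred values track the conventional measurements almost linearly across the calibration samples.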

  9. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop optical technology required for large diffraction limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high resolution solar telescopes both on the ground and in space. When looking at the problem of high resolution observations it is essential to keep in mind that a diffraction limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high resolution observations, is a big interferometer. Meter class and above diffraction limited telescopes can be expected to be very unforgiving of inattention to details. Unfortunately, even when an earth based telescope has perfect optics there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself

  10. Evaluation of the 3D high resolution seismic method at the Tournemire site around the IPSN experimental station

    International Nuclear Information System (INIS)

    Cabrera Nunez, J.

    2003-01-01

    The IPSN experimental station of Tournemire is located at a 200 m depth inside an abandoned railway tunnel dug in a Jurassic clayey formation. The a priori knowledge of the existing geologic structures of the clayey formations makes it possible to test the reliability of the 3D high resolution seismic survey technique and its capability to detect these structures and discontinuities. This test study is reported in this technical note. It comprises several steps: a bibliographic synthesis and a state-of-the-art review of the 3D seismic survey technique, the construction of a velocity model for the different strata of the site, a simulation of the possible seismic response of these strata with respect to the velocities chosen, the processing of the data and finally their interpretation. (J.S.)

  11. A method for analyzing low statistics high resolution spectra from 210Pb in underground coal miners from Brazil

    International Nuclear Information System (INIS)

    Dantas, A.L.A.; Dantas, B.M.; Lipsztein, J.L.; Spitz, H.B.

    2006-01-01

    A survey conducted by the IRD-CNEN determined that some workers at an underground coal mine in the south of Brazil were exposed to elevated airborne concentrations of 222Rn. Because inhalation of high airborne concentrations of 222Rn can lead to an increase of 210Pb in bone, in vivo measurements of 210Pb in the skeleton were performed on selected underground workers from this mine. Measurements were performed using an array of high-resolution germanium detectors positioned around the head and knee to detect the low-abundance 46.5 keV photon emitted by 210Pb. The gamma-ray spectra were analyzed using a moving median smoothing function to detect the presence of a photopeak at 46.5 keV. The minimum detectable activity of 210Pb in the skeleton using this methodology was 50 Bq. (author)
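A centered moving-median filter of the kind described suppresses isolated single-channel spikes while preserving a genuine photopeak that spans several channels. A minimal sketch (the window size is an assumption; the note does not specify one):

```python
def moving_median(counts, window=5):
    """Smooth a spectrum with a centered moving median.

    counts: per-channel counts; the window is truncated at the spectrum edges.
    """
    half = window // 2
    smoothed = []
    for i in range(len(counts)):
        neighborhood = sorted(counts[max(0, i - half):i + half + 1])
        smoothed.append(neighborhood[len(neighborhood) // 2])
    return smoothed
```

A single noisy channel is replaced by the median of its neighbors, whereas a plateau several channels wide (a real photopeak in a low-statistics spectrum) survives the filter.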

  12. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    Full Text Available BACKGROUND: It is difficult to identify copy number variations (CNVs) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) platform containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of noise associated with the large amount of input data and because most of the current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples. However, the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime of the algorithms evaluated when actual, high-resolution aCGH data were analyzed. The CNVZs identified by MGVD can be used in association studies for revealing relationships between phenotypes and genomic aberrations. Our algorithm was developed with standard C++ and is available in Linux and MS Windows format in the STL library.
It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.

  13. A Decision Mixture Model-Based Method for Inshore Ship Detection Using High-Resolution Remote Sensing Images.

    Science.gov (United States)

    Bi, Fukun; Chen, Jing; Zhuang, Yin; Bian, Mingming; Zhang, Qingjun

    2017-06-22

    With the rapid development of optical remote sensing satellites, ship detection and identification based on large-scale remote sensing images has become a significant maritime research topic. Compared with traditional ocean-going vessel detection, inshore ship detection has received increasing attention in harbor dynamic surveillance and maritime management. However, because the harbor environment is complex and the gray-level information and texture features of docked ships and their connected dock regions are nearly indistinguishable, most of the popular detection methods are limited in their calculation efficiency and detection accuracy. In this paper, a novel hierarchical method that combines an efficient candidate scanning strategy and an accurate candidate identification mixture model is presented for inshore ship detection in complex harbor areas. First, in the candidate region extraction phase, an omnidirectional intersected two-dimensional scanning (OITDS) strategy is designed to rapidly extract candidate regions from the land-water segmented images. In the candidate region identification phase, a decision mixture model (DMM) is proposed to identify real ships among the candidate objects. Specifically, to improve robustness to the diversity of ships, a deformable part model (DPM) is employed to train a key-part sub-model and a whole-ship sub-model. Furthermore, to improve identification accuracy, a surrounding correlation context sub-model is built. Finally, these three sub-models are integrated into the proposed DMM. Experiments were performed on numerous large-scale harbor remote sensing images, and the results showed that the proposed method has high detection accuracy and rapid computational efficiency.

  14. A new method to discriminate secondary organic aerosols from different sources using high-resolution aerosol mass spectra

    Science.gov (United States)

    Heringa, M. F.; Decarlo, P. F.; Chirico, R.; Tritscher, T.; Clairotte, M.; Mohr, C.; Crippa, M.; Slowik, J. G.; Pfaffenberger, L.; Dommen, J.; Weingartner, E.; Prévôt, A. S. H.; Baltensperger, U.

    2012-02-01

    Organic aerosol (OA) represents a significant and often major fraction of the non-refractory PM1 (particulate matter with an aerodynamic diameter da < 1 μm). In this study, secondary organic aerosol (SOA) from the emissions of a wood burner, a car and a two-stroke Euro 2 scooter was characterized with an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-TOF-AMS) and compared to SOA from α-pinene. The emissions were sampled from the chimney/tailpipe by a heated inlet system and filtered before injection into a smog chamber. The gas-phase emissions were irradiated by xenon arc lamps to initiate photochemistry, which led to nucleation and subsequent particle growth by SOA production. Duplicate experiments were performed for each SOA type, with the averaged organic mass spectra showing Pearson's r values >0.94 for the correlations between the four different SOA types after five hours of aging. High-resolution mass spectra (HR-MS) showed that the dominant peaks in the MS, m/z 43 and 44, are dominated by the oxygenated ions C2H3O+ and CO2+, respectively, similarly to the relatively fresh semi-volatile oxygenated OA (SV-OOA) observed in ambient aerosol. The atomic O:C ratios were found to be in the range of 0.25-0.55 with no major increase during the first five hours of aging. On average, the diesel SOA showed the lowest O:C ratio, followed by SOA from wood burning, α-pinene and the scooter emissions. Grouping the fragment ions revealed that the SOA source with the highest O:C ratio had the largest fraction of small ions. The HR data of the four sources could be clustered and separated using principal component analysis (PCA). The model showed a significant separation of the four SOA types and clustering of the duplicate experiments on the first two principal components (PCs), which explained 79% of the total variance. Projection of ambient SV-OOA spectra resolved by positive matrix factorization (PMF) showed that this approach could be useful for identifying large contributions of the tested SOA sources to SV-OOA. The first results from this

  15. Effect of fluid elasticity on the numerical stability of high-resolution schemes for high shearing contraction flows using OpenFOAM

    OpenAIRE

    Chourushi, T.

    2017-01-01

    Viscoelastic fluids, owing to their non-linear nature, play an important role in the process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though they look simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are considered distant from accuracy, as first-order upwin...

  16. Creation of High Resolution Terrain Models of Barringer Meteorite Crater (Meteor Crater) Using Photogrammetry and Terrestrial Laser Scanning Methods

    Science.gov (United States)

    Brown, Richard B.; Navard, Andrew R.; Holland, Donald E.; McKellip, Rodney D.; Brannon, David P.

    2010-01-01

    Barringer Meteorite Crater or Meteor Crater, AZ, has been a site of high interest for lunar and Mars analog crater and terrain studies since the early days of the Apollo-Saturn program. It continues to be a site of exceptional interest to lunar, Mars, and other planetary crater and impact analog studies because of its relatively young age (est. 50 thousand years) and well-preserved structure. High resolution (2 meter to 1 decimeter) digital terrain models of Meteor Crater, in whole or in part, were created at NASA Stennis Space Center to support several lunar surface analog modeling activities using photogrammetric and ground-based laser scanning techniques. The dataset created by this activity provides new and highly accurate 3D models of the inside slope of the crater as well as the downslope rock distribution of the western ejecta field. The data are presented to the science community for possible use in furthering studies of Meteor Crater and impact craters in general, as well as for near-term lunar exploration use as a test model for lunar surface analog modeling and surface operation studies.

  17. Evaluation of different shadow detection and restoration methods and their impact on vegetation indices using UAV high-resolution imageries over vineyards

    Science.gov (United States)

    Aboutalebi, M.; Torres-Rua, A. F.; McKee, M.; Kustas, W. P.; Nieto, H.

    2017-12-01

    Shadows are an unavoidable component of high-resolution imagery. Although shadows can be a useful source of information about terrestrial features, they are a hindrance for image processing and lead to misclassification errors and increased uncertainty in defining surface reflectance properties. In precision agriculture activities, shadows may affect the performance of vegetation indices at pixel and plant scales. Thus, it becomes necessary to evaluate existing shadow detection and restoration methods, especially for applications that make direct use of pixel information to estimate vegetation biomass, leaf area index (LAI), plant water use and stress, and chlorophyll content, to name a few. In this study, four high-resolution image sets captured by the Utah State University AggieAir Unmanned Aerial Vehicle (UAV) system, flown in 2014, 2015, and 2016 over a commercial vineyard in California for the USDA-Agricultural Research Service Grape Remote Sensing Atmospheric Profile and Evapotranspiration Experiment (GRAPEX) Program, are used for shadow detection and restoration. Four different methods for shadow detection are compared: (1) unsupervised classification, (2) supervised classification, (3) an index-based method, and (4) a physically based method. Also, two different shadow restoration methods are evaluated: (1) linear correlation correction, and (2) gamma correction. The models' performance is evaluated on two vegetation indices, the normalized difference vegetation index (NDVI) and LAI, for both sunlit and shadowed pixels. Histograms and analysis of variance (ANOVA) are used as performance indicators. Results indicated that the supervised classification and the index-based method perform better than the other methods. In addition, there is a statistical difference between the average NDVI and LAI of the sunlit and shadowed pixels. Among the shadow restoration methods, gamma correction visually works better than the linear correlation
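The gamma-correction restoration evaluated above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a single band scaled to (0, 1] and chooses the exponent so that the mean shadow value maps onto the mean sunlit value.

```python
import numpy as np

def gamma_restore(band, shadow_mask):
    """Restore shadowed pixels with a gamma correction so that the mean
    shadow value maps onto the mean sunlit value (band scaled to (0, 1])."""
    sunlit = band[~shadow_mask]
    shadow = band[shadow_mask]
    # Choose g so that mean(shadow)**g == mean(sunlit):
    # g = log(mean_sunlit) / log(mean_shadow), valid for values in (0, 1).
    g = np.log(sunlit.mean()) / np.log(shadow.mean())
    out = band.copy()
    out[shadow_mask] = shadow ** g
    return out

# Toy single-band image in reflectance-like units (0, 1]
band = np.array([[0.6, 0.6, 0.1],
                 [0.6, 0.5, 0.1],
                 [0.6, 0.6, 0.1]])
mask = band < 0.3          # crude pixel-level shadow mask
restored = gamma_restore(band, mask)
```

Sunlit pixels are left untouched; only the masked pixels are brightened.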

  18. High resolution data acquisition

    Science.gov (United States)

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
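The combination of the coarse clock count with the triangle-wave amplitude and slope readings can be illustrated with a small sketch. The normalized peak amplitude and the phase convention below are assumptions for illustration, not details taken from the patent.

```python
def event_interval(n_clocks, period_ns,
                   amp_start, slope_start, amp_end, slope_end):
    """Combine a coarse clock count with triangle-wave interpolation.

    The triangle wave is synchronous with the clock, so its amplitude and
    slope encode the fraction of a clock period elapsed at each edge of
    the event.  Assumed convention: the wave rises from 0 to A over the
    first half-period and falls back over the second, so the phase within
    a period is amp/(2A) on the rising ramp and 1 - amp/(2A) on the
    falling ramp (the slope sign tells which ramp applies).
    """
    A = 1.0  # normalized peak amplitude (assumption for this sketch)

    def phase(amp, slope):
        frac = amp / (2.0 * A)
        return frac if slope > 0 else 1.0 - frac

    coarse = n_clocks * period_ns
    fine = (phase(amp_end, slope_end) - phase(amp_start, slope_start)) * period_ns
    return coarse + fine

# 10 clock periods of 10 ns, start at phase 0.25, end at phase 0.75
interval = event_interval(10, 10.0, 0.5, +1.0, 0.5, -1.0)
```

Here the fine term adds half a clock period, giving 105 ns instead of the coarse 100 ns.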

  19. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  20. High resolution (transformers.

    Science.gov (United States)

    Garcia-Souto, Jose A; Lamela-Rivera, Horacio

    2006-10-16

    A novel fiber-optic interferometric sensor is presented for vibration measurements and analysis. Here it is applied to the vibrations of electrical structures within power transformers. A main feature of the sensor is that an unambiguous optical phase measurement is performed using direct detection of the interferometer output, without external modulation, for a more compact and stable implementation. High resolution of the interferometric measurement is obtained with this technique (transformers are also highlighted.

  1. A new method to discriminate secondary organic aerosols from different sources using high-resolution aerosol mass spectra

    Directory of Open Access Journals (Sweden)

    M. F. Heringa

    2012-02-01

    Organic aerosol (OA) represents a significant and often major fraction of the non-refractory PM1 (particulate matter with an aerodynamic diameter da < 1 μm) mass. Secondary organic aerosol (SOA) is an important contributor to the OA and can be formed from biogenic and anthropogenic precursors. Here we present results from the characterization of SOA produced from the emissions of three different anthropogenic sources. SOA from a log wood burner, a Euro 2 diesel car and a two-stroke Euro 2 scooter were characterized with an Aerodyne high-resolution time-of-flight aerosol mass spectrometer (HR-TOF-AMS) and compared to SOA from α-pinene.

    The emissions were sampled from the chimney/tailpipe by a heated inlet system and filtered before injection into a smog chamber. The gas phase emissions were irradiated by xenon arc lamps to initiate photo-chemistry which led to nucleation and subsequent particle growth by SOA production.

    Duplicate experiments were performed for each SOA type, with the averaged organic mass spectra showing Pearson's r values >0.94 for the correlations between the four different SOA types after five hours of aging. High-resolution mass spectra (HR-MS) showed that the dominant peaks in the MS, m/z 43 and 44, are dominated by the oxygenated ions C2H3O+ and CO2+, respectively, similar to the relatively fresh semi-volatile oxygenated OA (SV-OOA) observed in the ambient aerosol. The atomic O:C ratios were found to be in the range of 0.25–0.55 with no major increase during the first five hours of aging. On average, the diesel SOA showed the lowest O:C ratio followed by SOA from wood burning, α-pinene and the scooter emissions. Grouping the fragment ions revealed that the SOA source with the highest O:C ratio had the largest fraction of small ions.
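In AMS practice, the atomic O:C ratio quoted above is a signal-weighted ratio over the high-resolution fragment ions. A minimal sketch with a hypothetical fragment table and a parser limited to single-letter elements (C, H, O):

```python
import re

def oc_ratio(signals):
    """Signal-weighted atomic O:C ratio from HR-AMS ion formulas.

    `signals` maps ion formulas like 'C2H3O' to their mass-spectral
    signal; O and C atom counts are weighted by that signal.  The parser
    only handles single-letter elements (C, H, O) - an assumption that
    keeps the sketch short.
    """
    def count(formula, element):
        m = re.search(element + r'(\d*)', formula)
        if not m:
            return 0
        return int(m.group(1)) if m.group(1) else 1

    o = sum(count(f, 'O') * s for f, s in signals.items())
    c = sum(count(f, 'C') * s for f, s in signals.items())
    return o / c

# Hypothetical fragment table with the dominant C2H3O+ and CO2+ ions
spec = {'C2H3O': 30.0, 'CO2': 20.0, 'C3H7': 40.0, 'C4H9': 10.0}
ratio = oc_ratio(spec)
```

With these toy signals the ratio lands near 0.29, inside the 0.25-0.55 range reported above.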

    The HR data of the four sources could be clustered and separated using

  2. Interpretation of high resolution airborne magnetic data (HRAMD) of Ilesha and its environs, Southwest Nigeria, using Euler deconvolution method

    Directory of Open Access Journals (Sweden)

    Olurin Oluwaseun Tolutope

    2017-12-01

    Interpretation of high resolution aeromagnetic data of Ilesha and its environs within the basement complex of the geological setting of Southwestern Nigeria was carried out in this study. The study area is delimited by geographic latitudes 7°30′–8°00′N and longitudes 4°30′–5°00′E. The investigation was carried out using Euler deconvolution on filtered digitised total magnetic data (Sheet Number 243) to delineate geological structures within the area under consideration. The digitised airborne magnetic data, acquired in 2009, were obtained from the archives of the Nigeria Geological Survey Agency (NGSA). The airborne magnetic data were filtered, processed and enhanced; the resultant data were subjected to qualitative and quantitative magnetic interpretation, geometry and depth weighting analyses across the study area using the Euler deconvolution filter control file in the Oasis montaj software. Total magnetic intensity in the field ranged from −77.7 to 139.7 nT. The total magnetic field intensities reveal both high-magnitude (high-amplitude) and low-magnitude (low-amplitude) magnetic anomalies in the area under consideration. The study area is characterised by high intensity correlated with lithological variation in the basement; the contrast is enhanced by the sharp difference in magnetic susceptibility between the crystalline and sedimentary rocks. The reduced-to-equator (RTE) map is characterised by high-frequency, short-wavelength, small, weak-intensity, sharp, low-amplitude and nearly irregularly shaped anomalies, which may be due to near-surface sources such as shallow geologic units and cultural features. The Euler deconvolution solution indicates a generally undulating basement, with a depth ranging from −500 to 1000 m. The Euler deconvolution results show that the basement relief is generally gentle and flat, lying within the basement terrain.
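Euler deconvolution rests on Euler's homogeneity equation, which turns every observation point into one linear equation in the source coordinates and background level. A minimal least-squares sketch on synthetic data (not the Oasis montaj implementation):

```python
import numpy as np

# Euler's homogeneity equation for a field T with structural index N,
# source at (x0, y0, z0), and background level B:
#   (x - x0)*dT/dx + (y - y0)*dT/dy + (z - z0)*dT/dz = N*(B - T)
# Rearranged, each grid point gives one linear equation in (x0, y0, z0, B):
#   x0*Tx + y0*Ty + z0*Tz + N*B = x*Tx + y*Ty + z*Tz + N*T

def euler_solve(x, y, z, T, Tx, Ty, Tz, N):
    A = np.column_stack([Tx, Ty, Tz, N * np.ones_like(T)])
    b = x * Tx + y * Ty + z * Tz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol  # x0, y0, z0, B

# Synthetic check: a 1/r source (N = 1) buried at (10, 20, -5), observed
# on the plane z = 0 with analytic gradients.
gx, gy = np.meshgrid(np.linspace(0, 40, 21), np.linspace(0, 40, 21))
x, y = gx.ravel(), gy.ravel()
z = np.zeros_like(x)
dx, dy, dz = x - 10.0, y - 20.0, z - (-5.0)
r = np.sqrt(dx**2 + dy**2 + dz**2)
T = 1.0 / r
Tx, Ty, Tz = -dx / r**3, -dy / r**3, -dz / r**3
x0, y0, z0, B = euler_solve(x, y, z, T, Tx, Ty, Tz, N=1)
```

In practice the solver is run in small moving windows over the gridded anomaly, and the cloud of window solutions maps the basement relief.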

  3. Numerical methods in multibody dynamics

    CERN Document Server

    Eich-Soellner, Edda

    1998-01-01

    Today computers play an important role in the development of complex mechanical systems, such as cars, railway vehicles or machines. Efficient simulation of these systems is only possible when based on methods that explore the strong link between numerics and computational mechanics. This book gives insight into modern techniques of numerical mathematics in the light of an interesting field of applications: multibody dynamics. The important interaction between modeling and solution techniques is demonstrated by using a simplified multibody model of a truck. Different versions of this mechanical model illustrate all key concepts in static and dynamic analysis as well as in parameter identification. The book focuses in particular on constrained mechanical systems. Their formulation in terms of differential-algebraic equations is the backbone of nearly all chapters. The book is written for students and teachers in numerical analysis and mechanical engineering as well as for engineers in industrial research labor...

  4. Operator theory and numerical methods

    CERN Document Server

    Fujita, H; Suzuki, T

    2001-01-01

    In accordance with the developments in computation, theoretical studies on numerical schemes are now fruitful and highly needed. In 1991 an article on the finite element method applied to evolutionary problems was published. Following the method, basically this book studies various schemes from operator theoretical points of view. Many parts are devoted to the finite element method, but other schemes and problems (charge simulation method, domain decomposition method, nonlinear problems, and so forth) are also discussed, motivated by the observation that practically useful schemes have fine mathematical structures and the converses are also true. This book has the following chapters: 1. Boundary Value Problems and FEM. 2. Semigroup Theory and FEM. 3. Evolution Equations and FEM. 4. Other Methods in Time Discretization. 5. Other Methods in Space Discretization. 6. Nonlinear Problems. 7. Domain Decomposition Method.

  5. Numerical methods for metamaterial design

    CERN Document Server

    2013-01-01

    This book describes a relatively new approach for the design of electromagnetic metamaterials.  Numerical optimization routines are combined with electromagnetic simulations to tailor the broadband optical properties of a metamaterial to have predetermined responses at predetermined wavelengths. After a review of both the major efforts within the field of metamaterials and the field of mathematical optimization, chapters covering both gradient-based and derivative-free design methods are considered.  Selected topics including surrogate-base optimization, adaptive mesh search, and genetic algorithms are shown to be effective, gradient-free optimization strategies.  Additionally, new techniques for representing dielectric distributions in two dimensions, including level sets, are demonstrated as effective methods for gradient-based optimization.  Each chapter begins with a rigorous review of the optimization strategy used, and is followed by numerous examples that combine the strategy with either electromag...

  6. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    Science.gov (United States)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images such as shadow areas directly affects the result of further processing in urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method, a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. The pixel-based results are then further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate the new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
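The object-based majority analysis step can be sketched as follows. The segmentation and the pixel-level mask are toy inputs, and the 50% vote threshold is an assumption for illustration rather than the paper's exact rule.

```python
import numpy as np

def object_majority(pixel_mask, segments):
    """Object-based majority analysis: a segment is declared shadow when
    more than half of its pixels are flagged in the pixel-level mask."""
    out = np.zeros_like(pixel_mask, dtype=bool)
    for seg_id in np.unique(segments):
        sel = segments == seg_id
        if pixel_mask[sel].mean() > 0.5:   # fraction of flagged pixels
            out[sel] = True
    return out

# Toy 4x4 scene with two segments; segment 0 is mostly flagged as shadow
segments = np.array([[0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1],
                     [0, 0, 1, 1]])
pixel_mask = np.array([[1, 1, 0, 0],
                       [1, 0, 0, 1],
                       [1, 1, 0, 0],
                       [1, 1, 0, 0]], dtype=bool)
shadow = object_majority(pixel_mask, segments)
```

The vote cleans up both the isolated false positive in segment 1 and the isolated miss in segment 0, which is the point of moving from pixel-level to object-level decisions.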

  7. Wide-Scope Screening Method for Multiclass Veterinary Drug Residues in Fish, Shrimp, and Eel Using Liquid Chromatography-Quadrupole High-Resolution Mass Spectrometry.

    Science.gov (United States)

    Turnipseed, Sherri B; Storey, Joseph M; Lohne, Jack J; Andersen, Wendy C; Burger, Robert; Johnson, Aaron S; Madson, Mark R

    2017-08-30

    A screening method for veterinary drug residues in fish, shrimp, and eel using LC with a high-resolution MS instrument has been developed and validated. The method was optimized for over 70 test compounds representing a variety of veterinary drug classes. Tissues were extracted by vortex mixing with acetonitrile acidified with 2% acetic acid and 0.2% p-toluenesulfonic acid. A centrifuged portion of the extract was passed through a novel solid phase extraction cartridge designed to remove interfering matrix components from tissue extracts. The eluent was then evaporated and reconstituted for analysis. Data were collected with a quadrupole-Orbitrap high-resolution mass spectrometer using both nontargeted and targeted acquisition methods. Residues were detected on the basis of the exact mass of the precursor and a product ion along with isotope pattern and retention time matching. Semiquantitative data analysis compared the MS1 signal to a one-point extracted matrix standard at a target testing level. The test compounds were detected and identified in salmon, tilapia, catfish, shrimp, and eel extracts fortified at the target testing levels. Fish dosed with selected analytes and aquaculture samples previously found to contain residues were also analyzed. The screening method can be expanded to monitor for an additional >260 veterinary drugs on the basis of exact mass measurements and retention times.
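Screening against exact precursor masses and retention times can be illustrated with a toy matcher. The compound names, masses, and tolerances below are hypothetical placeholders, not values from the validated method.

```python
def screen(measured, library, ppm_tol=5.0, rt_tol=0.2):
    """Flag library compounds whose exact mass matches a measured peak
    within ppm_tol (parts per million) and whose retention time agrees
    within rt_tol minutes."""
    hits = []
    for name, (mz, rt) in library.items():
        for obs_mz, obs_rt in measured:
            mass_ok = abs(obs_mz - mz) / mz * 1e6 <= ppm_tol
            rt_ok = abs(obs_rt - rt) <= rt_tol
            if mass_ok and rt_ok:
                hits.append(name)
                break
    return hits

# Hypothetical library of (exact m/z, retention time in minutes)
library = {'drug_A': (279.0910, 5.2),
           'drug_B': (360.1723, 6.8)}
# Hypothetical measured peaks (m/z, retention time)
measured = [(279.0912, 5.25), (401.2001, 9.9)]
hits = screen(measured, library)
```

Expanding the monitored panel, as the abstract notes, only requires growing the library of exact masses and retention times.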

  8. Numerical methods in matrix computations

    CERN Document Server

    Björck, Åke

    2015-01-01

    Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.

  9. Numerical methods for image registration

    CERN Document Server

    Modersitzki, Jan

    2003-01-01

    Based on the author's lecture notes and research, this well-illustrated and comprehensive text is one of the first to provide an introduction to image registration with particular emphasis on numerical methods in medical imaging. Ideal for researchers in industry and academia, it is also a suitable study guide for graduate mathematicians, computer scientists, engineers, medical physicists, and radiologists.Image registration is utilised whenever information obtained from different viewpoints needs to be combined or compared and unwanted distortion needs to be eliminated. For example, CCTV imag

  10. Anthropogenic and volcanic emission impacts on SO2 dynamics and acid rain profiles. Numerical study using WRF-Chem in a high-resolution modeling

    Science.gov (United States)

    Vela, A. V.; González, C. M.; Ynoue, R.; Rojas, N. Y.; Aristizábal, B. H.; Wahl, M.

    2017-12-01

    Eulerian 3-D chemistry transport models (CTM) have been widely used for the study of air quality in urban environments, becoming an essential tool for studying the impacts and dynamics of gases and aerosols on air quality. However, their use in Colombia is scarce, especially in medium-sized cities, which are experiencing rapid urban growth, increasing the risk associated with possible air pollution episodes. The densely populated medium-sized Andean city of Manizales, Colombia (urban population 368,000; 2150 m a.s.l.), located on the western slopes of the central range of the Andes, is influenced by the active Nevado del Ruiz volcano, 28 km to the southwest. This natural source emits daily gas and particle fluxes that could influence the atmospheric chemistry of the city and neighboring towns. Hence, the zone presents a unique combination of anthropogenic and volcanic sulfur emissions, which affects SO2 dynamics in the urban area and also influences the formation of acid rain in the city. Studies analyzing the relative contributions of anthropogenic and volcanic emissions could therefore contribute to a deeper understanding of the causes and dynamics of both the acid rain phenomenon and ambient SO2 levels in Manizales. This work aimed to analyze the influence of anthropogenic (on-road vehicular and industrial point-source) and volcanic sulfur emissions on SO2 atmospheric chemistry dynamics, evaluating their possible effects on acid rain profiles. Ambient SO2 levels and day-night rain samples were measured and used to analyze results obtained from the application of the fully coupled online WRF-Chem model. Two high-resolution simulations were performed during two one-week periods, one dry and one wet, in 2015. Analysis of SO2 dispersion patterns and comparison with SO2 observations in the urban area were performed for three different scenarios in which natural and anthropogenic emissions were simulated separately. Results suggest that

  11. High resolution seismic refraction method with multichannel digital data acquisition system; Digital ta channel sokutei system wo mochiita koseido kussetsuho jishin tansa

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, K [Oyo Corp., Tokyo (Japan)

    1997-05-27

    This paper introduces a multichannel digital data acquisition system and examples of measurements with the system in seismic exploration using the high resolution seismic refraction method. The high resolution refraction system performs analyses nearly automatically by computer once the first-arrival travel times have been read. The system therefore requires high-accuracy travel time data, for which a recently developed multichannel digital measuring instrument for refraction seismic exploration has been used. The specification lists a maximum of 144 channels, a sampling interval of 62.5 μs to 4 ms, a maximum of 80,000 samples, and a gain accuracy of ±1%. The system was used for surveying a tunnel with a maximum soil cover of about 800 m. The traverse line length is about 6 km, the distance between receiver points is 50 m, and the number of receiver points is 194. Single-point positioning with GPS yields accurate velocities in the vicinity of the base of the tunnel construction. Results were obtained from the investigation that can serve actual construction work more directly. 10 refs., 6 figs., 1 tab.
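For a flat two-layer model, the refraction travel-time curve consists of a direct line t = x/v1 and a refracted line t = x/v2 + ti, with depth h = ti·v1·v2 / (2·sqrt(v2² − v1²)). A sketch of the standard intercept-time analysis (not the automated system described above), with a crude curvature-based branch split as an assumption:

```python
import numpy as np

def two_layer_depth(x, t):
    """Fit direct and refracted travel-time branches and return
    (v1, v2, depth) for a flat two-layer model.

    Assumes the early picks lie on the direct wave t = x/v1 and the
    later ones on the refracted line t = x/v2 + ti; the branch split is
    taken at the point of maximum curvature of t(x) - a simplification,
    not the paper's method.
    """
    x, t = np.asarray(x, float), np.asarray(t, float)
    k = np.argmax(np.abs(np.diff(t, 2))) + 1     # last direct-branch pick
    v1 = 1.0 / np.polyfit(x[:k + 1], t[:k + 1], 1)[0]
    slope2, ti = np.polyfit(x[k + 1:], t[k + 1:], 1)
    v2 = 1.0 / slope2
    h = ti * v1 * v2 / (2.0 * np.sqrt(v2**2 - v1**2))
    return v1, v2, h

# Synthetic picks: v1 = 1000 m/s over v2 = 2000 m/s, interface at 50 m
xs = np.arange(0.0, 400.0, 50.0)
ti = 2 * 50.0 * np.sqrt(2000.0**2 - 1000.0**2) / (1000.0 * 2000.0)
ts = np.minimum(xs / 1000.0, xs / 2000.0 + ti)
v1, v2, h = two_layer_depth(xs, ts)
```

With noise-free picks the fit recovers the layer velocities and the 50 m depth exactly, which is why the automated analysis hinges on high-accuracy travel-time data.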

  12. An Effective Method for Detecting Potential Woodland Vernal Pools Using High-Resolution LiDAR Data and Aerial Imagery

    Science.gov (United States)

    Effective conservation of woodland vernal pools – important components of regional amphibian diversity and ecosystem services – depends on locating and mapping these pools accurately. Current methods for identifying potential vernal pools are primarily based on visual interpretat...

  13. A New Alignment Method Based on The Wavelet Multi-Scale Cross-Correlation for Noisy High Resolution ECG Records

    National Research Council Canada - National Science Library

    Laciar, E

    2001-01-01

    ... between the wavelet transforms of the template and the detected beat, respectively. The wavelet and temporal methods were tested for several simulated records corrupted with white noise and electromyographic (EMG...
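Alignment of a detected beat against a template by maximizing cross-correlation can be sketched as below. This is the plain time-domain version, not the wavelet multi-scale variant of the paper, and the synthetic QRS-like bump is only illustrative.

```python
import numpy as np

def align_offset(template, beat):
    """Estimate the offset (in samples) of `beat` relative to `template`
    as the lag that maximizes the cross-correlation of the mean-removed
    signals."""
    t = template - template.mean()
    b = beat - beat.mean()
    xc = np.correlate(b, t, mode='full')
    # index 0 of 'full' output corresponds to lag -(len(t) - 1)
    return int(np.argmax(xc)) - (len(t) - 1)

n = np.arange(200)
template = np.exp(-((n - 100) / 10.0) ** 2)   # synthetic QRS-like bump
beat = np.roll(template, 7)                   # beat delayed by 7 samples
offset = align_offset(template, beat)
```

Averaging beats after shifting each one by its estimated offset is what keeps a high-resolution average from being blurred by trigger jitter.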

  14. Compensation Methods for Non-uniform and Incomplete Data Sampling in High Resolution PET with Multiple Scintillation Crystal Layers

    International Nuclear Information System (INIS)

    Lee, Jae Sung; Kim, Soo Mee; Lee, Dong Soo; Hong, Jong Hong; Sim, Kwang Souk; Rhee, June Tak

    2008-01-01

    To establish methods for sinogram formation and correction so that the filtered backprojection (FBP) reconstruction algorithm can be appropriately applied to data acquired using a PET scanner with multiple scintillation crystal layers, methods for raw PET data storage and for converting list-mode data to histograms and sinograms were optimized. To solve the various problems that occurred while the raw histogram was converted into a sinogram, an optimal sampling strategy and a sampling efficiency correction method were investigated. Gap compensation methods unique to this system were also investigated. All the sinogram data were reconstructed using a 2D filtered backprojection algorithm and compared to estimate the improvements due to the correction algorithms. The optimal radial sampling interval and number of angular samples, in terms of the sampling theorem and the sampling efficiency correction algorithm, were pitch/2 and 120, respectively. By applying the sampling efficiency correction and gap compensation, artifacts and background noise in the reconstructed image could be reduced. A conversion method from histogram to sinogram was established for FBP reconstruction of data acquired using multiple scintillation crystal layers. This method will be useful for fast 2D reconstruction of multiple-crystal-layer PET data
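Converting list-mode coincidence data into a sinogram amounts to mapping each line of response (LOR) to its (r, θ) parameters and histogramming. The sketch below uses the 120 angular samples mentioned in the abstract but otherwise hypothetical geometry parameters.

```python
import numpy as np

def bin_sinogram(events, n_radial=64, n_angles=120, r_max=100.0):
    """Histogram list-mode LORs into a sinogram.

    Each event is an endpoint pair ((x1, y1), (x2, y2)) of its line of
    response; the LOR is converted to (r, theta) and binned onto a grid
    of n_angles angular samples and a fixed radial interval.  n_radial
    and r_max are hypothetical geometry parameters for this sketch.
    """
    sino = np.zeros((n_radial, n_angles))
    for (x1, y1), (x2, y2) in events:
        theta = np.arctan2(y2 - y1, x2 - x1) % np.pi  # LOR direction in [0, pi)
        # signed perpendicular distance of the LOR from the origin
        r = x1 * np.sin(theta) - y1 * np.cos(theta)
        ir = int((r + r_max) / (2 * r_max) * n_radial)
        ia = int(theta / np.pi * n_angles)
        if 0 <= ir < n_radial and 0 <= ia < n_angles:
            sino[ir, ia] += 1
    return sino

# Two toy LORs: one horizontal through y = 10, one vertical through x = 5
events = [((-50.0, 10.0), (50.0, 10.0)),
          ((5.0, -50.0), (5.0, 50.0))]
sino = bin_sinogram(events)
```

The sampling-efficiency and gap corrections described above would then be applied per sinogram bin before FBP.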

  15. High resolution method for the magnetic axis localization for multipole magnets on the base of the garnet films technology

    International Nuclear Information System (INIS)

    Gertsev, K.F.; Gribkov, V.L.; Liskov, V.A.; Chervonenkis, A.J.

    1992-01-01

    Stretched-wire methods for the localization of the magnetic axis can be inconvenient in accelerators and colliders of very high energies because of high gradients, large lengths and small apertures. High gradients may deform the wires owing to their nonzero magnetic susceptibility and microscopic ferromagnetic particles on their surface; long wires have large sagittas; and small magnet apertures limit the transverse working domains of the measuring devices. Precision optics magnets possess extreme parameters, in particular in interaction regions. Magneto-optic (MO) methods of measurement offer new possibilities for the solution of the above problems. The use of MO films for magnetic field visualization and mapping was proposed earlier, and it was shown that, on the basis of Bi-substituted iron garnet films and the MO Faraday effect, it is possible to obtain quantitative vector maps of complicated magnetic field structures. Later this was described on a larger scale, and the method was discussed in terms of its applicability to magnetic axis localization in accelerator quadrupoles. In our opinion, the film technology has great advantages compared with colloidal solutions. In this paper the principles and variants of the film method are presented and further development of the method is described

  16. A New Method for Estimating the Number of Harmonic Components in Noise with Application in High Resolution Radar

    Directory of Open Access Journals (Sweden)

    Radoi Emanuel

    2004-01-01

    In order to operate properly, superresolution methods based on orthogonal subspace decomposition, such as multiple signal classification (MUSIC) or estimation of signal parameters via rotational invariance techniques (ESPRIT), need an accurate estimate of the signal subspace dimension, that is, of the number of harmonic components that are superimposed and corrupted by noise. This estimation is particularly difficult when the S/N ratio is low and the statistical properties of the noise are unknown. Moreover, in some applications such as radar imagery, it is very important to avoid underestimating the number of harmonic components, which are associated with the target scattering centers. In this paper, we propose an effective method for estimating the signal subspace dimension which is able to operate against colored noise with performance superior to that of the classical information theoretic criteria of Akaike and Rissanen. The capabilities of the new method are demonstrated through computer simulations, and it is shown that, compared to three other methods, it achieves the best trade-off across four criteria: S/N ratio in white noise, frequency band of colored noise, dynamic range of the harmonic component amplitudes, and computing time.
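For context, the classical information-theoretic baseline the paper improves upon can be sketched: Rissanen's MDL criterion applied to the sorted eigenvalues of the sample covariance matrix. This is the standard textbook criterion, not the paper's proposed method.

```python
import numpy as np

def mdl_order(eigvals, n_snapshots):
    """Estimate the number of harmonic components from the sorted
    eigenvalues of the sample covariance matrix using Rissanen's MDL:

      MDL(k) = -n*(p-k)*log(GM_k / AM_k) + 0.5*k*(2p - k)*log(n)

    where GM_k and AM_k are the geometric and arithmetic means of the
    p - k smallest eigenvalues."""
    lam = np.sort(np.asarray(eigvals, float))[::-1]
    p = len(lam)
    best_k, best_cost = 0, np.inf
    for k in range(p):
        tail = lam[k:]
        gm = np.exp(np.mean(np.log(tail)))
        am = np.mean(tail)
        cost = (-n_snapshots * (p - k) * np.log(gm / am)
                + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Two strong components over a flat noise floor -> expect order 2
eigs = [10.0, 5.0, 1.0, 1.0, 1.0, 1.0]
order = mdl_order(eigs, n_snapshots=100)
```

MUSIC or ESPRIT would then use the top `order` eigenvectors as the signal subspace; the paper's contribution is a criterion that degrades less than this one under colored noise.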

  17. Feasibility of a semi-automated method for cardiac conduction velocity analysis of high-resolution activation maps

    NARCIS (Netherlands)

    Doshi, Ashish N.; Walton, Richard D.; Krul, Sébastien P.; de Groot, Joris R.; Bernus, Olivier; Efimov, Igor R.; Boukens, Bastiaan J.; Coronel, Ruben

    2015-01-01

    Myocardial conduction velocity is important for the genesis of arrhythmias. In the normal heart, conduction is primarily dependent on fiber direction (anisotropy) and may be discontinuous at sites with tissue heterogeneities (trabeculated or fibrotic tissue). We present a semi-automated method for

  18. Optimization of a method by liquid chromatography of high resolution to determine residues of ethilenthiourea in samples of tomato

    International Nuclear Information System (INIS)

    Mora, D.; Rodriguez, O.M.

    2002-01-01

    A method was optimized to determine the residues of ethylenethiourea (ETU) present in tomato samples. The method consisted of three stages: extraction in an ultrasonic bath with methanol; cleaning of the extract through a glass column of 11 mm diameter packed with 2.5 g of a neutral alumina and activated charcoal mixture (97.5:2.5) and 2.5 g of pure neutral alumina, eluted with 250 ml of methanol. The third stage was quantification by HPLC on a C18 column with a methanol and water mixture (90:10) as the mobile phase, at a flow of 2.0 ml/min and with UV detection at 232 nm. The retention time under these conditions was 2.15 minutes. The figures of merit of the method were determined, showing a linear range between 1.0 and 28.0 μg/ml of ETU; the quantification and detection limits, calculated by the method of Hubaux and Vos (22), were 0.153 and 0.306 mg/ml respectively, and the recovery was 84%. (Author)
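The Hubaux and Vos approach derives the detection limit from the prediction band of the calibration line. A simplified sketch under stated assumptions (hard-coded Student t quantile, grid search for the band crossing); it illustrates the idea, not the authors' exact computation.

```python
import numpy as np

def hubaux_vos_lod(conc, signal, t_value=2.306):
    """Hubaux-Vos style detection limit from a linear calibration.

    Fits signal = a + b*conc, builds the prediction band, and returns the
    concentration whose lower band first exceeds the decision threshold
    (the upper band at zero concentration).  t_value is the Student t
    quantile for the chosen confidence and n-2 degrees of freedom
    (2.306 ~ 95%, 8 df - an assumption of this sketch).  Assumes b > 0.
    """
    x, y = np.asarray(conc, float), np.asarray(signal, float)
    n = len(x)
    b, a = np.polyfit(x, y, 1)
    resid = y - (a + b * x)
    s = np.sqrt(np.sum(resid**2) / (n - 2))          # residual std. error
    sxx = np.sum((x - x.mean())**2)
    half = lambda x0: t_value * s * np.sqrt(1 + 1.0/n + (x0 - x.mean())**2 / sxx)
    y_crit = a + half(0.0)                           # decision threshold
    grid = np.linspace(0.0, x.max(), 20001)
    lower = a + b * grid - half(grid)
    return grid[np.argmax(lower >= y_crit)]          # first crossing

conc = np.arange(0.0, 20.0, 2.0)                     # 10 standards
signal = 5.0 + 2.0 * conc + 0.05 * np.cos(np.arange(10))  # small "noise"
lod = hubaux_vos_lod(conc, signal)
```

The detection limit scales with the calibration scatter s, which is why the linearity of the 1.0-28.0 μg/ml range matters for the reported limits.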

  19. Methods for high-resolution anisotropic finite element modeling of the human head: automatic MR white matter anisotropy-adaptive mesh generation.

    Science.gov (United States)

    Lee, Won Hee; Kim, Tae-Seong

    2012-01-01

    This study proposes an advanced finite element (FE) head modeling technique through which high-resolution FE meshes adaptive to the degree of tissue anisotropy can be generated. Our adaptive meshing scheme (called wMesh) uses MRI structural information and fractional anisotropy maps derived from diffusion tensors in the FE mesh generation process, optimally reflecting the electrical properties of the human brain. We examined the characteristics of the wMeshes through various qualitative and quantitative comparisons to conventional regular-sized FE meshes that are non-adaptive to the degree of white matter anisotropy. We investigated numerical differences in the FE forward solutions, which include the electrical potential and current density generated by current sources in the brain. The quantitative difference was calculated using two statistical measures, the relative difference measure (RDM) and the magnification factor (MAG). The results show that the wMeshes are adaptive to the density of the white matter anisotropy and better reflect the density and directionality of tissue conductivity anisotropy. Our comparisons between various anisotropic regular mesh and wMesh models show that there are substantial differences in the EEG forward solutions in the brain (up to RDM=0.48 and MAG=0.63 in the electrical potential, and RDM=0.65 and MAG=0.52 in the current density). These results indicate that the wMeshes produce forward solutions that differ from those of the conventional regular meshes. We present results showing that the wMesh head modeling approach enhances the sensitivity and accuracy of the FE solutions at interfaces or in regions where the anisotropic conductivities change sharply or their directional changes are complex. The fully automatic wMesh generation technique should be useful for modeling individual-specific, high-resolution anisotropic FE head models incorporating realistic anisotropic conductivity distributions.
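The RDM and MAG statistics quoted above have standard definitions in the EEG forward-modeling literature: RDM as the norm of the difference of the unit-normalized fields, MAG as the ratio of field magnitudes. A short sketch (sign and normalization conventions vary between papers, so treat this as one common variant):

```python
import numpy as np

def rdm(u, v):
    """Relative difference measure between two forward solutions:
    the norm of the difference of the unit-normalized field vectors.
    0 means identical topography."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.linalg.norm(u / np.linalg.norm(u) - v / np.linalg.norm(v))

def mag(u, v):
    """Magnification factor: ratio of the field magnitudes.
    1 means identical overall amplitude."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.linalg.norm(v) / np.linalg.norm(u)

ref = np.array([1.0, 2.0, 3.0])   # toy potential at three electrodes
probe = 2.0 * ref                 # same topography, doubled amplitude
```

RDM isolates topography errors and MAG isolates amplitude errors, which is why the paper reports both for potentials and current densities.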

  20. Iron oxide nanoparticle-based magnetic resonance method to monitor release kinetics from polymeric particles with high resolution.

    Science.gov (United States)

    Chan, Minnie; Schopf, Eric; Sankaranarayanan, Jagadis; Almutairi, Adah

    2012-09-18

    A new method to precisely monitor rapid release kinetics from polymeric particles using superparamagnetic iron oxide nanoparticles, specifically by measuring spin-spin relaxation time (T(2)), is reported. We previously published the formulation of logic gate particles from an acid-sensitive poly-β-aminoester ketal-2 polymer. Here, a series of poly-β-aminoester ketal-2 polymers with varying hydrophobicities were synthesized and used to formulate particles. We attempted to measure the fluorescence of released Nile red to determine whether the structural adjustments could finely tune the release kinetics in the range of minutes to hours; however, this standard technique could not differentiate the release rates within our series. Thus, a new method based on encapsulation of iron oxide nanoparticles was developed, which enabled us to resolve the release kinetics of our particles. Moreover, the kinetics matched the relative hydrophobicity order determined by octanol-water partition coefficients. To the best of our knowledge, this method provides the highest resolution of release kinetics to date.

  1. Investigation into the Formation, Structure, and Evolution of an EF4 Tornado in East China Using a High-Resolution Numerical Simulation

    Science.gov (United States)

    Yao, Dan; Xue, Haile; Yin, Jinfang; Sun, Jisong; Liang, Xudong; Guo, Jianping

    2018-04-01

    Devastating tornadoes in China have received growing attention in recent years, but little is known about their formation, structure, and evolution on the tornadic scale. Most of these tornadoes develop within the East Asian monsoon regime, in an environment quite different from that of tornadoes in the U.S. In this study, we used an idealized, high-resolution (25-m grid spacing) numerical simulation to investigate the deadly EF4 (Enhanced Fujita scale category 4) tornado that occurred on 23 June 2016 and claimed 99 lives in Yancheng, Jiangsu Province. A tornadic supercell developed in the simulation that had striking similarities to radar observations. The violent tornado in Funing County was reproduced, exceeding EF4 intensity (74 m s-1), consistent with the on-site damage survey. It was accompanied by a funnel cloud that extended to the surface, and exhibited a double-helix vorticity structure. The signal of tornado genesis was found first at the cloud base in the pressure perturbation field, and then developed both upward and downward as maximum vertical velocity overlapping with the intense vertical vorticity centers. The tornado's demise was accompanied by strong downdrafts overlapping with the intense vorticity centers. One of the interesting findings of this work is that a violent surface vortex could be generated and maintained even though the simulation employed a free-slip lower boundary condition. The success of this simulation, despite its idealized numerical approach, provides a means to investigate more historical tornadoes in China.

  2. High-resolution intravital microscopy.

    Directory of Open Access Journals (Sweden)

    Volker Andresen

    Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy - the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology.

  3. High-Resolution Intravital Microscopy

    Science.gov (United States)

    Andresen, Volker; Pollok, Karolin; Rinnenthal, Jan-Leo; Oehme, Laura; Günther, Robert; Spiecker, Heinrich; Radbruch, Helena; Gerhard, Jenny; Sporbert, Anje; Cseresnyes, Zoltan; Hauser, Anja E.; Niesner, Raluca

    2012-01-01

    Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy - the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology

  4. Analytical method by high-resolution liquid chromatography for the dissolution assay of 2 mg lactose-free Clonazepam tablets

    International Nuclear Information System (INIS)

    Garcia Penna, Caridad M; Diego Leon, Rafael; Castinneira Diaz, Mirta; Hernandez Cervera, Mirna; Martinez Espinosa, Vivian

    2007-01-01

    A validated method was developed to assess the dissolution of 2 mg lactose-free Clonazepam tablets by high-performance liquid chromatography with UV detection at 254 nm. The stability of Clonazepam in the dissolution medium was confirmed, and the parameters of specificity, linearity, and accuracy were assessed, as well as the influence of filtration and the stability of the active ingredient. The calibration curve was linear over the range of 1.2-2.6 μg/mL, with a correlation coefficient of 0.99418; statistical tests for the intercept and slope were not significant. The study demonstrated that the active ingredient remained stable in the dissolution medium for more than twice the duration of the dissolution assay. (Author)

  5. Strongly correlated systems numerical methods

    CERN Document Server

    Mancini, Ferdinando

    2013-01-01

    This volume presents, for the very first time, an exhaustive collection of those modern numerical methods specifically tailored for the analysis of Strongly Correlated Systems. Many novel materials, with functional properties emerging from macroscopic quantum behaviors at the frontier of modern research in physics, chemistry and materials science, belong to this class of systems. Each technique is presented in great detail by its own inventor or by one of the world-wide recognized main contributors. The exposition has a clear pedagogical cut and fully reports on the most relevant case studies where the specific technique has proved very successful in describing and elucidating the puzzling physics of a particular strongly correlated system. The book is intended for advanced graduate students and post-docs in the field as a textbook and/or main reference, but also for other researchers in the field who appreciate consulting a single, comprehensive source, or wish to get acquainted, in a as painless as possi...

  6. Neuroanatomy from Mesoscopic to Nanoscopic Scales: An Improved Method for the Observation of Semithin Sections by High-Resolution Scanning Electron Microscopy.

    Science.gov (United States)

    Rodríguez, José-Rodrigo; Turégano-López, Marta; DeFelipe, Javier; Merchán-Pérez, Angel

    2018-01-01

    Semithin sections are commonly used to examine large areas of tissue with an optical microscope, in order to locate and trim the regions that will later be studied with the electron microscope. Ideally, the observation of semithin sections would be from mesoscopic to nanoscopic scales directly, instead of using light microscopy and then electron microscopy (EM). Here we propose a method that makes it possible to obtain high-resolution scanning EM images of large areas of the brain in the millimeter to nanometer range. Since our method is compatible with light microscopy, it is also feasible to generate hybrid light and electron microscopic maps. Additionally, the same tissue blocks that have been used to obtain semithin sections can later be used, if necessary, for transmission EM, or for focused ion beam milling and scanning electron microscopy (FIB-SEM).

  7. A method for volume determination of the orbit and its contents by high resolution axial tomography and quantitative digital image analysis.

    Science.gov (United States)

    Cooper, W C

    1985-01-01

    The various congenital and acquired conditions which alter orbital volume are reviewed. Previous investigative work to determine orbital capacity is summarized. Since these studies were confined to postmortem evaluations, the need for a technique to measure orbital volume in the living state is presented. A method for volume determination of the orbit and its contents by high-resolution axial tomography and quantitative digital image analysis is reported. This procedure has proven to be accurate (the discrepancy between direct and computed measurements ranged from 0.2% to 4%) and reproducible (greater than 98%). The application of this method to representative clinical problems is presented and discussed. The establishment of a diagnostic system versatile enough to expand the usefulness of computerized axial tomography and polytomography should add a new dimension to ophthalmic investigation and treatment.
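    The abstract gives no implementation details, but the core of such a quantitative digital image analysis is voxel counting over a segmented CT stack: sum the voxels inside the traced orbital boundary and multiply by the voxel volume. A minimal sketch with hypothetical pixel and slice dimensions (all numbers illustrative, not from the paper):

```python
import numpy as np

# Hypothetical segmented CT stack: a boolean mask marks voxels inside the orbit.
mask = np.zeros((4, 10, 10), dtype=bool)    # 4 axial slices of 10 x 10 pixels
mask[:, 3:7, 3:7] = True                    # a 4 x 4 pixel region on each slice

pixel_mm = 0.5                              # in-plane pixel size (mm), assumed
slice_mm = 1.5                              # slice thickness (mm), assumed
voxel_mm3 = pixel_mm ** 2 * slice_mm        # volume of one voxel in mm^3

volume_ml = mask.sum() * voxel_mm3 / 1000.0  # 1 mL = 1000 mm^3
print(volume_ml)    # → 0.024
```

    The reported 0.2%-4% discrepancy against direct measurement is then a property of the segmentation, since the summation itself is exact for a given mask.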

  8. Methods for enhancing numerical integration

    International Nuclear Information System (INIS)

    Doncker, Elise de

    2003-01-01

    We give a survey of common strategies for numerical integration (adaptive, Monte Carlo, quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications.
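    As a concrete instance of the convergence-acceleration idea mentioned above, one Richardson-extrapolation step combines trapezoid estimates at step sizes h and h/2 so that their leading h^2 error terms cancel, yielding an O(h^4) result. A generic sketch, not tied to any particular survey or package:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on n subintervals; error is O(h^2)."""
    h = (b - a) / n
    edges = 0.5 * (f(a) + f(b))
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (edges + interior)

def richardson(f, a, b, n):
    """One Richardson step: combine the n- and 2n-subinterval estimates so
    the h^2 error terms cancel, leaving an O(h^4) result (Simpson's rule)."""
    coarse = trapezoid(f, a, b, n)
    fine = trapezoid(f, a, b, 2 * n)
    return fine + (fine - coarse) / 3.0

exact = 2.0  # integral of sin(x) over [0, pi]
trap_err = abs(trapezoid(math.sin, 0.0, math.pi, 8) - exact)
rich_err = abs(richardson(math.sin, 0.0, math.pi, 8) - exact)
print(rich_err < trap_err / 100)   # → True: one extrapolation step gains several digits
```

    Iterating this step over successive halvings is exactly Romberg integration, one of the classical "automated" acceleration schemes the survey alludes to.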

  9. A high resolution method for {sup 14}C analysis of a coral from South China Sea: Implication for “AD 775” {sup 14}C event

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Ping [State Key Laboratory of Isotope Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, 510640 Guangzhou (China); Shen, Chengde, E-mail: cdshen@gig.ac.cn [State Key Laboratory of Isotope Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, 510640 Guangzhou (China); State Key Laboratory of Nuclear Physics and Technology, Peking University, 100871 Beijing (China); Yi, Weixi; Wang, Ning [State Key Laboratory of Isotope Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, 510640 Guangzhou (China); Ding, Xingfang; Liu, Kexin; Fu, Dongpo [State Key Laboratory of Nuclear Physics and Technology, Peking University, 100871 Beijing (China); Liu, Weiguo [State Key Laboratory of Loess and Quaternary Geology, Institute of Earth Environment, The Chinese Academy of Sciences, 710075 Xi’an (China); Liu, Yi [CAS Key Laboratory of Crust-Mantle Materials and Environments, School of Earth and Space Sciences, University of Science and Technology of China, 230026 Hefei (China)

    2015-10-15

    A pre-heating method that significantly improves the background and precision of {sup 14}C dating was applied for high-resolution dating of a fossil coral in our lab at the Guangzhou Institute of Geochemistry, Chinese Academy of Sciences (GIGCAS). The reaction tube is heated at 300 °C in a vacuum line before it is used for graphitization. The method reduces the contamination absorbed in the TiH{sub 2}, Zn and Fe powder placed in the graphitization tube. With the pre-heating and average drilling methods, bi-weekly resolution {sup 14}C dating of a fossil coral was carried out to investigate the “AD 775 {sup 14}C spike event”. Unlike the tree-ring {sup 14}C archives with a {sup 14}C spike of ∼15‰ (Δ{sup 14}C), the {sup 14}C spike in the coral shows an abrupt peak of 45‰ and two smaller spikes of Δ{sup 14}C > 20‰ within half a year in AD 776. The {sup 14}C content in the coral then decreases gradually in AD 777. According to the δ{sup 18}O variation in the coral, the peak of the {sup 14}C spike event likely occurred in the summer of AD 776. High-resolution {sup 14}C dating in coral provides not only a more detailed record of the event than that from tree rings, but also the first report of the event from a marine ecosystem. Both suggest an extraterrestrial cause of the event.

  10. Constraints on the formation and properties of a Martian lobate debris apron: Insights from high-resolution topography, SHARAD radar data, and a numerical ice flow model

    Science.gov (United States)

    Parsons, Reid; Holt, John

    2016-03-01

    Lobate debris aprons (LDAs) are midlatitude deposits of debris-covered ice formed during one or more periods of glaciation during the Amazonian period. However, little is known about the climate conditions that led to LDA formation. We explore a hypothesis in which a single, extended period of precipitation of ice on the steep slopes of Euripus Mons (45°S, 105°E—east of the Hellas Basin) produced a flowing ice deposit which was protected from subsequent ablation to produce the LDA found at this location. We test this hypothesis with a numerical ice flow model using an ice rheology based on low-temperature ice deformation experiments. The model simulates ice accumulation and flow for the northern and southern lobes of the Euripus Mons LDA using basal topography constrained by data from the Shallow Radar (SHARAD) and a range of ice viscosities (determined by ice temperature and ice grain size). Simulations for the northern lobe of the Euripus LDA produce good fits to the surface topography. Assuming an LDA age of ˜60 Myr and an expected temperature range of 200 to 204 K (for various obliquities) gives an ice grain size of ≈2 mm. Simulations of the southern section produce poor fits to surface topography and result in much faster flow timescales unless multiple ice deposition events or higher ice viscosities are considered.

  11. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
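    The two accuracy measures named above have simple closed forms over a 2x2 forecast/observation contingency table: PC is the fraction of all forecasts that verified, and the HKD (also called the true skill statistic) is the hit rate minus the false-alarm rate. A sketch with hypothetical counts, not data from the study:

```python
def percent_correct(hits, misses, false_alarms, correct_negs):
    """PC: fraction of all forecasts that matched the observation."""
    total = hits + misses + false_alarms + correct_negs
    return (hits + correct_negs) / total

def hanssen_kuipers(hits, misses, false_alarms, correct_negs):
    """HKD (true skill statistic): hit rate minus false-alarm rate."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_negs)
    return hit_rate - false_alarm_rate

# Hypothetical contingency table for contrail-occurrence forecasts.
h, m, fa, cn = 120, 30, 40, 310
print(percent_correct(h, m, fa, cn))              # → 0.86
print(round(hanssen_kuipers(h, m, fa, cn), 3))    # → 0.686
```

    Unlike PC, the HKD is insensitive to the climatological frequency of the event, which is why the two scores can prefer different probability thresholds.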

  12. Development of a new screening method for the detection of antibiotic residues in muscle tissues using liquid chromatography and high resolution mass spectrometry with a LC-LTQ-Orbitrap instrument.

    OpenAIRE

    2011-01-01

    In the present work, a liquid chromatography-high resolution mass spectrometry method was developed for the screening in meat of a wide range of antibiotics used in veterinary medicine. Full-scan mode under high-resolution mass spectral conditions using an LTQ-Orbitrap mass spectrometer with a resolving power of 60,000 FWHM was applied for analysis of the samples. Samples were prepared using two extraction protocols prior to LC-MS analysis. The scope of the method focuses on the...

  13. High resolution CT in diffuse lung disease

    International Nuclear Information System (INIS)

    Webb, W.R.

    1995-01-01

    High-resolution CT (HRCT) was discussed in detail. It was concluded that HRCT is able to define lung anatomy at the secondary lobular level and to delineate a variety of abnormalities in patients with diffuse lung disease. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is clinically indicated (95 refs.)

  14. High resolution CT in diffuse lung disease

    Energy Technology Data Exchange (ETDEWEB)

    Webb, W R [California Univ., San Francisco, CA (United States). Dept. of Radiology]

    1996-12-31

    High-resolution CT (HRCT) was discussed in detail. It was concluded that HRCT is able to define lung anatomy at the secondary lobular level and to delineate a variety of abnormalities in patients with diffuse lung disease. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is clinically indicated (95 refs.).

  15. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.

  16. Spot auto-focusing and spot auto-stigmation methods with high-definition auto-correlation function in high-resolution TEM.

    Science.gov (United States)

    Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio

    2018-04-01

    As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented with a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the central peak region of the ACF in small amorphous-thin-film images, reflecting the phase contrast transfer function. At 300k magnification on a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm²) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function retains acceptable accuracy even for a low-s/n (~1.0) image. To use these methods, a reference database of the defocus dependence of the HD-ACF must first be prepared by pre-acquiring through-focus amorphous-thin-film images. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
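    The HD-ACF itself is specific to the authors' instrument, but the underlying idea of scoring focus by the sharpness of the autocorrelation's central peak can be sketched generically: compute the ACF via the Wiener-Khinchin theorem (inverse FFT of the power spectrum) and measure how quickly it falls off near zero lag. The falloff score below is an illustrative stand-in, not the paper's metric:

```python
import numpy as np

def autocorrelation(img):
    """2-D autocorrelation via the Wiener-Khinchin theorem: inverse FFT of the
    power spectrum, shifted so the zero-lag peak sits at the array centre."""
    img = img - img.mean()
    power = np.abs(np.fft.fft2(img)) ** 2
    acf = np.fft.fftshift(np.fft.ifft2(power).real)
    return acf / acf.max()                  # normalise zero lag to 1

def peak_sharpness(acf):
    """Illustrative focus score: drop of the ACF one pixel from zero lag."""
    cy, cx = acf.shape[0] // 2, acf.shape[1] // 2
    return acf[cy, cx] - acf[cy, cx + 1]

rng = np.random.default_rng(0)
noise = rng.standard_normal((64, 64))       # "in focus": pixel-to-pixel detail
blurred = (noise + np.roll(noise, 1, 0) + np.roll(noise, 1, 1)) / 3  # defocused
print(peak_sharpness(autocorrelation(noise))
      > peak_sharpness(autocorrelation(blurred)))   # → True
```

    Blurring correlates neighbouring pixels, so the ACF central peak widens and the sharpness score drops, which is the signal an autofocus loop can maximize.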

  17. Monte Carlo Bayesian inference on a statistical model of sub-gridcolumn moisture variability using high-resolution cloud observations. Part 1: Method

    Science.gov (United States)

    Norris, Peter M.; da Silva, Arlindo M.

    2018-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC. PMID:29618847

  18. Monte Carlo Bayesian Inference on a Statistical Model of Sub-Gridcolumn Moisture Variability Using High-Resolution Cloud Observations. Part 1: Method

    Science.gov (United States)

    Norris, Peter M.; Da Silva, Arlindo M.

    2016-01-01

    A method is presented to constrain a statistical model of sub-gridcolumn moisture variability using high-resolution satellite cloud data. The method can be used for large-scale model parameter estimation or cloud data assimilation. The gridcolumn model includes assumed probability density function (PDF) intra-layer horizontal variability and a copula-based inter-layer correlation model. The observables used in the current study are Moderate Resolution Imaging Spectroradiometer (MODIS) cloud-top pressure, brightness temperature and cloud optical thickness, but the method should be extensible to direct cloudy radiance assimilation for a small number of channels. The algorithm is a form of Bayesian inference with a Markov chain Monte Carlo (MCMC) approach to characterizing the posterior distribution. This approach is especially useful in cases where the background state is clear but cloudy observations exist. In traditional linearized data assimilation methods, a subsaturated background cannot produce clouds via any infinitesimal equilibrium perturbation, but the Monte Carlo approach is not gradient-based and allows jumps into regions of non-zero cloud probability. The current study uses a skewed-triangle distribution for layer moisture. The article also includes a discussion of the Metropolis and multiple-try Metropolis versions of MCMC.
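    The Metropolis step discussed above is easy to sketch in a generic one-dimensional form. This toy sampler (a standard random-walk Metropolis on a normal target, not the authors' moisture model) shows the gradient-free accept/reject rule that lets the chain jump into regions a linearized method could never reach from a subsaturated background:

```python
import math
import random

def metropolis(log_post, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' ~ N(x, step^2) and accept with
    probability min(1, p(x')/p(x)). No gradients are required, so the chain
    can hop between separated regions of non-zero probability."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if rng.random() < math.exp(min(0.0, lpp - lp)):  # acceptance test
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy posterior: a standard normal in one variable, started off-centre.
chain = metropolis(lambda x: -0.5 * x * x, x0=3.0, n_steps=20000)
mean = sum(chain) / len(chain)
print(abs(mean) < 0.2)   # → True: the chain settles around the posterior mean 0
```

    The multiple-try Metropolis variant the article discusses draws several proposals per step and selects among them, improving acceptance at the cost of extra posterior evaluations.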

  19. Improvement of high resolution borehole seismics. Part 1: Development of processing methods for VSP surveys. Part 2: Piezoelectric signal transmitter for seismic measurements

    International Nuclear Information System (INIS)

    Cosma, C.; Heikkinen, P.; Pekonen, S.

    1991-05-01

    The purpose of the high resolution borehole seismics project has been to improve the reliability and resolution of seismic methods in the particular environment of nuclear waste repository sites. The results obtained, especially the data processing and interpretation methods developed, are also applicable to other geophysical methods (e.g. Georadar). The goals of the seismic development project have been: the development of processing and interpretation techniques for mapping fractured zones, and the design and construction of a seismic source complying with the requirements of repository site characterization programs. Because these two aspects of the work are very different in nature, we have structured the report in two self-contained parts. Part 1 describes the development of interpretive techniques. To demonstrate the effect of the different methods, we used a VSP data set collected at the SCV site during Stage 1 of the project. Five techniques have been studied: FK filtering, three versions of Tau-p filtering, and a new technique that we have recently developed, Image Space filtering. Part 2 refers to the construction of the piezoelectric source. Earlier results obtained over short distances with low-energy piezoelectric transmitters led us to believe that the same principle could be applied to seismic signal transmitters if solutions for higher energy and lower frequency output were found. The instrument we have constructed is a cylindrical unit which can be placed in a borehole and produces a radial strain when excited axially. The minimum borehole diameter is 56 mm. (au)

  20. Establishment of a simple and rapid identification method for Listeria spp. by using high-resolution melting analysis, and its application in food industry.

    Science.gov (United States)

    Ohshima, Chihiro; Takahashi, Hajime; Phraephaisarn, Chirapiphat; Vesaratchavest, Mongkol; Keeratipibul, Suwimon; Kuda, Takashi; Kimura, Bon

    2014-01-01

    Listeria monocytogenes is the causative bacterium of listeriosis, which has a higher mortality rate than other causes of food poisoning. Listeria spp., of which L. monocytogenes is a member, have been isolated from food and manufacturing environments. Several methods have been published for identifying Listeria spp.; however, many of them cannot identify newly categorized Listeria spp., and they are often not suitable for the food industry owing to their complexity, cost, or time consumption. Recently, high-resolution melting analysis (HRMA), which exploits DNA-sequence differences, has received attention as a simple and quick genomic typing method. In the present study, a new method for the simple, rapid, and low-cost identification of Listeria spp. is presented, using the genes rarA and ldh as targets for HRMA. The DNA sequences of 9 Listeria species were first compared, and polymorphisms were identified for each species for primer design. The species specificity of each HRM curve pattern was estimated using type strains of all the species. Among the 9 species, 7 were identified by HRMA of the rarA gene, including 3 new species; the remaining 2 species were identified by HRMA of the ldh gene. The newly developed HRMA method was then used to assess Listeria isolates from the food industry, and its efficiency was compared to that of identification by 16S rDNA sequence analysis. The two methods agreed for 92.6% of the samples, demonstrating the high accuracy of HRMA. The time required for identifying Listeria spp. was substantially reduced, and the process was considerably simplified, providing a useful and precise method for processing multiple samples per day. Our newly developed method for identifying Listeria spp. is highly valuable; its use is not limited to the food industry, and it can also be applied to isolates from the natural environment.

  1. A Method for Simultaneous Determination of 20 Fusarium Toxins in Cereals by High-Resolution Liquid Chromatography-Orbitrap Mass Spectrometry with a Pentafluorophenyl Column

    Science.gov (United States)

    Tamura, Masayoshi; Mochizuki, Naoki; Nagatomi, Yasushi; Harayama, Koichi; Toriba, Akira; Hayakawa, Kazuichi

    2015-01-01

    A high-resolution liquid chromatography-Orbitrap mass spectrometry (LC-Orbitrap MS) method was developed for the simultaneous determination of 20 Fusarium toxins (nivalenol, fusarenon-X, deoxynivalenol, 3-acetyl deoxynivalenol, 15-acetyl deoxynivalenol, HT-2 toxin, T-2 toxin, neosolaniol, diacetoxyscirpenol, fumonisin B1, fumonisin B2, fumonisin B3, fumonisin A1, fumonisin A2, fumonisin A3, zearalenone, α-zearalenol, β-zearalenol, α-zearalanol, and β-zearalanol) in cereals. The separation of the 20 Fusarium toxins with good peak shapes was achieved using a pentafluorophenyl column, and the Orbitrap MS was able to detect the toxins accurately against cereal matrix components, with mass accuracy within ±0.77 ppm. The samples were prepared using a QuEChERS kit for extraction and a multifunctional cartridge for purification. The linearity, repeatability, and recovery of the method were >0.9964, 0.8%–14.7%, and 71%–106%, respectively. Using this method, an analysis of 34 commercially available cereals detected deoxynivalenol, 15-acetyl deoxynivalenol, fumonisin B1, fumonisin B2, fumonisin B3, fumonisin A1, fumonisin A2, fumonisin A3, and zearalenone in corn samples at high concentrations and frequencies. Trichothecenes were detected in wheat samples with high frequency; in particular, the concentration of deoxynivalenol was high. Conversely, α-zearalenol, β-zearalenol, α-zearalanol, and β-zearalanol were not detected in any of the samples. PMID:26008230

  2. Introducing AAA-MS, a rapid and sensitive method for amino acid analysis using isotope dilution and high-resolution mass spectrometry.

    Science.gov (United States)

    Louwagie, Mathilde; Kieffer-Jaquinod, Sylvie; Dupierris, Véronique; Couté, Yohann; Bruley, Christophe; Garin, Jérôme; Dupuis, Alain; Jaquinod, Michel; Brun, Virginie

    2012-07-06

    Accurate quantification of pure peptides and proteins is essential for biotechnology, clinical chemistry, proteomics, and systems biology. The reference method to quantify peptides and proteins is amino acid analysis (AAA), which consists of acidic hydrolysis followed by chromatographic separation and spectrophotometric detection of amino acids. Although widely used, this method displays some limitations, in particular the need for large amounts of starting material. Driven by the need to quantify isotope-dilution standards used for absolute quantitative proteomics, particularly stable isotope-labeled (SIL) peptides and PSAQ proteins, we developed a new AAA assay (AAA-MS). This method requires neither derivatization nor chromatographic separation of amino acids. It is based on rapid microwave-assisted acidic hydrolysis followed by high-resolution mass spectrometry analysis of amino acids. Quantification is performed by comparing MS signals from labeled amino acids (SIL peptide- and PSAQ-derived) with those of unlabeled amino acids originating from co-hydrolyzed NIST standard reference materials. For both SIL peptides and PSAQ standards, AAA-MS quantification results were consistent with classical AAA measurements. Compared to the classical AAA assay, AAA-MS was much faster and was 100-fold more sensitive for peptide and protein quantification. Finally, thanks to the development of a labeled protein standard, we also extended AAA-MS analysis to the quantification of unlabeled proteins.
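
The quantification step described above compares labeled and unlabeled amino acid signals against a co-hydrolyzed standard of known amount. A hedged sketch of the underlying ratio calculation (hypothetical function name and values, not the authors' code):

```python
def isotope_dilution_amount(i_labeled, i_unlabeled, std_amount_nmol):
    """Amount of labeled amino acid inferred from the MS signal ratio
    against an unlabeled standard of known amount hydrolyzed alongside it.
    Assumes equal response factors for the two isotopologues."""
    return std_amount_nmol * (i_labeled / i_unlabeled)

# Hypothetical signals for one amino acid (e.g. a 13C-labeled residue).
amount = isotope_dilution_amount(2.0e6, 1.0e6, 5.0)  # nmol
```

Averaging this estimate over several amino acids released by the hydrolysis improves robustness against incomplete hydrolysis of any single residue.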

  3. Development and validation of a QuEChERS method coupled to liquid chromatography and high resolution mass spectrometry to determine pyrrolizidine and tropane alkaloids in honey.

    Science.gov (United States)

    Martinello, Marianna; Borin, Alice; Stella, Roberto; Bovo, Davide; Biancotto, Giancarlo; Gallina, Albino; Mutinelli, Franco

    2017-11-01

    Awareness about pyrrolizidine alkaloids (PAs) and tropane alkaloids (TAs) in food was recently raised by the European Food Safety Authority, stressing the lack of data and the gaps in knowledge that must be addressed to improve the risk assessment strategy. The present study aimed at the elaboration and validation of a method to determine PAs and TAs in honey. QuEChERS sample treatment and liquid chromatography coupled to hybrid high-resolution mass spectrometry were used. The method showed good linearity (R2 > 0.99) and low limits of detection and quantification, ranging from 0.04 to 0.2 µg kg−1 and from 0.1 to 0.7 µg kg−1, respectively. Recoveries ranged from 92.3 to 114.8%, with repeatability between 0.9 and 15.1% and reproducibility between 1.1 and 15.6%. These performances demonstrate the selectivity and sensitivity of the method for the simultaneous trace detection and quantification of PAs and TAs in honey, verified through the analysis of forty commercial samples. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. High resolution present climate and surface mass balance (SMB) of Svalbard modelled by MAR and implementation of a new online SMB downscaling method

    Science.gov (United States)

    Lang, C.; Fettweis, X.; Kittel, C.; Erpicum, M.

    2017-12-01

    We present the results of high-resolution simulations of the climate and SMB of Svalbard with the regional climate model MAR, forced by ERA-40 and then ERA-Interim, as well as an online downscaling method that allows us to model the SMB and its components at a resolution twice as high (2.5 vs 5 km here) using only about 25% more CPU time. Spitsbergen, the largest island in Svalbard, has very hilly topography, and a high spatial resolution is needed to correctly represent the local topography and the complex pattern of ice distribution and precipitation. However, high-resolution runs with an RCM fully coupled to an energy balance module like MAR require a huge amount of computation time, and the hydrostatic equilibrium hypothesis used in MAR becomes less valid as the spatial resolution increases. We therefore developed in MAR a method to run the snow module at a resolution twice as high as the atmospheric module: near-surface temperature and humidity are corrected on a grid with twice the resolution, as a function of their local gradients and the elevation difference between the corresponding pixels in the 2 grids. We compared the results of our 5 km runs, with SMB downscaled to 2.5 km, over 1960–2016 against previous 10 km runs. On Austfonna, where the slopes are gentle, the agreement between observations and the 5 km SMB is better than with the 10 km SMB. It is further improved at 2.5 km, but the gain is relatively small, showing the value of our method compared to running a time-consuming classic 2.5 km resolution simulation. On Spitsbergen, we show that a spatial resolution of 2.5 km is still not enough to represent the complex pattern of topography, precipitation and SMB. Due to a change in the summer atmospheric circulation, from a westerly flow over Svalbard to a northwesterly flow bringing colder air, the SMB of Svalbard was stable between 2006 and 2012, while several melt records were broken in Greenland, due to conditions more
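
The elevation-based correction at the heart of the online downscaling can be sketched as a lapse-rate adjustment on the fine grid (the constant lapse rate below is an assumption for illustration; MAR diagnoses local gradients rather than using a fixed value):

```python
def downscale_temperature(t_coarse_k, z_coarse_m, z_fine_m, lapse_rate=-6.5e-3):
    """Correct a coarse-grid near-surface temperature onto a finer grid
    using the elevation difference between the two grids.
    lapse_rate is in K/m; -6.5e-3 is the standard-atmosphere value,
    assumed here for illustration."""
    return t_coarse_k + lapse_rate * (z_fine_m - z_coarse_m)

# A fine-grid pixel sitting 200 m above its parent coarse pixel.
t_fine = downscale_temperature(270.0, 500.0, 700.0)
```

The same per-pixel correction applied to humidity, with the snow module then run on the corrected fine-grid fields, is what makes the 2.5 km SMB affordable relative to a full 2.5 km atmospheric run.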

  5. High resolution and high sensitivity methods for oligosaccharide mapping and characterization by normal phase high performance liquid chromatography following derivatization with highly fluorescent anthranilic acid.

    Science.gov (United States)

    Anumula, K R; Dhume, S T

    1998-07-01

    Facile labeling of oligosaccharides (acidic and neutral) in a nonselective manner was achieved with highly fluorescent anthranilic acid (AA, 2-aminobenzoic acid) (more than twice the intensity of 2-aminobenzamide, AB) for specific detection at very high sensitivity. Quantitative labeling in acetate-borate buffered methanol (approximately pH 5.0) at 80 °C for 60 min resulted in negligible or no desialylation of the oligosaccharides. A high-resolution high-performance liquid chromatographic method was developed for quantitative oligosaccharide mapping on a polymeric NH2-bonded (Astec) column operating under normal-phase and anion-exchange (NP-HPAEC) conditions. For isolation of oligosaccharides from the map by simple evaporation, the chromatographic conditions use volatile acetic acid-triethylamine buffer (approximately pH 4.0) systems. The mapping and characterization technology was developed using well-characterized standard glycoproteins. The fluorescent oligosaccharide maps were similar to those obtained by high-pH anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD), except that the fluorescent maps contained more defined peaks. In the map, the oligosaccharides separated into groups based on charge, size, linkage, and overall structure in a manner similar to HPAEC-PAD, with a contribution of the -COOH function from the label, anthranilic acid; however, the selectivity of the column for sialic acid linkages was different. A second-dimension normal-phase HPLC (NP-HPLC) method was developed on an amide column (TSK Gel amide-80) for separation of the AA-labeled neutral complex-type and isomeric structures of high-mannose-type oligosaccharides. The oligosaccharides labeled with AA are compatible with biochemical and biophysical techniques, and the use of matrix-assisted laser desorption mass spectrometry for rapid determination of the oligosaccharide mass map of glycoproteins is demonstrated.

  6. Correlation and agreement between eplet mismatches calculated using serological, low-intermediate and high resolution molecular human leukocyte antigen typing methods.

    Science.gov (United States)

    Fidler, Samantha; D'Orsogna, Lloyd; Irish, Ashley B; Lewis, Joshua R; Wong, Germaine; Lim, Wai H

    2018-03-02

    Structural human leukocyte antigen (HLA) matching at the eplet level can be identified by HLAMatchmaker, which requires the entry of four-digit alleles. The aim of this study was to evaluate the agreement between eplet mismatches calculated by serological and two-digit typing methods compared to high-resolution four-digit typing. In a cohort of 264 donor/recipient pairs, measurement error was assessed using intra-class correlation to quantify the absolute agreement between the number of eplet mismatches at class I (HLA-A, -B, -C) and class II loci (HLA-DQ and -DR) calculated using serological or two-digit molecular typing compared to four-digit molecular typing. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches between the HLA typing methods was also determined. Intra-class correlation coefficients between serological and four-digit molecular typing were 0.969 (95% confidence intervals [95% CI] 0.960-0.975) for class I and 0.926 (95% CI 0.899-0.944) for class II; between two-digit and four-digit molecular typing they were 0.995 (95% CI 0.994-0.996) and 0.993 (95% CI 0.991-0.995), respectively. The proportion of donor/recipient pairs with a difference of >5 eplet mismatches at class I and II loci was 4% and 16% for serological versus four-digit molecular typing, and 0% and 2% for two-digit versus four-digit molecular typing, respectively. In this small, predominantly Caucasian population, compared with serology there is a high level of agreement in the number of eplet mismatches calculated using two-digit compared to four-digit molecular HLA typing, suggesting that two-digit typing may be sufficient in determining eplet mismatch load in kidney transplantation.
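
One of the reported agreement metrics, the proportion of pairs differing by more than 5 eplet mismatches between two typing methods, is straightforward to compute. A small sketch with hypothetical mismatch counts (function name and data are our own):

```python
def share_discordant(counts_a, counts_b, threshold=5):
    """Fraction of donor/recipient pairs whose eplet-mismatch counts
    differ by more than `threshold` between two typing methods."""
    assert len(counts_a) == len(counts_b)
    n_disc = sum(1 for a, b in zip(counts_a, counts_b) if abs(a - b) > threshold)
    return n_disc / len(counts_a)

# Hypothetical eplet-mismatch counts from two typing methods.
frac = share_discordant([10, 22, 3, 15], [11, 30, 3, 14])
```

Unlike a correlation coefficient, this statistic directly measures how often the cheaper typing method would misstate the mismatch load by a clinically relevant margin.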

  7. A four dimensional separation method based on continuous heart-cutting gas chromatography with ion mobility and high resolution mass spectrometry.

    Science.gov (United States)

    Lipok, Christian; Hippler, Jörg; Schmitz, Oliver J

    2018-02-09

    A two-dimensional GC (2D-GC) method was developed and coupled to an ion mobility-high resolution mass spectrometer, which enables the separation of complex samples in four dimensions (2D-GC, ion mobility spectrometry and mass spectrometry). This approach works as a continuous multiheart-cutting GC system (GC+GC), using a long modulation time of 20 s, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in contrast to comprehensive two-dimensional gas chromatography (GCxGC). Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Calendula officinalis shows the separation power of this four-dimensional separation method. The introduction of ion mobility spectrometry provides an additional separation dimension and allows determination of collision cross sections (CCS) of the analytes as a further physicochemical constant supporting identification. A CCS database with more than 800 standard substances, including drug-like compounds and pesticides, was used for CCS database searching in this work. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. In house validation of a high resolution mass spectrometry Orbitrap-based method for multiple allergen detection in a processed model food.

    Science.gov (United States)

    Pilolli, Rosa; De Angelis, Elisabetta; Monaci, Linda

    2018-02-13

    In recent years, mass spectrometry (MS) has been establishing its role in the development of analytical methods for multiple-allergen detection, but most analyses are carried out on low-resolution mass spectrometers such as triple quadrupoles or ion traps. In this investigation, the performance provided by a high-resolution (HR) hybrid quadrupole-Orbitrap™ MS platform for multiple-allergen detection in a processed food matrix is presented. In particular, three different acquisition modes were compared: full-MS, targeted-selected ion monitoring with data-dependent fragmentation (t-SIM/dd2), and parallel reaction monitoring. In order to challenge the HR-MS platform, the sample preparation was kept as simple as possible, limited to a 30-min ultrasound-aided protein extraction followed by clean-up with disposable size exclusion cartridges. Selected peptide markers tracing five allergenic ingredients, namely skim milk, whole egg, soy flour, ground hazelnut, and ground peanut, were monitored in home-made cookies chosen as a model processed matrix. Timed t-SIM/dd2 was found to be the best choice as a good compromise between sensitivity and accuracy, accomplishing the detection of 17 peptides originating from the five allergens in the same run. The optimized method was validated in-house through the evaluation of matrix and processing effects, recoveries, and precision. The selected quantitative markers for each allergenic ingredient provided quantification of 60-100 μg of allergenic ingredient per g of matrix in incurred cookies.

  9. A Method for the Extraction of Long-Term Deformation Characteristics of Long-Span High-Speed Railway Bridges Using High-Resolution SAR Images

    Science.gov (United States)

    Jia, H. G.; Liu, L. Y.

    2016-06-01

    Natural causes and high-speed train loads result in structural deformation of long-span bridges, which greatly influences the safe operation of high-speed railways. It is therefore necessary to conduct deformation monitoring and regular status assessment for long-span bridges. However, traditional control-point-based surveying techniques require substantial human and material resources to perform long-term monitoring of a whole bridge. In this study we detected the long-term bridge deformation time series by the persistent scatterer interferometric synthetic aperture radar (PSInSAR) technique, using high-resolution SAR images and an external digital elevation model. A test area in Nanjing, China, was chosen, and TerraSAR-X and TanDEM-X images of this area were used, with the Dashengguan high-speed railway bridge in this area as the study object for evaluating the method. Experimental results indicate that the proposed method can effectively extract the long-term deformation of a long-span high-speed railway bridge with high accuracy.

  11. Regional cerebral blood flow and oxygen metabolism in patients with ischemic stroke studied with high resolution PET and the O-15 labelled gas steady-state method

    International Nuclear Information System (INIS)

    Uemura, K.; Shishido, F.; Inugami, A.; Yamaguchi, T.; Ogawa, T.; Murakami, M.; Kanno, I.; Tagawa, K.; Yasui, N.

    1986-01-01

    Although regional cerebral blood flow (rCBF) studies have considerably increased pathophysiological knowledge in ischemic cerebrovascular disease, sometimes the results of such studies do not correlate with the neurological abnormalities observed in the subjects being examined. Because regional neuronal activity is always coupled to the regional energy metabolism of brain tissue, simultaneous observation of rCBF and regional energy metabolism, such as regional oxygen consumption (rCMRO2) and regional glucose consumption (rCMRGl), provides greater understanding of the pathophysiology of the disease than rCBF study alone. Positron emission tomography (PET) using the O-15 labelled gas steady-state method offers simultaneous measurement of rCBF and rCMRO2 in vivo, and demonstrates the imbalance between rCBF and rCMRO2 in an ischemic lesion in the human brain. However, the clinical PET studies in ischemic cerebrovascular disease reported previously were carried out using low-resolution (more than 15 mm full width at half maximum; FWHM) PET. This report presents preliminary results using a high-resolution tomograph (Headtome III) and the O-15 labelled gas steady-state method to investigate ischemic cerebrovascular disease

  12. Micro-sampling method based on high-resolution continuum source graphite furnace atomic absorption spectrometry for calcium determination in blood and mitochondrial suspensions.

    Science.gov (United States)

    Gómez-Nieto, Beatriz; Gismera, Mª Jesús; Sevilla, Mª Teresa; Satrústegui, Jorgina; Procopio, Jesús R

    2017-08-01

    A straightforward micro-sampling method based on high-resolution continuum source atomic absorption spectrometry (HR-CS AAS) was developed to determine extracellular and intracellular Ca in samples of interest in clinical and biomedical analysis. Solid sampling platforms were used to introduce the micro-samples into the graphite furnace atomizer. The secondary absorption line for Ca, located at 239.856 nm, was selected to carry out the measurements. Experimental parameters such as the pyrolysis and atomization temperatures and the amount of sample introduced for the measurements were optimized. Calibration was performed using aqueous standards, and measuring at the wings of the absorption lines was employed to expand the linear response range. The limit of detection was 0.02 mg L−1 Ca (0.39 ng Ca) and the upper limit of the linear range was increased up to 8.0 mg L−1 Ca (160 ng Ca). The proposed method was used to determine Ca in mitochondrial suspensions and whole blood samples with successful results. Adequate recoveries (within 91-107%) were obtained in the tests performed for validation purposes. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Radiation hybrid mapping as one of the main methods of the creation of high resolution maps of human and animal genomes

    International Nuclear Information System (INIS)

    Sulimova, G.E.; Kompanijtsev, A.A.; Mojsyak, E.V.; Rakhmanaliev, Eh.R.; Klimov, E.A.; Udina, I.G.; Zakharov, I.A.

    2000-01-01

    Radiation hybrid mapping (RH mapping) is considered one of the main methods for constructing physical maps of mammalian genomes. In the introduction, the theoretical prerequisites for the development of RH mapping and the statistical methods of data analysis are discussed, and the characteristics of universal commercial panels of radiation hybrid somatic cells (RH panels) are compared. In the experimental part of the work, RH mapping is used to localize nucleotide sequences adjacent to Not I sites of human chromosome 3, with the aim of integrating the contig map of Not I clones into comprehensive maps of the human genome. Five nucleotide sequences adjacent to the sites of integration of papilloma virus in the human genome and expressed in cervical cancer cells were localized. It is demonstrated that the region 13q14.3-q21.1 is enriched with nucleotide sequences involved in the processes of carcinogenesis. RH mapping can be considered one of the most promising applications of modern radiation biology in the field of molecular genetics, that is, in constructing high-resolution physical maps of mammalian genomes

  14. Cross-validation of two commercial methods for volumetric high-resolution dose reconstruction on a phantom for non-coplanar VMAT beams

    International Nuclear Information System (INIS)

    Feygelman, Vladimir; Stambaugh, Cassandra; Opp, Daniel; Zhang, Geoffrey; Moros, Eduardo G.; Nelms, Benjamin E.

    2014-01-01

    Background and purpose: Delta 4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK with 3DVH software (Sun Nuclear Corp., Melbourne, FL, USA) are commercial quasi-three-dimensional diode dosimetry arrays capable of volumetric measurement-guided dose reconstruction. A method to reconstruct dose for non-coplanar VMAT beams with 3DVH is described. The Delta 4 3D dose reconstruction on its own phantom for VMAT delivery has not been thoroughly evaluated previously, and we do so by comparison with 3DVH. Materials and methods: Reconstructed volumetric doses for VMAT plans delivered at different table angles were compared between the Delta 4 and 3DVH using gamma analysis. Results: The average γ (2% local dose-error normalization/2 mm) passing rate comparing the directly measured Delta 4 diode dose with 3DVH was 98.2 ± 1.6% (1 SD). The average passing rate for the full volumetric comparison of the reconstructed doses on a homogeneous cylindrical phantom was 95.6 ± 1.5%. No dependence on the table angle was observed. Conclusions: The modified 3DVH algorithm is capable of 3D VMAT dose reconstruction on an arbitrary volume for the full range of table angles. Our comparison results between different dosimeters make a compelling case for the use of electronic arrays with high-resolution 3D dose reconstruction as a primary means of evaluating spatial dose distributions during IMRT/VMAT verification
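
Gamma analysis with 2% local normalization and 2 mm distance-to-agreement combines a dose-difference term and a distance term and takes the minimum over the searched points. A simplified 1-D sketch of the passing-rate calculation (real implementations work in 3-D with interpolation; the names here are our own):

```python
import numpy as np

def gamma_pass_rate(ref, evl, spacing_mm, dose_tol=0.02, dist_mm=2.0):
    """Simplified 1-D gamma analysis with local dose normalization:
    for each reference point, minimize the combined dose/distance
    metric over all evaluated points (brute force)."""
    x = np.arange(len(ref)) * spacing_mm
    passed = 0
    for i, d_ref in enumerate(ref):
        dd = (evl - d_ref) / (dose_tol * d_ref)   # local dose difference term
        dx = (x - x[i]) / dist_mm                 # distance-to-agreement term
        gamma = np.sqrt(dd**2 + dx**2).min()
        passed += gamma <= 1.0
    return passed / len(ref)

# Two nearly identical 1-D dose profiles sampled at 1 mm spacing.
rate = gamma_pass_rate(np.array([1.0, 1.01, 1.0]),
                       np.array([1.0, 1.00, 1.0]), 1.0)
```

With local normalization, the dose tolerance shrinks with the local dose, which is why local-gamma passing rates (as reported above) are stricter than globally normalized ones.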

  15. An outline review of numerical transport methods

    International Nuclear Information System (INIS)

    Budd, C.

    1981-01-01

    A brief review is presented of numerical methods for solving the neutron transport equation in the context of reactor physics. First the various forms of transport equation are given. Second, the various ways of classifying numerical transport methods are discussed. Finally each method (or class of methods) is outlined in turn. (U.K.)

  16. Numerical methods for hydrodynamic stability problems

    International Nuclear Information System (INIS)

    Fujimura, Kaoru

    1985-11-01

    Numerical methods for solving the Orr-Sommerfeld equation, which is the fundamental equation of the hydrodynamic stability theory for various shear flows, are reviewed and typical numerical results are presented. The methods of asymptotic solution, finite difference methods, initial value methods and expansions in orthogonal functions are compared. (author)

  17. Chlorinated paraffin analysis by gas chromatography Orbitrap high-resolution mass spectrometry: Method performance, investigation of possible interferences and analysis of fish samples.

    Science.gov (United States)

    Krätschmer, Kerstin; Cojocariu, Cristian; Schächtele, Alexander; Malisch, Rainer; Vetter, Walter

    2018-03-02

    For decades, high quantities of short-chain chlorinated paraffins (SCCP) and medium-chain chlorinated paraffins (MCCP) have been widely used, for instance as plasticizers or flame retardants, leading to global pollution due to unintentional emissions from products or waste. Owing to the high complexity of chlorinated paraffins, with several thousand congeners, there is no consensus on an analytical procedure for SCCPs and MCCPs in food samples. Amongst the multitude of methods currently in use, high-resolution mass spectrometry is particularly valuable for in-depth studies of homologue patterns. Here we analyse SCCPs and MCCPs with gas chromatography coupled to high-resolution Orbitrap mass spectrometry (GC-Orbitrap-HRMS) operated in full-scan acquisition in electron capture negative ion (ECNI) mode at 60,000 and 120,000 resolution (FWHM at m/z 200, equal to roughly 30,000 and 60,000 at 5% peak height). Linear dynamic range, selectivity and sensitivity tests confirmed excellent linearity in a concentration range of 25-15,000 pg/μL with very low limits of detection (LODs) in the low pg/μL range. Spiking experiments with high levels of native mono- and di-ortho-polychlorinated biphenyls (PCBs) and mixtures of MCCP and SCCP standards did not have a negative impact on the isotope ratios of the examined homologues. Besides the [M-Cl]- fragment ions used for quantification, the mass spectra of homologues also featured [M-HCl]- ions, whose abundance increased with decreasing chlorination degree. In addition, [M-HCl-Cl]- ions were detected with a relative abundance of 5-10%. Three salmon (Salmo salar) samples farmed in Norway showed a consistent CP homologue pattern which differed both from the CP pattern in a sample from Scottish aquaculture and from that in a wild salmon sample. These measurements produce evidence that distinctly different CP patterns may exist in different areas of origin. Our results demonstrate that GC/ECNI-Orbitrap-HRMS is well-suited for the analysis of CPs by

  18. A refined, rapid and reproducible high resolution melt (HRM)-based method suitable for quantification of global LINE-1 repetitive element methylation

    Directory of Open Access Journals (Sweden)

    Tse M Yat

    2011-12-01

    Background: The methylation of DNA is recognized as a key mechanism in the regulation of genomic stability, and evidence for its role in the development of cancer is accumulating. LINE-1 methylation status represents a surrogate measure of genome-wide methylation. Findings: Using high-resolution melt (HRM) curve analysis technology, we have established an in-tube assay that is linear (r > 0.9986) with a high amplification efficiency (90-105%), capable of discriminating between participant samples with small differences in methylation, and suitable for quantifying a wide range of LINE-1 methylation levels (0-100%), including the biologically relevant range of 50-90% expected in human DNA. We have optimized this procedure to use 2 μg of starting DNA and 2 ng of bisulfite-converted DNA for each PCR reaction. Intra- and inter-assay coefficients of variation were 1.44% and 0.49%, respectively, supporting the high reproducibility and precision of this approach. Conclusions: In summary, this is a completely linear, quantitative HRM PCR method developed for the measurement of LINE-1 methylation. This cost-efficient, refined and reproducible assay can be performed using minimal amounts of starting DNA. These features make our assay suitable for high-throughput analysis of multiple samples from large population-based studies.
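
Quantitative HRM assays of this kind typically read percent methylation off a linear standard curve built from controls of known methylation. A hedged sketch of that inversion step (synthetic standards and function name of our own, not the published assay's values):

```python
import numpy as np

def methylation_from_standards(std_pct, std_signal, sample_signal):
    """Interpolate percent methylation from a linear standard curve
    (an HRM-derived signal vs. standards of known methylation)."""
    slope, intercept = np.polyfit(std_pct, std_signal, 1)
    return (sample_signal - intercept) / slope

# Synthetic 0/50/100% methylation controls and one unknown sample.
pct = methylation_from_standards([0, 50, 100], [0.0, 0.5, 1.0], 0.7)
```

The assay's linearity claim (r > 0.9986) is what licenses this simple one-parameter inversion across the full 0-100% range.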

  19. Automated method for relating regional pulmonary structure and function: integration of dynamic multislice CT and thin-slice high-resolution CT

    Science.gov (United States)

    Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.

    1993-07-01

    We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-function relationships. A thick-slice, high-temporal-resolution mode is used to follow a bolus of contrast agent for blood flow evaluation and is fused with a high-spatial-resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high-resolution volume image. A text file containing these values along with each voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery, from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologically based research findings to demonstrate the strengths of combining dynamic CT and HRCT, relative to other scanning modalities, for uniquely characterizing normal pulmonary physiology and pathophysiology.
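
Parameters such as mean transit time are derived from the dynamic contrast time-attenuation curve in each region. A minimal sketch of the discrete first-moment estimate (synthetic bolus curve and helper name are our own, not the VIDA implementation):

```python
import numpy as np

def mean_transit_time(times, concentration):
    """First-moment mean transit time of an indicator-dilution curve,
    estimated as a concentration-weighted average of time."""
    return float(np.sum(times * concentration) / np.sum(concentration))

# Synthetic bolus passage: Gaussian-shaped curve centred at 4 s.
t = np.linspace(0.0, 10.0, 101)
c = np.exp(-((t - 4.0) ** 2) / 2.0)
mtt = mean_transit_time(t, c)
```

In practice the tissue curve is first deconvolved with, or referenced to, the pulmonary-artery input function identified by the user; the first moment is the simplest model on such a user-selectable list.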

  20. Development of a quantitation method to assay both lyoniresinol enantiomers in wines, spirits, and oak wood by liquid chromatography-high resolution mass spectrometry.

    Science.gov (United States)

    Cretin, Blandine N; Dubourdieu, Denis; Marchal, Axel

    2016-05-01

    Wine taste balance evolves during oak aging by the release of volatile and non-volatile compounds from wood. Among them, an enantiomer of lyoniresinol, (+)-lyoniresinol, has been shown to exhibit bitterness. To evaluate the impact of (+)-lyoniresinol on wine taste, a two-step quantitation method was developed and validated. First, (±)-lyoniresinol was assayed in wines, spirits, and oak wood macerates by C-18 liquid chromatography-high resolution mass spectrometry (LC-HRMS). Then, the lyoniresinol enantiomeric ratio was determined by chiral LC-HRMS in order to calculate the (+)-lyoniresinol content. In red and white wines, the average concentrations of (+)-lyoniresinol were 1.9 and 0.8 mg/L, respectively. The enantiomer proportions were not affected by bottle aging, and lyoniresinol appeared to remain stable over time. The sensory study of (+)-lyoniresinol established its perception threshold at 0.46 mg/L in wine. All the commercial wines quantitated were above this perception threshold, demonstrating its impact on wine taste by an increase in bitterness. In spirits, (+)-lyoniresinol ranged from 2.0 to 10.0 mg/L and was found to be released continuously during oak aging. Finally, neither botanical origin nor toasting was found to significantly affect the (+)-lyoniresinol content of oak wood. Graphical abstract From oak wood to wine: evaluation of the influence of (+)-lyoniresinol on the bitterness of wines and spirits.

  1. A novel method for the determination of mercury and selenium in shark tissue using high-resolution inductively coupled plasma-mass spectrometry

    International Nuclear Information System (INIS)

    Paul, Mitchell C.; Toia, Robert F.; Nagy-Felsobuki, Ellak I. von

    2003-01-01

    A method for measuring Hg and Se in shark tissue by high-resolution inductively coupled plasma mass spectrometry (HR-ICP-MS) has been developed. Using a matrix of 4% (v/v) aqueous methanol, the spray chamber and transfer tubing memory effects of Hg were significantly reduced. The methanol matrix was able to effectively wash out Hg (10 ppb) and return the signal to blank level in approximately 5 min. This enabled accurate and concomitant measurements of Hg and Se with detection limits (3σ blank signal, n=10) of 26 and 4 ppt, respectively. The recoveries of Hg and Se based on the CRM were 88 and 83%, respectively. The concentrations of Hg and Se in the (liver, muscle, kidney) of a hammerhead shark (dry weight) were (2.65±0.85, 7.09±1.32, 4.43±1.36) and (17.3±4.1, 1.28±0.29, 24.1±5.2) mg kg−1 (where the expanded uncertainty uses a k=2 value), respectively. Multi-elemental semi-quantitative analysis of a hammerhead shark liver, muscle and kidney revealed high levels of Cd, Zn and As
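
The stated detection limits follow the usual 3σ convention: three times the standard deviation of repeated blank measurements divided by the calibration sensitivity. A tiny sketch with hypothetical numbers (the blank SD and slope below are invented for illustration):

```python
def detection_limit(blank_sd, slope):
    """3-sigma detection limit: 3 x SD of blank replicates divided by
    the calibration slope (signal units per concentration unit)."""
    return 3.0 * blank_sd / slope

# Hypothetical: blank SD of 0.002 signal units, slope 0.23 units per ppt.
lod_ppt = detection_limit(0.002, 0.23)
```

The same formula applied per analyte is what yields different limits for Hg and Se despite a shared sample introduction system.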

  2. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images.

    Science.gov (United States)

    Ball, A D; Job, P A; Walker, A E L

    2017-08-01

    The method we present here uses a scanning electron microscope programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these Scanning Electron Microscope (SEM) images in conjunction with commercially available software originally developed for photogrammetry reconstructions from Digital Single Lens Reflex (DSLR) cameras and to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable to reconstruct very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models to be used in public outreach events and exhibitions, especially for the blind or partially sighted. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  3. A simple and fast method for assessment of the nitrogen–phosphorus–potassium rating of fertilizers using high-resolution continuum source atomic and molecular absorption spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Bechlin, Marcos André; Fortunato, Felipe Manfroi; Moutinho da Silva, Ricardo; Ferreira, Edilene Cristina; Gomes Neto, José Anchieta, E-mail: anchieta@iq.unesp.br

    2014-11-01

    The determination of N, P, and K in fertilizers by high-resolution continuum source flame atomic and molecular absorption spectrometry is proposed. Under optimized conditions, measurements of the diatomic molecules NO and PO at 215.360 and 247.620 nm, respectively, and K using the wing of the alternative line at 404.722 nm allowed calibration curves to be constructed in the ranges 500–5000 mg L⁻¹ N (r = 0.9994), 100–2000 mg L⁻¹ P (r = 0.9946), and 100–2500 mg L⁻¹ K (r = 0.9995). Commercial fertilizers were analyzed by the proposed method and the concentrations of N, P, and K were found to be in agreement with those obtained by Kjeldahl, spectrophotometric, and flame atomic emission spectrometry methods, respectively, at a 95% confidence level (paired t-test). A phosphate rock certified reference material (CRM) was analyzed and the results for P and K were in agreement with the reference values. Recoveries from spiked CRM were in the ranges 97–105% (NO₃⁻-N), 95–103% (NH₄⁺-N), 93–103% (urea-N), 99–108% (P), and 99–102% (K). The relative standard deviations (n = 12) for N, P, and K were 6, 4, and 2%, respectively. - Highlights: • A single technique is proposed to analyze NPK fertilizer. • HR-CS FAAS is proposed for the first time for N, P and K determination in fertilizers. • The method employs the same sample preparation and dilution for the three analytes. • Addition of H₂O₂ allows analysis of fertilizers with different nitrogen species. • Proposal provides advantages over traditional methods in terms of cost and time.
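
    The working ranges and correlation coefficients above come from ordinary least-squares calibration fits; a stdlib-only sketch of the fit and Pearson r, using invented NO absorbance readings (not the paper's data):

```python
import math

def linear_calibration(conc, signal):
    """Least-squares fit signal = a * conc + b, plus the Pearson
    correlation coefficient r used to judge the working range."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in signal)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    a = sxy / sxx                     # slope (sensitivity)
    b = my - a * mx                   # intercept
    r = sxy / math.sqrt(sxx * syy)    # correlation coefficient
    return a, b, r

# Hypothetical absorbances over the 500-5000 mg/L N working range:
conc = [500.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0]
signal = [0.051, 0.102, 0.199, 0.305, 0.398, 0.501]
a, b, r = linear_calibration(conc, signal)
print(f"slope = {a:.2e}, r = {r:.4f}")
```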

  4. Towards a pathogenic Escherichia coli detection platform using multiplex SYBR®Green Real-time PCR methods and high resolution melting analysis.

    Directory of Open Access Journals (Sweden)

    Dafni-Maria Kagkli

    Escherichia coli is a group of bacteria which has raised a lot of safety concerns in recent years. Five major intestinal pathogenic groups have been recognized, amongst which the verocytotoxin- or shiga-toxin- (stx1 and/or stx2) producing E. coli (VTEC or STEC, respectively) have received a lot of attention recently. Indeed, due to the high number of outbreaks related to VTEC strains, the European Food Safety Authority (EFSA) has requested the monitoring of the "top-five" serogroups (O26, O103, O111, O145 and O157) most often encountered in food-borne diseases and addressed the need for validated VTEC detection methods. Here we report the development of a set of intercalating-dye Real-time PCR methods capable of rapidly detecting the presence of the toxin genes together with intimin (eae) in the case of VTEC, or the aggregative protein (aggR) in the case of the O104:H4 strain responsible for the outbreak in Germany in 2011. All reactions were optimized to perform at the same annealing temperature, permitting multiplex application in order to minimize the need for material and to allow for high-throughput analysis. In addition, High Resolution Melting (HRM) analysis allowing the discrimination among strains possessing similar virulence traits was established. The development, application to food samples and the flexibility in use of the methods are thoroughly discussed. Together, these Real-time PCR methods facilitate the detection of VTEC in a new, highly efficient way and could represent the basis for developing a simple pathogenic E. coli platform.
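
    HRM analysis discriminates amplicons by their melting temperature, read as the peak of the negative derivative of fluorescence with respect to temperature; a toy sketch on a synthetic melt curve (the logistic profile below stands in for real HRM data):

```python
import math

def melting_temperature(temps, fluorescence):
    """Return the temperature at the maximum of -dF/dT
    (central differences), the usual Tm read-out of a melt curve."""
    best_t, best_slope = None, float("-inf")
    for i in range(1, len(temps) - 1):
        slope = -(fluorescence[i + 1] - fluorescence[i - 1]) / (temps[i + 1] - temps[i - 1])
        if slope > best_slope:
            best_t, best_slope = temps[i], slope
    return best_t

# Synthetic sigmoid melt curve centred at 80 deg C, sampled every 0.5 deg:
temps = [70.0 + 0.5 * i for i in range(41)]
fluor = [1.0 / (1.0 + math.exp((t - 80.0) / 0.8)) for t in temps]
print(melting_temperature(temps, fluor))
```

    Strains with different virulence-gene variants would give distinct Tm values, which is what lets HRM separate amplicons of similar length.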

  5. A simple and fast method for assessment of the nitrogen–phosphorus–potassium rating of fertilizers using high-resolution continuum source atomic and molecular absorption spectrometry

    International Nuclear Information System (INIS)

    Bechlin, Marcos André; Fortunato, Felipe Manfroi; Moutinho da Silva, Ricardo; Ferreira, Edilene Cristina; Gomes Neto, José Anchieta

    2014-01-01

    The determination of N, P, and K in fertilizers by high-resolution continuum source flame atomic and molecular absorption spectrometry is proposed. Under optimized conditions, measurements of the diatomic molecules NO and PO at 215.360 and 247.620 nm, respectively, and K using the wing of the alternative line at 404.722 nm allowed calibration curves to be constructed in the ranges 500–5000 mg L⁻¹ N (r = 0.9994), 100–2000 mg L⁻¹ P (r = 0.9946), and 100–2500 mg L⁻¹ K (r = 0.9995). Commercial fertilizers were analyzed by the proposed method and the concentrations of N, P, and K were found to be in agreement with those obtained by Kjeldahl, spectrophotometric, and flame atomic emission spectrometry methods, respectively, at a 95% confidence level (paired t-test). A phosphate rock certified reference material (CRM) was analyzed and the results for P and K were in agreement with the reference values. Recoveries from spiked CRM were in the ranges 97–105% (NO₃⁻-N), 95–103% (NH₄⁺-N), 93–103% (urea-N), 99–108% (P), and 99–102% (K). The relative standard deviations (n = 12) for N, P, and K were 6, 4, and 2%, respectively. - Highlights: • A single technique is proposed to analyze NPK fertilizer. • HR-CS FAAS is proposed for the first time for N, P and K determination in fertilizers. • The method employs the same sample preparation and dilution for the three analytes. • Addition of H₂O₂ allows analysis of fertilizers with different nitrogen species. • Proposal provides advantages over traditional methods in terms of cost and time.

  6. Search for over 2000 current and legacy micropollutants on a wastewater infiltration site with a UPLC-high resolution MS target screening method.

    Science.gov (United States)

    Wode, Florian; van Baar, Patricia; Dünnbier, Uwe; Hecht, Fabian; Taute, Thomas; Jekel, Martin; Reemtsma, Thorsten

    2015-02-01

    A target screening method using ultra high performance liquid chromatography-high resolution mass spectrometry (UPLC-HRMS) was developed. The method was applied to 14 groundwater and 11 surface water samples from a former wastewater infiltration site, where raw wastewater was applied until 1985 and treated wastewater has been applied since 2005. The measured data were compared with mass spectrometric data of over 2000 organic micropollutants (OMPs), including pharmaceuticals, personal care products, pesticides, industrial chemicals and metabolites of these classes. A total of 151 and 159 OMPs were detected in groundwater and surface water, respectively, of which 12 had not been reported before in these matrices. Among these 12 compounds were 11 pharmaceuticals and one personal care product. The identity of 55 of the detected OMPs (35%) was verified by analysis of standard compounds. Based on the distribution in the study area, two groups of OMPs were clearly distinguished: current OMPs introduced with treated municipal wastewater since 2005 and legacy OMPs originating from infiltration of untreated wastewater until 1985. A third group included OMPs contained in historic as well as in current wastewater. During infiltration, OMPs with molecular mass >500 g/mol and log D_OW > 3.9 were preferentially removed. Speciation had a strong impact, with cationic OMPs showing the highest, neutral OMPs intermediate, and anionic OMPs the lowest elimination during infiltration. This target screening method proved useful to study a wide range of compounds, even in retrospect and at sites with a poorly documented history and a complex and variable hydrological situation. Copyright © 2014 Elsevier Ltd. All rights reserved.
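
    At its core, a target screen of this kind matches measured accurate masses against a compound list within a ppm tolerance; a minimal sketch with a two-compound target list (the tolerance and the short list are illustrative, not the authors' 2000-compound database):

```python
def match_targets(measured_mz, targets, tol_ppm=5.0):
    """Return names of target compounds whose exact adduct mass lies
    within tol_ppm of the measured m/z -- the core of an HRMS target screen."""
    hits = []
    for name, mz in targets.items():
        if abs(measured_mz - mz) / mz * 1e6 <= tol_ppm:
            hits.append(name)
    return hits

# Exact [M+H]+ masses for two common wastewater micropollutants:
targets = {
    "carbamazepine [M+H]+": 237.1022,
    "diclofenac [M+H]+": 296.0240,
}
print(match_targets(237.1030, targets))  # within ~3.4 ppm of carbamazepine
```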

  7. Application of Internal Standard Method for Several 3d-Transition Metallic Elements in Flame Atomic Absorption Spectrometry Using a Multi-wavelength High-resolution Spectrometer.

    Science.gov (United States)

    Toya, Yusuke; Itagaki, Toshiko; Wagatsuma, Kazuaki

    2017-01-01

    We investigated a simultaneous internal standard method in flame atomic absorption spectrometry (FAAS) in order to improve the analytical precision for 3d-transition metals contained in steel materials. For this purpose, a new spectrometer system for FAAS, comprising a bright xenon lamp as the primary radiation source and a high-resolution Echelle monochromator, was employed to measure several absorption lines within a wavelength window of ca. 0.3 nm at the same time, which enables the absorbances of an analytical line and an internal standard line to be estimated simultaneously. Considering several criteria for selecting an internal standard element and absorption line, platinum-group elements (ruthenium, rhodium, or palladium) were found suitable as internal standard elements for determining 3d-transition metal elements such as titanium, iron, and nickel, by measuring an appropriate pair of these absorption lines simultaneously. Several sources of variance in the absorption signal, such as variation in the aspirated amount of sample solution and short-period drift of the primary light source, are corrected and thus reduced when the absorbance ratio of the analytical line to the internal standard line is measured. In Ti-Pd, Ni-Rh, and Fe-Ru systems chosen as typical test samples, the repeatability of the signal responses was investigated with and without the internal standard method, resulting in better precision when the internal standard method was applied in FAAS with a nitrous oxide-acetylene flame rather than an air-acetylene flame.
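
    The variance-cancellation argument can be illustrated numerically: simulate a common aspiration fluctuation that hits the analyte and internal-standard absorbances alike, and compare relative standard deviations with and without ratioing (a toy model with assumed noise levels, not the authors' measurements):

```python
import random
import statistics

def rsd(values):
    """Relative standard deviation of a series of readings."""
    return statistics.stdev(values) / statistics.mean(values)

random.seed(1)
analyte, ratio = [], []
for _ in range(50):
    f = random.gauss(1.0, 0.05)              # common 5 % aspiration fluctuation
    a = 0.40 * f + random.gauss(0.0, 0.002)  # analyte-line absorbance
    s = 0.30 * f + random.gauss(0.0, 0.002)  # internal-standard-line absorbance
    analyte.append(a)
    ratio.append(a / s)                      # ratioing cancels the common factor

print(f"RSD analyte alone: {100 * rsd(analyte):.1f} %")
print(f"RSD with internal standard: {100 * rsd(ratio):.1f} %")
```

    Because the fluctuation enters both lines as the same multiplicative factor, it divides out of the ratio, leaving only the uncorrelated residual noise.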

  8. Numerical methods used in simulation

    International Nuclear Information System (INIS)

    Caseau, Paul; Perrin, Michel; Planchard, Jacques

    1978-01-01

    The fundamental numerical problem posed by simulation is the stability of the solution scheme. The system of equations most used is defined, since there is a family of models of increasing complexity with 3, 4 or 5 equations, although only the models with 3 and 4 equations have been used extensively. After defining what is meant by explicit or implicit, the best established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretisation may be defined: four- and eight-point schemes (in one or two dimensions) and six- and ten-point schemes (in one or two dimensions). Finally, some results are given on problems that are seldom treated, i.e. non-asymptotic stability and the stability of schemes based on finite elements [fr]

  9. Structure from motion, a low cost, very high resolution method for surveying glaciers using GoPros and opportunistic helicopter flights

    Science.gov (United States)

    Girod, L.; Nuth, C.; Schellenberger, T.

    2014-12-01

    The capability of structure-from-motion techniques to survey glaciers with very high spatial and temporal resolution is a promising tool for better understanding the dynamic changes of glaciers. Modern software and computing power allow us to produce accurate data sets from low-cost surveys, thus improving observational capabilities on a wider range of glaciers and glacial processes. In particular, highly accurate glacier volume change monitoring and 3D movement computations will be possible. Taking advantage of the helicopter flight needed to survey the ice stakes on Kronenbreen, NW Svalbard, we acquired high resolution photogrammetric data over the well-studied Midre Lovénbreen in September 2013. GoPro Hero 2 cameras were attached to the landing gear of the helicopter, acquiring two images per second. A C/A-code based GPS was used for registering the stereoscopic model. Camera clock calibration was obtained by fitting together the shapes of the flight path given by both the GPS logger and the relative orientation of the images. A DEM and an ortho-image were generated at 30 cm resolution from the 300 images collected. Comparison with a 2005 LiDAR DEM (5 m resolution) shows an absolute error in the direct registration of about 6 ± 3 m in 3D, which could be reduced to 1.5 ± 1 m by using fine point-cloud alignment algorithms on stable ground. Due to the different nature of the acquisition method, it was not possible to use tie-point based co-registration. A combination of the DEM and ortho-image is shown with the point cloud in the figure below. A second photogrammetric data set will be acquired in September 2014 to survey the annual volume change and movement. These measurements will then be compared to the annual glaciological stake mass balance and velocity measurements to assess the precision of the method for monitoring at an annual resolution.
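
    The stable-ground alignment step can be illustrated with a toy 1-D version: estimate the vertical co-registration offset between two elevation models as the median elevation difference over terrain masked as stable (the profiles and mask below are hypothetical, and real alignment is done in 3D on point clouds):

```python
def coregister_offset(reference, target, stable_mask):
    """Median vertical offset between two elevation profiles over
    stable ground -- the simplest form of DEM co-registration."""
    diffs = sorted(t - r for r, t, m in zip(reference, target, stable_mask) if m)
    mid = len(diffs) // 2
    if len(diffs) % 2:
        return diffs[mid]
    return 0.5 * (diffs[mid - 1] + diffs[mid])

# Toy profiles: the target is the reference shifted up by 6 m, except
# over the glacier (mask False) where real elevation change occurred.
ref    = [100.0, 102.0, 105.0, 110.0, 120.0, 118.0]
tgt    = [106.0, 108.0, 111.0, 104.0, 112.0, 124.0]
stable = [True,  True,  True,  False, False, True]
print(coregister_offset(ref, tgt, stable))
```

    Subtracting this offset before differencing the DEMs keeps the registration bias out of the glacier volume-change signal.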

  10. Development of a High-Resolution Laser Absorption Spectroscopy Method with Application to the Determination of Absolute Concentration of Gaseous Elemental Mercury in Air.

    Science.gov (United States)

    Srivastava, Abneesh; Hodges, Joseph T

    2018-05-07

    Isotope dilution-cold-vapor-inductively coupled plasma mass spectrometry (ID-CV-ICPMS) has become the primary standard for measurement of gaseous elemental mercury (GEM) mass concentration. However, quantitative mass spectrometry is challenging for several reasons, including (1) the need for isotopic spiking with a standard reference material, (2) the requirement for bias-free passive sampling protocols, (3) the need for a stable mass spectrometry interface design, and (4) the time and cost involved in gas sampling, sample processing, and instrument calibration. Here, we introduce a high-resolution laser absorption spectroscopy method that eliminates the need for sample-specific calibration standards or detailed analysis of sample treatment losses. This technique involves a tunable, single-frequency laser absorption spectrometer that measures isotopically resolved spectra of the elemental mercury (Hg) 6¹S₀ ← 6³P₁ intercombination transition near λ = 253.7 nm. Measured spectra are accurately modeled from first principles using the Beer-Lambert law and Voigt line profiles combined with literature values for line positions, line shape parameters, and the spontaneous-emission Einstein coefficient to obtain GEM mass concentration values. We present an application of this method to the measurement of the equilibrium concentration of mercury vapor near room temperature. Three closed systems are considered: two-phase mixtures of liquid Hg and its vapor, and binary two-phase mixtures of Hg-air and Hg-N₂ near atmospheric pressure. Within the experimental relative standard uncertainty of 0.9-1.5%, congruent values of the equilibrium Hg vapor concentration are obtained for the Hg-only, Hg-air, and Hg-N₂ systems, in agreement with thermodynamic predictions. We also discuss detection limits and the potential of the present technique to serve as an absolute primary standard for measurements of gas-phase mercury concentration and isotopic composition.
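
    Once the spectrum is fitted, the concentration retrieval rests on the Beer-Lambert law, A = σnL; a schematic rearrangement with made-up absorbance, cross-section, and path-length values (not the paper's numbers, and ignoring the line-profile integration the authors perform):

```python
AVOGADRO = 6.02214076e23   # 1/mol
HG_MOLAR_MASS = 200.59     # g/mol

def number_density(absorbance, cross_section_cm2, path_cm):
    """Beer-Lambert law A = sigma * n * L (base-e absorbance),
    solved for the absorber number density n in cm^-3."""
    return absorbance / (cross_section_cm2 * path_cm)

def mass_concentration_g_per_cm3(n_per_cm3):
    """Convert a number density to a mass concentration."""
    return n_per_cm3 * HG_MOLAR_MASS / AVOGADRO

# Hypothetical peak absorbance, absorption cross-section, and cell length:
n = number_density(absorbance=0.5, cross_section_cm2=3.3e-14, path_cm=10.0)
print(f"{mass_concentration_g_per_cm3(n):.3e} g/cm^3")
```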

  11. Numerical computer methods part D

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The aim of this volume is to brief researchers of the importance of data analysis in enzymology, and of the modern methods that have developed concomitantly with computer hardware. It is also to validate researchers' computer programs with real and synthetic data to ascertain that the results produced are what they expected. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.

  12. Numerical methods in software and analysis

    CERN Document Server

    Rice, John R

    1992-01-01

    Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem-there is too little time to learn too much. This text solves that problem by using high-quality mathematical software. In fact, the objective of the text is to present scientific problem solving using standard mathematical software. This book discusses numerous programs and software packages focusing on the IMSL library (including the PROTRAN system) and ACM Algorithm

  13. An introduction to numerical methods and analysis

    CERN Document Server

    Epperson, James F

    2013-01-01

    Praise for the First Edition: "... outstandingly appealing with regard to its style, contents, considerations of requirements of practice, choice of examples, and exercises." -Zentralblatt MATH "... carefully structured with many detailed worked examples." -The Mathematical Gazette The Second Edition of the highly regarded An Introduction to Numerical Methods and Analysis provides a fully revised guide to numerical approximation. The book continues to be accessible and expertly guides readers through the many available techniques of numerical methods and analysis. An Introduction to

  14. Isogeometric methods for numerical simulation

    CERN Document Server

    Bordas, Stéphane

    2015-01-01

    The book presents the state of the art in isogeometric modeling and shows how the method has advanced. First an introduction to geometric modeling with NURBS and T-splines is given, followed by the implementation into computer software. The implementation in both the FEM and BEM is discussed.

  15. Method validation for high resolution sector field inductively coupled plasma mass spectrometry determination of the emerging contaminants in the open ocean: Rare earth elements as a case study

    Science.gov (United States)

    Wysocka, Irena; Vassileva, Emilia

    2017-02-01

    An analytical procedure for the determination of fourteen rare earth elements (REEs) in seawater samples has been developed and validated. The elements (La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu) at ultra-trace level were measured by high resolution sector field inductively coupled plasma mass spectrometry (HR ICP-SFMS) after off-line analyte pre-concentration and matrix separation. The sample pre-treatment was carried out by the commercially available automated system seaFAST-pico™, a low-pressure ion chromatography technique based on solid phase extraction principles. Efficient elimination of the seawater matrix and up to 50-fold pre-concentration of REEs enabled their accurate and precise quantification at the ng L⁻¹ level. A validation approach in line with the requirements of the ISO/IEC 17025 standard and the Eurachem guidelines was followed. With this in mind, selectivity, working range, linearity, recovery (from 92% to 102%), repeatability (1%-4%), intermediate precision (2%-6%), and limits of detection (0.001-0.08 ng L⁻¹) were systematically assessed. The total uncertainty associated with each result was estimated and the main sources of uncertainty sorted out. All major contributions to the combined uncertainty of the obtained results were identified and propagated together, following the ISO/GUM guidelines. The relative expanded uncertainty was estimated in the range from 10.4% to 11.6% (k = 2). Demonstration of the traceability of measurement results was also presented. Due to the low limits of detection, this method enables the determination of ultra-low levels of REEs in open seawater as well as of small variations in their concentrations. The potential of the proposed analytical procedure, based on the combination of seaFAST-pico™ for sample preparation and HR ICP-SFMS, was demonstrated by direct analysis of seawater from different regions of the world.
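
    The GUM-style combination of an uncertainty budget reduces to a root-sum-of-squares of independent contributions, expanded with a coverage factor; a sketch with an invented budget (the real budget is in the paper and is not reproduced here):

```python
import math

def expanded_uncertainty(rel_components, k=2.0):
    """Combine independent relative standard uncertainties in
    quadrature (ISO/GUM) and expand with coverage factor k."""
    u_combined = math.sqrt(sum(u * u for u in rel_components))
    return k * u_combined

# Hypothetical relative standard uncertainties (as fractions):
# repeatability, calibration, recovery, blank correction
budget = [0.03, 0.02, 0.025, 0.02]
U_rel = expanded_uncertainty(budget)
print(f"Relative expanded uncertainty (k=2): {100 * U_rel:.1f} %")
```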

  16. Monitoring and Method Development of Hg in Istanbul Airborne Particulates by Solid Sampling High-Resolution Continuum Source Electrothermal Atomic Absorption Spectrometry

    Directory of Open Access Journals (Sweden)

    Soydemir E.

    2014-07-01

    In this work, a method has been developed for the monitoring and determination of mercury in PM2.5 airborne particulates by solid sampling high-resolution continuum source electrothermal atomic absorption spectrometry. The PM2.5 airborne particulates were collected on quartz filters using high volume samplers (500 L/min) in Istanbul (Turkey) for 96 hours every month over one year. First, the experimental conditions as well as the validation tests were optimized using collected filters. For this purpose, the effects on accuracy and precision of the atomization temperature, the amount of sample introduced into the furnace, the addition of acids and/or KMnO4 to the sample, covering of the graphite tube and platform, and the use of Ag nanoparticle, Au nanoparticle, and Pd solutions were investigated. After optimization of the experimental conditions, the mercury concentrations were determined in the collected filters. The filters with PM2.5 airborne particulates were dried, divided into small fine particles, and then the Hg concentrations were determined directly. In order to eliminate any error due to the sensitivity difference between aqueous standards and solid samples, the quantification was performed using solid calibrants. The limit of detection, based on three times the standard deviation of ten atomizations of an unused filter, was 30 ng/g. The Hg content was dependent on the sampling site, season, etc., ranging from

  17. High-resolution melting analysis, a simple and effective method for reliable mutation scanning and frequency studies in the ACADVL gene

    DEFF Research Database (Denmark)

    Olsen, Rikke Katrine Jentoft; Dobrowolski, Steven F; Kjeldsen, Margrethe

    2010-01-01

    Very-long-chain acyl-CoA dehydrogenase deficiency (VLCADD), the second most common fatty acid oxidation disorder detected by expanded newborn screening, is used here to demonstrate accurate and fast diagnostic evaluation of the ACADVL gene utilizing DNA extracted from the newborn screening dried blood spot and high resolution melt...

  18. Numerical computer methods part E

    CERN Document Server

    Johnson, Michael L

    2004-01-01

    The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.

  19. Excel spreadsheet in teaching numerical methods

    Science.gov (United States)

    Djamila, Harimi

    2017-09-01

    One of the important objectives in teaching numerical methods to undergraduate students is to bring them to a comprehension of numerical-method algorithms. Although manual calculation is important in understanding the procedure, it is time consuming and prone to error. This is specifically the case for the iteration procedures used in many numerical methods. Currently, many commercial programs are useful in teaching numerical methods, such as Matlab, Maple, and Mathematica. These are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming, which can be used either on or off campus. The students will not be distracted by writing code. It must be emphasized that general commercial software should still be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.

  20. High resolution time integration for SN radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2009-01-01

    First-order, second-order, and high resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high resolution method employs a rate of convergence better than first-order, but also suppresses artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. The high resolution method achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high resolution method. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both second-order and high resolution converged to the same solution as the first-order with better convergence rates. High resolution is more accurate than first-order and matches or exceeds the second-order method.
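
    The nonlinear stencil adaptation described above is the classic flux-limiter idea; a minimal minmod-limited sketch for scalar linear advection in space (illustrative of the mechanism, not the authors' S_N time-stencil implementation), showing how the limiter falls back to first-order upwind near steep gradients and so avoids the oscillations of an unlimited second-order scheme:

```python
def minmod(a, b):
    """Zero at extrema (forces a local first-order fallback),
    otherwise the smaller-magnitude one-sided slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_upwind_step(u, c):
    """One step of a minmod-limited scheme for u_t + u_x = 0 on a
    periodic grid; c is the CFL number (0 < c <= 1)."""
    n = len(u)
    s = [minmod(u[i] - u[i - 1], u[(i + 1) % n] - u[i]) for i in range(n)]
    out = []
    for i in range(n):
        flux_right = u[i] + 0.5 * (1.0 - c) * s[i]          # interface i+1/2
        flux_left = u[i - 1] + 0.5 * (1.0 - c) * s[i - 1]   # interface i-1/2
        out.append(u[i] - c * (flux_right - flux_left))
    return out

# Advect a step profile once around a 20-cell periodic domain at c = 0.5:
u = [1.0] * 10 + [0.0] * 10
for _ in range(40):
    u = limited_upwind_step(u, 0.5)
print(min(u) >= -1e-12 and max(u) <= 1.0 + 1e-12)  # no spurious over/undershoot
```

    With s set to zero everywhere the scheme reduces to first-order upwind; with the limiter removed (s = u[i+1] - u[i]) it becomes Lax-Wendroff, which overshoots at the step.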

  1. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st-order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented, using an enhanced watershed method, in order to improve the classification accuracy.

  2. Progress in high-resolution x-ray holographic microscopy

    International Nuclear Information System (INIS)

    Jacobsen, C.; Kirz, J.; Howells, M.; McQuaid, K.; Rothman, S.; Feder, R.; Sayre, D.

    1987-07-01

    Among the various types of x-ray microscopes that have been demonstrated, the holographic microscope has had the largest gap between promise and performance. The difficulties of fabricating x-ray optical elements have led some to view holography as the most attractive method for obtaining the ultimate in high resolution x-ray micrographs; however, we know of no investigations prior to 1987 that clearly demonstrated submicron resolution in reconstructed images. Previous efforts suffered from problems such as limited resolution and dynamic range in the recording media, low coherent x-ray flux, and aberrations and diffraction limits in visible light reconstruction. We have addressed the recording limitations through the use of an undulator x-ray source and high-resolution photoresist recording media. For improved results in the readout and reconstruction steps, we have employed metal shadowing and transmission electron microscopy, along with numerical reconstruction techniques. We believe that this approach will allow holography to emerge as a practical method of high-resolution x-ray microscopy. 30 refs., 4 figs

  3. Progress in high-resolution x-ray holographic microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsen, C.; Kirz, J.; Howells, M.; McQuaid, K.; Rothman, S.; Feder, R.; Sayre, D.

    1987-07-01

    Among the various types of x-ray microscopes that have been demonstrated, the holographic microscope has had the largest gap between promise and performance. The difficulties of fabricating x-ray optical elements have led some to view holography as the most attractive method for obtaining the ultimate in high resolution x-ray micrographs; however, we know of no investigations prior to 1987 that clearly demonstrated submicron resolution in reconstructed images. Previous efforts suffered from problems such as limited resolution and dynamic range in the recording media, low coherent x-ray flux, and aberrations and diffraction limits in visible light reconstruction. We have addressed the recording limitations through the use of an undulator x-ray source and high-resolution photoresist recording media. For improved results in the readout and reconstruction steps, we have employed metal shadowing and transmission electron microscopy, along with numerical reconstruction techniques. We believe that this approach will allow holography to emerge as a practical method of high-resolution x-ray microscopy. 30 refs., 4 figs.

  4. Berkeley High-Resolution Ball

    International Nuclear Information System (INIS)

    Diamond, R.M.

    1984-10-01

    Criteria for a high-resolution γ-ray system are discussed. Desirable properties are high resolution, good response function, and moderate solid angle so as to achieve not only double- but triple-coincidences with good statistics. The Berkeley High-Resolution Ball involved the first use of bismuth germanate (BGO) as an anti-Compton shield for Ge detectors. The resulting compact shield permitted rather close packing of 21 detectors around a target. In addition, a small central BGO ball gives the total γ-ray energy and multiplicity, as well as the angular pattern of the γ rays. The 21-detector array is nearly complete, and the central ball has been designed, but not yet constructed. First results taken with 9 detector modules are shown for the nucleus ¹⁵⁶Er. The complex decay scheme indicates a transition from collective rotation (prolate shape) to single-particle states (possibly oblate) near spin 30 ħ, and has other interesting features.

  5. Numerical Methods for Partial Differential Equations

    CERN Document Server

    Guo, Ben-yu

    1987-01-01

    These Proceedings of the first Chinese Conference on Numerical Methods for Partial Differential Equations cover topics such as difference methods, finite element methods, spectral methods, splitting methods, parallel algorithms, etc., their theoretical foundations and applications to engineering. Numerical methods both for boundary value problems of elliptic equations and for initial-boundary value problems of evolution equations, such as hyperbolic systems and parabolic equations, are covered. The 16 papers of this volume present recent or new unpublished results and provide a good overview of current research being done in this field in China.

  6. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  7. High resolution metric imaging payload

    Science.gov (United States)

    Delclaud, Y.

    2017-11-01

    Alcatel Space Industries has become Europe's leader in the field of high and very high resolution optical payloads, within the framework of Earth observation systems able to provide military governments with metric images from space. This leadership has allowed ALCATEL to propose for the export market, within a French collaboration framework, a complete space-based system for metric observation.

  8. Adaptive optics with pupil tracking for high resolution retinal imaging.

    Science.gov (United States)

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  9. High resolution time integration for Sn radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2008-01-01

    First-order, second-order, and high-resolution time discretization schemes are implemented and studied for the S_n equations. The high-resolution method achieves a rate of convergence better than first order while also suppressing the artificial oscillations introduced by second-order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high-resolution schemes converged to the same solution as the first-order scheme, with better convergence rates. The high-resolution method is more accurate than first order and matches or exceeds the second-order method. (authors)
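As an illustration of the trade-off this abstract describes (a toy sketch, not the paper's actual S_n implementation), the code below compares first-order upwind advection against a minmod flux-limited scheme on a square pulse: the limited scheme is markedly less diffusive, yet, unlike an unlimited second-order (Lax-Wendroff) scheme, it introduces no spurious oscillations. Grid size, Courant number, and pulse shape are all made-up illustration values.

```python
# Linear advection u_t + a u_x = 0 (a > 0) on a periodic grid:
# first-order upwind vs. a minmod flux-limited (high-resolution) scheme.

def minmod(r):
    """Minmod flux limiter: phi(r) = max(0, min(1, r))."""
    return max(0.0, min(1.0, r))

def step(u, nu, limited):
    """One time step; nu = a*dt/dx is the Courant number (0 < nu <= 1)."""
    n = len(u)
    flux = [0.0] * n  # flux[i] ~ interface value at i+1/2 (scaled by 1/a)
    for i in range(n):
        du = u[(i + 1) % n] - u[i]
        corr = 0.0
        if limited and du != 0.0:
            r = (u[i] - u[(i - 1) % n]) / du      # smoothness ratio
            corr = 0.5 * (1.0 - nu) * minmod(r) * du
        flux[i] = u[i] + corr                      # upwind + limited correction
    return [u[i] - nu * (flux[i] - flux[(i - 1) % n]) for i in range(n)]

def advect(u0, nu, steps, limited):
    u = list(u0)
    for _ in range(steps):
        u = step(u, nu, limited)
    return u

# Square pulse advected exactly 20 cells (nu = 0.5, 40 steps) on 100 cells.
n, nu, steps = 100, 0.5, 40
u0 = [1.0 if 10 <= i < 30 else 0.0 for i in range(n)]
exact = [u0[(i - 20) % n] for i in range(n)]
up = advect(u0, nu, steps, limited=False)
lim = advect(u0, nu, steps, limited=True)
err_up = sum(abs(a - b) for a, b in zip(up, exact))    # L1 error, upwind
err_lim = sum(abs(a - b) for a, b in zip(lim, exact))  # L1 error, limited
```

With phi = 0 the flux reduces to pure upwinding and with phi = 1 to Lax-Wendroff; the minmod choice keeps the scheme total-variation-diminishing, so the pulse stays within [0, 1] while the L1 error drops well below the upwind result.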

  10. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method and HLLC Riemann solver are used for discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to some one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results obtained in this attempt exhibit very good agreement with experimental results, as well as previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as material discontinuities and interfacial instabilities, without any oscillation or additional diffusion. Numerical examples show that the results of the method presented here compare well with other sophisticated modeling methods, like adaptive mesh refinement (AMR) and local mesh refinement (LMR), for one- and two-dimensional problems.

  11. Design of heat exchangers by numerical methods

    International Nuclear Information System (INIS)

    Konuk, A.A.

    1981-01-01

    Differential equations describing the heat transfer in shell-and-tube heat exchangers are derived and solved numerically. The ΔT_lm method is compared with the proposed method in cases where the specific heat at constant pressure, Cp, and the overall heat transfer coefficient, U, vary with temperature. The error of the ΔT_lm method for the computation of the exchanger length is less than ±10%. However, the numerical method, being more accurate and at the same time easy to use and economical, is recommended for the design of shell-and-tube heat exchangers. (Author) [pt
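To make the comparison concrete, here is a hedged sketch (our illustration; the stream data and U value are invented, not taken from the paper) of the log-mean temperature difference (ΔT_lm) design equation next to a numerical integration along the hot stream of a counterflow exchanger. With constant Cp and U the two agree; the numerical route is the one that generalizes when Cp and U vary with temperature.

```python
# Counterflow shell-and-tube sizing: A = Q / (U * dT_lm) vs. integrating
# dA = C_h dT_h / (U * (T_h - T_c)) along the hot stream.
import math

U = 500.0                    # overall heat transfer coefficient, W/(m^2 K)
C_h, C_c = 2000.0, 1000.0    # capacity rates m_dot * Cp, W/K
Th_in, Th_out = 150.0, 130.0 # hot-stream temperatures, degC
Tc_in = 30.0                 # cold-stream inlet, degC
Q = C_h * (Th_in - Th_out)   # duty, W
Tc_out = Tc_in + Q / C_c     # counterflow energy balance

# LMTD design equation using the end temperature differences.
dT1 = Th_in - Tc_out
dT2 = Th_out - Tc_in
dT_lm = (dT1 - dT2) / math.log(dT1 / dT2)
A_lmtd = Q / (U * dT_lm)

def local_dT(Th):
    """Local hot-cold temperature difference from a sectional energy balance."""
    Tc = Tc_out - (C_h / C_c) * (Th_in - Th)
    return Th - Tc

# Numerical alternative: trapezoidal integration over the hot temperature.
steps = 2000
h = (Th_in - Th_out) / steps
A_num = 0.0
for k in range(steps):
    Ta = Th_out + k * h
    Tb = Ta + h
    A_num += 0.5 * h * (C_h / (U * local_dT(Ta)) + C_h / (U * local_dT(Tb)))
```

For these constant-property data both routes give an area near 0.89 m²; letting U or Cp depend on temperature only changes the integrand, which is exactly where the abstract argues the numerical method earns its keep.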

  12. Numerical analysis in electromagnetics the TLM method

    CERN Document Server

    Saguet, Pierre

    2013-01-01

    The aim of this book is to give a broad overview of the TLM (Transmission Line Matrix) method, which is one of the "time-domain numerical methods". These methods are reputed for their significant reliance on computer resources. However, they have the advantage of being highly general. The TLM method has acquired a reputation among numerous teams for being a powerful and effective tool, and still benefits today from significant theoretical developments. In particular, in recent years, its ability to simulate various situations with excellent precision, including complex materials, has been

  13. High resolution Neutron and Synchrotron Powder Diffraction

    International Nuclear Information System (INIS)

    Hewat, A.W.

    1986-01-01

    The use of high-resolution powder diffraction has grown rapidly in the past years, with the development of Rietveld (1967) methods of data analysis and new high-resolution diffractometers and multidetectors. The number of publications in this area has increased from a handful per year until 1973 to 150 per year in 1984, with a ten-year total of over 1000. These papers cover a wide area of solid-state chemistry, physics, and materials science, and have been grouped under 20 subject headings, ranging from catalysts to zeolites, and from battery electrode materials to pre-stressed superconducting wires. In 1985 two new high-resolution diffractometers are being commissioned, one at the SNS laboratory near Oxford, and one at the ILL in Grenoble. In different ways these machines represent perhaps the ultimate that can be achieved with neutrons and will permit refinement of complex structures with about 250 parameters and unit cell volumes of about 2500 Å³. The new European Synchrotron Facility will complement the Grenoble neutron diffractometers, and extend the role of high-resolution powder diffraction to the direct solution of crystal structures, pioneered in Sweden.

  14. Numerical Methods for Radiation Magnetohydrodynamics in Astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Klein, R I; Stone, J M

    2007-11-20

    We describe numerical methods for solving the equations of radiation magnetohydrodynamics (MHD) for astrophysical fluid flow. Such methods are essential for the investigation of the time-dependent and multidimensional dynamics of a variety of astrophysical systems, although our particular interest is motivated by problems in star formation. Over the past few years, the authors have been members of two parallel code development efforts, and this review reflects that organization. In particular, we discuss numerical methods for MHD as implemented in the Athena code, and numerical methods for radiation hydrodynamics as implemented in the Orion code. We discuss the challenges introduced by the use of adaptive mesh refinement in both codes, as well as the most promising directions for future developments.

  15. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...

  16. Numerical Methods for Radiation Magnetohydrodynamics in Astrophysics

    International Nuclear Information System (INIS)

    Klein, R I; Stone, J M

    2007-01-01

    We describe numerical methods for solving the equations of radiation magnetohydrodynamics (MHD) for astrophysical fluid flow. Such methods are essential for the investigation of the time-dependent and multidimensional dynamics of a variety of astrophysical systems, although our particular interest is motivated by problems in star formation. Over the past few years, the authors have been members of two parallel code development efforts, and this review reflects that organization. In particular, we discuss numerical methods for MHD as implemented in the Athena code, and numerical methods for radiation hydrodynamics as implemented in the Orion code. We discuss the challenges introduced by the use of adaptive mesh refinement in both codes, as well as the most promising directions for future developments

  17. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for uranium dioxide isolated rods and comparing the results with empirical values. (orig.)

  18. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such a configuration is still out of reach for present supercomputers, some hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of unsteady nearfield CFD with an acoustic propagation solver based on the resolution of the Euler equations for midfield propagation in an inhomogeneous field, and the use of an integral solver for farfield acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform unsteady simulations of complex noise sources. This method allows for a significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. The method is described, and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate with experiments on similar configurations, they represent, up to now, the first 3D computations of this kind of flow. (orig.)

  19. Spectral Methods in Numerical Plasma Simulation

    DEFF Research Database (Denmark)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded...

  20. Hybrid numerical calculation method for bend waveguides

    OpenAIRE

    Garnier, Lucas; Saavedra, C.; Castro-Beltran, Rigoberto; Lucio, José Luis; Bêche, Bruno

    2017-01-01

    National audience; Knowledge of how light behaves in a waveguide with a radius of curvature is becoming more and more important because of the development of integrated photonics, which includes ring micro-resonators, phasars, and other devices with a radius of curvature. This work presents a numerical calculation method to determine the eigenvalues and eigenvectors of curved waveguides. This method is a hybrid method which uses at first a conformal transformation of the complex plane gene...

  1. Lagrangian numerical methods for ocean biogeochemical simulations

    Science.gov (United States)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist in augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.
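A minimal sketch of the coupling idea (our toy construction, not the authors' scheme): particles carry concentrations, and symmetric exchanges with neighbours mimic diffusion. Because each update is a convex combination of neighbouring values, total mass is conserved exactly and the maximum principle holds by construction, mirroring the two properties claimed in the abstract.

```python
# Diffusion-mimicking exchange between neighbouring Lagrangian particles
# (periodic chain). Each sweep replaces c_i by a convex combination of
# c_{i-1}, c_i, c_{i+1}, so mass is conserved and no new extrema appear.
import random

random.seed(1)
n = 50
conc = [random.random() for _ in range(n)]   # concentration on each particle
k = 0.25                                     # exchange strength; need k <= 0.5

def exchange(c, k):
    """One Jacobi-style exchange sweep between neighbouring particles."""
    m = len(c)
    return [
        (1.0 - 2.0 * k) * c[i] + k * c[(i - 1) % m] + k * c[(i + 1) % m]
        for i in range(m)
    ]

mass0, lo0, hi0 = sum(conc), min(conc), max(conc)
for _ in range(100):
    conc = exchange(conc, k)
```

Setting k = 0 switches the coupling off entirely, which is the "tune the strength of the diffusive terms down to zero" property: the particles then carry their concentrations unchanged along the characteristics.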

  2. Numerical methods and analysis of multiscale problems

    CERN Document Server

    Madureira, Alexandre L

    2017-01-01

    This book is about numerical modeling of multiscale problems, and introduces several asymptotic analysis and numerical techniques which are necessary for a proper approximation of equations that depend on different physical scales. Aimed at advanced undergraduate and graduate students in mathematics, engineering and physics – or researchers seeking a no-nonsense approach –, it discusses examples in their simplest possible settings, removing mathematical hurdles that might hinder a clear understanding of the methods. The problems considered are given by singular perturbed reaction advection diffusion equations in one and two-dimensional domains, partial differential equations in domains with rough boundaries, and equations with oscillatory coefficients. This work shows how asymptotic analysis can be used to develop and analyze models and numerical methods that are robust and work well for a wide range of parameters.

  3. Numerical methods in electron magnetic resonance

    International Nuclear Information System (INIS)

    Soernes, A.R.

    1998-01-01

    The focal point of the thesis is the development and use of numerical methods in the analysis, simulation and interpretation of Electron Magnetic Resonance experiments on free radicals in solids to uncover the structure, the dynamics and the environment of the system

  4. Numerical methods in electron magnetic resonance

    Energy Technology Data Exchange (ETDEWEB)

    Soernes, A.R

    1998-07-01

    The focal point of the thesis is the development and use of numerical methods in the analysis, simulation and interpretation of Electron Magnetic Resonance experiments on free radicals in solids to uncover the structure, the dynamics and the environment of the system.

  5. Numerical methods in nuclear engineering. Part 1

    International Nuclear Information System (INIS)

    Phillips, G.J.

    1983-08-01

    These proceedings, published in two parts, contain the full text of 56 papers and summaries of six papers presented at the conference. They cover the use of numerical methods in thermal hydraulics, reactor physics, neutron diffusion, subchannel analysis, risk assessment, transport theory, and fuel behaviour.

  6. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    The paper deals with the initial boundary value problem for quasilinear first-order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimates of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  7. High-Resolution MRI in Rectal Cancer

    International Nuclear Information System (INIS)

    Dieguez, Adriana

    2010-01-01

    High-resolution MRI is the best method of assessing the relation of a rectal tumor to the potential circumferential resection margin (CRM), and it is therefore currently considered the method of choice for local staging of rectal cancer. The primary surgery for rectal cancer is total mesorectal excision (TME), whose plane of dissection is formed by the mesorectal fascia surrounding the mesorectal fat and rectum. This fascia determines the circumferential resection margin. At the same time, high-resolution MRI allows adequate pre-operative identification of important prognostic risk factors, improving the selection and indication of therapy for each patient. This information includes, besides the circumferential resection margin, tumor and lymph node staging, extramural vascular invasion, and the description of lower rectal tumors. All of these should be described in detail in the report, as part of the discussion in the multidisciplinary team where the decisions involving the patient with rectal cancer take place. The aim of this study is to provide the information necessary to understand the use of high-resolution MRI in the identification of prognostic risk factors in rectal cancer. The technical requirements and the standardized report for this study are described, as well as the anatomical landmarks of importance for total mesorectal excision (TME), which, as noted, is the surgery of choice for rectal cancer. (authors) [es

  8. A hybrid numerical method for orbit correction

    International Nuclear Information System (INIS)

    White, G.; Himel, T.; Shoaee, H.

    1997-09-01

    The authors describe a simple hybrid numerical method for beam orbit correction in particle accelerators. The method both overcomes degeneracy in the linear system being solved and respects bounds on the solution. It uses the Singular Value Decomposition (SVD) to find and remove the null-space of the system, followed by a bounded linear least squares analysis of the remaining recast problem. It was developed for correcting orbit and dispersion in the B-factory rings.

  9. Conservative numerical methods for solitary wave interactions

    Energy Technology Data Exchange (ETDEWEB)

    Duran, A; Lopez-Marcos, M A [Departamento de Matematica Aplicada y Computacion, Facultad de Ciencias, Universidad de Valladolid, Paseo del Prado de la Magdalena s/n, 47005 Valladolid (Spain)

    2003-07-18

    The purpose of this paper is to show the advantages of using numerical methods that preserve invariant quantities in the study of solitary wave interactions for the regularized long wave equation. It is shown that the so-called conservative methods are more appropriate for studying the phenomenon and provide a dynamic point of view that allows us to estimate the changes in the parameters of the solitary waves after the collision.
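The value of invariant preservation can be seen in a much simpler setting than the regularized long wave equation: for the harmonic oscillator x'' = -x (a stand-in we chose for illustration, not the paper's problem), explicit Euler steadily inflates the quadratic invariant E = x² + v², while the implicit midpoint rule, a conservative method, preserves it to roundoff.

```python
# Explicit Euler vs. implicit midpoint on x' = v, v' = -x.
# The invariant E = x^2 + v^2 is exactly conserved by the midpoint rule
# for this linear system (the update is a Cayley transform, hence a rotation).

def euler_step(x, v, dt):
    return x + dt * v, v - dt * x

def midpoint_step(x, v, dt):
    # Implicit midpoint solved in closed form for the linear system:
    #   x1 = x0 + h*(v0 + v1),  v1 = v0 - h*(x0 + x1),  h = dt/2.
    h = 0.5 * dt
    v1 = ((1.0 - h * h) * v - 2.0 * h * x) / (1.0 + h * h)
    x1 = x + h * (v + v1)
    return x1, v1

def energy(x, v):
    return x * x + v * v

dt, steps = 0.1, 1000
xe, ve = 1.0, 0.0   # explicit Euler trajectory
xm, vm = 1.0, 0.0   # implicit midpoint trajectory
for _ in range(steps):
    xe, ve = euler_step(xe, ve, dt)
    xm, vm = midpoint_step(xm, vm, dt)
```

After 1000 steps the Euler energy has grown by a factor (1 + dt²)^1000, i.e. by orders of magnitude, while the midpoint energy stays at its initial value; this is the same qualitative advantage the abstract attributes to conservative schemes for solitary wave collisions.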

  10. Theoretical and numerical method in aeroacoustics

    Directory of Open Access Journals (Sweden)

    Nicuşor ALEXANDRESCU

    2010-06-01

    The paper deals with the mathematical and numerical modeling of the aerodynamic noise generated by the interaction of the fluid flow with the solid structure of a rotor blade. Our analysis uses Lighthill's acoustic analogy. Lighthill's idea was to recast the fundamental equations of motion into a wave equation for the acoustic fluctuation with a source term on the right-hand side. The obtained wave equation is solved numerically by spatial discretization. The method is applied to the case of a monopole source placed at different points on the blade surface to study its effect on noise propagation.

  11. Numerical methods for scientists and engineers

    CERN Document Server

    Antia, H M

    2012-01-01

    This book presents an exhaustive and in-depth exposition of the various numerical methods used in scientific and engineering computations. It emphasises the practical aspects of numerical computation and discusses various techniques in sufficient detail to enable their implementation in solving a wide range of problems. The main addition in the third edition is a new Chapter on Statistical Inferences. There is also some addition and editing in the next chapter on Approximations. With this addition 12 new programs have also been added.

  12. Numerical methods for differential equations and applications

    International Nuclear Information System (INIS)

    Ixaru, L.G.

    1984-01-01

    This book is addressed to persons who, without being professionals in applied mathematics, are often faced with the problem of numerically solving differential equations. In each of the first three chapters a definite class of methods is discussed for the solution of the initial value problem for ordinary differential equations: multistep methods; one-step methods; and piecewise perturbation methods. The fourth chapter is mainly focussed on the boundary value problems for linear second-order equations, with a section devoted to the Schroedinger equation. In the fifth chapter the eigenvalue problem for the radial Schroedinger equation is solved in several ways, with computer programs included. (Auth.)

  13. High-resolution 3D coronary vessel wall imaging with near 100% respiratory efficiency using epicardial fat tracking: reproducibility and comparison with standard methods.

    Science.gov (United States)

    Scott, Andrew D; Keegan, Jennifer; Firmin, David N

    2011-01-01

    To quantitatively assess the performance and reproducibility of 3D spiral coronary artery wall imaging with beat-to-beat respiratory-motion-correction (B2B-RMC) compared to navigator-gated 2D spiral and turbo-spin-echo (TSE) acquisitions. High-resolution (0.7 × 0.7 mm) cross-sectional right coronary wall acquisitions were performed in 10 subjects using four techniques (B2B-RMC 3D spiral with alternate (2RR) and single (1RR) R-wave gating, navigator-gated 2D spiral (2RR), and navigator-gated 2D TSE (2RR)) on two occasions. Wall thickness measurements were compared with repeated-measures analysis of variance (ANOVA). Reproducibility was assessed with the intraclass correlation coefficient (ICC). In all, 91% (73/80) of acquisitions were successful (failures: four TSE, two 3D spiral (1RR) and one 3D spiral (2RR)). Respiratory efficiency of the B2B-RMC was less variable and substantially higher than for navigator gating (99.6 ± 1.2% vs. 39.0 ± 7.5%). B2B-RMC permits coronary vessel wall assessment over multiple thin contiguous slices in a clinically feasible duration. Excellent reproducibility of the technique potentially enables studies of disease progression/regression. Copyright © 2010 Wiley-Liss, Inc.

  14. Using a High-Resolution Ensemble Modeling Method to Inform Risk-Based Decision-Making at Taylor Park Dam, Colorado

    Science.gov (United States)

    Mueller, M.; Mahoney, K. M.; Holman, K. D.

    2015-12-01

    The Bureau of Reclamation (Reclamation) is responsible for the safety of Taylor Park Dam, located in central Colorado at an elevation of 9300 feet. A key aspect of dam safety is anticipating extreme precipitation, runoff, and the associated inflow of water to the reservoir within a probabilistic framework for risk analyses. The Cooperative Institute for Research in Environmental Sciences (CIRES) has partnered with Reclamation to improve understanding and estimation of precipitation in the western United States, including the Taylor Park watershed. A significant challenge is that Taylor Park Dam is located in a relatively data-sparse region, surrounded by mountains exceeding 12,000 feet. To better estimate heavy precipitation events in this basin, a high-resolution modeling approach is used. The Weather Research and Forecasting (WRF) model is employed to simulate events that have produced observed peaks in streamflow at the location of interest. Importantly, an ensemble of model simulations is run for each event so that uncertainty bounds (i.e., forecast error) may be provided and the model outputs may be more effectively used in Reclamation's risk assessment framework. Model estimates of precipitation (and their uncertainty) are then used in rainfall-runoff models to determine the probability of inflows to the reservoir for use in Reclamation's dam safety risk analyses.

  15. Tunneling Splittings in Vibronic Structure of CH_3F^+ ( X^2E): Studied by High Resolution Photoelectron Spectra and AB Initio Theoretical Method

    Science.gov (United States)

    Mo, Yuxiang; Gao, Shuming; Dai, Zuyang; Li, Hua

    2013-06-01

    We report a combined experimental and theoretical study on the vibronic structure of CH_3F^+. The results show that tunneling splittings of vibrational energy levels occur in CH_3F^+ due to the Jahn-Teller effect. Experimentally, we have measured a high-resolution ZEKE spectrum of CH_3F up to 3500 cm^-1 above the ground state. Theoretically, we performed an ab initio calculation based on the diabatic model. The adiabatic potential energy surfaces (APES) of CH_3F^+ have been calculated at the MRCI/CAS/avq(t)z level and expressed by Taylor expansions with normal coordinates as variables. The energy gradients for the lower and upper APES, the derivative couplings between them, and also the energies of the APES have been used to determine the coefficients in the Taylor expansions. The spin-vibronic energy levels have been calculated by accounting for all six vibrational modes and their couplings. The experimental ZEKE spectra were assigned based on the theoretical calculations. W. Domcke, D. R. Yarkony, and H. Köppel (Eds.), Conical Intersections: Electronic Structure, Dynamics and Spectroscopy (World Scientific, Singapore, 2004). M. S. Schuurman, D. E. Weinberg, and D. R. Yarkony, J. Chem. Phys. 127, 104309 (2007).

  16. Numerical methods and optimization a consumer guide

    CERN Document Server

    Walter, Éric

    2014-01-01

    Initial training in pure and applied sciences tends to present problem-solving as the process of elaborating explicit closed-form solutions from basic principles, and then using these solutions in numerical applications. This approach is only applicable to very limited classes of problems that are simple enough for such closed-form solutions to exist. Unfortunately, most real-life problems are too complex to be amenable to this type of treatment. Numerical Methods and Optimization – A Consumer Guide presents methods for dealing with them. Shifting the paradigm from formal calculus to numerical computation, the text makes it possible for the reader to discover how to escape the dictatorship of those particular cases that are simple enough to receive a closed-form solution, and thus gain the ability to solve complex, real-life problems; understand the principles behind recognized algorithms used in state-of-the-art numerical software; learn the advantag...

  17. Intelligent numerical methods applications to fractional calculus

    CERN Document Server

    Anastassiou, George A

    2016-01-01

    In this monograph the authors present Newton-type, Newton-like and other numerical methods which involve fractional derivatives and fractional integral operators, studied for the first time in the literature. All serve the purpose of numerically solving equations whose associated functions may be non-differentiable in the ordinary sense, among other things extending the classical Newton method theory, which requires the usual differentiability of the function. Chapters are self-contained and can be read independently, and several advanced courses can be taught out of this book. An extensive list of references is given per chapter. The book's results are expected to find applications in many areas of applied mathematics, stochastics, computer science and engineering. As such this monograph is suitable for researchers, graduate students, and seminars on the above subjects, and belongs in all science and engineering libraries.

  18. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  19. Partial differential equations with numerical methods

    CERN Document Server

    Larsson, Stig

    2003-01-01

    The book is suitable for advanced undergraduate and beginning graduate students of applied mathematics and engineering. The main theme is the integration of the theory of linear PDEs and the numerical solution of such equations. For each type of PDE, elliptic, parabolic, and hyperbolic, the text contains one chapter on the mathematical theory of the differential equation, followed by one chapter on finite difference methods and one on finite element methods. As preparation, the two-point boundary value problem and the initial-value problem for ODEs are discussed in separate chapters. There is also one chapter on the elliptic eigenvalue problem and eigenfunction expansion. The presentation does not presume a deep knowledge of mathematical and functional analysis. Some background on linear functional analysis and Sobolev spaces, and also on numerical linear algebra, is reviewed in two appendices.

  20. A student's guide to numerical methods

    CERN Document Server

    Hutchinson, Ian H

    2015-01-01

    This concise, plain-language guide for senior undergraduates and graduate students aims to develop intuition, practical skills and an understanding of the framework of numerical methods for the physical sciences and engineering. It provides accessible self-contained explanations of mathematical principles, avoiding intimidating formal proofs. Worked examples and targeted exercises enable the student to master the realities of using numerical techniques for common needs such as solution of ordinary and partial differential equations, fitting experimental data, and simulation using particle and Monte Carlo methods. Topics are carefully selected and structured to build understanding, and illustrate key principles such as: accuracy, stability, order of convergence, iterative refinement, and computational effort estimation. Enrichment sections and in-depth footnotes form a springboard to more advanced material and provide additional background. Whether used for self-study, or as the basis of an accelerated introdu...

  1. A simple, high throughput method to locate single copy sequences from Bacterial Artificial Chromosome (BAC) libraries using High Resolution Melt analysis

    Directory of Open Access Journals (Sweden)

    Caligari Peter DS

    2010-05-01

    Background: The high-throughput anchoring of genetic markers into contigs is required for many ongoing physical mapping projects. Multidimensional BAC pooling strategies for PCR-based screening of large insert libraries are a widely used alternative to high-density filter hybridisation of bacterial colonies. To date, concerns over reliability have led most, if not all, groups engaged in high-throughput physical mapping projects to favour BAC DNA isolation prior to amplification by conventional PCR. Results: Here, we report the first combined use of Multiplex Tandem PCR (MT-PCR) and High Resolution Melt (HRM) analysis on bacterial stocks of BAC library superpools as a means of rapidly anchoring markers to BAC colonies and thereby integrating genetic and physical maps. We exemplify the approach using a BAC library of the model plant Arabidopsis thaliana. Superpools of twenty-five 384-well plates and two-dimensional matrix pools of the BAC library were prepared for marker screening. The entire procedure requires only around 3 h to anchor one marker. Conclusions: A pre-amplification step during MT-PCR allows high multiplexing and increases the sensitivity and reliability of subsequent HRM discrimination. This simple gel-free protocol is more reliable, faster and far less costly than conventional PCR screening. The option to screen three genetic markers in parallel in one MT-PCR-HRM reaction, using templates from directly pooled stocks of BAC-containing bacteria, further reduces the time needed to anchor markers in physical maps of species with large genomes.

  2. Evaluating a Local Ensemble Transform Kalman Filter snow cover data assimilation method to estimate SWE within a high-resolution hydrologic modeling framework across Western US mountainous regions

    Science.gov (United States)

    Oaida, C. M.; Andreadis, K.; Reager, J. T., II; Famiglietti, J. S.; Levoe, S.

    2017-12-01

    Accurately estimating how much snow water equivalent (SWE) is stored in mountainous regions characterized by complex terrain and snowmelt-driven hydrologic cycles is not only greatly desirable, but also a big challenge. Mountain snowpack exhibits high spatial variability across a broad range of spatial and temporal scales due to a multitude of physical and climatic factors, making it difficult to observe or estimate in its entirety. Combining remotely sensed data and high-resolution hydrologic modeling through data assimilation (DA) has the potential to provide a spatially and temporally continuous SWE dataset at horizontal scales that capture sub-grid snow spatial variability and are also relevant to stakeholders such as water resource managers. Here, we present the evaluation of a new snow DA approach that uses a Local Ensemble Transform Kalman Filter (LETKF) in tandem with the Variable Infiltration Capacity macro-scale hydrologic model across the Western United States, at a daily temporal resolution and a horizontal resolution of 1.75 km x 1.75 km. The LETKF is chosen for its relative simplicity, ease of implementation, and computational efficiency and scalability. The modeling/DA system assimilates daily MODIS Snow Covered Area and Grain Size (MODSCAG) fractional snow cover and has been developed to efficiently calculate SWE estimates over extended periods of time and large regional-scale areas at relatively high spatial resolution, ultimately producing a snow reanalysis-type dataset. Here we focus on the assessment of SWE produced by the DA scheme over several basins in California's Sierra Nevada mountain range where Airborne Snow Observatory data are available, during the last five water years (2013-2017), which include both one of the driest and one of the wettest years. Comparison against such a spatially distributed SWE observational product provides a greater understanding of the model's ability to estimate SWE and its spatial variability.
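
    The ensemble-space analysis step at the heart of an LETKF can be sketched compactly. The following is a minimal, illustrative update in the style of the standard ETKF/LETKF formulation, not the authors' modeling/DA system; the function name, dimensions, and the linear observation operator are assumptions for the example, and localization is omitted.

```python
import numpy as np

def letkf_analysis(X, y, H, R):
    """One global ETKF/LETKF analysis step in ensemble space
    (localization omitted for brevity; illustrative only).

    X : (n, k) forecast ensemble,  y : (m,) observations,
    H : (m, n) observation operator,  R : (m, m) obs-error covariance."""
    n, k = X.shape
    xb = X.mean(axis=1)
    Xp = X - xb[:, None]                     # state perturbations
    Yp = H @ Xp                              # observation-space perturbations
    d = y - H @ xb                           # innovation
    Rinv = np.linalg.inv(R)
    C = (k - 1) * np.eye(k) + Yp.T @ Rinv @ Yp
    evals, evecs = np.linalg.eigh(C)         # C is symmetric positive definite
    Pa = evecs @ np.diag(1.0 / evals) @ evecs.T          # ensemble-space covariance
    wa = Pa @ (Yp.T @ Rinv @ d)              # weights for the mean update
    Wa = evecs @ np.diag(np.sqrt((k - 1) / evals)) @ evecs.T  # perturbation weights
    xa = xb + Xp @ wa
    return xa[:, None] + Xp @ Wa             # analysis ensemble, shape (n, k)
```

    With a small observation-error covariance, the analysis ensemble mean is pulled strongly toward the observations, which is the behavior exploited when assimilating fractional snow cover.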

  3. High-resolution ultrasonic spectroscopy

    Directory of Open Access Journals (Sweden)

    V. Buckin

    2018-03-01

    High-resolution ultrasonic spectroscopy (HR-US) is an analytical technique for direct and non-destructive monitoring of molecular and micro-structural transformations in liquids and semi-solid materials. It is based on precision measurements of ultrasonic velocity and attenuation in analysed samples. The application areas of HR-US in research, product development, and quality and process control include analysis of conformational transitions of polymers, ligand binding, molecular self-assembly and aggregation, crystallisation, gelation, characterisation of phase transitions and phase diagrams, and monitoring of chemical and biochemical reactions. The technique does not require optical markers or optical transparency. The HR-US measurements can be performed in small sample volumes (down to droplet size, over broad temperature range, at ambient and elevated pressures, and in various measuring regimes such as automatic temperature ramps, titrations and measurements in flow.

  4. High Resolution Thermometry for EXACT

    Science.gov (United States)

    Panek, J. S.; Nash, A. E.; Larson, M.; Mulders, N.

    2000-01-01

    High Resolution Thermometers (HRTs) based on SQUID detection of the magnetization of a paramagnetic salt or a metal alloy have been commonly used for sub-nanokelvin temperature resolution in low-temperature physics experiments. The main applications to date have been for temperature ranges near the lambda point of He-4 (2.177 K). These thermometers made use of materials such as Cu(NH4)2Br4·2H2O, GdCl3, or PdFe. None of these materials is suitable for EXACT, which will explore the region of the He-3/He-4 tricritical point at 0.87 K. The experiment requirements and properties of several candidate paramagnetic materials will be presented, as well as preliminary test results.

  5. High resolution tomographic instrument development

    International Nuclear Information System (INIS)

    1992-01-01

    Our recent work has concentrated on the development of high-resolution PET instrumentation reflecting in part the growing importance of PET in nuclear medicine imaging. We have developed a number of positron imaging instruments and have the distinction that every instrument has been placed in operation and has had an extensive history of application for basic research and clinical study. The present program is a logical continuation of these earlier successes. PCR-I, a single ring positron tomograph was the first demonstration of analog coding using BGO. It employed 4 mm detectors and is currently being used for a wide range of biological studies. These are of immense importance in guiding the direction for future instruments. In particular, PCR-II, a volume sensitive positron tomograph with 3 mm spatial resolution has benefited greatly from the studies using PCR-I. PCR-II is currently in the final stages of assembly and testing and will shortly be placed in operation for imaging phantoms, animals and ultimately humans. Perhaps the most important finding resulting from our previous study is that resolution and sensitivity must be carefully balanced to achieve a practical high resolution system. PCR-II has been designed to have the detection characteristics required to achieve 3 mm resolution in human brain under practical imaging situations. The development of algorithms by the group headed by Dr. Chesler is based on a long history of prior study including his joint work with Drs. Pelc and Reiderer and Stearns. This body of expertise will be applied to the processing of data from PCR-II when it becomes operational

  8. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth...
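
    As a concrete illustration of the gPC idea the book builds on (a sketch under our own naming, not the book's code): a one-dimensional Hermite chaos expansion of u(ξ) = exp(ξ) for a standard normal ξ, with coefficients computed by Gauss-Hermite quadrature. The mean and variance then follow from the coefficients alone.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def gpc_coeffs(u, order, nquad=40):
    """Coefficients c_n of the Hermite chaos expansion
    u(xi) ~ sum_n c_n He_n(xi) for xi ~ N(0, 1), with
    c_n = E[u(xi) He_n(xi)] / n!, by Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)            # weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)        # normalize to the Gaussian density
    ux = u(x)
    He_prev, He = np.zeros_like(x), np.ones_like(x)   # He_{-1}, He_0
    c = []
    for n in range(order + 1):
        c.append(np.sum(w * ux * He) / math.factorial(n))
        He_prev, He = He, x * He - n * He_prev        # He_{n+1} = x He_n - n He_{n-1}
    return np.array(c)

# Demo: u(xi) = exp(xi); exact mean is exp(1/2), exact variance is e*(e-1)
c = gpc_coeffs(np.exp, order=8)
gpc_mean = c[0]
gpc_var = sum(c[n] ** 2 * math.factorial(n) for n in range(1, len(c)))
```

    The mean is the zeroth coefficient and the variance is a weighted sum of squared coefficients, so moments come for free once the expansion is known; this is the efficiency the blurb alludes to.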

  9. Spectral methods in numerical plasma simulation

    International Nuclear Information System (INIS)

    Coutsias, E.A.; Hansen, F.R.; Huld, T.; Knorr, G.; Lynov, J.P.

    1989-01-01

    An introduction is given to the use of spectral methods in numerical plasma simulation. As examples of the use of spectral methods, solutions to the two-dimensional Euler equations in both a simple, doubly periodic region, and on an annulus will be shown. In the first case, the solution is expanded in a two-dimensional Fourier series, while a Chebyshev-Fourier expansion is employed in the second case. A new, efficient algorithm for the solution of Poisson's equation on an annulus is introduced. Problems connected to aliasing and to short wavelength noise generated by gradient steepening are discussed. (orig.)
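
    The doubly periodic case mentioned above can be illustrated with a minimal Fourier spectral Poisson solver on the periodic square (a sketch under our own naming, not the paper's annulus algorithm): differentiation becomes multiplication by wavenumbers in Fourier space, so the Poisson equation is solved by a single division per mode.

```python
import numpy as np

def poisson_periodic(f):
    """Solve u_xx + u_yy = f on [0, 2*pi)^2 with periodic boundary
    conditions by a Fourier spectral method. f must have zero mean;
    the free constant is fixed by giving u zero mean."""
    n = f.shape[0]
    k = np.fft.fftfreq(n, d=1.0 / n)       # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    k2 = kx**2 + ky**2
    k2[0, 0] = 1.0                         # dummy value; mean mode set below
    uh = -np.fft.fft2(f) / k2              # -(kx^2 + ky^2) uh = fh
    uh[0, 0] = 0.0                         # zero-mean solution
    return np.real(np.fft.ifft2(uh))

# Manufactured solution: u = sin(x) cos(2y), whose Laplacian is -5 u
n = 64
t = 2.0 * np.pi * np.arange(n) / n
x, y = np.meshgrid(t, t, indexing="ij")
u_exact = np.sin(x) * np.cos(2.0 * y)
u = poisson_periodic(-5.0 * u_exact)
err = np.max(np.abs(u - u_exact))
```

    For smooth periodic data the error is at machine-precision level, the spectral accuracy that motivates these methods; the annulus case in the paper replaces the Fourier basis in the radial direction with Chebyshev polynomials.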

  10. 天津近海风能资源的高分辨率数值模拟与评估%High-Resolution Numerical Simulation and Assessment of the Offshore Wind Energy Resource in Tianjin

    Institute of Scientific and Technical Information of China (English)

    杨艳娟; 李明财; 任雨; 熊明明

    2011-01-01

    Wind energy is a rapidly growing alternative energy source and has been widely developed around the world over the last 10 years. Offshore wind power generation is now becoming a new trend in the development of future wind power generation because wind tends to blow faster and more uniformly over offshore areas than on land. Accurate assessment of the wind energy resource is fundamental and valuable for wind energy developers and potential wind energy users because it allows them to choose a general area of estimated high wind resource for more detailed examination. However, it is difficult to make direct observations of meteorological variables over offshore areas, which calls for high-resolution numerical simulation to derive the availability and potential of wind energy. The distribution of wind energy resources with 1 km horizontal resolution and 10 m vertical resolution in Tianjin coastal areas was simulated using the numerical model MM5 and Calmet to derive the wind energy potential over the offshore areas. In addition, the simulation accuracy was determined by comparing simulated values with observations from three wind-measurement towers over the same period. Results show that the annual mean wind speed and the trend of daily mean wind speed were simulated well, and the relative deviations between observed and simulated values at the three wind-measurement towers were 7.11%, 12.99%, and 6.14%, respectively. This suggests that the models are effective in assessing the offshore wind energy resource in Tianjin. The long-term wind energy resource was obtained by comparing the simulated year's mean wind speed with that of the most recent 20 years. It was found that the annual mean wind speed is 6.6-7.0 m/s, and the annual mean wind power density is above 340 W/m2, which indicates that the offshore wind energy resource in Tianjin is exploitable and could be used for grid-connected power generation. The assessment shows that the MM5/Calmet model is capable of providing a reasonable picture of the wind conditions.

  11. RELAP-7 Numerical Stabilization: Entropy Viscosity Method

    Energy Technology Data Exchange (ETDEWEB)

    R. A. Berry; M. O. Delchini; J. Ragusa

    2014-06-01

    The RELAP-7 code is the next-generation nuclear reactor system safety analysis code being developed at the Idaho National Laboratory (INL). The code is based on INL's modern scientific software development framework, MOOSE (Multi-Physics Object Oriented Simulation Environment). The overall design goal of RELAP-7 is to take advantage of the previous thirty years of advancements in computer architecture, software design, numerical integration methods, and physical models. The end result will be a reactor systems analysis capability that retains and improves upon RELAP5's capability and extends the analysis capability to all reactor system simulation scenarios. RELAP-7 utilizes a single-phase model and a novel seven-equation two-phase flow model as described in the RELAP-7 Theory Manual (INL/EXT-14-31366). The basic equation systems are hyperbolic, which generally requires some type of stabilization (or artificial viscosity) to capture nonlinear discontinuities and to suppress advection-caused oscillations. This report documents one of the available options for this stabilization in RELAP-7: a novel approach known as the entropy viscosity method. Because the code is an ongoing development effort in which the physical submodels, numerics, and coding are evolving, the specific details of the entropy viscosity stabilization method must evolve with them. Here the fundamentals of the method in their current state are presented.
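
    For orientation, the core idea of entropy viscosity can be sketched for 1D Burgers flow (a generic sketch in the spirit of Guermond-style entropy viscosity, not the RELAP-7 implementation; the function name and constants are ours): the artificial viscosity is made proportional to the local residual of a discrete entropy equation, so it is large near shocks and negligible where the solution is smooth, and it is capped by a first-order upwind value.

```python
import numpy as np

def entropy_viscosity(u_new, u_old, dt, h, c_e=1.0, c_max=0.5):
    """Entropy-viscosity coefficient for 1D periodic Burgers flow,
    with entropy E = u^2/2 and entropy flux F = u^3/3 (illustrative
    sketch only). Returns a per-cell artificial viscosity."""
    E_new, E_old = 0.5 * u_new**2, 0.5 * u_old**2
    F = u_new**3 / 3.0
    # discrete entropy residual R = E_t + F_x (centered in space)
    residual = np.abs((E_new - E_old) / dt
                      + (np.roll(F, -1) - np.roll(F, 1)) / (2.0 * h))
    norm = np.max(np.abs(E_new - E_new.mean())) + 1e-14   # normalization
    nu_entropy = c_e * h**2 * residual / norm
    nu_firstorder = c_max * h * np.abs(u_new)             # upwind cap
    return np.minimum(nu_entropy, nu_firstorder)
```

    The upwind cap guarantees the scheme never becomes more diffusive than first-order upwinding, while the residual term switches the viscosity off in smooth regions, which is how the method captures discontinuities without smearing the rest of the solution.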

  12. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli

    2017-06-08

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  13. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.

  14. High-Resolution Mass Spectrometers

    Science.gov (United States)

    Marshall, Alan G.; Hendrickson, Christopher L.

    2008-07-01

    Over the past decade, mass spectrometry has been revolutionized by access to instruments of increasingly high mass-resolving power. For small molecules up to ˜400 Da (e.g., drugs, metabolites, and various natural organic mixtures ranging from foods to petroleum), it is possible to determine elemental compositions (CcHhNnOoSsPp…) of thousands of chemical components simultaneously from accurate mass measurements (the same can be done up to 1000 Da if additional information is included). At higher mass, it becomes possible to identify proteins (including posttranslational modifications) from proteolytic peptides, as well as lipids, glycoconjugates, and other biological components. At even higher mass (˜100,000 Da or higher), it is possible to characterize posttranslational modifications of intact proteins and to map the binding surfaces of large biomolecule complexes. Here we review the principles and techniques of the highest-resolution analytical mass spectrometers (time-of-flight and Fourier transform ion cyclotron resonance and orbitrap mass analyzers) and describe some representative high-resolution applications.

  15. Numerical methods for engine-airframe integration

    International Nuclear Information System (INIS)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: the scientific computing environment for the 1980s, an overview of the prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of a second-generation low-order panel method to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full-potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology of supersonic inlet systems for advanced technology aircraft, and a user's technology assessment.

  16. Numerical method for partial equilibrium flow

    International Nuclear Information System (INIS)

    Ramshaw, J.D.; Cloutman, L.D. (Los Alamos, New Mexico 87545)

    1981-01-01

    A numerical method is presented for chemically reactive fluid flow in which equilibrium and nonequilibrium reactions occur simultaneously. The equilibrium constraints on the species concentrations are established by a quadratic iterative procedure. If the equilibrium reactions are uncoupled and of second or lower order, the procedure converges in a single step. In general, convergence is most rapid when the reactions are weakly coupled. This can frequently be achieved by a judicious choice of the independent reactions. In typical transient calculations, satisfactory accuracy has been achieved with about five iterations per time step.
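
    The single-step convergence claimed above for uncoupled second-order equilibria can be illustrated directly: for one reaction A + B ⇌ C, the equilibrium constraint is a quadratic in the extent of reaction and is solvable in closed form (an illustrative sketch, not the paper's general coupled procedure; names are ours).

```python
import math

def equilibrium_extent(K, a0, b0, c0):
    """Extent of reaction x for a single second-order equilibrium
    A + B <-> C with equilibrium constant K = [C] / ([A][B]):
        c0 + x = K * (a0 - x) * (b0 - x).
    The constraint is a quadratic in x; here the smaller root is
    taken as the physically admissible one (it keeps the reactant
    concentrations non-negative in this example)."""
    # K x^2 - (K (a0 + b0) + 1) x + (K a0 b0 - c0) = 0
    A = K
    B = -(K * (a0 + b0) + 1.0)
    C = K * a0 * b0 - c0
    return (-B - math.sqrt(B * B - 4.0 * A * C)) / (2.0 * A)

x = equilibrium_extent(K=1.0, a0=1.0, b0=1.0, c0=0.0)
```

    For coupled equilibria no such closed form exists, which is why the paper iterates; the quadratic structure is what makes each sweep converge rapidly when the coupling is weak.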

  17. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought, but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excessive storm water, causing urban flooding. This leads to high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high-resolution information about the sewer system and the terrain into account [1, 2]. The combination of high-resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using very high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  18. Mathematica with a Numerical Methods Course

    Science.gov (United States)

    Varley, Rodney

    2003-04-01

    An interdisciplinary "Numerical Methods" course has been shared between physics, mathematics and computer science since 1992 at Hunter C. Recently, the lectures and workshops for this course have become formalized and placed on the internet at http://www.ph.hunter.cuny.edu (follow the links "Course Listings and Websites" >> "PHYS385 (Numerical Methods)". Mathematica notebooks for the lectures are available for automatic download (by "double clicking" the lecture icon) for student use in the classroom or at home. AOL (or Netscape/Explorer) can be used provided Mathematica (or the "free" MathReader) has been made a "helper application". Using Mathematica has the virtue that mathematical equations (no LaTex required) can easily be included with the text and Mathematica's graphing is easy to use. Computational cells can be included within the notebook and students may easily modify the calculation to see the result of "what if..." questions. Homework is sent as Mathematica notebooks to the instructor via the internet and the corrected workshops are returned in the same manner. Most exam questions require computational solutions.

  19. Numerical methods in dynamic fracture mechanics

    International Nuclear Information System (INIS)

    Beskos, D.E.

    1987-01-01

    A review of numerical methods for the solution of dynamic problems of fracture mechanics is presented. Finite difference, finite element and boundary element methods as applied to linear elastic or viscoelastic and non-linear elastoplastic or elastoviscoplastic dynamic fracture mechanics problems are described and critically evaluated. Both cases of stationary cracks and rapidly propagating cracks of simple I, II, III or mixed modes are considered. Harmonically varying with time or general transient dynamic disturbances in the form of external loading or incident waves are taken into account. Determination of the dynamic stress intensity factor for stationary cracks or moving cracks with known velocity history as well as determination of the crack-tip propagation history for given dynamic fracture toughness versus crack velocity relation are described and illustrated by means of certain representative examples. Finally, a brief assessment of the present state of knowledge is made and research needs are identified

  20. Validation of an analytical method based on the high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals in soil.

    Science.gov (United States)

    Frentiu, Tiberiu; Ponta, Michaela; Hategan, Raluca

    2013-03-01

    The aim of this paper was the validation of a new analytical method based on high-resolution continuum source flame atomic absorption spectrometry for the fast-sequential determination of several hazardous/priority hazardous metals (Ag, Cd, Co, Cr, Cu, Ni, Pb and Zn) in soil after microwave-assisted digestion in aqua regia. Determinations were performed on the ContrAA 300 (Analytik Jena) air-acetylene flame spectrometer equipped with a xenon short-arc lamp as a continuum radiation source for all elements, a double monochromator consisting of a prism pre-monochromator and an echelle grating monochromator, and a charge-coupled device as detector. For validation, a method-performance study was conducted involving the establishment of the analytical performance of the new method (limits of detection and quantification, precision and accuracy). Moreover, the Bland and Altman statistical method was used in analyzing the agreement between the proposed assay and inductively coupled plasma optical emission spectrometry as the standardized method for multielemental determination in soil. The limits of detection in soil samples (3σ criterion) in the high-resolution continuum source flame atomic absorption spectrometry method were (mg/kg): 0.18 (Ag), 0.14 (Cd), 0.36 (Co), 0.25 (Cr), 0.09 (Cu), 1.0 (Ni), 1.4 (Pb) and 0.18 (Zn), close to those in inductively coupled plasma optical emission spectrometry: 0.12 (Ag), 0.05 (Cd), 0.15 (Co), 1.4 (Cr), 0.15 (Cu), 2.5 (Ni), 2.5 (Pb) and 0.04 (Zn). Accuracy was checked by analyzing 4 certified reference materials, and good agreement at the 95% confidence interval was found for both methods, with recoveries in the range of 94-106% in atomic absorption and 97-103% in optical emission. Repeatability, found by analyzing real soil samples, was in the range 1.6-5.2% in atomic absorption, similar to that of 1.9-6.1% in optical emission spectrometry. The Bland and Altman method showed no statistically significant difference between the two spectrometric methods.

  1. Numerical Methods for Free Boundary Problems

    CERN Document Server

    1991-01-01

    About 80 participants from 16 countries attended the Conference on Numerical Methods for Free Boundary Problems, held at the University of Jyväskylä, Finland, July 23-27, 1990. The main purpose of this conference was to provide up-to-date information on important directions of research in the field of free boundary problems and their numerical solutions. The contributions contained in this volume cover the lectures given in the conference. The invited lectures were given by H.W. Alt, V. Barbu, K.-H. Hoffmann, H. Mittelmann and V. Rivkind. In his lecture H.W. Alt considered a mathematical model and existence theory for non-isothermal phase separations in binary systems. The lecture of V. Barbu was on the approximate solvability of the inverse one-phase Stefan problem. K.-H. Hoffmann gave an up-to-date survey of several directions in free boundary problems and listed several applications, but the material of his lecture is not included in this proceedings. H.D. Mittelmann handled the stability of thermo capi...

  2. Development of a new screening method for the detection of antibiotic residues in muscle tissues using liquid chromatography and high resolution mass spectrometry with a LC-LTQ-Orbitrap instrument.

    Science.gov (United States)

    Hurtaud-Pessel, D; Jagadeshwar-Reddy, T; Verdon, E

    2011-10-01

    A liquid chromatography-high resolution mass spectrometry (LC-HRMS) method was developed for screening meat for a wide range of antibiotics used in veterinary medicine. Full-scan mode under high-resolution mass spectral conditions, using an LTQ-Orbitrap mass spectrometer with resolving power 60,000 full width at half maximum (FWHM), was applied for analysis of the samples. Samples were prepared using two extraction protocols prior to LC-HRMS analysis. The scope of the method covers the following main families of antibacterial veterinary drugs: penicillins, cephalosporins, sulfonamides, macrolides, tetracyclines, aminoglycosides and quinolones. Compounds were successfully identified in spiked samples from their accurate mass and LC retention times in the acquired full-scan chromatogram. Automated data processing using ToxId software allowed rapid treatment of the data. Analyses of muscle tissues from real samples collected from antibiotic-treated animals were carried out using the above methodology, and antibiotic residues were identified unambiguously. Further analysis of the data for real samples allowed the identification not only of the targeted antibiotic residues but also of non-targeted compounds, such as some of their metabolites.

  3. A method for the fast estimation of a battery entropy-variation high-resolution curve - Application on a commercial LiFePO4/graphite cell

    Science.gov (United States)

    Damay, Nicolas; Forgez, Christophe; Bichat, Marie-Pierre; Friedrich, Guy

    2016-11-01

    The entropy-variation of a battery is responsible for heat generation or consumption during operation and its prior measurement is mandatory for developing a thermal model. It is generally done through the potentiometric method which is considered as a reference. However, it requires several days or weeks to get a look-up table with a 5 or 10% SoC (State of Charge) resolution. In this study, a calorimetric method based on the inversion of a thermal model is proposed for the fast estimation of a nearly continuous curve of entropy-variation. This is achieved by separating the heats produced while charging and discharging the battery. The entropy-variation is then deduced from the extracted entropic heat. The proposed method is validated by comparing the results obtained with several current rates to measurements made with the potentiometric method.
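    The separation step can be illustrated with a toy calculation (this is not the paper's thermal-model inversion): the entropic heat reverses sign between charge and discharge while Joule heating does not, so half the difference of the paired heat rates isolates it, and the entropy-variation term dU/dT follows from q_ent = I·T·dU/dT. All numbers are made up:

```python
# Illustrative sketch: isolate the entropic heat from paired charge/discharge
# heat rates at the same state of charge, then deduce dU/dT (the
# entropy-variation term). Sign conventions vary between authors.

def entropy_variation(q_charge_W, q_discharge_W, current_A, temp_K):
    """Estimate dU/dT (V/K) at one state of charge from paired heat rates."""
    q_entropic = (q_charge_W - q_discharge_W) / 2.0  # Joule term cancels
    return q_entropic / (current_A * temp_K)

# Made-up numbers: 0.30 W on charge, 0.50 W on discharge, 5 A, 298 K
dudt = entropy_variation(0.30, 0.50, 5.0, 298.0)
print(f"{dudt:.2e} V/K")  # -6.71e-05 V/K
```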

  4. A simultaneous screening and quantitative method for the multiresidue analysis of pesticides in spices using ultra-high performance liquid chromatography-high resolution (Orbitrap) mass spectrometry.

    Science.gov (United States)

    Goon, Arnab; Khan, Zareen; Oulkar, Dasharath; Shinde, Raviraj; Gaikwad, Suresh; Banerjee, Kaushik

    2018-01-12

    A novel screening and quantitation method is reported for non-target multiresidue analysis of pesticides using ultra-HPLC-quadrupole-Orbitrap mass spectrometry in spice matrices, including black pepper, cardamom, chili, coriander, cumin, and turmeric. The method involved sequential full-scan (resolution = 70,000), and variable data independent acquisition (vDIA) with nine consecutive fragmentation events (resolution = 17,500). Samples were extracted by the QuEChERS method. The introduction of an SPE-based clean-up step through hydrophilic-lipophilic-balance (HLB) cartridges proved advantageous in minimizing the false negatives. For coriander, cumin, chili, and cardamom, the screening detection limit was largely at 2 ng/g, while it was 5 ng/g for black pepper, and turmeric. When the method was quantitatively validated for 199 pesticides, the limit of quantification (LOQ) was mostly at 10 ng/g (excluding black pepper, and turmeric with LOQ = 20 ng/g) with recoveries within 70-120%, and precision-RSDs <20%. Furthermore, the method allowed the identification of suspected non-target analytes through retrospective search of the accurate mass of the compound-specific precursor and product ions. Compared to LC-MS/MS, the quantitative performance of this Orbitrap-MS method had agreements in residue values between 78-100%. Copyright © 2017 Elsevier B.V. All rights reserved.

  5. High-Resolution Scintimammography: A Pilot Study

    Energy Technology Data Exchange (ETDEWEB)

    Rachel F. Brem; Joelle M. Schoonjans; Douglas A. Kieper; Stan Majewski; Steven Goodman; Cahid Civelek

    2002-07-01

    This study evaluated a novel high-resolution breast-specific gamma camera (HRBGC) for the detection of suggestive breast lesions. Methods: Fifty patients (with 58 breast lesions) for whom a scintimammogram was clinically indicated were prospectively evaluated with a general-purpose gamma camera and a novel HRBGC prototype. The results of conventional and high-resolution nuclear studies were prospectively classified as negative (normal or benign) or positive (suggestive or malignant) by 2 radiologists who were unaware of the mammographic and histologic results. All of the included lesions were confirmed by pathology. Results: There were 30 benign and 28 malignant lesions. The sensitivity for detection of breast cancer was 64.3% (18/28) with the conventional camera and 78.6% (22/28) with the HRBGC. The specificity with both systems was 93.3% (28/30). For the 18 nonpalpable lesions, sensitivity was 55.5% (10/18) and 72.2% (13/18) with the general-purpose camera and the HRBGC, respectively. For lesions ≤1 cm, 7 of 15 were detected with the general-purpose camera and 10 of 15 with the HRBGC. Four lesions (median size, 8.5 mm) were detected only with the HRBGC and were missed by the conventional camera. Conclusion: Evaluation of indeterminate breast lesions with an HRBGC results in improved sensitivity for the detection of cancer, with greater improvement shown for nonpalpable and ≤1-cm lesions.
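    The reported detection rates follow directly from the counts given in the abstract (28 malignant, 30 benign lesions); a quick check:

```python
# Reproduce the sensitivity/specificity percentages from the abstract's counts.

def rate(hits, total):
    return 100.0 * hits / total

sens_conventional = rate(18, 28)   # conventional camera sensitivity
sens_hrbgc = rate(22, 28)          # HRBGC sensitivity
specificity = rate(28, 30)         # identical for both systems
print(round(sens_conventional, 1), round(sens_hrbgc, 1), round(specificity, 1))
# 64.3 78.6 93.3
```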

  6. Evaluation of methods for aerodynamic roughness length retrieval from very high-resolution imaging LIDAR observations over the heihe basin in China

    NARCIS (Netherlands)

    Faivre, R.D.; Colin, Jérôme; Menenti, M.

    2017-01-01

    The parameterization of heat transfer based on remote sensing data, and the Surface Energy Balance System (SEBS) scheme to retrieve turbulent heat fluxes, already proved to be very appropriate for estimating evapotranspiration (ET) over homogeneous land surfaces. However, the use of such a method

  7. Methods for Motion Correction Evaluation Using 18F-FDG Human Brain Scans on a High-Resolution PET Scanner

    DEFF Research Database (Denmark)

    Keller, Sune H.; Sibomana, Merence; Olesen, Oline Vinter

    2012-01-01

    Many authors have reported the importance of motion correction (MC) for PET. Patient motion during scanning disturbs kinetic analysis and degrades resolution. In addition, using misaligned transmission for attenuation and scatter correction may produce regional quantification bias in the reconstructed emission images. The purpose of this work was the development of quality control (QC) methods for MC procedures based on external motion tracking (EMT) for human scanning, using an optical motion tracking system. Methods: Two scans with minor motion and 5 with major motion (as reported ...) ... (automated image registration) software. The following 3 QC methods were used to evaluate the EMT and AIR MC: a method using the ratio between 2 regions of interest with gray matter voxels (GM) and white matter voxels (WM), called GM/WM; mutual information; and cross correlation. Results: The results ...
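    The three QC measures named in the abstract are standard image-similarity quantities and can be sketched on synthetic arrays (this is not the paper's pipeline; masks, bin counts and the random test images are illustrative):

```python
# Hedged sketch of the three QC measures: a GM/WM region ratio, mutual
# information, and cross-correlation between a reference image and a
# motion-corrected image. Data below are synthetic, not PET scans.
import numpy as np

def gm_wm_ratio(img, gm_mask, wm_mask):
    return img[gm_mask].mean() / img[wm_mask].mean()

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def cross_correlation(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

rng = np.random.default_rng(1)
ref = rng.random((64, 64))
corrected = ref + 0.05 * rng.random((64, 64))   # a well-aligned image
ratio = gm_wm_ratio(ref, ref > 0.66, ref < 0.33)
print(round(cross_correlation(ref, corrected), 3))
```

    A successful MC run should drive mutual information and cross-correlation up toward their values for perfectly aligned frames, and stabilize the GM/WM ratio across frames.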

  8. Development of numerical methods for reactive transport

    International Nuclear Information System (INIS)

    Bouillard, N.

    2006-12-01

    When a radioactive waste is stored in deep geological disposals, it is expected that the waste package will be damaged under water action (concrete leaching, iron corrosion). To understand these damaging processes, chemical reactions and solute transport are modelled. Numerical simulations of reactive transport can be done sequentially by the coupling of several codes. This is the case of the software platform ALLIANCES, which is developed jointly by CEA, ANDRA and EDF. Stiff reactions like precipitation-dissolution are crucial for radioactive waste storage applications, but standard sequential iterative approaches like Picard's fail to solve reactive transport simulations with such stiff reactions efficiently. In the first part of this work, we focus on a simplified precipitation and dissolution process: a system made up of one solid species and two aqueous species moving by diffusion is studied mathematically. It is assumed that a precipitation-dissolution reaction occurs between them, and it is modelled by a discontinuous kinetics law of unknown sign. By using monotonicity properties, the convergence of a finite volume scheme on an admissible mesh is proved. Existence of a weak solution is obtained as a by-product of the convergence of the scheme. The second part is dedicated to coupling algorithms which improve Picard's method and can be easily used in an existing coupling code. By extending previous works, we propose a general and adaptable framework to solve nonlinear systems. Indeed, by selecting special options, we can either recover well-known methods, like nonlinear conjugate gradient methods, or design specific methods. This algorithm has two main steps, a preconditioning one and an acceleration one. This algorithm is tested on several examples, some of them being rather academic and others being more realistic. We test it on the 'three species model' example. Other reactive transport simulations use an external chemical code CHESS. For a
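    The Picard (successive substitution) coupling that the thesis generalizes can be sketched on a scalar toy problem; the fixed-point map below stands in for one transport-then-chemistry sweep and is purely illustrative:

```python
# Minimal sketch of Picard fixed-point coupling: iterate x = g(x), optionally
# with under-relaxation, until successive iterates stop changing. The scalar
# map used in the example is a stand-in for a coupled transport/chemistry sweep.
import math

def picard(g, x0, tol=1e-10, max_iter=200, relax=1.0):
    """Solve x = g(x) by (optionally relaxed) successive substitution."""
    x = x0
    for k in range(max_iter):
        x_new = (1 - relax) * x + relax * g(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    raise RuntimeError("Picard iteration did not converge")

# Toy contraction: x = cos(x)
root, iters = picard(math.cos, x0=1.0)
print(round(root, 6))  # 0.739085
```

    For stiff precipitation-dissolution kinetics this plain iteration converges slowly or not at all, which is exactly the motivation for the preconditioning and acceleration steps the abstract describes.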

  9. High-resolution wave-theory-based ultrasound reflection imaging using the split-step fourier and globally optimized fourier finite-difference methods

    Science.gov (United States)

    Huang, Lianjie

    2013-10-29

    Methods for enhancing ultrasonic reflection imaging are taught utilizing a split-step Fourier propagator in which the reconstruction is based on recursive inward continuation of ultrasonic wavefields in the frequency-space and frequency-wavenumber domains. The inward continuation within each extrapolation interval consists of two steps. In the first step, a phase-shift term is applied to the data in the frequency-wavenumber domain for propagation in a reference medium. The second step consists of applying another phase-shift term to data in the frequency-space domain to approximately compensate for ultrasonic scattering effects of heterogeneities within the tissue being imaged (e.g., breast tissue). Results from various data input to the method indicate significant improvements are provided in both image quality and resolution.
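    One extrapolation interval of the two-step continuation described above can be sketched as follows. The grid, frequency and velocities are illustrative, and the evanescent part of the spectrum is simply zeroed for brevity:

```python
# Hedged sketch of one split-step Fourier extrapolation interval:
# (1) a phase shift in the frequency-wavenumber domain for a homogeneous
#     reference medium, then
# (2) a phase-screen correction in the frequency-space domain for the local
#     velocity perturbation.
import numpy as np

def split_step_extrapolate(wavefield, freq_hz, dx_m, dz_m, c_ref, c_x):
    """Continue a monochromatic wavefield down one depth step dz."""
    omega = 2 * np.pi * freq_hz
    kx = 2 * np.pi * np.fft.fftfreq(wavefield.size, d=dx_m)
    # Step 1: reference-medium phase shift in the wavenumber domain.
    kz2 = (omega / c_ref) ** 2 - kx ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))  # evanescent components dropped
    field_x = np.fft.ifft(np.fft.fft(wavefield) * np.exp(1j * kz * dz_m))
    # Step 2: space-domain phase screen for the velocity perturbation c_x.
    screen = np.exp(1j * omega * (1.0 / c_x - 1.0 / c_ref) * dz_m)
    return field_x * screen

# A 500 kHz plane wave on a 64-point line in a homogeneous medium: the
# extrapolation then reduces to the pure phase delay exp(i*omega*dz/c).
u0 = np.ones(64, dtype=complex)
u1 = split_step_extrapolate(u0, 5e5, 1e-4, 1e-3, 1500.0, np.full(64, 1500.0))
```

    Recursing this interval inward from the receiver aperture, frequency by frequency, is what builds the reflection image.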

  10. Adaptation of the deoxyglucose method for use at cellular level: histological processing of the central nervous system for high resolution radio-autography

    International Nuclear Information System (INIS)

    Des Rosiers, M.H.; Descarries, Laurent

    1978-01-01

    Vascular perfusion of all products required for primary fixation, postfixation, dehydration and embedding of nervous tissue in Epon permits radio-autographic detection of radioactivity accumulated in the central nervous system after intravenous injection of [³H]deoxyglucose. This histological technique should allow application of the deoxyglucose method at the cellular if not subcellular level, since a high proportion of the tracer appears to be retained in situ in specimens adequately preserved for light and electron microscope radio-autography.

  11. Development of a Liquid Chromatography High Resolution Mass Spectrometry (LC-HRMS) Method for the Quantitation of Viral Envelope Glycoprotein in Ebola Virus-Like Particle Vaccine Preparations

    Science.gov (United States)

    2016-09-05

    Background: Ebola is an extremely pathogenic virus that causes hemorrhagic fever and can result in ... animals. Materials and Methods: Generation and Characterization of eVLPs. ... measuring the optical density (OD) at 280 nm in a spectrophotometer and assuming an extinction coefficient at 1% equal to 10 ...

  12. Validation and optimization of a chromatographic method for the quantitative and qualitative determination of cocaine and heroin by high-resolution liquid chromatography

    International Nuclear Information System (INIS)

    Montero Aguilar, A.L.

    1997-01-01

    An HPLC method was optimized in this work for the determination of cocaine and heroin in seizures, through the application of factorial and simplex designs. The developed methodology was applied to determine the cocaine and heroin content of nine different samples. Its application to seized drug samples proved analytically satisfactory with regard to analysis time and the accuracy of the results. It also efficiently complements the qualitative analysis of cocaine and heroin carried out by the Laboratory of Physical and Chemical Investigations of the Department of Forensic Sciences of the Judicial Investigation Agency. (S. Grainger)

  13. Section on High Resolution Optical Imaging (HROI)

    Data.gov (United States)

    Federal Laboratory Consortium — The Section on High Resolution Optical Imaging (HROI) develops novel technologies for studying biological processes at unprecedented speed and resolution. Research...

  14. Nodal methods in numerical reactor calculations

    International Nuclear Information System (INIS)

    Hennart, J.P.; Valle, E. del

    2004-01-01

    The present work describes the antecedents, developments and applications started in 1972 with Prof. Hennart, who was invited to be part of the staff of the Nuclear Engineering Department at the School of Physics and Mathematics of the National Polytechnic Institute. Since that time and up to 1981, several master's theses based on classical finite element methods were developed, with applications in point kinetics and in the steady-state as well as the time-dependent multigroup diffusion equations. After this period the emphasis moved to nodal finite elements in 1, 2 and 3D Cartesian geometries. All the theses were devoted to the numerical solution of the neutron multigroup diffusion and transport equations, few of them including the time dependence, most of them related to steady-state diffusion equations. The main contributions were as follows: high-order nodal schemes for the primal and mixed forms of the diffusion equations, block-centered finite-difference methods, post-processing, composite nodal finite elements for hexagons, and weakly and strongly discontinuous schemes for the transport equation. Some of these are now being used by several researchers involved in nuclear fuel management. (Author)

  15. Nodal methods in numerical reactor calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hennart, J P [UNAM, IIMAS, A.P. 20-726, 01000 Mexico D.F. (Mexico); Valle, E del [National Polytechnic Institute, School of Physics and Mathematics, Department of Nuclear Engineering, Mexico, D.F. (Mexico)

    2004-07-01

    The present work describes the antecedents, developments and applications started in 1972 with Prof. Hennart, who was invited to be part of the staff of the Nuclear Engineering Department at the School of Physics and Mathematics of the National Polytechnic Institute. Since that time and up to 1981, several master's theses based on classical finite element methods were developed, with applications in point kinetics and in the steady-state as well as the time-dependent multigroup diffusion equations. After this period the emphasis moved to nodal finite elements in 1, 2 and 3D Cartesian geometries. All the theses were devoted to the numerical solution of the neutron multigroup diffusion and transport equations, few of them including the time dependence, most of them related to steady-state diffusion equations. The main contributions were as follows: high-order nodal schemes for the primal and mixed forms of the diffusion equations, block-centered finite-difference methods, post-processing, composite nodal finite elements for hexagons, and weakly and strongly discontinuous schemes for the transport equation. Some of these are now being used by several researchers involved in nuclear fuel management. (Author)

  16. Methodology of high-resolution photography for mural condition database

    Science.gov (United States)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques to record the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because it is possible to show general views of mural paintings and also detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to making digital heritage inventories without special instruments, based on our recent research project in the Üzümlü church in Cappadocia, Turkey. This method enables us to build a high-resolution image database at low cost, in a short time, and with limited human resources.

  17. Application of 3D documentation and geometric reconstruction methods in traffic accident analysis: with high resolution surface scanning, radiological MSCT/MRI scanning and real data based animation.

    Science.gov (United States)

    Buck, Ursula; Naether, Silvio; Braun, Marcel; Bolliger, Stephan; Friederich, Hans; Jackowski, Christian; Aghayev, Emin; Christe, Andreas; Vock, Peter; Dirnhofer, Richard; Thali, Michael J

    2007-07-20

    The examination of traffic accidents is daily routine in forensic medicine. An important question in the analysis of the victims of traffic accidents, for example in collisions between motor vehicles and pedestrians or cyclists, is the situation of the impact. Apart from forensic medical examinations (external examination and autopsy), three-dimensional technologies and methods are gaining importance in forensic investigations. Besides post-mortem multi-slice computed tomography (MSCT) and magnetic resonance imaging (MRI) for the documentation and analysis of internal findings, highly precise 3D surface scanning is employed for the documentation of the external body findings and of injury-inflicting instruments. The correlation of injuries of the body to the injury-inflicting object and the accident mechanism are of great importance. The applied methods include documentation of the external and internal body and the involved vehicles and inflicting tools, as well as the analysis of the acquired data. The body surface and the accident vehicles with their damages were digitized by 3D surface scanning. For the internal findings of the body, post-mortem MSCT and MRI were used. The analysis included the processing of the obtained data to 3D models, determination of the driving direction of the vehicle, correlation of injuries to the vehicle damages, geometric determination of the impact situation and evaluation of further findings of the accident. In the following article, the benefits of the 3D documentation and computer-assisted, drawn-to-scale 3D comparisons of the relevant injuries with the damages to the vehicle in the analysis of the course of accidents, especially with regard to the impact situation, are shown in two examined cases.

  18. An enhanced method to determine the Young’s modulus of technical single fibres by means of high resolution digital image correlation

    Science.gov (United States)

    Huether, Jonas; Rupp, Peter; Kohlschreiber, Ina; Weidenmann, Kay André

    2018-04-01

    To obtain the mechanical tensile properties of materials, it is customary to equip the specimen directly with a device to measure strain and Young’s modulus correctly, and only within the measuring length defined by the standards. Whereas a variety of tools such as extensometers, strain gauges and optical systems are available for specimens at the coupon level, no market-ready tools to measure strains of single fibres during single fibre tensile tests are available. Although there is a standard for single fibre testing, the procedures described there are only capable of measuring strains of the whole testing setup rather than the strain of the fibre. Without a direct strain measurement on the specimen, the compliance of the test rig itself influences the determination of Young’s modulus. This work aims to fill this gap by establishing an enhanced method to measure strains directly on the tested fibre and thus provide accurate values for Young’s modulus. It is demonstrated that by applying and then optically tracking fluorescing polymeric beads on single glass fibres, Young’s modulus is determined directly and with high repeatability, without a need to measure at different measuring lengths or to compensate for the system compliance. Applying this method to glass fibres, a Young’s modulus of approximately 82.5 GPa was determined, which is in the range of values obtained by applying a conventional procedure. This enhanced measuring technology achieves high accuracy and repeatability while reducing scatter of the data. It was demonstrated that the fluorescing beads do not affect the fibre properties.
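    The strain evaluation described above reduces to simple geometry: the strain is the relative change of the tracked bead separation, and Young's modulus is the stress/strain ratio. The sketch below is hypothetical; the load, fibre diameter and bead spacings are made up for illustration and do not reproduce the paper's 82.5 GPa result:

```python
# Hypothetical sketch: Young's modulus from optically tracked bead spacing on
# a single fibre. E = (F/A) / strain, with strain from the bead separation.
import math

def youngs_modulus(force_N, fibre_diameter_m, gauge_0_m, gauge_m):
    """E (Pa) from load, fibre cross-section and tracked bead spacing."""
    area = math.pi * (fibre_diameter_m / 2) ** 2
    strain = (gauge_m - gauge_0_m) / gauge_0_m
    stress = force_N / area
    return stress / strain

# Made-up numbers: 17 µm glass fibre, 0.1 N load,
# bead spacing stretching from 1.000 mm to 1.005 mm:
E = youngs_modulus(0.1, 17e-6, 1.000e-3, 1.005e-3)
print(f"E ≈ {E / 1e9:.1f} GPa")
```

    Measuring the gauge length on the fibre itself is what removes the test-rig compliance from the strain, which is the whole point of the method.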

  19. A strategy for fast screening and identification of sulfur derivatives in medicinal Pueraria species based on the fine isotopic pattern filtering method using ultra-high-resolution mass spectrometry

    International Nuclear Information System (INIS)

    Yang, Min; Zhou, Zhe; Guo, De-an

    2015-01-01

    Sulfurous compounds are commonly present in plants, fungi, and animals. Most of them were reported to possess various bioactivities. The isotopic pattern filter (IPF) is a powerful tool for screening compounds with a distinct isotope pattern. Over the past decades, the IPF was used mainly to study Cl- and Br-containing compounds. To our knowledge, the algorithm was scarcely used to screen S-containing compounds, especially when combined with chromatography analyses, because the ³⁴S isotopic ion is drastically affected by ¹³C₂ and ¹⁸O. Thus, we present a new method for a fine isotopic pattern filter (FIPF) based on the separated M + 2 ions (¹²C_x¹H_y¹⁶O_z³²S¹³C₂¹⁸O and ¹²C_{x+2}¹H_y¹⁶O_{z+1}³⁴S, tentatively named M + 2OC and M + 2S) with an ultra-high-resolution mass (100,000 FWHM @ 400 m/z) to screen sulfur derivatives in traditional Chinese medicines (TCM). This finer algorithm operates through convenient filters, including an accurate mass shift of M + 2OC and M + 2S from M and their relative intensity compared to M. The method was validated at various mass resolutions, mass accuracies, and screening thresholds of flexible elemental compositions. Using the established FIPF method, twelve S-derivatives were found in the popularly used medicinal Pueraria species, and 9 of them were tentatively identified by high-resolution multiple-stage mass spectrometry (HRMSⁿ). The compounds were used to evaluate the occurrence of sulfurous compounds in commercially purchased Pueraria products. The strategy presented here provides a promising application of the IPF method in a new field. - Highlights: • We provide a new strategy for specific screening of sulfurous compounds. • The fine isotopic pattern filter (FIPF) is based on separation of ¹³C₂ + ¹⁸O and ³⁴S. • Ultra-high-resolution mass (100,000 FWHM @ 400 m/z) is essential for FIPF. • IPF is applied to study the unique components of TCM for the first time. • New sulfurous components
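    The mass logic behind the filter can be checked with standard isotope masses: the three M+2 isotopologue shifts differ by only about 0.011 Da, which is why separating the ³⁴S peak from the ¹³C₂/¹⁸O peaks demands ultra-high resolving power. The resolving-power estimate at the end is a rough illustration, not the paper's derivation:

```python
# Sketch of the fine-isotopic-pattern arithmetic. Isotope masses (Da) are
# standard literature values, rounded.

M_13C = 13.0033548
M_12C = 12.0000000
M_18O = 17.9991610
M_16O = 15.9949146
M_34S = 33.9678670
M_32S = 31.9720712

shift_13C2 = 2 * (M_13C - M_12C)   # ~ +2.00671 Da
shift_18O  = M_18O - M_16O         # ~ +2.00425 Da
shift_34S  = M_34S - M_32S         # ~ +1.99580 Da

# Split between the M + 2S peak and the nearest M + 2OC peak:
delta = shift_13C2 - shift_34S     # ~ 0.0109 Da
print(f"34S vs 13C2 split: {delta:.4f} Da")
# Rough resolving-power requirement to separate the pair at m/z 400:
print(f"required m/dm ~ {400.0 / delta:,.0f}")
```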

  20. A strategy for fast screening and identification of sulfur derivatives in medicinal Pueraria species based on the fine isotopic pattern filtering method using ultra-high-resolution mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Min [National Engineering Laboratory for TCM Standardization Technology, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 501 Haike Road, Shanghai 201203 (China); Zhou, Zhe [ThermoFisher Scientific China Co., Ltd, No 6 Building, 27 Xinjinqiao Road, Shanghai 201206 (China); Guo, De-an, E-mail: daguo@simm.ac.cn [National Engineering Laboratory for TCM Standardization Technology, Shanghai Institute of Materia Medica, Chinese Academy of Sciences, 501 Haike Road, Shanghai 201203 (China)

    2015-09-24

    Sulfurous compounds are commonly present in plants, fungi, and animals. Most of them were reported to possess various bioactivities. The isotopic pattern filter (IPF) is a powerful tool for screening compounds with a distinct isotope pattern. Over the past decades, the IPF was used mainly to study Cl- and Br-containing compounds. To our knowledge, the algorithm was scarcely used to screen S-containing compounds, especially when combined with chromatography analyses, because the ³⁴S isotopic ion is drastically affected by ¹³C₂ and ¹⁸O. Thus, we present a new method for a fine isotopic pattern filter (FIPF) based on the separated M + 2 ions (¹²C_x¹H_y¹⁶O_z³²S¹³C₂¹⁸O and ¹²C_{x+2}¹H_y¹⁶O_{z+1}³⁴S, tentatively named M + 2OC and M + 2S) with an ultra-high-resolution mass (100,000 FWHM @ 400 m/z) to screen sulfur derivatives in traditional Chinese medicines (TCM). This finer algorithm operates through convenient filters, including an accurate mass shift of M + 2OC and M + 2S from M and their relative intensity compared to M. The method was validated at various mass resolutions, mass accuracies, and screening thresholds of flexible elemental compositions. Using the established FIPF method, twelve S-derivatives were found in the popularly used medicinal Pueraria species, and 9 of them were tentatively identified by high-resolution multiple-stage mass spectrometry (HRMSⁿ). The compounds were used to evaluate the occurrence of sulfurous compounds in commercially purchased Pueraria products. The strategy presented here provides a promising application of the IPF method in a new field. - Highlights: • We provide a new strategy for specific screening of sulfurous compounds. • The fine isotopic pattern filter (FIPF) is based on separation of ¹³C₂ + ¹⁸O and ³⁴S. • Ultra-high-resolution mass (100,000 FWHM @ 400 m/z) is essential

  1. Investigation of high resolution compact gamma camera module based on a continuous scintillation crystal using a novel charge division readout method

    International Nuclear Information System (INIS)

    Dai Qiusheng; Zhao Cuilan; Qi Yujin; Zhang Hualin

    2010-01-01

    The objective of this study is to investigate a high performance and lower cost compact gamma camera module for a multi-head small animal SPECT system. A compact camera module was developed using a thin Lutetium Oxyorthosilicate (LSO) scintillation crystal slice coupled to a Hamamatsu H8500 position sensitive photomultiplier tube (PSPMT). A two-stage charge division readout board based on a novel subtractive resistive readout with a truncated center-of-gravity (TCOG) positioning method was developed for the camera. The performance of the camera was evaluated using a flood ⁹⁹ᵐTc source with a four-quadrant bar-mask phantom. The preliminary experimental results show that the image shrinkage problem associated with the conventional resistive readout can be effectively overcome by the novel subtractive resistive readout with an appropriate fraction subtraction factor. The response output area (ROA) of the camera shown in the flood image was improved by up to 34%, and an intrinsic detector spatial resolution better than 2 mm was achieved. In conclusion, the utilization of a continuous scintillation crystal and a flat-panel PSPMT equipped with a novel subtractive resistive readout is a feasible approach for developing a high performance and lower cost compact gamma camera. (authors)
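    A truncated centre-of-gravity estimate of the kind named above can be illustrated as follows. This is a generic sketch of the idea, not the paper's two-stage readout; the signal profile, channel positions and subtraction fraction are invented:

```python
# Illustrative TCOG sketch: subtract a constant fraction of the peak signal
# from all channels and clip at zero before the centre-of-gravity sum. This
# suppresses the long resistive-chain tails that shrink a plain COG estimate
# toward the detector centre.

def tcog(signals, positions, fraction=0.2):
    peak = max(signals)
    trimmed = [max(s - fraction * peak, 0.0) for s in signals]
    total = sum(trimmed)
    return sum(p * s for p, s in zip(positions, trimmed)) / total

# Symmetric light spot centred on the middle channel (positions in mm):
pos = [-15, -10, -5, 0, 5, 10, 15]
sig = [0.3, 0.8, 2.0, 4.0, 2.0, 0.8, 0.3]
print(tcog(sig, pos))  # ~0.0 by symmetry
```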

  2. A comprehensive high-resolution mass spectrometry approach for characterization of metabolites by combination of ambient ionization, chromatography and imaging methods.

    Science.gov (United States)

    Berisha, Arton; Dold, Sebastian; Guenther, Sabine; Desbenoit, Nicolas; Takats, Zoltan; Spengler, Bernhard; Römpp, Andreas

    2014-08-30

    An ideal method for bioanalytical applications would deliver spatially resolved quantitative information in real time and without sample preparation. In reality these requirements can typically not be met by a single analytical technique. Therefore, we combine different mass spectrometry approaches: chromatographic separation, ambient ionization and imaging techniques, in order to obtain comprehensive information about metabolites in complex biological samples. Samples were analyzed by laser desorption followed by electrospray ionization (LD-ESI) as an ambient ionization technique, by matrix-assisted laser desorption/ionization (MALDI) mass spectrometry imaging for spatial distribution analysis and by high-performance liquid chromatography/electrospray ionization mass spectrometry (HPLC/ESI-MS) for quantitation and validation of compound identification. All MS data were acquired with high mass resolution and accurate mass (using orbital trapping and ion cyclotron resonance mass spectrometers). Grape berries were analyzed and evaluated in detail, whereas wheat seeds and mouse brain tissue were analyzed in proof-of-concept experiments. In situ measurements by LD-ESI without any sample preparation allowed for fast screening of plant metabolites on the grape surface. MALDI imaging of grape cross sections at 20 µm pixel size revealed the detailed distribution of metabolites which were in accordance with their biological function. HPLC/ESI-MS was used to quantify 13 anthocyanin species as well as to separate and identify isomeric compounds. A total of 41 metabolites (amino acids, carbohydrates, anthocyanins) were identified with all three approaches. Mass accuracy for all MS measurements was better than 2 ppm (root mean square error). The combined approach provides fast screening capabilities, spatial distribution information and the possibility to quantify metabolites. 
Accurate mass measurements proved to be critical in order to reliably combine data from different MS

  3. A new method to assess the added value of high-resolution regional climate simulations: application to the EURO-CORDEX dataset

    Science.gov (United States)

    Soares, P. M. M.; Cardoso, R. M.

    2017-12-01

    Regional climate models (RCM) are used with increasing resolutions, in pursuit of an improved representation of regional- to local-scale atmospheric phenomena. The EURO-CORDEX simulations at 0.11° and simulations exploiting finer grid spacings approaching the convective-permitting regimes are representative examples. The climate runs are computationally very demanding and do not always show improvements. These depend on the region, variable and object of study. The gain or loss associated with the use of higher resolution in relation to the forcing model (global climate model or reanalysis), or to different-resolution RCM simulations, is known as added value. Its characterization is a long-standing issue, and many different added-value measures have been proposed. In the current paper, a new method is proposed to assess the added value of finer resolution simulations, in comparison to their forcing data or coarser resolution counterparts. This approach builds on a probability density function (PDF) matching score, giving a normalised measure of the difference between diverse resolution PDFs, mediated by the observational ones. The distribution added value (DAV) is an objective added-value measure that can be applied to any variable, region or temporal scale, from hindcast or historical (non-synchronous) simulations. The DAV metric and an application to the EURO-CORDEX simulations, for daily temperatures and precipitation, are presented here. The EURO-CORDEX simulations at both resolutions (0.44°, 0.11°) display a clear added value in relation to ERA-Interim, with values around 30% in summer and 20% in the intermediate seasons, for precipitation. When both RCM resolutions are directly compared, the added value is limited. The regions with the larger precipitation DAVs are areas where convection is relevant, e.g. the Alps and Iberia. When looking at the extreme precipitation PDF tail, the higher resolution improvement is generally greater than the low resolution for seasons
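    A PDF matching score of the kind the abstract builds on can be sketched with histograms: the overlap S = Σ min(p_model, p_obs) over shared bins, and a relative added value comparing the high- and low-resolution runs against observations. The formula below is a common form of such scores, offered as an illustration of the idea rather than the paper's exact DAV definition; the data are synthetic:

```python
# Sketch of a PDF matching score and a distribution-added-value style metric.
# S = sum(min(p_model, p_obs) * bin_width); DAV = (S_hr - S_lr) / S_lr.
import numpy as np

def pdf_score(sample, obs, bins):
    p, _ = np.histogram(sample, bins=bins, density=True)
    q, _ = np.histogram(obs, bins=bins, density=True)
    width = np.diff(bins)
    return float(np.sum(np.minimum(p, q) * width))  # 1.0 = identical PDFs

def dav(sample_hr, sample_lr, obs, bins):
    s_hr = pdf_score(sample_hr, obs, bins)
    s_lr = pdf_score(sample_lr, obs, bins)
    return (s_hr - s_lr) / s_lr

rng = np.random.default_rng(0)
obs = rng.normal(15.0, 5.0, 10_000)   # synthetic "observed" daily temperature
hr = rng.normal(15.2, 5.1, 10_000)    # close to observations
lr = rng.normal(18.0, 7.0, 10_000)    # biased, too-wide distribution
bins = np.linspace(-10, 45, 56)
print(f"DAV = {dav(hr, lr, obs, bins):+.0%}")  # positive: added value
```

    Because the score only compares distributions, it works for non-synchronous historical runs, which is the property the abstract emphasizes.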

  4. High resolution CT in pulmonary sarcoidosis

    International Nuclear Information System (INIS)

    Spina, Juan C.; Curros, Marisela L.; Gomez, M.; Gonzalez, A.; Chacon, Carolina; Guerendiain, G.

    2000-01-01

    Objectives: To establish the particular advantages of high-resolution CT (HRCT) for the diagnosis of pulmonary sarcoidosis. Material and Methods: A series of fourteen patients (4 men and 10 women; mean age 44.5 years) with thoracic sarcoidosis was studied. All patients underwent HRCT, and the diagnosis was confirmed in each case. Confidence intervals were obtained for the different disease manifestations. Results: The most common findings were: lymph node enlargement (n=14 patients), pulmonary nodules (n=13), thickening of septa (n=6), peribronchial vascular thickening (n=5), pulmonary pseudo-mass (n=5) and signs of fibrosis (n=4). The stage most commonly observed was stage II. It is worth noting that no cases of pleural effusion or cavitation of pulmonary lesions were observed. Conclusions: In this series, the overlap of the confidence intervals for lymph node enlargement, single pulmonary nodules and septal thickening suggests that their presence in a young adult with few clinical symptoms makes it necessary to rule out sarcoidosis first. (author)
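
For a series this small, confidence intervals for finding prevalences are typically computed with a score-type binomial interval. The sketch below uses the standard Wilson score interval; applying it to the abstract's counts (n = 14) and checking interval overlap is an illustrative assumption, not the authors' stated procedure.

```python
# Hypothetical sketch: 95% Wilson score intervals for the prevalence
# of HRCT findings in n = 14 patients, plus an interval-overlap check.
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return centre - half, centre + half

nodes   = wilson_ci(14, 14)  # lymph node enlargement
nodules = wilson_ci(13, 14)  # pulmonary nodules
septa   = wilson_ci(6, 14)   # septal thickening

# two intervals overlap when the larger lower bound is below
# the smaller upper bound
overlap = max(nodes[0], nodules[0]) <= min(nodes[1], nodules[1])
print(overlap)  # -> True
```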

  5. GRANULOMETRIC MAPS FROM HIGH RESOLUTION SATELLITE IMAGES

    Directory of Open Access Journals (Sweden)

    Catherine Mering

    2011-05-01

    A new method of land cover mapping from satellite images using granulometric analysis is presented here. Discontinuous landscapes, such as the steppe bushlands of semi-arid regions and recently growing urban settlements, are especially concerned by this study. Spatial organisations of the land cover are quantified by means of size-distribution analysis of the land-cover units extracted from high-resolution remotely sensed images. A granulometric map is built by automatic classification of every pixel of the image according to the granulometric density inside a sliding neighbourhood. Granulometric mapping brings advantages over traditional thematic mapping by remote sensing by focusing on fine spatial events and small changes in one particular category of the landscape.
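
The core ingredient, a size distribution of land-cover units, can be sketched with a connected-component pass over a binary land-cover mask; a granulometric map would then repeat such an analysis inside a sliding window around every pixel. The flood-fill stand-in and the toy grid below are illustrative assumptions, not the paper's morphological operators.

```python
# Size distribution of land-cover "units" (4-connected foreground
# patches) via breadth-first flood fill; illustrative sketch only.
from collections import deque

def patch_sizes(grid):
    """Sizes of 4-connected patches of 1s in a binary grid."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1 and not seen[r][c]:
                size, queue = 0, deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                sizes.append(size)
    return sizes

# toy landscape: one large patch and several isolated "bushes"
image = [
    [1, 1, 0, 0, 1],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
]
print(sorted(patch_sizes(image)))  # -> [1, 1, 1, 1, 4]
```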

  6. Development of AMS high resolution injector system

    International Nuclear Information System (INIS)

    Bao Yiwen; Guan Xialing; Hu Yueming

    2008-01-01

    The high-resolution injector system for AMS at the Beijing HI-13 tandem accelerator was developed. The high-resolution, energy-achromatic system consists of an electrostatic analyzer and a magnetic analyzer; its mass resolution can reach 600 and its transmission is better than 80%. (authors)

  7. High-resolution electron spectroscopy of the 1s²3lnl' Be-like series in oxygen and neon. Test of theoretical data: I. Experimental method and theoretical background

    International Nuclear Information System (INIS)

    Bordenave-Montesquieu, A; Moretto-Capelle, P; Bordenave-Montesquieu, D

    2003-01-01

    A complete and accurate experimental test of the theoretical spectroscopic data sets (state positions, lifetimes) available for the n = 3-5 terms of the 1s²3lnl' Rydberg series of oxygen and neon ions is presented in a series of two papers. This result was achieved by fitting our high-resolution electron spectra with post-collisional lineshapes calculated with the help of these spectroscopic data. In this paper the method developed for this fitting procedure is explained. In addition, as a first test, a comparison of all the available calculated spectroscopic data is presented and discussed. Strong deviations of transition energies and decay lifetimes are observed in many cases. The best data are selected in the following companion paper through a quantitative comparison with our experimental electron spectra.

  9. Numerical methods in simulation of resistance welding

    DEFF Research Database (Denmark)

    Nielsen, Chris Valentin; Martins, Paulo A.F.; Zhang, Wenqi

    2015-01-01

    Finite element simulation of resistance welding requires coupling between mechanical, thermal and electrical models. This paper presents the numerical models and their couplings that are utilized in the computer program SORPAS. A mechanical model based on the irreducible flow formulation is utilized...... a resistance welding point of view, the most essential coupling between the above-mentioned models is the heat generation by electrical current due to Joule heating. The interaction between multiple objects is another critical feature of the numerical simulation of resistance welding because it influences...... the contact area and the distribution of contact pressure. The numerical simulation of resistance welding is illustrated by a spot welding example that includes subsequent tensile shear testing...
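
The essential electro-thermal coupling mentioned above is that Joule heat Q = I²R·dt raises the contact temperature, which in turn changes the contact resistance. A lumped, explicitly coupled toy model illustrates the feedback; the material constants and the linear R(T) law are illustrative assumptions, not SORPAS parameters.

```python
# Lumped sketch of the Joule-heating coupling in resistance welding:
# temperature feeds back into resistance, which feeds back into the
# heat generated each time step (illustrative constants only).

def weld_temperature(current_a, steps=100, dt=1e-3,
                     t0=20.0, r0=1e-4, alpha=4e-3,
                     heat_capacity_j_per_k=5.0):
    """Explicit Euler temperature history of a single lumped contact."""
    temp = t0
    for _ in range(steps):
        resistance = r0 * (1.0 + alpha * (temp - t0))   # R grows with T
        q = current_a ** 2 * resistance * dt            # Joule heat [J]
        temp += q / heat_capacity_j_per_k               # lumped update
    return temp

final = weld_temperature(5000.0)  # 5 kA for 100 steps of 1 ms
print(round(final, 1))  # -> 75.3
```

Because R rises with temperature, the heating rate itself grows during the weld, which is why the coupled solution climbs faster than the constant-resistance estimate.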

  10. High-resolution electron microscopy

    CERN Document Server

    Spence, John C H

    2013-01-01

    This new fourth edition of the standard text on atomic-resolution transmission electron microscopy (TEM) retains previous material on the fundamentals of electron optics and aberration correction, linear imaging theory (including wave aberrations to fifth order) with partial coherence, and multiple-scattering theory. Also preserved are updated earlier sections on practical methods, with detailed step-by-step accounts of the procedures needed to obtain the highest quality images of atoms and molecules using a modern TEM or STEM electron microscope. Applications sections have been updated - these include the semiconductor industry, superconductor research, solid state chemistry and nanoscience, and metallurgy, mineralogy, condensed matter physics, materials science and material on cryo-electron microscopy for structural biology. New or expanded sections have been added on electron holography, aberration correction, field-emission guns, imaging filters, super-resolution methods, Ptychography, Ronchigrams, tomogr...

  11. An optimized method for neurotransmitters and their metabolites analysis in mouse hypothalamus by high performance liquid chromatography-Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry.

    Science.gov (United States)

    Yang, Zong-Lin; Li, Hui; Wang, Bing; Liu, Shu-Ying

    2016-02-15

    Neurotransmitters (NTs) and their metabolites are known to play an essential role in maintaining various physiological functions in the nervous system. However, there are many difficulties in the detection of NTs together with their metabolites in biological samples. A new method for the detection of NTs and their metabolites by high-performance liquid chromatography coupled with Q Exactive hybrid quadrupole-orbitrap high-resolution accurate mass spectrometry (HPLC-HRMS) was established in this paper. This method represents a significant extension of the application of the Q Exactive MS to quantitative analysis. It enabled rapid quantification of ten compounds within 18 min. Good linearity was obtained, with correlation coefficients above 0.99. The limits of detection (LOD) and limits of quantitation (LOQ) were in the ranges 0.0008-0.05 nmol/mL and 0.002-25.0 nmol/mL, respectively. Precisions (relative standard deviation, RSD) of this method were 0.36-12.70%. Recoveries ranged between 81.83% and 118.04%. Concentrations of these compounds in mouse hypothalamus were determined with this method by Q Exactive LC-MS. Copyright © 2016 Elsevier B.V. All rights reserved.
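
The two validation figures quoted, RSD for precision and spike recovery for accuracy, are simple ratios. The sketch below shows how each is computed; the replicate responses and spike amounts are made-up illustration data, not the paper's measurements.

```python
# Precision (RSD) and accuracy (spike recovery) as used in
# quantitative LC-MS method validation (illustrative data only).
import statistics

def rsd_percent(replicates):
    """RSD (%) = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

def recovery_percent(measured_spiked, measured_blank, amount_added):
    """Fraction of a known spike that the method finds, in percent."""
    return 100.0 * (measured_spiked - measured_blank) / amount_added

peaks = [0.98, 1.02, 1.00, 0.99, 1.01]    # replicate peak-area ratios
print(round(rsd_percent(peaks), 2))       # -> 1.58
print(recovery_percent(12.5, 2.5, 10.0))  # -> 100.0
```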

  12. High resolution optical DNA mapping

    Science.gov (United States)

    Baday, Murat

    Many types of diseases, including cancer and autism, are associated with copy-number variations in the genome. Most of these variations could not be identified with existing sequencing and optical DNA mapping methods. We have developed a multi-color super-resolution technique, with potential for high throughput and low cost, which allows us to recognize more of these variations. Our technique has made a 10-fold improvement in the resolution of optical DNA mapping. Using a 180 kb BAC clone as a model system, we resolved dense patterns from 108 fluorescent labels of two different colors representing two different sequence motifs. Overall, a detailed DNA map with 100 bp resolution was achieved, which has the potential to reveal detailed information about genetic variance and to facilitate medical diagnosis of genetic disease.

  13. CEMRACS 2010: Numerical methods for fusion

    International Nuclear Information System (INIS)

    2011-01-01

    This CEMRACS summer school is devoted to the mathematical and numerical modeling of plasma problems that occur in magnetic or inertial fusion. The main topics of this year are the following: -) asymptotic solutions for fluid models of plasma, -) the hydrodynamics of the implosion and the coupling with radiative transfer in inertial fusion, -) gyrokinetic simulations of magnetic fusion plasmas, and -) Landau damping.

  14. High speed, High resolution terahertz spectrometers

    International Nuclear Information System (INIS)

    Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook

    2008-01-01

    A variety of sources and methods have been developed for terahertz spectroscopy during almost two decades. Terahertz time-domain spectroscopy (THz TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz TDS has been demonstrated, featuring rapid data acquisition and high spectral resolution. Terahertz frequency comb spectroscopy (TFCS) also possesses attractive features for high-precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high-speed, high-resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured using the cross-correlation method to be 270 fs. AOS THz TDS is presented in Fig. 1, which shows a time-domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by fast Fourier transformation (FFT) of the time-domain waveform has a frequency resolution of 100 MHz. The dependence of the signal-to-noise ratio (SNR) on the measurement time is also investigated.
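
The quoted numbers are linked by the basic Fourier relation Δf = 1/T: a spectrum computed from a 10 ns time window can resolve no finer than 100 MHz. A minimal check of that arithmetic:

```python
# Spectral resolution of an FFT over a finite acquisition window:
# delta_f = 1 / T (here: 10 ns window -> 100 MHz resolution).

def frequency_resolution_hz(time_window_s):
    """Frequency bin spacing of an FFT over the given time window."""
    return 1.0 / time_window_s

print(round(frequency_resolution_hz(10e-9) / 1e6))  # -> 100 (MHz)
```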

  15. High-resolution slab gel isoelectric focusing: methods for quantitative electrophoretic transfer and immunodetection of proteins as applied to the study of the multiple isoelectric forms of ornithine decarboxylase.

    Science.gov (United States)

    Reddy, S G; Cochran, B J; Worth, L L; Knutson, V P; Haddox, M K

    1994-04-01

    A high-resolution isoelectric focusing vertical slab gel method that can resolve proteins differing by a single charge was developed and applied to the study of the multiple isoelectric forms of ornithine decarboxylase. Separation of proteins at this high level of resolution was achieved by increasing the ampholyte concentration in the gels to 6%. Various lots of ampholytes, from the same or different commercial sources, differed significantly in their protein-binding capacity. Ampholytes bound to proteins interfered both with the electrophoretic transfer of proteins from the gel to immunoblotting membranes and with the ability of antibodies to interact with proteins on the membranes. Increasing the amount of protein loaded into a gel lane also decreased the efficiency of electrophoretic transfer and immunodetection. To overcome these problems, gel washing and gel electrophoretic transfer protocols for disrupting ampholyte-protein binding and enabling quantitative electrophoretic transfer of proteins were developed. Two gel washing procedures, with either thiocyanate or borate buffers, and a two-step electrophoretic transfer method are described. The choice of method to optimally disrupt ampholyte-protein binding was found to vary with each lot of ampholytes employed.

  16. Unbiased Scanning Method and Data Banking Approach Using Ultra-High Performance Liquid Chromatography Coupled with High-Resolution Mass Spectrometry for Quantitative Comparison of Metabolite Exposure in Plasma across Species Analyzed at Different Dates.

    Science.gov (United States)

    Gao, Hongying; Deng, Shibing; Obach, R Scott

    2015-12-01

    An unbiased scanning methodology using ultra-high-performance liquid chromatography coupled with high-resolution mass spectrometry was used to bank data and plasma samples for comparing data generated at different dates. This method was applied to bank the data generated earlier from animal samples and then to compare the exposure to metabolites in animals versus humans for safety assessment. With neither authentic standards nor prior knowledge of the identities and structures of the metabolites, full scans for precursor ions and all-ion fragments (AIF) were employed with a generic gradient LC method to analyze plasma samples at positive and negative polarity. Of a total of 22 tested drugs and metabolites, 21 analytes were detected using this unbiased scanning method; naproxen was not detected, owing to low sensitivity at negative polarity and interference at positive polarity, and 4'- and 5-hydroxydiclofenac were not separated by the generic UPLC method. Statistical analysis of the peak-area ratios of the analytes versus the internal standard in five repetitive analyses over approximately 1 year demonstrated that the analysis variation was significantly different from sample instability. The confidence limits for comparing exposure using peak-area ratios of metabolites in animal versus human plasma measured approximately 1 year apart were comparable to those for analyses undertaken side by side on the same days. These results showed that it is feasible to compare data generated at different dates with neither authentic standards nor prior knowledge of the analytes.

  17. High resolution sequence stratigraphy in China

    International Nuclear Information System (INIS)

    Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju

    2008-01-01

    Since high-resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has gone through two development stages in China: an initial stage of theoretical research, followed by a stage of further theoretical development and application; a stage of theoretical maturity and wide application is now beginning. Practice has proved that high-resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and its research field has spread to the exploration of coal, uranium and other stratabound deposits. However, the theory of high-resolution sequence stratigraphy still has some shortcomings and should be improved in many aspects. The authors point out that high-resolution sequence stratigraphy should be quantitatively characterized and modeled by computer techniques. (authors)

  18. High resolution CT of the chest

    Energy Technology Data Exchange (ETDEWEB)

    Barneveld Binkhuysen, F H [Eemland Hospital (Netherlands), Dept. of Radiology

    1996-12-31

    Compared to conventional CT, high-resolution CT (HRCT) shows several additional anatomical structures which might affect both diagnosis and therapy. These additional anatomical structures are discussed briefly in this article. (18 refs.).

  19. High-resolution spectrometer at PEP

    International Nuclear Information System (INIS)

    Weiss, J.M.; HRS Collaboration.

    1982-01-01

    A description is presented of the High Resolution Spectrometer experiment (PEP-12) now running at PEP. The advanced capabilities of the detector are demonstrated, with first physics results expected in the coming months.

  20. Structure of high-resolution NMR spectra

    CERN Document Server

    Corio, PL

    2012-01-01

    Structure of High-Resolution NMR Spectra provides the principles, theories, and mathematical and physical concepts of high-resolution nuclear magnetic resonance spectra. The book presents the elementary theory of magnetic resonance; the quantum mechanical theory of angular momentum; the general theory of steady-state spectra; and multiple quantum transitions, double resonance and spin echo experiments. Physicists, chemists, and researchers will find the book a valuable reference text.

  1. Dynamic high resolution imaging of rats

    International Nuclear Information System (INIS)

    Miyaoka, R.S.; Lewellen, T.K.; Bice, A.N.

    1990-01-01

    A positron emission tomograph with the sensitivity and resolution to perform dynamic imaging of rats would be an invaluable tool for biological researchers. In this paper, the authors determine the biological criteria for dynamic positron emission imaging of rats. To be useful, 3 mm isotropic resolution and 2-3 second time binning were necessary characteristics for such a dedicated tomograph. A single plane in which two objects of interest could be imaged simultaneously was considered acceptable. Multi-layered detector designs were evaluated as a possible solution to the dynamic and high-resolution imaging requirements. The University of Washington photon history generator was used to generate data to investigate a tomograph's sensitivity to true, scattered and random coincidences for varying detector ring diameters. Intrinsic spatial uniformity advantages of multi-layered detector designs over conventional detector designs were investigated using a Monte Carlo program. As a result, a modular three-layered detector prototype is being developed. A module will consist of a layer of five 3.5 mm wide crystals and two layers of six 2.5 mm wide crystals. The authors believe adequate sampling can be achieved with a stationary detector system using these modules. Economical crystal decoding strategies have been investigated, and simulations have been run to investigate optimum light-channeling methods for block decoding strategies. An analog block decoding method has been proposed and will be experimentally evaluated to determine whether it can provide the desired performance.

  2. Homogenization-based topology optimization for high-resolution manufacturable micro-structures

    DEFF Research Database (Denmark)

    Groen, Jeroen Peter; Sigmund, Ole

    2018-01-01

    This paper presents a projection method to obtain high-resolution, manufacturable structures from efficient and coarse-scale, homogenization-based topology optimization results. The presented approach bridges coarse and fine scales, such that the complex periodic micro-structures can be represented...... by a smooth and continuous lattice on the fine mesh. A heuristic methodology allows control of the projected topology, such that a minimum length-scale on both solid and void features is ensured in the final result. Numerical examples show excellent behavior of the method, where performances of the projected...

  3. Survey of numerical methods for compressible fluids

    Energy Technology Data Exchange (ETDEWEB)

    Sod, G A

    1977-06-01

    The finite difference methods of Godunov, Hyman, Lax-Wendroff (two-step), MacCormack, Rusanov, the upwind scheme, the hybrid scheme of Harten and Zwas, the antidiffusion method of Boris and Book, and the artificial compression method of Harten are compared with the random choice method, known as Glimm's method. The methods are used to integrate the one-dimensional equations of gas dynamics for an inviscid fluid. The results are compared and demonstrate that Glimm's method has several advantages. 16 figs., 4 tables.
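
The contrast between two of the surveyed schemes is easy to reproduce on scalar linear advection instead of the full gas-dynamics equations (an illustrative simplification): 1st-order upwind smears a square pulse through numerical diffusion, while Lax-Wendroff keeps it sharper but overshoots near the jump. Grid size and CFL number below are arbitrary choices.

```python
# 1st-order upwind vs (one-step) Lax-Wendroff on u_t + a u_x = 0
# with a square pulse on a periodic grid (illustrative sketch).

def advect(u, courant, steps, scheme):
    n = len(u)
    for _ in range(steps):
        new = u[:]
        for i in range(n):
            um, up = u[i - 1], u[(i + 1) % n]   # periodic neighbours
            if scheme == "upwind":
                new[i] = u[i] - courant * (u[i] - um)
            else:  # Lax-Wendroff
                new[i] = (u[i] - 0.5 * courant * (up - um)
                          + 0.5 * courant ** 2 * (up - 2 * u[i] + um))
        u = new
    return u

u0 = [1.0 if 4 <= i < 8 else 0.0 for i in range(20)]
uw = advect(u0, 0.5, 8, "upwind")
lw = advect(u0, 0.5, 8, "lax-wendroff")
# upwind's peak decays below 1 (diffusion); Lax-Wendroff overshoots 1
print(round(max(uw), 3), round(max(lw), 3))
```

This diffusion-versus-oscillation trade-off is exactly what the flux limiters discussed in record 1 are designed to resolve.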

  4. High-Resolution Integrated Optical System

    Science.gov (United States)

    Prakapenka, V. B.; Goncharov, A. F.; Holtgrewe, N.; Greenberg, E.

    2017-12-01

    Raman and optical spectroscopy in situ at the extreme high pressure and temperature conditions relevant to planets' deep interiors is a versatile tool for characterizing a wide range of properties of minerals essential for understanding the structure, composition, and evolution of terrestrial and giant planets. Optical methods, greatly complementing X-ray diffraction and spectroscopy techniques, become crucial when dealing with light elements. The study of vibrational and optical properties of minerals and volatiles was a topic of many research efforts in past decades. A great deal of information on materials properties under extreme pressure and temperature has been acquired, including that related to structural phase changes, electronic transitions, and chemical transformations. These provide important insight into the physical and chemical states of planetary interiors (e.g. the nature of deep reservoirs) and their dynamics, including heat and mass transport (e.g. the deep carbon cycle). Optical and vibrational spectroscopy can also be very instrumental for elucidating the nature of molten states such as those related to the Earth's volatiles (CO2, CH4, H2O), aqueous fluids and silicate melts, planetary ices (H2O, CH4, NH3), noble gases, and H2. An optical spectroscopy study performed concomitantly with X-ray diffraction and spectroscopy measurements at the GSECARS beamlines, on the same sample and at the same P-T conditions, would greatly enhance the quality of this research and, moreover, provide unique new information on the chemical state of matter. The advanced, high-resolution, user-friendly integrated optical system is currently under construction and expected to be completed by 2018. In our conceptual design we have implemented Raman spectroscopy with five excitation wavelengths (266, 473, 532, 660, 946 nm), confocal imaging, and double-sided IR laser heating combined with high-temperature Raman (including coherent anti-Stokes Raman scattering) and

  5. Numerical methods in physical and economic sciences

    International Nuclear Information System (INIS)

    Lions, J.L.; Marchouk, G.I.

    1974-01-01

    This book is the first of a series to be published simultaneously in French and Russian. Some results obtained in the framework of an agreement on French-Soviet scientific collaboration in the field of information processing are presented. In the first part, iterative methods for solving linear systems are studied, with new methods compared to already known ones. Iterative methods for the minimization of quadratic functionals are then studied. In the second part, optimization problems with one or many criteria, arising from problems in physics and economics, are considered, and splitting and decentralizing methods are systematically studied [fr

  6. High resolution CT of temporal bone trauma

    International Nuclear Information System (INIS)

    Youn, Eun Kyung

    1986-01-01

    Radiographic studies of the temporal bone following head trauma are indicated when there is cerebrospinal fluid otorrhea or rhinorrhea, hearing loss, or facial nerve paralysis. Plain radiography displays only 17-30% of temporal bone fractures, and pluridirectional tomography is both difficult to perform, particularly in the acutely ill patient, and less satisfactory for the demonstration of fine fractures. Consequently, high-resolution CT is the imaging method of choice for the investigation of suspected temporal bone trauma and allows spatial resolution of fine bony detail comparable to that attainable by conventional tomography. Eight cases of temporal bone trauma were examined at Korea General Hospital from April 1985 through May 1986. The results were as follows: Seven patients (87%) suffered longitudinal fractures. In 6 patients who had purely conductive hearing loss, CT revealed various ossicular chain abnormalities. In one patient who had sensorineural hearing loss, CT demonstrated intact ossicles with a fracture near the lateral wall of the lateral semicircular canal. In one patient who had mixed hearing loss, CT showed a complex fracture.

  7. Quantum dynamic imaging theoretical and numerical methods

    CERN Document Server

    Ivanov, Misha

    2011-01-01

    Studying and using light or "photons" to image and then to control and transmit molecular information is among the most challenging and significant research fields to emerge in recent years. One of the fastest growing areas involves research in the temporal imaging of quantum phenomena, ranging from molecular dynamics in the femtosecond (10^-15 s) time regime for atomic motion to the attosecond (10^-18 s) time scale of electron motion. In fact, the attosecond "revolution" is now recognized as one of the most important recent breakthroughs and innovations in the science of the 21st century. A major participant in the development of ultrafast femtosecond and attosecond temporal imaging of molecular quantum phenomena has been theory and numerical simulation of the nonlinear, non-perturbative response of atoms and molecules to ultrashort laser pulses. Therefore, imaging quantum dynamics is a new frontier of science requiring advanced mathematical approaches for analyzing and solving spatial and temporal multidimensional partial differ...

  8. A comparison of high-resolution specific conductance-based end-member mixing analysis and a graphical method for baseflow separation of four streams in hydrologically challenging agricultural watersheds

    Science.gov (United States)

    Kronholm, Scott C.; Capel, Paul D.

    2015-01-01

    Quantifying the relative contributions of different sources of water to a stream hydrograph is important for understanding the hydrology and water quality dynamics of a given watershed. To compare the performance of two methods of hydrograph separation, a graphical program [baseflow index (BFI)] and an end-member mixing analysis that used high-resolution specific conductance measurements (SC-EMMA) were used to estimate daily and average long-term slowflow additions of water to four small, primarily agricultural streams with different dominant sources of water (natural groundwater, overland flow, subsurface drain outflow, and groundwater from irrigation). Because the result of hydrograph separation by SC-EMMA is strongly related to the choice of slowflow and fastflow end-member values, a sensitivity analysis was conducted based on the various approaches reported in the literature to inform the selection of end-members. There were substantial discrepancies among the BFI and SC-EMMA, and neither method produced reasonable results for all four streams. Streams that had a small difference in the SC of slowflow compared with fastflow or did not have a monotonic relationship between streamflow and stream SC posed a challenge to the SC-EMMA method. The utility of the graphical BFI program was limited in the stream that had only gradual changes in streamflow. The results of this comparison suggest that the two methods may be quantifying different sources of water. Even though both methods are easy to apply, they should be applied with consideration of the streamflow and/or SC characteristics of a stream, especially where anthropogenic water sources (irrigation and subsurface drainage) are present.
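
The SC-EMMA idea reduces, for two components, to one linear mixing equation: stream specific conductance is a conservative mix of a slowflow and a fastflow end-member, so the slowflow fraction follows directly. The end-member values in the sketch are illustrative assumptions, not the study's calibrated values.

```python
# Two-component end-member mixing analysis (EMMA) on specific
# conductance: solve sc_stream = f*sc_slow + (1-f)*sc_fast for f.

def slowflow_fraction(sc_stream, sc_slow, sc_fast):
    """Fraction of streamflow supplied by the slowflow end-member."""
    return (sc_stream - sc_fast) / (sc_slow - sc_fast)

# e.g. slowflow (groundwater) 600 uS/cm, fastflow (runoff) 100 uS/cm
f = slowflow_fraction(sc_stream=475.0, sc_slow=600.0, sc_fast=100.0)
print(f)  # -> 0.75
```

The sensitivity the abstract describes is visible here: when sc_slow and sc_fast are close together, the denominator shrinks and small errors in stream SC produce large errors in f.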

  9. A new method for simultaneous detection and discrimination of Bovine herpesvirus types 1 (BoHV-1) and 5 (BoHV-5) using real time PCR with high resolution melting (HRM) analysis.

    Science.gov (United States)

    Marin, M S; Quintana, S; Leunda, M R; Recavarren, M; Pagnuco, I; Späth, E; Pérez, S; Odeón, A

    2016-01-01

    Bovine herpesvirus types 1 (BoHV-1) and 5 (BoHV-5) are antigenically and genetically similar. The aim of this study was to develop a simple and reliable one-step real time PCR assay with high resolution melting (HRM) analysis for the simultaneous detection and differentiation of BoHV-1 and BoHV-5. Optimization of assay conditions was performed with DNA from reference strains. Then, DNA from field isolates, clinical samples and tissue samples of experimentally infected animals were studied by real time PCR-HRM. An efficient amplification of real time PCR products was obtained, and a clear melting curve and appropriate melting peaks for both viruses were achieved in the HRM curve analysis for BoHV type identification. BoHV was identified in all of the isolates and clinical samples, and BoHV types were properly differentiated. Furthermore, viral DNA was detected in 12/18 and 7/18 samples from BoHV-1- and BoHV-5-infected calves, respectively. Real time PCR-HRM achieved a higher sensitivity compared with virus isolation or conventional PCR. In this study, HRM was used as a novel procedure. This method provides rapid, sensitive, specific and simultaneous detection of bovine alpha-herpesviruses DNA. Thus, this technique is an excellent tool for diagnosis, research and epidemiological studies of these viruses in cattle. Copyright © 2015 Elsevier B.V. All rights reserved.
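
The HRM read-out that separates the two virus types rests on locating each product's melting temperature as the peak of -dF/dT on the fluorescence-versus-temperature curve. The sketch below applies that derivative analysis to a synthetic sigmoidal melt curve; the curve shape and Tm value are illustrative assumptions, not BoHV data.

```python
# Melting-temperature extraction from a high-resolution melting
# curve: Tm = temperature of the steepest fluorescence drop,
# i.e. the maximum of -dF/dT (synthetic curve, Tm = 85 C).
import math

def melt_tm(temps, fluor):
    """Tm via central-difference derivative of fluorescence."""
    derivs = [-(fluor[i + 1] - fluor[i - 1]) / (temps[i + 1] - temps[i - 1])
              for i in range(1, len(temps) - 1)]
    return temps[1 + max(range(len(derivs)), key=derivs.__getitem__)]

temps = [70 + 0.5 * i for i in range(61)]                 # 70..100 C
fluor = [1 / (1 + math.exp((t - 85.0) / 0.8)) for t in temps]
print(melt_tm(temps, fluor))  # -> 85.0
```

Two amplicons with different GC content or length would show distinct Tm peaks, which is how the assay distinguishes BoHV-1 from BoHV-5 in one reaction.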

  10. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
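
The payoff can be estimated with a back-of-the-envelope work count: a global scheme advances every element at the smallest CFL-stable step, while LTS lets each element take (in one common variant) the largest power-of-two multiple of the finest step that its own size allows. The element sizes, CFL constant and power-of-two grouping below are illustrative assumptions.

```python
# Work-count sketch of local time stepping (LTS) vs a global step:
# a few tiny elements near a fault force the global dt, while LTS
# lets the many coarse elements take larger steps.

def stable_dt(h, wave_speed, cfl=0.5):
    """CFL-stable time step for an element of size h."""
    return cfl * h / wave_speed

def lts_speedup(element_sizes, wave_speed):
    dts = [stable_dt(h, wave_speed) for h in element_sizes]
    dt_min = min(dts)
    global_work = len(dts) * (1.0 / dt_min)   # element-updates per unit time
    lts_work = 0.0
    for dt in dts:
        level = 1
        while level * 2 * dt_min <= dt:       # largest power-of-two multiple
            level *= 2
        lts_work += 1.0 / (level * dt_min)
    return global_work / lts_work

# 4 tiny elements near a fault, 96 coarse elements elsewhere
sizes = [1.0] * 4 + [8.0] * 96
print(round(lts_speedup(sizes, wave_speed=2.0), 2))  # -> 6.25
```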

  11. Yeast expression proteomics by high-resolution mass spectrometry

    DEFF Research Database (Denmark)

    Walther, Tobias C; Olsen, Jesper Velgaard; Mann, Matthias

    2010-01-01

    -translational controls contribute majorly to the regulation of protein abundance, for example in the heat shock stress response. The development of new sample preparation methods, high-resolution mass spectrometry and novel bioinformatic tools closes this gap and allows global quantitation of the yeast proteome under different...

  12. Extraction method based on emulsion breaking for the determination of Cu, Fe and Pb in Brazilian automotive gasoline samples by high-resolution continuum source flame atomic absorption spectrometry

    Science.gov (United States)

    Leite, Clarice C.; de Jesus, Alexandre; Kolling, Leandro; Ferrão, Marco F.; Samios, Dimitrios; Silva, Márcia M.

    2018-04-01

    This work reports a new method for extraction of Cu, Fe and Pb from Brazilian automotive gasoline and their determination by high-resolution continuum source flame atomic absorption spectrometry (HR-CS FAAS). The method was based on the formation of a water-in-oil emulsion by mixing 2.0 mL of an extraction solution consisting of 12% (w/v) Triton X-100 and 5% (v/v) HNO3 with 10 mL of sample. After heating at 90 °C for 10 min, two well-defined phases were formed. The bottom phase (approximately 3.5 mL), composed of acidified water and part of the ethanol originally present in the gasoline sample, contained the extracted analytes and was analyzed. The surfactant and HNO3 concentrations and the heating temperature employed in the process were optimized by Doehlert design, using a Brazilian gasoline sample spiked with Cu, Fe and Pb (organometallic compounds). The extraction efficiency was investigated and ranged from 80 to 89%. Calibration was accomplished by a matrix-matching method: the standards were prepared by performing the same extraction procedure used for the samples on emulsions of an analyte-free gasoline sample spiked with inorganic standards. Limits of detection obtained were 3.0, 5.0 and 14.0 μg L-1 for Cu, Fe and Pb, respectively. These limits were estimated for the original sample taking into account the preconcentration factor obtained. The accuracy of the proposed method was assured by recovery tests spiking the samples with organometallic standards, and the obtained values ranged from 98 to 105%. Ten gasoline samples were analyzed and Fe was found in four samples (0.04-0.35 mg L-1), while Cu (0.28 mg L-1) and Pb (0.60 mg L-1) were each found in just one sample.
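
    As a rough illustration of the matrix-matched calibration and detection-limit arithmetic described above, the sketch below fits a hypothetical calibration line and refers a 3-sigma LOD back to the original sample through the preconcentration factor (10 mL of sample into roughly 3.5 mL of extract). All numbers are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical matrix-matched standards: absorbance vs. concentration (ug/L)
conc = np.array([0.0, 20.0, 40.0, 80.0, 160.0])
absorb = np.array([0.002, 0.021, 0.041, 0.082, 0.161])

slope, intercept = np.polyfit(conc, absorb, 1)  # linear calibration fit

# LOD = 3 * s_blank / slope, then referred back to the original sample
s_blank = 0.001          # std. dev. of blank readings (assumed)
preconc = 10.0 / 3.5     # 10 mL sample concentrated into ~3.5 mL extract
lod_extract = 3 * s_blank / slope
lod_sample = lod_extract / preconc
print(round(lod_sample, 2), "ug/L referred to the original sample")
```

The same division by the preconcentration factor is what lets a modest instrumental LOD translate into a lower effective LOD in the gasoline itself.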

  13. High-resolution subgrid models: background, grid generation, and implementation

    Science.gov (United States)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with a comparably highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on standard PC hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
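
    The volume bookkeeping that makes a coarse cell "wet, partially wet, or dry" without a drying threshold can be sketched directly: the wet volume of a coarse cell is the sum of the water columns above the subgrid bathymetry. This is a minimal sketch of that one idea, with invented bathymetry values; it is not the full Casulli scheme.

```python
import numpy as np

def wet_volume(eta, z_bed, subcell_area):
    """Water volume in one coarse cell for water level eta, using the
    subgrid bathymetry z_bed (one bed elevation per subcell). The cell
    may be fully wet, partially wet, or dry -- no drying threshold."""
    depth = np.maximum(eta - z_bed, 0.0)   # dry subcells contribute zero
    return depth.sum() * subcell_area

# 4x4 subgrid bathymetry inside one coarse cell (bed elevations in m)
z_bed = np.array([[0.0, 0.2, 0.4, 0.8],
                  [0.1, 0.3, 0.5, 0.9],
                  [0.0, 0.2, 0.6, 1.0],
                  [0.1, 0.4, 0.7, 1.2]])
a_sub = 25.0  # m^2 per subcell

print(wet_volume(1.5, z_bed, a_sub))   # fully wet cell
print(wet_volume(0.35, z_bed, a_sub))  # partially wet: only low subcells hold water
print(wet_volume(-0.5, z_bed, a_sub))  # dry cell
```

Because the volume-versus-level relation is built from the fine bathymetry, the coarse solver conserves mass exactly as channels flood and drain.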

  14. High-resolution analysis of the mechanical behavior of tissue

    Science.gov (United States)

    Hudnut, Alexa W.; Armani, Andrea M.

    2017-06-01

    The mechanical behavior and properties of biomaterials, such as tissue, have been directly and indirectly connected to numerous malignant physiological states. For example, an increase in the Young's Modulus of tissue can be indicative of cancer. Due to the heterogeneity of biomaterials, it is extremely important to perform these measurements using whole or unprocessed tissue because the tissue matrix contains important information about the intercellular interactions and the structure. Thus, developing high-resolution approaches that can accurately measure the elasticity of unprocessed tissue samples is of great interest. Unfortunately, conventional elastography methods such as atomic force microscopy, compression testing, and ultrasound elastography either require sample processing or have poor resolution. In the present work, we demonstrate the characterization of unprocessed salmon muscle using an optical polarimetric elastography system. We compare the results of compression testing within different samples of salmon skeletal muscle with different numbers of collagen membranes to characterize differences in heterogeneity. Using the intrinsic collagen membranes as markers, we determine the resolution of the system when testing biomaterials. The device reproducibly measures the stiffness of the tissues at variable strains. By analyzing the amount of energy lost by the sample during compression, collagen membranes that are 500 μm in size are detected.
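
    The "energy lost by the sample during compression" mentioned above is, per unit volume, the area enclosed between the loading and unloading stress-strain branches. A minimal sketch with a synthetic load-unload cycle follows; all material numbers are invented and the trapezoid integration is a generic choice, not the instrument's actual analysis.

```python
import numpy as np

def trapezoid(y, x):
    """Composite trapezoid rule (avoids version-specific numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def dissipated_energy(strain, stress_load, stress_unload):
    """Energy lost per unit volume over one cycle: the area between
    the loading and unloading stress-strain curves."""
    return trapezoid(stress_load - stress_unload, strain)

strain = np.linspace(0.0, 0.1, 101)
stress_load = 50e3 * strain                                       # Pa, loading branch
stress_unload = 50e3 * strain - 2e3 * np.sin(np.pi * strain / 0.1)  # lags on unload

E = dissipated_energy(strain, stress_load, stress_unload)
print(round(E, 1))  # close to the analytic loop area 400/pi ~ 127.3 J/m^3
```

A stiff membrane embedded in softer tissue shows up as a local change in this hysteresis, which is how intrinsic markers can reveal the system's resolution.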

  15. Numerical methods for coupled fracture problems

    Science.gov (United States)

    Viesca, Robert C.; Garagash, Dmitry I.

    2018-04-01

    We consider numerical solutions in which the linear elastic response to an opening- or sliding-mode fracture couples with one or more processes. Classic examples of such problems include traction-free cracks leading to stress singularities or cracks with cohesive-zone strength requirements leading to non-singular stress distributions. These classical problems have characteristic square-root asymptotic behavior for stress, relative displacement, or their derivatives. Prior work has shown that such asymptotics lead to a natural quadrature of the singular integrals at roots of Chebyshev polynomials of the first, second, third, or fourth kind. We show that such quadratures lead to convenient techniques for interpolation, differentiation, and integration, with the potential for spectral accuracy. We further show that these techniques, with slight amendment, may continue to be used for non-classical problems which lack the classical asymptotic behavior. We consider solutions to example problems of both the classical and non-classical variety (e.g., fluid-driven opening-mode fracture and fault shear rupture driven by thermal weakening), with comparisons to analytical solutions or asymptotes, where available.
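
    The square-root asymptotics are what make Chebyshev-root quadratures natural here. As a minimal generic illustration (not the authors' full scheme), Gauss-Chebyshev quadrature of the first kind integrates f(x)/sqrt(1-x^2) using the roots of T_n and is exact for polynomials of degree up to 2n-1:

```python
import math

def gauss_chebyshev(f, n):
    """Gauss-Chebyshev quadrature (first kind): approximates the
    integral of f(x)/sqrt(1-x^2) over [-1,1] at the n roots of T_n."""
    nodes = [math.cos((2 * k - 1) * math.pi / (2 * n)) for k in range(1, n + 1)]
    return (math.pi / n) * sum(f(x) for x in nodes)  # equal weights pi/n

# Exact value of integral x^2/sqrt(1-x^2) dx over [-1,1] is pi/2
approx = gauss_chebyshev(lambda x: x * x, 8)
print(abs(approx - math.pi / 2) < 1e-12)  # True: exact for this low degree
```

The second-, third-, and fourth-kind variants differ only in weight function and node placement, matching the different crack-tip behaviors named above.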

  16. A high resolution portable spectroscopy system

    International Nuclear Information System (INIS)

    Kulkarni, C.P.; Vaidya, P.P.; Paulson, M.; Bhatnagar, P.V.; Pande, S.S.; Padmini, S.

    2003-01-01

    Full text: This paper describes the system details of a High Resolution Portable Spectroscopy System (HRPSS) developed at Electronics Division, BARC. The system can be used for laboratory-class, high-resolution nuclear spectroscopy applications. The HRPSS consists of a specially designed compact NIM bin, with built-in power supplies, accommodating a low-power, high-resolution MCA and an on-board embedded computer for spectrum building and communication. A NIM based spectroscopy amplifier and a HV module for detector bias are integrated (plug-in) in the bin. The system communicates with a host PC via a serial link. Along with a laptop PC and a portable HP-Ge detector, the HRPSS offers laboratory-class performance for portable applications.

  17. Riemann solvers and numerical methods for fluid dynamics a practical introduction

    CERN Document Server

    Toro, Eleuterio F

    2009-01-01

    High resolution upwind and centred methods are a mature generation of computational techniques applicable to a range of disciplines, Computational Fluid Dynamics being the most prominent. This book gives a practical presentation of this class of techniques.

  18. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress toward testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed.

  19. Method development for the determination of fluorine in toothpaste via molecular absorption of aluminum mono fluoride using a high-resolution continuum source nitrous oxide/acetylene flame atomic absorption spectrophotometer.

    Science.gov (United States)

    Ozbek, Nil; Akman, Suleyman

    2012-05-30

    Fluorine was determined via the rotational molecular absorption line of aluminum mono fluoride (AlF) generated in a C(2)H(2)/N(2)O flame at 227.4613 nm using a high-resolution continuum source flame atomic absorption spectrophotometer (HR-CS-FAAS). The effects of AlF wavelength, burner height, fuel rate (C(2)H(2)/N(2)O) and amount of Al on the accuracy, precision and sensitivity were investigated and optimized. The Al-F absorption band at 227.4613 nm was found to be the most suitable analytical line with respect to sensitivity and spectral interferences. Maximum sensitivity and good linearity were obtained in an acetylene-nitrous oxide flame at a flow rate of 210 L h(-1) and a burner height of 8 mm using 3000 mg L(-1) of Al for 10-1000 mg L(-1) of F. The accuracy and precision of the method were tested by analyzing spiked samples and a waste water certified reference material. The results were in good agreement with the certified and spiked amounts, and the day-to-day precision over the course of the study was satisfactory (RSD < 10%). The limit of detection and characteristic concentration of the method were 5.5 mg L(-1) and 72.8 mg L(-1), respectively. Finally, the fluorine concentrations in several toothpaste samples were determined. The measured concentrations did not differ significantly from the values declared by the producers. The method was simple, fast, accurate and sensitive. Copyright © 2012 Elsevier B.V. All rights reserved.

  20. Numerical method improvement for a subchannel code

    Energy Technology Data Exchange (ETDEWEB)

    Ding, W.J.; Gou, J.L.; Shan, J.Q. [Xi' an Jiaotong Univ., Shaanxi (China). School of Nuclear Science and Technology

    2016-07-15

    Previous studies showed that subchannel codes spend most of their CPU time solving the matrix formed by the conservation equations. Traditional matrix-solving methods such as Gaussian elimination and Gauss-Seidel iteration cannot meet the requirement of computational efficiency. Therefore, a new algorithm for solving the block penta-diagonal matrix is designed based on Stone's incomplete LU (ILU) decomposition method. In the new algorithm, the original block penta-diagonal matrix is decomposed into a block upper triangular matrix and a block lower triangular matrix, plus a nonzero small matrix. After that, the LU algorithm is applied to solve the matrix until convergence. In order to compare the computational efficiency, the newly designed algorithm is applied to the ATHAS code in this paper. The calculation results show that more than 80 % of the total CPU time can be saved with the newly designed ILU algorithm for a 324-channel PWR assembly problem, compared with the original ATHAS code.
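
    Stone's block ILU for the block penta-diagonal system is not reproduced here, but the flavor of pattern-restricted incomplete factorization can be sketched on a scalar penta-diagonal matrix. Note one caveat: for a purely penta-diagonal scalar matrix the LU fill never leaves the band, so the pattern-restricted factorization below happens to be exact; it only becomes genuinely "incomplete" for the block case the paper treats.

```python
import numpy as np

def ilu0(A, pattern):
    """Incomplete LU keeping fill only where `pattern` (boolean mask) is
    True; multipliers are stored in the strict lower triangle of LU."""
    n = A.shape[0]
    LU = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            if not pattern[i, k]:
                continue
            LU[i, k] /= LU[k, k]
            for j in range(k + 1, n):
                if pattern[i, j]:
                    LU[i, j] -= LU[i, k] * LU[k, j]
    return LU

def lu_solve(LU, b):
    """Forward/back substitution with the combined LU storage."""
    n = len(b)
    y = b.astype(float).copy()
    for i in range(1, n):            # unit lower triangular solve
        y[i] -= LU[i, :i] @ y[:i]
    x = y
    for i in range(n - 1, -1, -1):   # upper triangular solve
        x[i] = (x[i] - LU[i, i + 1:] @ x[i + 1:]) / LU[i, i]
    return x

# Diagonally dominant penta-diagonal test matrix
n = 12
A = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i == j: A[i, j] = 6.0
        elif abs(i - j) == 1: A[i, j] = -2.0
        elif abs(i - j) == 2: A[i, j] = -0.5
pattern = A != 0
b = np.arange(1.0, n + 1)
x = lu_solve(ilu0(A, pattern), b)
print(np.allclose(A @ x, b))  # True: the banded factorization is exact here
```

In the iterative setting of the abstract, such a factorization serves as the preconditioner applied repeatedly until the outer iteration converges.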

  1. High-resolution satellite image segmentation using Hölder exponents

    Indian Academy of Sciences (India)

    Keywords. High resolution image; texture analysis; segmentation; IKONOS; Hölder exponent; cluster. ... are that. • it can be used as a tool to measure the roughness ... uses reinforcement learning to learn the reward values of ..... The numerical.

  2. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions to nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been largely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strongly nonlinear ODEs. There are two chapters devoted to solving nonlinear ODEs using numerical methods, since, in practice, high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximation methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-dependent ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...
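
    Of the methods listed, simple shooting is easy to sketch: guess the unknown initial slope, integrate the ODE, and adjust the guess until the far boundary condition is met. The toy boundary value problem y'' = -y, y(0) = 0, y(pi/2) = 1 below, solved with RK4 integration and bisection on the slope, is an illustrative choice, not an example from the book.

```python
import math

def integrate(s, x_end, n=200):
    """RK4-integrate y'' = -y from x = 0 with y(0) = 0, y'(0) = s;
    return y(x_end)."""
    h = x_end / n
    y, v = 0.0, s
    f = lambda y, v: (v, -y)             # first-order system (y' = v, v' = -y)
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + h/2*k1[0], v + h/2*k1[1])
        k3 = f(y + h/2*k2[0], v + h/2*k2[1])
        k4 = f(y + h*k3[0], v + h*k3[1])
        y += h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        v += h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    return y

# Simple shooting: bisect on the unknown slope s = y'(0) so that y(pi/2) = 1
target, a, b = 1.0, 0.0, 2.0
for _ in range(60):
    mid = 0.5 * (a + b)
    if integrate(mid, math.pi / 2) < target:
        a = mid
    else:
        b = mid
print(round(0.5 * (a + b), 6))  # exact solution y = sin(x) has y'(0) = 1
```

Multiple shooting subdivides the interval and enforces continuity at the joints, which tames the sensitivity that single shooting suffers on stiff or long intervals.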

  3. HIGH RESOLUTION AIRBORNE SHALLOW WATER MAPPING

    Directory of Open Access Journals (Sweden)

    F. Steinbacher

    2012-07-01

    Full Text Available In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40 km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities on further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye, the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter.
By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved.

  4. High Resolution Airborne Shallow Water Mapping

    Science.gov (United States)

    Steinbacher, F.; Pfennigbauer, M.; Aufleger, M.; Ullrich, A.

    2012-07-01

    In order to meet the requirements of the European Water Framework Directive (EU-WFD), authorities face the problem of repeatedly performing area-wide surveying of all kinds of inland waters. Especially for mid-sized or small rivers this is a considerable challenge imposing insurmountable logistical efforts and costs. It is therefore investigated if large-scale surveying of a river system on an operational basis is feasible by employing airborne hydrographic laser scanning. In cooperation with the Bavarian Water Authority (WWA Weilheim) a pilot project was initiated by the Unit of Hydraulic Engineering at the University of Innsbruck and RIEGL Laser Measurement Systems exploiting the possibilities of a new LIDAR measurement system with high spatial resolution and high measurement rate to capture about 70 km of riverbed and foreland for the river Loisach in Bavaria/Germany and the estuary and parts of the shoreline (about 40km in length) of lake Ammersee. The entire area surveyed was referenced to classic terrestrial cross-section surveys with the aim to derive products for the monitoring and managing needs of the inland water bodies forced by the EU-WFD. The survey was performed in July 2011 by helicopter and airplane and took 3 days in total. In addition, high resolution areal images were taken to provide an optical reference, offering a wide range of possibilities on further research, monitoring, and managing responsibilities. The operating altitude was about 500 m to maintain eye-safety, even for the aided eye, the airspeed was about 55 kts for the helicopter and 75 kts for the aircraft. The helicopter was used in the alpine regions while the fixed wing aircraft was used in the plains and the urban area, using appropriate scan rates to receive evenly distributed point clouds. The resulting point density ranged from 10 to 25 points per square meter. 
By carefully selecting days with optimum water quality, satisfactory penetration down to the river bed was achieved.

  5. Numerical Methods for Partial Differential Equations.

    Science.gov (United States)

    1984-01-09

    iteration or the conjugate gradient method. The smoothing sweeps are used to annihilate the highly oscillatory (compared to the grid spacing) components of the error. [Figure 4: QR factorization of a BLTE (1,2) matrix via plane rotations; the remainder of the scanned text is illegible.]

  6. Numerical methods for stochastic partial differential equations with white noise

    CERN Document Server

    Zhang, Zhongqiang

    2017-01-01

    This book covers numerical methods for stochastic partial differential equations with white noise using the framework of Wong-Zakai approximation. The book begins with some motivational and background material in the introductory chapters and is divided into three parts. Part I covers numerical stochastic ordinary differential equations. Here the authors start with numerical methods for SDEs with delay using the Wong-Zakai approximation and finite difference in time. Part II covers temporal white noise. Here the authors consider SPDEs as PDEs driven by white noise, where discretization of white noise (Brownian motion) leads to PDEs with smooth noise, which can then be treated by numerical methods for PDEs. In this part, recursive algorithms based on Wiener chaos expansion and stochastic collocation methods are presented for linear stochastic advection-diffusion-reaction equations. In addition, stochastic Euler equations are exploited as an application of stochastic collocation methods, where a numerical compa...
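
    As a minimal taste of the subject matter of Part I (not an example from the book), the Euler-Maruyama scheme below discretizes geometric Brownian motion and checks the weak approximation of the mean; all parameters are arbitrary choices.

```python
import numpy as np

def euler_maruyama_gbm(x0, mu, sigma, T, n_steps, n_paths, rng):
    """Euler-Maruyama for dX = mu*X dt + sigma*X dW (geometric
    Brownian motion), advancing n_paths sample paths at once."""
    dt = T / n_steps
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Brownian increments
        x = x + mu * x * dt + sigma * x * dW
    return x

rng = np.random.default_rng(0)
xT = euler_maruyama_gbm(1.0, 0.05, 0.2, 1.0, 50, 20000, rng)
# weak approximation of E[X_T] = exp(mu*T)
print(abs(xT.mean() - np.exp(0.05)) < 0.01)
```

The Wong-Zakai idea the book builds on replaces the Brownian increments with a smooth (e.g., piecewise-linear) approximation of the path, turning the SPDE into a PDE with smooth noise that standard PDE solvers can handle.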

  7. Optimized cleanup method for the determination of short chain polychlorinated n-alkanes in sediments by high resolution gas chromatography/electron capture negative ion-low resolution mass spectrometry

    International Nuclear Information System (INIS)

    Gao Yuan; Zhang Haijun; Chen Jiping; Zhang Qing; Tian Yuzeng; Qi Peipei; Yu Zhengkun

    2011-01-01

    Graphical abstract: The sediment sample could be purified by the optimized cleanup method, and a satisfactory cleanup efficiency was obtained. Highlights: → The elution characteristics of sPCAs and interfering substances were evaluated on three adsorbents. → An optimized cleanup method was developed for sPCAs with satisfactory cleanup efficiency. → The cleanup method combined with HRGC/ECNI-LRMS was applied for sPCAs analysis. → The sPCAs levels range from 53.6 ng g(-1) to 289.3 ng g(-1) in the tested sediment samples. - Abstract: The performances of three adsorbents, i.e. silica gel, neutral alumina and basic alumina, in the separation of short chain polychlorinated n-alkanes (sPCAs) from potential interfering substances such as polychlorinated biphenyls (PCBs) and organochlorine pesticides were evaluated. To increase the cleanup efficiency, a two-step cleanup method using a silica gel column and a subsequent basic alumina column was developed. All of the PCBs and organochlorine pesticides could be removed by this cleanup method. A very satisfactory cleanup efficiency for sPCAs was achieved, and the recovery of the cleanup method reached 92.7%. The method detection limit (MDL) for sPCAs in sediments was determined to be 14 ng g(-1). A relative standard deviation (R.S.D.) of 5.3% was obtained for the mass fraction of sPCAs by analyzing four replicates of a spiked sediment sample. High resolution gas chromatography/electron capture negative ion-low resolution mass spectrometry (HRGC/ECNI-LRMS) was used for sPCAs quantification by monitoring [M-HCl]·- ions. When applied to sediment samples from the mouth of the Daliao River, the optimized cleanup method in conjunction with HRGC/ECNI-LRMS allowed highly selective identification of sPCAs. The sPCAs levels in the sediment samples range from 53.6 ng g(-1) to 289.3 ng g(-1). C10- and C11-PCAs are the dominant residues in most of the investigated sediment samples.

  8. Parameter Optimization for Feature and Hit Generation in a General Unknown Screening Method-Proof of Concept Study Using a Design of Experiment Approach for a High Resolution Mass Spectrometry Procedure after Data Independent Acquisition.

    Science.gov (United States)

    Elmiger, Marco P; Poetzsch, Michael; Steuer, Andrea E; Kraemer, Thomas

    2018-03-06

    High resolution mass spectrometry and modern data independent acquisition (DIA) methods enable the creation of general unknown screening (GUS) procedures. However, even when DIA is used, its potential is far from being exploited, because the untargeted acquisition is often followed by a targeted search. Applying an actual GUS (including untargeted screening) produces an immense amount of data that must be dealt with. An optimization of the parameters regulating the feature detection and hit generation algorithms of the data processing software could significantly reduce the amount of unnecessary data and thereby the workload. Design of experiment (DoE) approaches allow a simultaneous optimization of multiple parameters. In a first step, parameters are evaluated (crucial or noncrucial). Second, crucial parameters are optimized. The aim in this study was to reduce the number of hits without missing analytes. The parameter settings obtained from the optimization were compared to the standard settings by analyzing a test set of blood samples spiked with 22 relevant analytes as well as 62 authentic forensic cases. The optimization led to a marked reduction of workload (12.3 to 1.1% and 3.8 to 1.1% hits for the test set and the authentic cases, respectively) while simultaneously increasing the identification rate (68.2 to 86.4% and 68.8 to 88.1%, respectively). This proof of concept study emphasizes the great potential of DoE approaches to master the data overload resulting from modern data independent acquisition methods used for general unknown screening procedures by optimizing software parameters.
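
    The two-step DoE logic described (first classify parameters as crucial or noncrucial, then optimize the crucial ones) can be sketched with a two-level full factorial design and main effects. The parameter names and the toy "number of hits" response below are invented for illustration; the study's actual software parameters and response surface are not reproduced.

```python
from itertools import product

def main_effects(factors, response):
    """Two-level full factorial: run all +/-1 combinations and compute
    each factor's main effect = mean(y at +1) - mean(y at -1)."""
    runs = list(product([-1, 1], repeat=len(factors)))
    y = [response(dict(zip(factors, r))) for r in runs]
    effects = {}
    for i, f in enumerate(factors):
        hi = [yi for r, yi in zip(runs, y) if r[i] == 1]
        lo = [yi for r, yi in zip(runs, y) if r[i] == -1]
        effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    return effects

# Toy "number of hits" model: mass tolerance dominates, smoothing is inert
def hits(x):
    return 100 + 40 * x["mass_tol"] - 12 * x["min_intensity"] + 1 * x["smoothing"]

effects = main_effects(["mass_tol", "min_intensity", "smoothing"], hits)
crucial = sorted(f for f, e in effects.items() if abs(e) > 10)
print(crucial)  # ['mass_tol', 'min_intensity'] are crucial; smoothing is not
```

Only the factors surviving this screen would then enter a finer optimization design, which is what keeps the total number of software runs manageable.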

  9. Development of suspect and non-target screening methods for detection of organic contaminants in highway runoff and fish tissue with high-resolution time-of-flight mass spectrometry.

    Science.gov (United States)

    Du, Bowen; Lofton, Jonathan M; Peter, Katherine T; Gipe, Alexander D; James, C Andrew; McIntyre, Jenifer K; Scholz, Nathaniel L; Baker, Joel E; Kolodziej, Edward P

    2017-09-20

    Untreated urban stormwater runoff contributes to poor water quality in receiving waters. The ability to identify toxicants and other bioactive molecules responsible for observed adverse effects in a complex mixture of contaminants is critical to effective protection of ecosystem and human health, yet this is a challenging analytical task. The objective of this study was to develop analytical methods using liquid chromatography coupled to high-resolution quadrupole time-of-flight mass spectrometry (LC-QTOF-MS) to detect organic contaminants in highway runoff and in runoff-exposed fish (adult coho salmon, Oncorhynchus kisutch). Processing of paired water and tissue samples facilitated contaminant prioritization and aided investigation of chemical bioavailability and uptake processes. Simple, minimal-effort solid phase extraction (SPE) and elution procedures were optimized for water samples, and selective pressurized liquid extraction (SPLE) procedures were optimized for fish tissues. Extraction methods were compared by detection of non-target features and target compounds (e.g., quantity and peak area), while minimizing matrix interferences. Suspect screening techniques utilized in-house and commercial databases to prioritize high-risk detections for subsequent MS/MS characterization and identification efforts. Presumptive annotations were also screened with an in-house linear regression (log K(ow) vs. retention time) to exclude isobaric compounds. Examples of confirmed identifications (via reference standard comparison) in highway runoff include ethoprophos, prometon, DEET, caffeine, cotinine, 4(or 5)-methyl-1H-methylbenzotriazole, and acetanilide. Acetanilide was also detected in runoff-exposed fish gill and liver samples. Further characterization of highway runoff and fish tissues (14 and 19 compounds, respectively, tentatively identified by MS/MS data) suggests that many novel or poorly characterized organic contaminants exist in urban
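
    The regression screen described above can be sketched directly: fit retention time against log Kow on confirmed compounds, then discard candidate annotations whose observed retention time falls too far from the prediction. All compound names, values, and the tolerance below are hypothetical; the study's in-house regression is not reproduced.

```python
import numpy as np

def rt_filter(log_kow, rt_obs, candidates, tol_min=1.5):
    """Fit RT = a*logKow + b on confirmed compounds, then keep only
    candidates whose observed RT lies within tol_min minutes of the
    prediction; isobaric mismatches fall outside the band."""
    a, b = np.polyfit(log_kow, rt_obs, 1)
    return [name for name, kow, rt in candidates
            if abs(rt - (a * kow + b)) <= tol_min]

# Hypothetical training set of confirmed identifications
log_kow = np.array([0.5, 1.2, 2.0, 3.1, 4.0])
rt_obs = np.array([2.1, 3.5, 5.2, 7.4, 9.1])   # minutes

# Two isobaric candidates proposed for one feature eluting at 6.0 min
candidates = [("candidate_A", 2.4, 6.0),   # logKow consistent with the RT
              ("candidate_B", 4.5, 6.0)]   # would be expected to elute much later
print(rt_filter(log_kow, rt_obs, candidates))  # only candidate_A survives
```

Such a cheap physicochemical plausibility check prunes the candidate list before the expensive MS/MS confirmation step.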

  10. High resolution numerical weather prediction over the Indian ...

    Indian Academy of Sciences (India)

    Present address: Environmental Modeling Center, National Centers for Environmental ... sensed data collection systems and application of ... data, especially satellite and aircraft data over the ..... day-3 while the control experiment went to the.

  11. Numerical simulation of GEW equation using RBF collocation method

    Directory of Open Access Journals (Sweden)

    Hamid Panahipour

    2012-08-01

    Full Text Available The generalized equal width (GEW) equation is solved numerically by a meshless method based on a global collocation with standard types of radial basis functions (RBFs). Test problems including propagation of single solitons, interaction of two and three solitons, development of the Maxwellian initial condition pulses, wave undulation and wave generation are used to indicate the efficiency and accuracy of the method. Comparisons are made between the results of the proposed method and some other published numerical methods.
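
    A global RBF collocation system of the kind used above can be sketched for the simpler task of meshfree interpolation: build the multiquadric collocation matrix at the nodes, solve for the weights, and evaluate the expansion anywhere. The shape parameter and test profile are arbitrary choices, not values from the paper.

```python
import numpy as np

def rbf_interpolate(x_nodes, f_nodes, x_eval, c=0.5):
    """Global collocation with multiquadric RBFs phi(r) = sqrt(r^2 + c^2):
    solve A w = f at the nodes, then evaluate sum_j w_j phi(|x - x_j|)."""
    phi = lambda r: np.sqrt(r**2 + c**2)
    A = phi(np.abs(x_nodes[:, None] - x_nodes[None, :]))  # collocation matrix
    w = np.linalg.solve(A, f_nodes)
    B = phi(np.abs(x_eval[:, None] - x_nodes[None, :]))   # evaluation matrix
    return B @ w

x_nodes = np.linspace(-1.0, 1.0, 21)
f_nodes = np.exp(-4 * x_nodes**2)            # smooth test profile
x_eval = np.linspace(-1.0, 1.0, 101)
u = rbf_interpolate(x_nodes, f_nodes, x_eval)
err = np.max(np.abs(u - np.exp(-4 * x_eval**2)))
print(err < 1e-2)  # the meshfree interpolant tracks the smooth profile closely
```

For a PDE like GEW, the same expansion is differentiated analytically in space and the resulting ODE system for the weights is marched in time.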

  12. Zeolites - a high resolution electron microscopy study

    International Nuclear Information System (INIS)

    Alfredsson, V.

    1994-10-01

    High resolution transmission electron microscopy (HRTEM) has been used to investigate a number of zeolites (EMT, FAU, LTL, MFI and MOR) and a member of the mesoporous M41S family. The electron optical artefact, manifested as a dark spot in the projected centre of the large zeolite channels and caused by insufficient transfer of certain reflections in the objective lens, has been explained. The artefact severely hinders observation of materials confined in the zeolite channels and cavities. It is shown how to circumvent the artefact problem and how to image confined materials in spite of the disturbance caused by the artefact. Image processing by means of a Wiener filter has been applied for removal of the artefact. The detailed surface structure of FAU has been investigated. Comparison of experimental micrographs with images simulated using different surface models indicates that the surface can be terminated in different ways depending on the synthesis method. The dealuminated form of FAU (USY) is covered by an amorphous region. Platinum incorporated in FAU tends to aggregate in the (111) twin planes, probably due to a local difference in cage structure with more spacious cages. It is shown that platinum is intra-zeolitic as opposed to being located on the external surface of the zeolite crystal. This could be deduced from, among other observations, tomography of ultra-thin sections. HRTEM studies of the mesoporous MCM-41 show that the pores have a hexagonal shape and also support the mechanistic model proposed, which involves a cooperative formation of a mesophase including the silicate species as well as the surfactant. 66 refs, 24 figs

  13. High-Resolution Wind Measurements for Offshore Wind Energy Development

    Science.gov (United States)

    Nghiem, Son V.; Neumann, Gregory

    2011-01-01

    A mathematical transform, called the Rosette Transform, together with a new method, called the Dense Sampling Method, has been developed. The Rosette Transform applies to both the mean part and the fluctuating part of a targeted radar signature, using the Dense Sampling Method to construct the data on a high-resolution grid at 1-km posting for wind measurements over water surfaces such as oceans or lakes.

  14. High-resolution clean-sc

    NARCIS (Netherlands)

    Sijtsma, P.; Snellen, M.

    2016-01-01

    In this paper a high-resolution extension of CLEAN-SC is proposed: HR-CLEAN-SC. Where CLEAN-SC uses peak sources in “dirty maps” to define so-called source components, HR-CLEAN-SC takes advantage of the fact that source components can likewise be derived from points at some distance from the peak,

  15. A High-Resolution Stopwatch for Cents

    Science.gov (United States)

    Gingl, Z.; Kopasz, K.

    2011-01-01

    A very low-cost, easy-to-make stopwatch is presented to support various experiments in mechanics. The high-resolution stopwatch is based on two photodetectors connected directly to the microphone input of a sound card. Dedicated free open-source software has been developed and made available to download. The efficiency is demonstrated by a free…
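
    The timing principle (two photodetector signals sampled by a sound card, elapsed time read off threshold crossings) can be sketched as follows. The sampling rate and gate times are invented, and real photogate pulses would also need filtering and debouncing.

```python
import numpy as np

def crossing_time(signal, threshold, fs):
    """Time (s) of the first sample exceeding threshold, at sample rate fs."""
    idx = np.argmax(signal > threshold)   # index of first True
    return idx / fs

fs = 44100.0  # a typical sound-card sampling rate
t = np.arange(int(0.5 * fs)) / fs

# Two synthetic photogate channels: the level jumps when the object passes
gate1 = np.where(t >= 0.1234, 1.0, 0.0)
gate2 = np.where(t >= 0.3456, 1.0, 0.0)

elapsed = crossing_time(gate2, 0.5, fs) - crossing_time(gate1, 0.5, fs)
print(round(elapsed, 4))  # ~0.2222 s, resolved to one sample period (1/44100 s)
```

The sample period sets the resolution: at 44.1 kHz each crossing is located to within about 23 microseconds, far better than a hand-held stopwatch.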

  16. Planning for shallow high resolution seismic surveys

    CSIR Research Space (South Africa)

    Fourie, CJS

    2008-11-01

    Full Text Available of the input wave. This information can be used in conjunction with this spreadsheet to aid the geophysicist in designing shallow high resolution seismic surveys to achieve maximum resolution and penetration. This Excel spreadsheet is available free from...

  17. Numerical Methods for Bayesian Inverse Problems

    KAUST Repository

    Ernst, Oliver

    2014-01-06

    We present recent results on Bayesian inversion for a groundwater flow problem with an uncertain conductivity field. In particular, we show how direct and indirect measurements can be used to obtain a stochastic model for the unknown. The main tool here is Bayes’ theorem which merges the indirect data with the stochastic prior model for the conductivity field obtained by the direct measurements. Further, we demonstrate how the resulting posterior distribution of the quantity of interest, in this case travel times of radionuclide contaminants, can be obtained by Markov Chain Monte Carlo (MCMC) simulations. Moreover, we investigate new, promising MCMC methods which exploit geometrical features of the posterior and which are suited to infinite dimensions.
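
    The MCMC machinery mentioned above can be illustrated with the simplest member of the family, a random-walk Metropolis sampler, on a toy 1-D Bayesian inversion (Gaussian prior on the unknown, one noisy indirect observation). This is a generic sketch with invented numbers, not the geometry-exploiting samplers the authors investigate.

```python
import numpy as np

def metropolis(log_post, x0, n_samples, step, rng):
    """Random-walk Metropolis sampler for a 1-D log-posterior density."""
    x, lp, samples = x0, log_post(x0), []
    for _ in range(n_samples):
        prop = x + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept/reject step
            x, lp = prop, lp_prop
        samples.append(x)
    return np.array(samples)

# Toy inversion: prior x ~ N(0,1), indirect observation y = 2x + noise,
# noise ~ N(0, 0.5^2), observed y = 1.0
def log_post(x):
    return -0.5 * x**2 - 0.5 * ((1.0 - 2.0 * x) / 0.5) ** 2

rng = np.random.default_rng(1)
chain = metropolis(log_post, 0.0, 20000, 0.5, rng)
# conjugate Gaussian posterior mean: (2*1.0/0.25) / (1 + 4/0.25) = 8/17
print(abs(chain[2000:].mean() - 8.0 / 17.0) < 0.05)  # after burn-in discard
```

In the groundwater setting, log_post would involve a flow-model solve per proposal, which is exactly why samplers with better geometry-awareness and dimension-robustness matter.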

  18. Numerical Methods for Bayesian Inverse Problems

    KAUST Repository

    Ernst, Oliver; Sprungk, Bjorn; Cliffe, K. Andrew; Starkloff, Hans-Jorg

    2014-01-01

    We present recent results on Bayesian inversion for a groundwater flow problem with an uncertain conductivity field. In particular, we show how direct and indirect measurements can be used to obtain a stochastic model for the unknown. The main tool here is Bayes’ theorem which merges the indirect data with the stochastic prior model for the conductivity field obtained by the direct measurements. Further, we demonstrate how the resulting posterior distribution of the quantity of interest, in this case travel times of radionuclide contaminants, can be obtained by Markov Chain Monte Carlo (MCMC) simulations. Moreover, we investigate new, promising MCMC methods which exploit geometrical features of the posterior and which are suited to infinite dimensions.

  19. Tensor viscosity method for convection in numerical fluid dynamics

    International Nuclear Information System (INIS)

    Dukowicz, J.K.; Ramshaw, J.D.

    1979-01-01

A new method, called the tensor viscosity method, is described for differencing the convective terms in multidimensional numerical fluid dynamics. The method is the proper generalization to two or three dimensions of interpolated donor-cell differencing in one dimension, and is designed to achieve numerical stability with minimal numerical damping. It is a single-step method that is distinguished by simplicity and ease of implementation, even in the case of an arbitrary non-rectangular mesh. It should therefore be useful in finite-element as well as finite-difference formulations.
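In one dimension, the donor-cell (first-order upwind) differencing that this method generalizes can be sketched as follows; the periodic boundaries and square-pulse test are assumptions for the example, not taken from the paper:

```python
import numpy as np

def donor_cell_step(u, vel, dx, dt):
    """One explicit step of donor-cell (first-order upwind) differencing
    for the 1-D advection equation u_t + vel * u_x = 0 with periodic BCs.
    The "donor" cell is the upwind neighbour, chosen by the sign of vel."""
    c = vel * dt / dx                     # Courant number; need |c| <= 1
    if vel >= 0:
        return u - c * (u - np.roll(u, 1))
    return u - c * (np.roll(u, -1) - u)

# Advect a square pulse once around a periodic domain
n, vel = 100, 1.0
dx = 1.0 / n
dt = 0.5 * dx / vel                       # Courant number 0.5
u0 = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)
u = u0.copy()
for _ in range(2 * n):                    # 2n steps at c = 0.5 -> one period
    u = donor_cell_step(u, vel, dx, dt)
```

After one period the pulse returns with its total mass conserved but its peak visibly smeared; that numerical diffusion is exactly what the "minimal numerical damping" design goal (and the high-resolution limiters of the section head) aims to reduce.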

  20. Numerical Methods for a Class of Differential Algebraic Equations

    Directory of Open Access Journals (Sweden)

    Lei Ren

    2017-01-01

Full Text Available This paper is devoted to the study of some efficient numerical methods for differential algebraic equations (DAEs). First, we propose a finite algorithm to compute the Drazin inverse for time-varying DAEs. Numerical experiments are presented for the Drazin inverse and the Radau IIA method, which illustrate that the precision of the Drazin-inverse method is higher than that of the Radau IIA method. Then, the Drazin inverse, Radau IIA, and Padé approximation are applied to constant-coefficient DAEs, respectively. Numerical results demonstrate that the Padé approximation is powerful for solving constant-coefficient DAEs.
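For context, the Drazin inverse of a constant matrix can be computed with the standard identity A^D = A^k (A^(2k+1))^+ A^k for any k ≥ index(A), where ^+ is the Moore-Penrose pseudoinverse. The sketch below uses NumPy's pseudoinverse and is not the finite algorithm proposed in the paper:

```python
import numpy as np

def drazin(A, k):
    """Drazin inverse via the identity A^D = A^k (A^(2k+1))^+ A^k,
    valid whenever k >= index(A) (the smallest k with
    rank(A^(k+1)) = rank(A^k))."""
    Ak = np.linalg.matrix_power(A, k)
    mid = np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1))
    return Ak @ mid @ Ak

# Example with index 1: the Drazin inverse inverts A on its range
# and is zero on its nilpotent part.
A = np.array([[2.0, 0.0],
              [0.0, 0.0]])
AD = drazin(A, 1)
```

The result satisfies the defining relations A·A^D = A^D·A and A^D·A·A^D = A^D, which is a convenient correctness check for any implementation.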

  1. A validated analytical method to study the long-term stability of natural and synthetic glucocorticoids in livestock urine using ultra-high performance liquid chromatography coupled to Orbitrap-high resolution mass spectrometry.

    Science.gov (United States)

De Clercq, Nathalie; Vanden Bussche, Julie; Croubels, Siska; Delahaut, Philippe; Vanhaecke, Lynn

    2013-08-02

Due to their growth-promoting effects, the use of synthetic glucocorticoids is strictly regulated in the European Union (Council Directive 2003/74/EC). Within the framework of the national control plans, which should ensure the absence of residues in food products of animal origin, a higher frequency of prednisolone-positive bovine urine samples has been observed in recent years. This has raised questions about the stability of natural corticoids in the respective urine samples and their potential to be transformed into synthetic analogs. In this study, an ultra-high performance liquid chromatography-high resolution mass spectrometry (UHPLC-HRMS) methodology was developed to examine the stability of glucocorticoids in bovine urine under various storage conditions (up to 20 weeks) and to define suitable conditions for sample handling and storage, using an Orbitrap Exactive™. To this end, an extraction procedure was optimized using a Plackett-Burman experimental design to determine the key conditions for optimal extraction of glucocorticoids from urine. Next, the analytical method was successfully validated according to the guidelines of CD 2002/657/EC. Decision limits and detection capabilities for prednisolone, prednisone and methylprednisolone ranged, respectively, from 0.1 to 0.5 μg L⁻¹ and from 0.3 to 0.8 μg L⁻¹. For the natural glucocorticoids, limits of detection and limits of quantification for dihydrocortisone, cortisol and cortisone ranged, respectively, from 0.1 to 0.2 μg L⁻¹ and from 0.3 to 0.8 μg L⁻¹. The stability study demonstrated that filter-sterilization of urine, storage at -80 °C, and acidic conditions (pH 3) were optimal for preservation of glucocorticoids in urine and were able to significantly limit degradation for up to 20 weeks.

  2. A Metabolomic Approach Applied to a Liquid Chromatography Coupled to High-Resolution Tandem Mass Spectrometry Method (HPLC-ESI-HRMS/MS): Towards the Comprehensive Evaluation of the Chemical Composition of Cannabis Medicinal Extracts.

    Science.gov (United States)

    Citti, Cinzia; Battisti, Umberto Maria; Braghiroli, Daniela; Ciccarella, Giuseppe; Schmid, Martin; Vandelli, Maria Angela; Cannazza, Giuseppe

    2018-03-01

Cannabis sativa L. is a powerful medicinal plant and its use has recently increased for the treatment of several pathologies. Nonetheless, side effects, like dizziness and hallucinations, and long-term effects concerning memory and cognition, can occur. Most alarming is the lack of a standardised procedure to extract medicinal cannabis. Indeed, each galenical preparation has an unknown chemical composition in terms of cannabinoids and other active principles that depends on the extraction procedure. This study aims to highlight the main differences in the chemical composition of Bediol® extracts when the extraction is carried out with either ethyl alcohol or olive oil for various times (0, 60, 120 and 180 min for ethyl alcohol, and 0, 60, 90 and 120 min for olive oil). Cannabis medicinal extracts (CMEs) were analysed by liquid chromatography coupled to high-resolution tandem mass spectrometry (LC-HRMS/MS) using an untargeted metabolomics approach. The data sets were processed by unsupervised multivariate analysis. Our results suggest that the main difference lies in the ratio of acid to decarboxylated cannabinoids, which dramatically influences the pharmacological activity of CMEs. Minor cannabinoids, alkaloids, and amino acids contributing to this difference are also discussed. The main cannabinoids were quantified in each extract applying a recently validated LC-MS and LC-UV method. Notwithstanding the use of a standardised starting plant material, great changes are caused by different extraction procedures. The metabolomics approach is a useful tool for the evaluation of the chemical composition of cannabis extracts.

  3. Advanced Numerical and Theoretical Methods for Photonic Crystals and Metamaterials

    Science.gov (United States)

    Felbacq, Didier

    2016-11-01

This book provides a set of theoretical and numerical tools useful for the study of wave propagation in metamaterials and photonic crystals. While concentrating on electromagnetic waves, most of the material can also be used for acoustic (or quantum) waves. For each numerical method presented, MATLAB® code is provided. The codes are limited to 2D problems and can easily be translated into Python or Scilab, or used directly with Octave.

  4. Introduction to numerical methods for time dependent differential equations

    CERN Document Server

    Kreiss, Heinz-Otto

    2014-01-01

    Introduces both the fundamentals of time dependent differential equations and their numerical solutions Introduction to Numerical Methods for Time Dependent Differential Equations delves into the underlying mathematical theory needed to solve time dependent differential equations numerically. Written as a self-contained introduction, the book is divided into two parts to emphasize both ordinary differential equations (ODEs) and partial differential equations (PDEs). Beginning with ODEs and their approximations, the authors provide a crucial presentation of fundamental notions, such as the t

  5. Numerical implementation of the loop-tree duality method

    Energy Technology Data Exchange (ETDEWEB)

    Buchta, Sebastian; Rodrigo, German [Universitat de Valencia-Consejo Superior de Investigaciones Cientificas, Parc Cientific, Instituto de Fisica Corpuscular, Valencia (Spain); Chachamis, Grigorios [Universidad Autonoma de Madrid, Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Draggiotis, Petros [Institute of Nuclear and Particle Physics, NCSR ' ' Demokritos' ' , Agia Paraskevi (Greece)

    2017-05-15

    We present a first numerical implementation of the loop-tree duality (LTD) method for the direct numerical computation of multi-leg one-loop Feynman integrals. We discuss in detail the singular structure of the dual integrands and define a suitable contour deformation in the loop three-momentum space to carry out the numerical integration. Then we apply the LTD method to the computation of ultraviolet and infrared finite integrals, and we present explicit results for scalar and tensor integrals with up to eight external legs (octagons). The LTD method features an excellent performance independently of the number of external legs. (orig.)

  6. A high resolution large dynamic range TDC circuit implementation

    International Nuclear Information System (INIS)

    Lei Wuhu; Liu Songqiu; Ye Weiguo; Han Hui; Li Pengyu

    2003-01-01

Time measurement technology is widely used in nuclear experiments. Among the many methods of time measurement, the implementation of Time-to-Digital Conversion (TDC) by electronic means is a classical technique. The required range and resolution of a TDC differ according to the application. A wide-range, high-resolution TDC circuit, including its theory and implementation, is introduced in this paper. Test results are also given. (authors)

  7. A high resolution large dynamic range TDC circuit implementation

    International Nuclear Information System (INIS)

    Lei Wuhu; Liu Songqiu; Li Pengyu; Han Hui; Ye Yanlin

    2005-01-01

Time measurement technology is widely used in nuclear experiments. Among the many methods of time measurement, the implementation of Time-to-Digital Conversion (TDC) by electronic means is a classical technique. The required range and resolution of a TDC differ according to the application. A wide-range, high-resolution TDC circuit, including its theory and implementation, is introduced in this paper. Test results are also given. (authors)

  8. Numerical simulation methods for phase-transitional flow

    NARCIS (Netherlands)

    Pecenko, A.

    2010-01-01

    The object of the present dissertation is a numerical study of multiphase flow of one fluid component. In particular, the research described in this thesis focuses on the development of numerical methods that are based on a diffuse-interface model (DIM). With this approach, the modeling problem

  9. Assessing numerical methods used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Several computer codes are in use for predicting the behaviour of nuclear aerosols released into containment during postulated accidents in water-cooled reactors. Each of these codes uses numerical methods to discretize and integrate the equations that govern the aerosol transport process. Computers perform only algebraic operations and generate only numbers. It is in the numerical methods that sense can be made of these numbers and where they can be related to the actual solution of the equations. In this report, the numerical methods most commonly used in the aerosol transport codes are examined as special cases of a general solution procedure, the Method of Weighted Residuals. It would appear that the numerical methods used in the codes are all capable of producing reasonable answers to the mathematical problem when used with skill and care. 27 refs
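The Method of Weighted Residuals mentioned above can be illustrated on a toy two-point boundary value problem, u'' = -1 with u(0) = u(1) = 0, using Galerkin weighting. The polynomial trial functions φ_i = x^i(1 - x) are an assumption of this sketch, not taken from the report:

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Galerkin Method of Weighted Residuals for u'' = -1, u(0) = u(1) = 0.
# Trial and weight functions phi_i = x^i (1 - x) satisfy the BCs; requiring
#   integral_0^1  phi_j * (u_N'' + 1) dx = 0   for each j
# gives a small linear system for the expansion coefficients.

def galerkin_coeffs(n):
    # phi_i as coefficient arrays (low degree to high degree)
    phis = [P.polymul([0.0] * i + [1.0], [1.0, -1.0]) for i in range(1, n + 1)]
    K = np.empty((n, n))
    f = np.empty(n)
    for j, pj in enumerate(phis):
        for i, pi in enumerate(phis):
            d2 = P.polyder(pi, 2)               # phi_i''
            K[j, i] = P.polyval(1.0, P.polyint(P.polymul(pj, d2)))
        f[j] = -P.polyval(1.0, P.polyint(pj))   # -integral(phi_j * 1)
    return phis, np.linalg.solve(K, f)

phis, c = galerkin_coeffs(1)
# u_N(x) = sum c_i phi_i(x); the exact solution x(1 - x)/2 lies in the
# trial space, so a single mode recovers it exactly.
u_mid = sum(ci * P.polyval(0.5, pi) for ci, pi in zip(c, phis))
```

Collocation and least-squares variants of the codes discussed in the report differ only in the choice of weight functions; the residual-orthogonality structure is the same.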

  10. An Efficient, Semi-implicit Pressure-based Scheme Employing a High-resolution Finite Element Method for Simulating Transient and Steady, Inviscid and Viscous, Compressible Flows on Unstructured Grids

    Energy Technology Data Exchange (ETDEWEB)

    Richard C. Martineau; Ray A. Berry

    2003-04-01

A new semi-implicit pressure-based Computational Fluid Dynamics (CFD) scheme for simulating a wide range of transient and steady, inviscid and viscous compressible flows on unstructured finite elements is presented here. This new CFD scheme, termed the PCICE-FEM (Pressure-Corrected ICE-Finite Element Method) scheme, is composed of three computational phases: an explicit predictor, an elliptic pressure Poisson solution, and a semi-implicit pressure-correction of the flow variables. The PCICE-FEM scheme is capable of second-order temporal accuracy by incorporating a combination of a time-weighted form of the two-step Taylor-Galerkin Finite Element Method scheme as an explicit predictor for the balance of momentum equations and the finite element form of a time-weighted trapezoid rule method for the semi-implicit form of the governing hydrodynamic equations. Second-order spatial accuracy is accomplished by linear unstructured finite element discretization. The PCICE-FEM scheme employs Flux-Corrected Transport as a high-resolution filter for shock capturing. The scheme is capable of simulating flows from the nearly incompressible to the high supersonic flow regimes. The PCICE-FEM scheme represents an advancement in mass-momentum coupled, pressure-based schemes. The governing hydrodynamic equations for this scheme are the conservative form of the balance of momentum equations (Navier-Stokes), the mass conservation equation, and the total energy equation. An operator-splitting process is performed along explicit and implicit operators of the semi-implicit governing equations to render the PCICE-FEM scheme in the class of predictor-corrector schemes. The complete set of semi-implicit governing equations in the PCICE-FEM scheme is cast in this form: an explicit predictor phase and a semi-implicit pressure-correction phase, with the elliptic pressure Poisson solution coupling the predictor-corrector phases. The result of this predictor-corrector formulation is that the pressure Poisson

  11. High-resolution axial MR imaging of tibial stress injuries

    Directory of Open Access Journals (Sweden)

    Mammoto Takeo

    2012-05-01

Full Text Available Purpose To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiograph and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with 60 × 60 mm field of view on a 1.5T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at initial radiograph, but was detected at follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions Bone marrow abnormalities in high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in early discrimination of tibial stress injuries.

  12. High-resolution axial MR imaging of tibial stress injuries

    Science.gov (United States)

    2012-01-01

    Purpose To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiograph and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with 60 × 60 mm field of view on a 1.5T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at initial radiograph, but was detected at follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions Bone marrow abnormalities in high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in early discrimination of tibial stress injuries. PMID:22574840

  13. Smartphone microendoscopy for high resolution fluorescence imaging

    Directory of Open Access Journals (Sweden)

    Xiangqian Hong

    2016-09-01

Full Text Available High resolution optical endoscopes are increasingly used in diagnosis of various medical conditions of internal organs, such as the cervix and gastrointestinal (GI) tract, but they are too expensive for use in resource-poor settings. On the other hand, smartphones with high resolution cameras and Internet access have become more affordable, enabling them to diffuse into most rural areas and developing countries in the past decade. In this paper, we describe a smartphone microendoscope that can take fluorescence images with a spatial resolution of 3.1 μm. Images collected from ex vivo, in vitro and in vivo samples using the device are also presented. The compact and cost-effective smartphone microendoscope may be envisaged as a powerful tool for detecting pre-cancerous lesions of internal organs in low and middle-income countries (LMICs).

  14. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    2012-01-01

    High Resolution NMR: Theory and Chemical Applications discusses the principles and theory of nuclear magnetic resonance and how this concept is used in the chemical sciences. This book is written at an intermediate level, with mathematics used to augment verbal descriptions of the phenomena. This text pays attention to developing and interrelating four approaches - the steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The style of this book is based on the assumption that the reader has an acquaintance with the general principles of quantum mechanics, but no extensive background in quantum theory or proficiency in mathematics is required. This book begins with a description of the basic physics, together with a brief account of the historical development of the field. It looks at the study of NMR in liquids, including high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. This book is intended to assis...

  15. High resolution NMR theory and chemical applications

    CERN Document Server

    Becker, Edwin D

    1999-01-01

    High Resolution NMR provides a broad treatment of the principles and theory of nuclear magnetic resonance (NMR) as it is used in the chemical sciences. It is written at an "intermediate" level, with mathematics used to augment, rather than replace, clear verbal descriptions of the phenomena. The book is intended to allow a graduate student, advanced undergraduate, or researcher to understand NMR at a fundamental level, and to see illustrations of the applications of NMR to the determination of the structure of small organic molecules and macromolecules, including proteins. Emphasis is on the study of NMR in liquids, but the treatment also includes high resolution NMR in the solid state and the principles of NMR imaging and localized spectroscopy. Careful attention is given to developing and interrelating four approaches - steady state energy levels, the rotating vector picture, the density matrix, and the product operator formalism. The presentation is based on the assumption that the reader has an acquaintan...

  16. High resolution imaging of boron carbide microstructures

    International Nuclear Information System (INIS)

    MacKinnon, I.D.R.; Aselage, T.; Van Deusen, S.B.

    1986-01-01

Two samples of boron carbide have been examined using high resolution transmission electron microscopy (HRTEM). A hot-pressed B13C2 sample shows a high density of variable-width twins normal to (10·1). Subtle shifts or offsets of lattice fringes along the twin plane and normal to approximately (10·5) were also observed. A B4C powder showed little evidence of stacking disorder in crystalline regions.

  17. Classical and modern numerical analysis theory, methods and practice

    CERN Document Server

    Ackleh, Azmy S; Kearfott, R Baker; Seshaiyer, Padmanabhan

    2009-01-01

Mathematical Review and Computer Arithmetic: Mathematical Review; Computer Arithmetic; Interval Computations. Numerical Solution of Nonlinear Equations of One Variable: Introduction; Bisection Method; The Fixed Point Method; Newton's Method (Newton-Raphson Method); The Univariate Interval Newton Method; Secant Method and Müller's Method; Aitken Acceleration and Steffensen's Method; Roots of Polynomials; Additional Notes and Summary. Numerical Linear Algebra: Basic Results from Linear Algebra; Normed Linear Spaces; Direct Methods for Solving Linear Systems; Iterative Methods for Solving Linear Systems; The Singular Value Decomposition. Approximation Theory: Introduction; Norms, Projections, Inner Product Spaces, and Orthogonalization in Function Spaces; Polynomial Approximation; Piecewise Polynomial Approximation; Trigonometric Approximation; Rational Approximation; Wavelet Bases; Least Squares Approximation on a Finite Point Set. Eigenvalue-Eigenvector Computation: Basic Results from Linear Algebra; The Power Method; The Inverse Power Method; Deflation T...
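As a small illustration of the root-finding portion of these contents, the textbook bisection method halves a sign-changing bracket until the requested tolerance is met (the example function is arbitrary):

```python
def bisection(f, a, b, tol=1e-12):
    """Bisection method: f must change sign on [a, b]; the bracket is
    halved until it is shorter than tol, so convergence is guaranteed
    (if slow) for any continuous f."""
    fa = f(a)
    if fa * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:      # root lies in the left half
            b = m
        else:                   # root lies in the right half
            a, fa = m, f(m)
    return 0.5 * (a + b)

root = bisection(lambda x: x * x - 2.0, 0.0, 2.0)   # converges to sqrt(2)
```

Each iteration gains exactly one bit of accuracy, which is why the book pairs bisection with faster (but less robust) methods such as Newton's and the secant method.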

  18. Ultra-high resolution protein crystallography

    International Nuclear Information System (INIS)

    Takeda, Kazuki; Hirano, Yu; Miki, Kunio

    2010-01-01

Many protein structures have been determined by X-ray crystallography and deposited in the Protein Data Bank. However, these structures at usual resolution (1.5 Å < d < 3.0 Å) are insufficient in their precision and quantity for elucidating the molecular mechanism of protein functions directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution of the protein crystals was achieved at 0.54 Å resolution for a small protein, crambin. In such high-resolution crystals, almost all of the hydrogen atoms of proteins and some hydrogen atoms of bound water molecules are experimentally observed. In addition, outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status and perspectives for ultra-high resolution protein crystallography. (author)

  19. NUMERICAL AND ANALYTIC METHODS OF ESTIMATION BRIDGES’ CONSTRUCTIONS

    Directory of Open Access Journals (Sweden)

    Y. Y. Luchko

    2010-03-01

Full Text Available This article considers numerical and analytical methods for calculating the stress-strain state of bridge structures. The problem of increasing the reliability and accuracy of the numerical method is formulated and solved by carrying out the calculations in two bases. An analytical solution of the differential equation for the deformation of a reinforced-concrete slab under local loads is also obtained.

  20. Numerical method of singular problems on singular integrals

    International Nuclear Information System (INIS)

    Zhao Huaiguo; Mou Zongze

    1992-02-01

As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for physics calculations involving singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can be dealt with easily by this method.
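One standard technique in this spirit is singularity subtraction for Cauchy principal-value integrals, the kind that underlies the plasma dispersion function. The following sketch (not the paper's method) regularizes the integrand before applying an ordinary quadrature rule:

```python
import math

def simpson(g, lo, hi, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (hi - lo) / n
    s = g(lo) + g(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(lo + i * h)
    return s * h / 3.0

def pv_integral(f, dfa, a, lo, hi, n=2000):
    """Cauchy principal value of  integral f(x)/(x - a) dx  over [lo, hi],
    lo < a < hi, by singularity subtraction:
        PV = integral (f(x) - f(a))/(x - a) dx + f(a) * ln((hi - a)/(a - lo)).
    The subtracted integrand is regular at x = a (its limit there is f'(a),
    supplied as dfa), so ordinary quadrature applies."""
    fa = f(a)
    g = lambda x: dfa if x == a else (f(x) - fa) / (x - a)
    return simpson(g, lo, hi, n) + fa * math.log((hi - a) / (a - lo))

# PV of 1/x over [-1, 2]: f = 1, f'(0) = 0; exact value is ln 2
val = pv_integral(lambda x: 1.0, 0.0, 0.0, -1.0, 2.0)
# PV of x/x over [-1, 2] is just the interval length, 3
val2 = pv_integral(lambda x: x, 1.0, 0.0, -1.0, 2.0)
```

The log term comes from integrating 1/(x - a) exactly; only the smooth remainder is handed to the quadrature rule, which is the general pattern behind the quadrature formulas the abstract mentions.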

  1. Numerical and adaptive grid methods for ideal magnetohydrodynamics

    Science.gov (United States)

    Loring, Burlen

    2008-02-01

In this thesis numerical finite difference methods for ideal magnetohydrodynamics (MHD) are investigated. A review of the relevant physics, essential for interpreting the results of numerical solutions and constructing validation cases, is presented. This review includes a discussion of the propagation of small-amplitude waves in the MHD system as well as a thorough discussion of MHD shocks, contacts and rarefactions and how they can be pieced together to obtain solutions to the MHD Riemann problem. Numerical issues relevant to the MHD system are discussed, including the loss of nonlinear numerical stability in the presence of discontinuous solutions, the introduction of spurious forces due to the growth of the divergence of the magnetic flux density, the loss of pressure positivity, and the effects of non-conservative numerical methods, along with the practical approaches which can be used to remedy or minimize the negative consequences of each. The use of block-structured adaptive mesh refinement is investigated in the context of a divergence-free MHD code. A new method for conserving magnetic flux across AMR grid interfaces is developed and a detailed discussion of our implementation of this method using the CHOMBO AMR framework is given. A preliminary validation illustrates that the new method works. Finally, a number of code validation cases are examined, spurring a discussion of the strengths and weaknesses of the numerics employed.

  2. High resolution, high speed ultrahigh vacuum microscopy

    International Nuclear Information System (INIS)

    Poppa, Helmut

    2004-01-01

    The history and future of transmission electron microscopy (TEM) is discussed as it refers to the eventual development of instruments and techniques applicable to the real time in situ investigation of surface processes with high resolution. To reach this objective, it was necessary to transform conventional high resolution instruments so that an ultrahigh vacuum (UHV) environment at the sample site was created, that access to the sample by various in situ sample modification procedures was provided, and that in situ sample exchanges with other integrated surface analytical systems became possible. Furthermore, high resolution image acquisition systems had to be developed to take advantage of the high speed imaging capabilities of projection imaging microscopes. These changes to conventional electron microscopy and its uses were slowly realized in a few international laboratories over a period of almost 40 years by a relatively small number of researchers crucially interested in advancing the state of the art of electron microscopy and its applications to diverse areas of interest; often concentrating on the nucleation, growth, and properties of thin films on well defined material surfaces. A part of this review is dedicated to the recognition of the major contributions to surface and thin film science by these pioneers. Finally, some of the important current developments in aberration corrected electron optics and eventual adaptations to in situ UHV microscopy are discussed. As a result of all the path breaking developments that have led to today's highly sophisticated UHV-TEM systems, integrated fundamental studies are now possible that combine many traditional surface science approaches. Combined investigations to date have involved in situ and ex situ surface microscopies such as scanning tunneling microscopy/atomic force microscopy, scanning Auger microscopy, and photoemission electron microscopy, and area-integrating techniques such as x-ray photoelectron

  3. USGS High Resolution Orthoimagery Collection - Historical - National Geospatial Data Asset (NGDA) High Resolution Orthoimagery

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — USGS high resolution orthorectified images from The National Map combine the image characteristics of an aerial photograph with the geometric qualities of a map. An...

  4. Ring artifact correction for high-resolution micro CT

    International Nuclear Information System (INIS)

    Kyriakou, Yiannis; Prell, Daniel; Kalender, Willi A

    2009-01-01

In high-resolution micro CT using flat detectors (FD), imperfect or defective detector elements may cause concentric ring artifacts due to their continuous over- or underestimation of attenuation values, which often disturb image quality. We here present a dedicated image-based ring artifact correction method for high-resolution micro CT, based on median filtering of the reconstructed image, working on a transformed version of the reconstructed images in polar coordinates. This post-processing method reduced ring artifacts in the reconstructed images and improved image quality in phantom and in vivo scans. Noise and artifacts were reduced both in transversal and in multi-planar reformations along the longitudinal axis. (note)
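A simplified sketch of the image-based idea follows: because ring artifacts are constant along circles about the scanner axis, the deviation of each radius's mean value from a median-smoothed radial profile estimates the artifact. This is a radial-profile reduction of the note's approach (which median-filters the full polar-transformed image), with all sizes and the synthetic test chosen for illustration:

```python
import numpy as np

def remove_rings(img, window=9):
    """Estimate and subtract ring artifacts from a reconstructed slice.

    Pixels are binned by integer radius about the image centre; the
    deviation of the per-radius mean from a median-smoothed radial
    profile is taken as the ring artifact and subtracted."""
    ny, nx = img.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(y - ny / 2.0, x - nx / 2.0).astype(int)   # radius bins
    nbins = r.max() + 1
    sums = np.bincount(r.ravel(), weights=img.ravel(), minlength=nbins)
    counts = np.bincount(r.ravel(), minlength=nbins)
    profile = sums / np.maximum(counts, 1)                 # per-radius mean
    # median smoothing of the radial profile: rings are narrow outliers
    half = window // 2
    padded = np.pad(profile, half, mode="edge")
    smooth = np.array([np.median(padded[i:i + window]) for i in range(nbins)])
    return img - (profile - smooth)[r]

# Synthetic test: flat image plus a one-bin-wide bright ring
img = np.full((101, 101), 10.0)
y, x = np.indices(img.shape)
ring = np.hypot(y - 50.5, x - 50.5).astype(int) == 30
corrected = remove_rings(img + 5.0 * ring)
```

On this synthetic slice the ring is removed exactly; on real data the polar-image median filter of the note preserves angular detail better than this purely radial reduction.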

  5. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...
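The symplectic numerical methods mentioned can be illustrated with velocity Verlet, the workhorse molecular-dynamics integrator, applied to a harmonic oscillator. This is a generic textbook sketch, not code from the book:

```python
import numpy as np

def velocity_verlet(q, p, force, dt, n_steps, mass=1.0):
    """Velocity Verlet: a symplectic integrator, so the energy of a
    Hamiltonian system oscillates around its true value but does not
    drift over long runs (it exactly conserves a nearby "shadow"
    Hamiltonian)."""
    traj = np.empty((n_steps + 1, 2))
    traj[0] = q, p
    f = force(q)
    for i in range(1, n_steps + 1):
        p_half = p + 0.5 * dt * f          # half kick
        q = q + dt * p_half / mass         # drift
        f = force(q)
        p = p_half + 0.5 * dt * f          # half kick
        traj[i] = q, p
    return traj

# Harmonic oscillator: H = p^2/2 + q^2/2, started at q = 1, p = 0
traj = velocity_verlet(1.0, 0.0, lambda q: -q, dt=0.01, n_steps=10000)
energy = 0.5 * traj[:, 1] ** 2 + 0.5 * traj[:, 0] ** 2
drift = abs(energy[-1] - energy[0])
```

A non-symplectic method of the same order (e.g. classical Runge-Kutta applied naively with a large step) would show secular energy drift over the same 100 time units, which is why symplecticity is the organizing principle of the book's first chapters.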

  6. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-09

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  7. Two numerical methods for mean-field games

    KAUST Repository

    Gomes, Diogo A.

    2016-01-01

    Here, we consider numerical methods for stationary mean-field games (MFG) and investigate two classes of algorithms. The first one is a gradient flow method based on the variational characterization of certain MFG. The second one uses monotonicity properties of MFG. We illustrate our methods with various examples, including one-dimensional periodic MFG, congestion problems, and higher-dimensional models.

  8. On the numerical stability analysis of pipelined Krylov subspace methods

    Czech Academy of Sciences Publication Activity Database

    Carson, E.T.; Rozložník, Miroslav; Strakoš, Z.; Tichý, P.; Tůma, M.

    submitted 2017 (2018) R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : Krylov subspace methods * the conjugate gradient method * numerical stability * inexact computations * delay of convergence * maximal attainable accuracy * pipelined Krylov subspace methods * exascale computations

  9. Stochastic numerical methods an introduction for students and scientists

    CERN Document Server

    Toral, Raul

    2014-01-01

    Stochastic Numerical Methods introduces, at Master's level, the numerical methods that use probability or stochastic concepts to analyze random processes. The book aims at being rather general and is addressed to students of natural sciences (Physics, Chemistry, Mathematics, Biology, etc.) and Engineering, but also social sciences (Economics, Sociology, etc.) where some of the techniques have been used recently to numerically simulate different agent-based models. Examples included in the book range from phase transitions and critical phenomena, including details of data analysis (extraction of critical exponents, finite-size effects, etc.), to population dynamics, interfacial growth, chemical reactions, etc. Program listings are integrated in the discussion of numerical algorithms to facilitate their understanding. From the contents: Review of Probability Concepts; Monte Carlo Integration; Generation of Uniform and Non-uniform Random Numbers: Non-correlated Values; Dynamical Methods; Applications to Statistical Mechanics; In...
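As a flavor of the Monte Carlo integration covered in the book (this is a generic sketch, not one of the book's program listings): sample uniformly, average the integrand, and report a statistical error that shrinks like n^(-1/2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.uniform(0.0, 1.0, n)            # uniform samples on [0, 1]
f = x**2                                # integrand values
estimate = f.mean()                     # Monte Carlo estimate of ∫0^1 x^2 dx = 1/3
stderr = f.std(ddof=1) / np.sqrt(n)     # 1-sigma statistical error, ~ n^(-1/2)
```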

  10. Numerical methods design, analysis, and computer implementation of algorithms

    CERN Document Server

    Greenbaum, Anne

    2012-01-01

    Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...

  11. High-Resolution Esophageal Manometry: A Time Motion Study

    Directory of Open Access Journals (Sweden)

    Daniel C Sadowski

    2008-01-01

    INTRODUCTION: High-resolution manometry (HRM) of the esophagus is a new technique that provides a more precise assessment of esophageal motility than conventional techniques. Because HRM measures pressure events along the entire length of the esophagus simultaneously, clinical procedure time should be shorter because less catheter manipulation is required. According to manufacturer advertising, the new HRM system is more accurate and up to 50% faster than conventional methods.

  12. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
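The principle of recovering a sparse object from sub-sampled measurements can be illustrated on a toy problem. Note this uses an ℓ1 penalty solved by iterative soft-thresholding (ISTA), a simpler stand-in for the TV/Bregman scheme of the paper; all sizes and parameters below are illustrative:

```python
import numpy as np

def ista(A, y, lam=0.02, n_iter=1000):
    """Iterative soft-thresholding for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the data-term gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L        # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x

# Sub-sampled measurement of a sparse signal: 60 measurements, 100 unknowns.
rng = np.random.default_rng(0)
A = rng.normal(size=(60, 100)) / np.sqrt(60)
x0 = np.zeros(100)
x0[[7, 33, 80]] = [1.0, -2.0, 1.5]           # 3-sparse ground truth
y = A @ x0
x_hat = ista(A, y)
```

Despite having fewer measurements than unknowns, the sparsity constraint lets the solver locate and approximately recover the nonzero entries.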

  13. Numerical method for two phase flow with a unstable interface

    International Nuclear Information System (INIS)

    Glimm, J.; Marchesin, D.; McBryan, O.

    1981-01-01

    The random choice method is used to compute the oil-water interface for two-dimensional porous media equations. The equations used are a pair of coupled equations: the (elliptic) pressure equation and the (hyperbolic) saturation equation. The equations do not include the dispersive capillary pressure term, and the computation does not introduce numerical diffusion. The method resolves saturation discontinuities sharply. The main conclusion of this paper is that the random choice method is a correct numerical procedure for this problem even in the highly fingered case. Two methods of inducing fingers are considered: deterministically, through the choice of Cauchy data, and through heterogeneity, by maximizing the randomness of the random choice method.
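The random choice (Glimm) scheme samples exact Riemann solutions at a random point instead of averaging them, which is why discontinuities stay perfectly sharp. A minimal sketch for the inviscid Burgers equation, used here as a stand-in for the saturation equation; the staggered stepping and the fixed-end boundary handling are assumptions of this sketch:

```python
import numpy as np

def riemann_burgers(ul, ur, xi):
    """Exact similarity solution w(xi), xi = x/t, of the Burgers Riemann problem."""
    if ul > ur:                              # shock, speed (ul + ur)/2
        return ul if xi < 0.5 * (ul + ur) else ur
    if xi <= ul:                             # rarefaction fan
        return ul
    if xi >= ur:
        return ur
    return xi

def random_choice_step(u, dx, dt, rng):
    """One staggered Glimm random-choice step (two half steps).

    Each half step samples every local Riemann solution at the same random
    offset theta; the ends of the array are held fixed afterwards.
    """
    for _ in range(2):
        theta = rng.uniform(-0.5, 0.5)
        xi = 2.0 * theta * dx / dt           # sample point in similarity variables
        u = np.array([riemann_burgers(u[i], u[i + 1], xi)
                      for i in range(len(u) - 1)])
    return np.concatenate([u[:1], u, u[-1:]])
```

Because each cell value is a sampled Riemann state, a shock never acquires intermediate values: it stays a one-cell jump whose position is correct on average.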

  14. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    The transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model, with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problems is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or for an LMFBR

  15. Numerical methods for semiconductor heterostructures with band nonparabolicity

    International Nuclear Information System (INIS)

    Wang Weichung; Hwang Tsungmin; Lin Wenwei; Liu Jinnliang

    2003-01-01

    This article presents numerical methods for computing bound state energies and associated wave functions of three-dimensional semiconductor heterostructures with special interest in the numerical treatment of the effect of band nonparabolicity. A nonuniform finite difference method is presented to approximate a model of a cylindrically shaped semiconductor quantum dot embedded in another semiconductor matrix. A matrix reduction method is then proposed to dramatically reduce huge eigenvalue systems to much smaller subsystems. Moreover, the nonparabolic band structure results in a cubic type of nonlinear eigenvalue problem, for which a cubic Jacobi-Davidson method with an explicit nonequivalence deflation method is proposed to compute all the desired eigenpairs. Numerical results are given to illustrate the spectrum of energy levels and the corresponding wave functions in considerable detail

  16. EFFECTS OF DIFFERENT NUMERICAL INTERFACE METHODS ON HYDRODYNAMICS INSTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    FRANCOIS, MARIANNE M. [Los Alamos National Laboratory; DENDY, EDWARD D. [Los Alamos National Laboratory; LOWRIE, ROBERT B. [Los Alamos National Laboratory; LIVESCU, DANIEL [Los Alamos National Laboratory; STEINKAMP, MICHAEL J. [Los Alamos National Laboratory

    2007-01-11

    The authors compare the effects of different numerical schemes for the advection and material interface treatments on the single-mode Rayleigh-Taylor instability, using the RAGE hydro-code. The interface growth and its surface density (interfacial area) versus time are investigated. The surface density metric proves better suited than the conventional interface growth metric to characterize differences in the flow. They have found that Van Leer's limiter combined with no interface treatment leads to the largest surface area. Finally, to quantify the difference between the numerical methods, they have estimated the numerical viscosity in the linear regime at different scales.
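A limited advection scheme of the kind compared here can be sketched for 1-D linear advection; this is a generic MUSCL-type update with the Van Leer limiter, not the RAGE hydro-code's implementation:

```python
import numpy as np

def van_leer(r):
    """Van Leer flux limiter: phi(r) = (r + |r|) / (1 + |r|)."""
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def advect_step(u, c, limiter=van_leer):
    """One step of limited upwind advection on a periodic grid, c = a*dt/dx in (0, 1].

    The limited face value interpolates between first-order upwind (phi = 0)
    and Lax-Wendroff (phi = 1), staying non-oscillatory (TVD) near jumps.
    """
    um, up = np.roll(u, 1), np.roll(u, -1)           # u_{i-1}, u_{i+1}
    denom = np.where(up != u, up - u, 1e-30)
    r = (u - um) / denom                              # smoothness ratio r_i
    phi = limiter(r)
    uf = u + 0.5 * phi * (up - u) * (1.0 - c)         # limited face value u_{i+1/2}
    flux = c * uf
    return u - (flux - np.roll(flux, 1))              # conservative update
```

Setting `limiter=lambda r: 0.0 * r` recovers plain first-order upwinding, which makes the extra diffusion of the unlimited scheme easy to see on a square wave.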

  17. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others]

    1997-08-01

    This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key to the modelling process is to build a prototype model and to use that model as a geological knowledge bank. Outcrop information used in geological modelling includes seven aspects: (1) Determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) Horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) Determining the model's direction based on the paleocurrent statistics; (4) Estimating the sandbody communication by photomosaic and profiles; (6) Estimating reservoir properties distribution within sandbody by lithofacies analysis; and (7) Building the reservoir model in sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of the Youshashan oil field has been built using this method.

  18. Automated data processing of high-resolution mass spectra

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Smedsgaard, Jørn

    of the massive amounts of data. We present an automated data processing method to quantitatively compare large numbers of spectra from the analysis of complex mixtures, exploiting the full quality of high-resolution mass spectra. By projecting all detected ions - within defined intervals on both the time...... infusion of crude extracts into the source, taking advantage of the high sensitivity, high mass resolution and accuracy and the limited fragmentation. Unfortunately, there has not been a comparable development in the data processing techniques to fully exploit the gain in high resolution and accuracy...... infusion analyses of crude extracts to find the relationships among several terverticillate Penicillium species, and also that the ions responsible for the segregation can be identified. Furthermore, the method can automate the detection of unique species and unique metabolites.

  19. Towards high-resolution positron emission tomography for small volumes

    International Nuclear Information System (INIS)

    McKee, B.T.A.

    1982-01-01

    Some arguments are made regarding the medical usefulness of high spatial resolution in positron imaging, even if limited to small imaged volumes. Then the intrinsic limitations to spatial resolution in positron imaging are discussed. The project to build a small-volume, high resolution animal research prototype (SHARP) positron imaging system is described. The components of the system, particularly the detectors, are presented, and brief mention is made of data acquisition and image reconstruction methods. Finally, some preliminary imaging results are presented: a pair of isolated point sources and ¹⁸F in the bones of a rabbit. Although the detector system is not fully completed, these first results indicate that the goals of high sensitivity and high resolution (4 mm) have been realized. (Auth.)

  20. High-resolution investigations of edge effects in neutron imaging

    International Nuclear Information System (INIS)

    Strobl, M.; Kardjilov, N.; Hilger, A.; Kuehne, G.; Frei, G.; Manke, I.

    2009-01-01

    Edge enhancement is the main effect measured by the so-called inline or propagation-based neutron phase contrast imaging method. The effect was originally explained by diffraction, and high spatial coherence has been claimed to be a necessary precondition. However, edge enhancement has also been found in conventional imaging with high resolution. In such cases the effect can produce artefacts and hinder quantification. In this letter, the edge effects at cylindrically shaped samples and long straight edges have been studied in detail. The enhancement can be explained by refraction and total reflection. Using high-resolution imaging, where spatial resolutions better than 50 μm could be achieved, refraction and total reflection peaks - similar to diffraction patterns - could be separated and distinguished.

  1. Turbine component casting core with high resolution region

    Science.gov (United States)

    Kamel, Ahmed; Merrill, Gary B.

    2014-08-26

    A hollow turbine engine component with complex internal features can include a first region and a second, high resolution region. The first region can be defined by a first ceramic core piece formed by any conventional process, such as injection molding or transfer molding. The second region can be defined by a second ceramic core piece formed separately by a method effective to produce high resolution features, such as tomo-lithographic molding. The first and second core pieces can be joined by interlocking engagement: when subjected to an intermediate heat treatment, the pieces thermally deform, and thermal creep irreversibly interlocks them into a three-dimensional joint that remains physically locked, providing joint stability through subsequent thermal processing.

  2. High resolution extremity CT for biomechanics modeling

    International Nuclear Information System (INIS)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-01-01

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling

  3. High resolution extremity CT for biomechanics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, A.E.; Brand, H.; Hollerbach, K.; Logan, C.M.; Martz, H.E.

    1995-09-23

    With the advent of ever more powerful computing and finite element analysis (FEA) capabilities, the bone and joint geometry detail available from either commercial surface definitions or from medical CT scans is inadequate. For dynamic FEA modeling of joints, precise articular contours are necessary to get appropriate contact definition. In this project, a fresh cadaver extremity was suspended in paraffin in a Lucite cylinder and then scanned with an industrial CT system to generate a high resolution data set for use in biomechanics modeling.

  4. SPIRAL2/DESIR high resolution mass separator

    Energy Technology Data Exchange (ETDEWEB)

    Kurtukian-Nieto, T., E-mail: kurtukia@cenbg.in2p3.fr [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Baartman, R. [TRIUMF, 4004 Wesbrook Mall, Vancouver B.C., V6T 2A3 (Canada); Blank, B.; Chiron, T. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Davids, C. [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States); Delalee, F. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Duval, M. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); El Abbeir, S.; Fournier, A. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Lunney, D. [CSNSM-IN2P3-CNRS, Université de Paris Sud, F-91405 Orsay (France); Méot, F. [BNL, Upton, Long Island, New York (United States); Serani, L. [Centre d’Études Nucléaires de Bordeaux Gradignan, Université Bordeaux 1-CNRS/IN2P3, BP 120, F-33175 Gradignan Cedex (France); Stodel, M.-H.; Varenne, F. [GANIL, CEA/DSM-CNRS/IN2P3, Bd Henri Becquerel, BP 55027, F-14076 Caen Cedex 5 (France); and others

    2013-12-15

    DESIR is the low-energy part of the SPIRAL2 ISOL facility under construction at GANIL. DESIR includes a high-resolution mass separator (HRS) with a designed resolving power m/Δm of 31,000 for a 1 π-mm-mrad beam emittance, obtained using a high-intensity beam cooling device. The proposed design consists of two 90-degree magnetic dipoles, complemented by electrostatic quadrupoles, sextupoles, and a multipole, arranged in a symmetric configuration to minimize aberrations. A detailed description of the design and results of extensive simulations are given.

  5. Laboratory of High resolution gamma spectrometry

    International Nuclear Information System (INIS)

    Mendez G, A.; Giber F, J.; Rivas C, I.; Reyes A, B.

    1992-01-01

    The Department of Nuclear Experimentation of the Nuclear Systems Management requested the collaboration of the Engineering unit to supervise the execution of the work on the high-resolution gamma spectrometry and low-background laboratory, housed in the building of the subcritical reactor of the Nuclear Center of Mexico. This laboratory is intended to determine the activity of special materials irradiated in nuclear power plants. In this report, the architectural development, concepts, materials and diagrams for the realization of this type of work are presented. (Author)

  6. High resolution neutron spectroscopy for helium isotopes

    International Nuclear Information System (INIS)

    Abdel-Wahab, M.S.; Klages, H.O.; Schmalz, G.; Haesner, B.H.; Kecskemeti, J.; Schwarz, P.; Wilczynski, J.

    1992-01-01

    A high resolution fast neutron time-of-flight spectrometer is described; neutron time-of-flight spectra are taken using a specially designed TDC connected to an on-line computer. The high time-of-flight resolution of 5 ps/m enabled the study of the total cross section of ⁴He for neutrons near the 3/2⁺ resonance in the ⁵He nucleus. The resonance parameters were determined by a single-level Breit-Wigner fit to the data. (orig.)
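A single-level Breit-Wigner fit of the kind mentioned can be sketched with synthetic data; the resonance parameters and energy grid below are illustrative, not the ⁵He values:

```python
import numpy as np
from scipy.optimize import curve_fit

def breit_wigner(E, sigma0, Er, Gamma):
    """Single-level Breit-Wigner resonance: peak sigma0 at Er with FWHM Gamma."""
    return sigma0 * (0.25 * Gamma**2) / ((E - Er)**2 + 0.25 * Gamma**2)

# Synthetic "measured" cross section with noise (illustrative values).
rng = np.random.default_rng(1)
E = np.linspace(20.0, 24.0, 80)
true_params = (7.0, 22.15, 0.40)          # sigma0, Er, Gamma
sigma = breit_wigner(E, *true_params) + rng.normal(0.0, 0.02, E.size)

# Least-squares fit recovers the resonance parameters from the noisy data.
popt, pcov = curve_fit(breit_wigner, E, sigma, p0=(5.0, 22.0, 0.5))
```

The diagonal of `pcov` gives variance estimates for the fitted resonance energy and width.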

  7. Numerical methods for axisymmetric and 3D nonlinear beams

    Science.gov (United States)

    Pinton, Gianmarco F.; Trahey, Gregg E.

    2005-04-01

    Time domain algorithms that solve the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation are described and implemented. This equation represents the propagation of finite amplitude sound beams in a homogeneous thermoviscous fluid for axisymmetric and fully three-dimensional geometries. In the numerical solution each of the terms is considered separately, and the numerical methods are compared with known solutions. First- and second-order operator splitting are used to combine the separate terms in the KZK equation, and their convergence is examined.
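First- and second-order operator splitting can be illustrated on a toy linear system, where the observed error ratios confirm the expected orders; this is a generic sketch, not the KZK solver itself:

```python
import numpy as np
from scipy.linalg import expm

def lie_step(u, A, B, dt):
    """First-order (Lie) splitting: exp(dt*A) applied after exp(dt*B)."""
    return expm(dt * A) @ (expm(dt * B) @ u)

def strang_step(u, A, B, dt):
    """Second-order (Strang) splitting: half step of B, full step of A, half step of B."""
    h = expm(0.5 * dt * B)
    return h @ (expm(dt * A) @ (h @ u))

# Toy non-commuting system u' = (A + B) u, integrated to t = 1.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
u0 = np.array([1.0, 0.5])
exact = expm(A + B) @ u0

def split_error(step, n):
    u, dt = u0.copy(), 1.0 / n
    for _ in range(n):
        u = step(u, A, B, dt)
    return np.linalg.norm(u - exact)
```

Halving the step roughly halves the Lie error and quarters the Strang error, matching global orders one and two.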

  8. Numerical methods of mathematical optimization with Algol and Fortran programs

    CERN Document Server

    Künzi, Hans P; Zehnder, C A; Rheinboldt, Werner

    1971-01-01

    Numerical Methods of Mathematical Optimization: With ALGOL and FORTRAN Programs reviews the theory and the practical application of the numerical methods of mathematical optimization. An ALGOL and a FORTRAN program was developed for each one of the algorithms described in the theoretical section. This should result in easy access to the application of the different optimization methods.Comprised of four chapters, this volume begins with a discussion on the theory of linear and nonlinear optimization, with the main stress on an easily understood, mathematically precise presentation. In addition

  9. Numerical methods for modeling photonic-crystal VCSELs

    DEFF Research Database (Denmark)

    Dems, Maciej; Chung, Il-Sug; Nyakas, Peter

    2010-01-01

    We compare four different numerical methods for simulating photonic-crystal (PC) VCSELs. We present the theoretical basis behind each method and analyze the differences by studying a benchmark VCSEL structure, where the PC structure penetrates all VCSEL layers, the entire top-mirror DBR...... to the effective index method. The simulation results elucidate the strengths and weaknesses of the analyzed methods and outline the limits of applicability of the different models....

  10. Valve cam design using numerical step-by-step method

    OpenAIRE

    Vasilyev, Aleksandr; Bakhracheva, Yuliya; Kabore, Ousman; Zelenskiy, Yuriy

    2014-01-01

    This article studies the numerical step-by-step method of cam profile design. The results of the study are used for designing the internal combustion engine valve gear. This method allows cam profiles to be designed for peak efficiency subject to the many restrictions connected with valve gear serviceability and reliability.

  11. Investigating Convergence Patterns for Numerical Methods Using Data Analysis

    Science.gov (United States)

    Gordon, Sheldon P.

    2013-01-01

    The article investigates the patterns that arise in the convergence of numerical methods, particularly those in the errors involved in successive iterations, using data analysis and curve fitting methods. In particular, the results obtained are used to convey a deeper level of understanding of the concepts of linear, quadratic, and cubic…
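The curve-fitting idea behind such a study: if successive errors obey e_{n+1} ≈ C·e_n^p, a log-log fit of e_{n+1} against e_n recovers the order p. A sketch using Newton's method for √2 (an assumed example, not necessarily one from the article):

```python
import numpy as np

def estimated_order(errors):
    """Fit log e_{n+1} = p*log e_n + log C; the slope p is the convergence order."""
    slope, _intercept = np.polyfit(np.log(errors[:-1]), np.log(errors[1:]), 1)
    return slope

# Newton's method for sqrt(2): errors should shrink quadratically (p ~ 2).
x, root = 3.0, np.sqrt(2.0)
errors = []
for _ in range(5):
    x = 0.5 * (x + 2.0 / x)       # Newton update for f(x) = x^2 - 2
    errors.append(abs(x - root))
p = estimated_order(errors)
```

Applying the same fit to bisection errors would give a slope near 1, making the linear-versus-quadratic distinction visible directly from the data.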

  12. A numerical test of the collective coordinate method

    International Nuclear Information System (INIS)

    Dobrowolski, T.; Tatrocki, P.

    2008-01-01

    The purpose of this Letter is to compare the dynamics of the kink interacting with the imperfection, as obtained from the collective coordinate method, with the numerical results obtained from the field theoretical model. We showed that for weakly interacting kinks the collective coordinate method works similarly well for low and extremely large speeds

  13. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-09-19

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  14. Application of numerical analysis methods to thermoluminescence dosimetry

    International Nuclear Information System (INIS)

    Gomez Ros, J. M.; Delgado, A.

    1989-01-01

    This report presents the application of numerical methods to thermoluminescence dosimetry (TLD), showing the advantages obtained over conventional evaluation systems. Different configurations of the analysis method are presented to operate in specific dosimetric applications of TLD, such as environmental monitoring and mailed dosimetry systems for quality assurance in radiotherapy facilities. (Author) 10 refs

  15. Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance

    KAUST Repository

    Happola, Juho

    2017-01-01

    Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.

  16. A numerical method for solving singular DEs

    Energy Technology Data Exchange (ETDEWEB)

    Mahaver, W.T.

    1996-12-31

    A numerical method is developed for solving singular differential equations using steepest descent based on weighted Sobolev gradients. The method is demonstrated on a variety of first and second order problems, including linear constrained, unconstrained, and partially constrained first order problems, a nonlinear first order problem with an irregular singularity, and two second order variational problems.

  17. Limiting liability via high resolution image processing

    Energy Technology Data Exchange (ETDEWEB)

    Greenwade, L.E.; Overlin, T.K.

    1996-12-31

    The utilization of high resolution image processing allows forensic analysts and visualization scientists to assist detectives by enhancing field photographs, and by providing the tools and training to increase the quality and usability of field photos. Through the use of digitized photographs and computerized enhancement software, field evidence can be obtained and processed as 'evidence ready', even in poor lighting and shadowed conditions or darkened rooms. These images, which are most often unusable when taken with standard camera equipment, can be shot in the worst of photographic conditions and be processed into usable evidence. Visualization scientists have taken digital photographic image processing and moved crime scene photography into the technology age. The use of high resolution technology will assist law enforcement in making better use of crime scene photography and in positive identification of prints. Valuable courtroom and investigation time can be saved and better served by this accurate, performance-based process. Inconclusive evidence does not lead to convictions. Enhancement of photographic capability helps solve one major problem with crime scene photos: images that, if taken with standard equipment and without the benefit of enhancement software, would be inconclusive, allowing guilty parties to go free for lack of evidence.

  18. High resolution studies of barium Rydberg states

    International Nuclear Information System (INIS)

    Eliel, E.R.

    1982-01-01

    The subtle structure of Rydberg states of barium with orbital angular momentum 0, 1, 2 and 3 is investigated. Some aspects of atomic theory for a configuration with two valence electrons are reviewed. Multi-Channel Quantum Defect Theory (MQDT) is concisely introduced as a convenient way to describe interactions between Rydberg series. Three high-resolution UV studies are presented. The first two, presenting results on transitions in indium and europium, serve as illustrations of the frequency doubling technique. The third study is of hyperfine structure and isotope shifts in low-lying p states in Sr and Ba. An extensive study of the 6snp and 6snf Rydberg states of barium is presented, with particular emphasis on the 6snf states. It is shown that the level structure cannot be fully explained with the model introduced earlier. Rather, an effective two-body spin-orbit interaction has to be introduced to account for the observed splittings, illustrating that high resolution studies of Rydberg states offer a unique opportunity to determine the importance of such effects. Finally, the 6sns and 6snd series are considered. The hyperfine-induced isotope shift in the simple excitation spectra to 6sns ¹S₀ is discussed, and attention is paid to series perturbers. It is shown that level mixing parameters can easily be extracted from the experimental data. (Auth.)

  19. Principles of high resolution NMR in solids

    CERN Document Server

    Mehring, Michael

    1983-01-01

    The field of Nuclear Magnetic Resonance (NMR) has developed at a fascinating pace during the last decade. It has always been an extremely valuable tool to the organic chemist, supplying molecular "fingerprint" spectra at the atomic level. Unfortunately, the high resolution achievable in liquid solutions could not be obtained in solids, and physicists and physical chemists had to live with unresolved lines open to a wealth of curve fitting procedures and a vast amount of speculation. High resolution NMR in solids seemed to be a paradox. Broad, structureless lines are usually encountered when dealing with NMR in solids. Only with the recent advent of multiple-pulse, magic angle, cross-polarization, two-dimensional and multiple-quantum spectroscopy and other techniques during the last decade did it become possible to resolve finer details of nuclear spin interactions in solids. I have felt that graduate students, researchers and others beginning to get involved with these techniques needed a book which trea...

  20. High-Resolution PET Detector. Final report

    International Nuclear Information System (INIS)

    Karp, Joel

    2014-01-01

    The objective of this project was to develop an understanding of the limits of performance for a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators using relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photo-sensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm-thick continuous LYSO detector, using a maximum-likelihood position algorithm and shallow slots cut into the entrance surface.

  1. Application of the photoelastic experimental hybrid method with new numerical method to the high stress distribution

    International Nuclear Information System (INIS)

    Hawong, Jai Sug; Lee, Dong Hun; Lee, Dong Ha; Tche, Konstantin

    2004-01-01

    In this research, the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method has been developed. This method is more precise and stable than the photoelastic experimental hybrid method with the Newton-Raphson numerical method and Gaussian elimination. Using the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method, stress components can be separated from isochromatics alone, and stress intensity factors and stress concentration factors can be determined. The photoelastic experimental hybrid method with Hooke-Jeeves is better suited to full-field experiments than the variant based on Newton-Raphson with Gaussian elimination.
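The Hooke-Jeeves scheme the authors adopt is a derivative-free pattern search. A minimal sketch, run here on a hypothetical smooth test function standing in for the photoelastic misfit (not the paper's actual objective):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6, max_iter=2000):
    # Derivative-free pattern search: exploratory moves along each axis,
    # then a pattern move through the improved point; shrink on failure.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        xe, fe = x.copy(), fx
        for i in range(len(x)):          # exploratory phase
            for d in (step, -step):
                trial = xe.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fe:
                    xe, fe = trial, ft
                    break
        if fe < fx:                      # pattern move through xe
            xp = 2.0 * xe - x
            x, fx = (xp, f(xp)) if f(xp) < fe else (xe, fe)
        else:
            step *= shrink               # no improvement: refine the mesh
            if step < tol:
                break
    return x, fx

f = lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2  # illustrative objective
xmin, fmin = hooke_jeeves(f, [0.0, 0.0])
print(np.allclose(xmin, [3.0, -1.0], atol=1e-2))
```

Because no derivatives are needed, the search tolerates the noisy, experimentally derived objective values typical of hybrid photoelastic analysis.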

  2. Numerical perturbative methods in the quantum theory of physical systems

    International Nuclear Information System (INIS)

    Adam, G.

    1980-01-01

    During the last two decades, the development of digital electronic computers has led to the deployment of new, distinct methods in theoretical physics. These methods, based on the advances of modern numerical analysis as well as on specific equations describing physical processes, have enabled us to perform precise calculations of high complexity, which have complemented and sometimes changed our picture of many physical phenomena. Our efforts have concentrated on the development of numerical methods with such intrinsic performance as to allow a successful approach to some key issues in present theoretical physics on smaller computation systems. The basic principle of such methods is to translate, into the language of numerical analysis, the theory of perturbations, in a form suited to numerical rather than analytical computation. This idea has been illustrated by working out two problems which arise from the time-independent Schroedinger equation in the non-relativistic approximation, within both quantum systems with a small number of particles and systems with a large number of particles, respectively. In the first case, we are led to the numerical solution of some quadratic ordinary differential equations (first section of the thesis) and in the second case, to the solution of some secular equations in the Brillouin zone (second section). (author)

  3. Numerical methods for Bayesian inference in the face of aging

    International Nuclear Information System (INIS)

    Clarotti, C.A.; Villain, B.; Procaccia, H.

    1996-01-01

    In recent years, much attention has been paid to Bayesian methods for risk assessment. Until now, these methods have been studied from a theoretical point of view: researchers have been mainly interested in studying the effectiveness of Bayesian methods in handling rare events, and in debating the problem of priors and other philosophical issues. An aspect central to the Bayesian approach is numerical computation, because any safety/reliability problem, in a Bayesian frame, ends with a problem of numerical integration. This aspect has been neglected until now because most risk studies assumed the exponential model as the basic probabilistic model; the existence of conjugate priors makes numerical integration unnecessary in this case. If aging is to be taken into account, no conjugate family is available and the use of numerical integration becomes compulsory. EDF (National Board of Electricity, of France) and ENEA (National Committee for Energy, New Technologies and Environment, of Italy) jointly carried out a research program aimed at developing quadrature methods suitable for Bayesian inference with underlying Weibull or gamma distributions. The paper will illustrate the main results achieved during the above research program and will discuss, via some sample cases, the performance of the numerical algorithms, applied to the appearance of stress corrosion cracking in the tubes of steam generators of French PWR power plants. (authors)
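The computational point above — that without conjugacy the posterior must be integrated numerically — can be illustrated with a one-dimensional quadrature over a Weibull scale parameter. The failure times, the assumed-known shape, and the flat prior below are all illustrative, not EDF/ENEA data:

```python
import numpy as np

def weibull_loglik(t, shape, scale):
    # Log-likelihood of complete (uncensored) failure times under a Weibull model
    z = t / scale
    return np.sum(np.log(shape / scale) + (shape - 1.0) * np.log(z) - z ** shape)

# Hypothetical failure times in years; shape > 1 corresponds to aging
times = np.array([3.1, 4.7, 5.2, 6.0, 6.8, 7.4, 8.9])
shape = 2.5                              # assumed known to keep the integral 1-D
scales = np.linspace(1.0, 20.0, 2000)    # quadrature grid, flat prior

loglik = np.array([weibull_loglik(times, shape, s) for s in scales])
post = np.exp(loglik - loglik.max())     # unnormalised posterior density
post /= np.trapz(post, scales)           # normalise by the trapezoidal rule
post_mean = np.trapz(scales * post, scales)
print(5.0 < post_mean < 9.0)
```

With both shape and scale unknown the integral becomes two-dimensional, which is exactly where purpose-built quadrature rules of the kind the program developed start to pay off.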

  4. On numerical solution of Burgers' equation by homotopy analysis method

    International Nuclear Information System (INIS)

    Inc, Mustafa

    2008-01-01

    In this Letter, we present the Homotopy Analysis Method (HAM) for obtaining the numerical solution of the one-dimensional nonlinear Burgers' equation. The initial approximation can be freely chosen, with possible unknown constants which can be determined by imposing the boundary and initial conditions. Convergence of the solution and the effectiveness of the method are discussed. The HAM results are compared with those of the Homotopy Perturbation Method (HPM) and with the results of [E.N. Aksan, Appl. Math. Comput. 174 (2006) 884; S. Kutluay, A. Esen, Int. J. Comput. Math. 81 (2004) 1433; S. Abbasbandy, M.T. Darvishi, Appl. Math. Comput. 163 (2005) 1265]. The results reveal that HAM is very simple and effective. HAM contains the auxiliary parameter h, which provides a simple way to adjust and control the convergence region of the solution series. The numerical solutions are compared with the known analytical solutions and with some numerical solutions.
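HAM itself is a semi-analytic series construction and is not reproduced here. As a rough sketch of the kind of reference numerical solution that such results are checked against, here is an explicit finite-difference march for the viscous Burgers' equation u_t + u u_x = nu u_xx on a periodic domain (grid sizes and viscosity are illustrative choices satisfying the explicit stability limits):

```python
import numpy as np

def burgers_fd(nx=201, nt=2000, nu=0.1, L=2.0 * np.pi, T=1.0):
    # Central differences in space, forward Euler in time, periodic boundaries
    dx, dt = L / nx, T / nt
    x = np.arange(nx) * dx
    u = np.sin(x)                      # classic initial condition
    for _ in range(nt):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)
        uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
        u = u + dt * (-u * ux + nu * uxx)
    return x, u

x, u = burgers_fd()
# viscosity dissipates the initial sine wave: amplitude decays but stays finite
print(np.all(np.isfinite(u)) and 0.2 < np.max(np.abs(u)) < 1.0)
```

A semi-analytic solution like HAM's can then be evaluated on the same grid and compared pointwise against this baseline.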

  5. Interdisciplinary Study of Numerical Methods and Power Plants Engineering

    Directory of Open Access Journals (Sweden)

    Ioana OPRIS

    2014-08-01

    The development of technology, electronics and computing has opened the way for cross-disciplinary research that brings benefits by combining the achievements of different fields. To prepare students for their future interdisciplinary approach, an interdisciplinary style of teaching is adopted. This ensures their progress in knowledge, understanding and ability to navigate through different fields. Aiming at these results, universities introduce new interdisciplinary courses which explore complex problems by studying subjects from different domains. The paper presents a problem encountered in designing power plants. The method of solving the problem is used to explain the numerical methods and to exercise programming. The goal of understanding a numerical algorithm that solves a linear system of equations is achieved by using the knowledge of heat transfer to design the regenerative circuit of a thermal power plant. In this way, the outcomes from prior courses (mathematics and physics) are used to explain a new subject (numerical methods) and to prepare for future ones (power plants).
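The classroom algorithm in question — solving a linear system such as a heater energy balance by Gaussian elimination — can be sketched as follows. The 3x3 coefficients are hypothetical placeholders, not a real feed-heating circuit:

```python
import numpy as np

def gauss_solve(A, b):
    # Gaussian elimination with partial pivoting, then back substitution
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))      # pivot row
        A[[k, p]], b[[k, p]] = A[[p, k]], b[[p, k]]
        for i in range(k + 1, n):
            m = A[i, k] / A[k, k]
            A[i, k:] -= m * A[k, k:]
            b[i] -= m * b[k]
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):               # back substitution
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

# hypothetical 3-heater energy balance: coefficients are illustrative only
A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.5, -1.0],
              [ 0.0, -1.0,  3.0]])
b = np.array([40.0, 10.0, 55.0])
x = gauss_solve(A, b)
print(np.allclose(A @ x, b))
```

In the course setting, each equation would come from an energy balance over one feedwater heater, with the extraction steam flows as the unknowns.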

  6. High-resolution CT findings in Streptococcus milleri pulmonary infection

    International Nuclear Information System (INIS)

    Okada, F.; Ono, A.; Ando, Y.; Nakayama, T.; Ishii, H.; Hiramatsu, K.; Sato, H.; Kira, A.; Otabe, M.; Mori, H.

    2013-01-01

    Aim: To assess pulmonary high-resolution computed tomography (CT) findings in patients with acute Streptococcus milleri pulmonary infection. Materials and methods: Sixty consecutive patients with acute S. milleri pneumonia who had undergone high-resolution CT chest examinations between January 2004 and March 2010 were retrospectively identified. Twenty-seven patients with concurrent infections were excluded. The final study group comprised 33 patients (25 men, 8 women; aged 20–88 years, mean 63.1 years) with S. milleri infection. The patients' clinical findings were assessed. Parenchymal abnormalities, enlarged lymph nodes, and pleural effusion were evaluated on high-resolution CT. Results: Underlying conditions included malignancy (n = 15), a smoking habit (n = 11), and diabetes mellitus (n = 8). CT images of all patients showed abnormal findings, including ground-glass opacity (n = 24), bronchial wall thickening (n = 23), consolidation (n = 17), and cavities (n = 7). Pleural effusion was found in 18 patients, and complex pleural effusions were found in seven patients. Conclusion: Pulmonary infection caused by S. milleri was observed mostly in male patients with underlying conditions such as malignancy or a smoking habit. The CT findings in patients with S. milleri consisted mainly of ground-glass opacity, bronchial wall thickening, pleural effusions, and cavities

  7. Simulation and Prediction of Weather Radar Clutter Using a Wave Propagator on High Resolution NWP Data

    DEFF Research Database (Denmark)

    Benzon, Hans-Henrik; Bovith, Thomas

    2008-01-01

    Weather radars are essential sensors for observation of precipitation in the troposphere and play a major part in weather forecasting and hydrological modelling. Clutter caused by non-standard wave propagation is a common problem in weather radar applications, and in this paper a method for prediction of this type of weather radar clutter is presented. The method uses a wave propagator to identify areas of potential non-standard propagation. The wave propagator uses a three-dimensional refractivity field derived from the geophysical parameters temperature, humidity, and pressure obtained from a high-resolution Numerical Weather Prediction (NWP) model. The wave propagator is based on the parabolic-equation approximation to the electromagnetic wave equation. The parabolic equation is solved using the well-known Fourier split-step method. Finally, the radar clutter prediction technique is used...
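The Fourier split-step march mentioned above alternates a diffraction step in the transform domain with a refraction step in the spatial domain. A minimal sketch of the standard parabolic equation du/dx = (i/(2 k0)) d²u/dz² + i k0 (n - 1) u, with made-up grid and refractivity values rather than NWP fields:

```python
import numpy as np

def split_step_pe(u0, n_ref, k0, dx, nx):
    # Fourier split-step for the standard parabolic equation:
    # diffraction handled in the transform domain, refraction in space
    nz = u0.size
    kz = 2.0 * np.pi * np.fft.fftfreq(nz, d=1.0)   # unit z-spacing assumed
    diffract = np.exp(-1j * kz ** 2 / (2.0 * k0) * dx)
    refract = np.exp(1j * k0 * (n_ref - 1.0) * dx)
    u = u0.astype(complex)
    for _ in range(nx):
        u = np.fft.ifft(diffract * np.fft.fft(u))  # diffraction half of the split
        u = refract * u                            # refraction half of the split
    return u

# Gaussian beam in a homogeneous medium (refractive index 1): each step is
# unitary, so the propagator should conserve energy exactly
z = np.arange(256) - 128.0
u0 = np.exp(-(z / 10.0) ** 2)
u = split_step_pe(u0, n_ref=np.ones(256), k0=2.0, dx=0.1, nx=50)
ratio = np.sum(np.abs(u) ** 2) / np.sum(np.abs(u0) ** 2)
print(abs(ratio - 1.0) < 1e-9)
```

In the clutter application, n_ref would vary with height and range according to the NWP-derived refractivity field, and ducting shows up as energy trapped near the surface.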

  8. MATH: A Scientific Tool for Numerical Methods Calculation and Visualization

    Directory of Open Access Journals (Sweden)

    Henrich Glaser-Opitz

    2016-02-01

    MATH is an easy-to-use application for various numerical-methods calculations, with a graphical user interface and an integrated plotting tool. It is written in Qt, with extensive use of the Qwt library for plotting and of the GSL and muParser libraries for numerical routines and expression parsing. It can be found at http://sourceforge.net/projects/nummath. MATH is a convenient tool for use in the education process because of its capability of showing every important step in the solution process, to better understand how it is done. MATH also enables fast comparison of the speed and precision of similar methods.

  9. Numerical simulation methods for wave propagation through optical waveguides

    International Nuclear Information System (INIS)

    Sharma, A.

    1993-01-01

    The simulation of the field propagation through waveguides requires numerical solutions of the Helmholtz equation. For this purpose a method based on the principle of orthogonal collocation was recently developed. The method is also applicable to nonlinear pulse propagation through optical fibers. Some of the salient features of this method and its application to both linear and nonlinear wave propagation through optical waveguides are discussed in this report. 51 refs, 8 figs, 2 tabs

  10. High Resolution Powder Diffraction and Structure Determination

    International Nuclear Information System (INIS)

    Cox, D. E.

    1999-01-01

    It is clear that high-resolution synchrotron X-ray powder diffraction is a very powerful and convenient tool for materials characterization and structure determination. Most investigations to date have been carried out under ambient conditions and have focused on structure solution and refinement. The application of high-resolution techniques to increasingly complex structures will certainly represent an important part of future studies, and it has been seen how ab initio solution of structures with perhaps 100 atoms in the asymmetric unit is within the realms of possibility. However, the ease with which temperature-dependent measurements can be made, combined with improvements in the technology of position-sensitive detectors, will undoubtedly stimulate precise in situ structural studies of phase transitions and related phenomena. One challenge in this area will be to develop high-resolution techniques for ultra-high-pressure investigations in diamond anvil cells. This will require highly focused beams and very precise collimation in front of the cell, down to dimensions of 50 µm or less. Anomalous scattering offers many interesting possibilities as well. As a means of enhancing scattering contrast it has applications not only to the determination of cation distribution in mixed systems such as the superconducting oxides discussed in Section 9.5.3, but also to the location of specific cations in partially occupied sites, such as the extra-framework positions in zeolites, for example. Another possible application is to provide phasing information for ab initio structure solution. Finally, the precise determination of the scattering factor f as a function of energy through an absorption edge can provide useful information about cation oxidation states, particularly in conjunction with XANES data. In contrast to many experiments at a synchrotron facility, powder diffraction is a relatively simple and user-friendly technique, and most of the procedures and software for data analysis

  11. High resolution microprofiling, fractionation and speciation at sediment water interfaces

    Science.gov (United States)

    Fabricius, Anne-Lena; Duester, Lars; Ecker, Dennis; Ternes, Thomas A.

    2016-04-01

    Within aquatic environments, the exchange between the sediment and the overlying water is often driven by steep gradients of, e.g., the oxygen concentration, the redox potential or the pH value at the sediment-water interface (SWI). Important transport processes at the SWI are sedimentation and resuspension of particulate matter and diffusional fluxes of dissolved substances. To gain a better understanding of the key factors and processes determining the fate of substances at the SWI, methods with high spatial resolution are required that enable the investigation of several sediment parameters in parallel with different analytes of interest in the sediment pore water. Moreover, beside the total content, questions concerning speciation and fractionation are of concern in studying the different (transport) processes. Due to the availability of numerous micro-sensors and -electrodes (e.g., O2, redox potential, pH value, H2S, N2O) and the development of methods for pore water sampling [1], the toolbox to study the heterogeneous and often dynamic conditions at the SWI at a sub-millimetre scale was considerably improved. Nevertheless, the methods available for pore water sampling often require the installation of the sampling devices at the sampling site and/or intensive preparation procedures that may influence the conditions in the area studied and/or the characteristics of the samples taken. By combining a micro-profiling system with a new micro-filtration probe head connected to a pump and a fraction collector, a micro-profiling and micro-sampling system ("missy") was developed that enables for the first time a direct, automated and low-invasive sampling of small volumes (content of metal(loid)s, but also their fractionation (size-dependent and micelle-mediated) or speciation-related distributions along sediment depth profiles in parallel with different sediment parameters (O2, redox and pH). Together with the results of missy experiments, the results of

  12. Direct numerical methods of mathematical modeling in mechanical structural design

    International Nuclear Information System (INIS)

    Sahili, Jihad; Verchery, Georges; Ghaddar, Ahmad; Zoaeter, Mohamed

    2002-01-01

    Structural design and numerical methods are generally interactive, requiring optimization procedures as the structure is analyzed. This analysis leads to the definition of some mathematical terms, such as the stiffness matrix, which result from the modeling and are then used in numerical techniques during the dimensioning procedure. These techniques, and many others, involve the calculation of the generalized inverse of the stiffness matrix, also called the 'compliance matrix'. The aim of this paper is first to introduce some different existing mathematical procedures used to calculate the compliance matrix from the stiffness matrix, then to apply direct numerical methods to solve the obtained system with the lowest computational time, and to compare the obtained results. The results show a large difference in computational time between the different procedures
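One standard direct procedure for the generalized inverse is via the singular value decomposition. A sketch with an illustrative singular "stiffness" matrix (its rank deficiency plays the role of a rigid-body mode; the numbers are not from the paper):

```python
import numpy as np

def pinv_svd(K, rcond=1e-12):
    # Moore-Penrose generalized inverse via SVD: invert only the
    # singular values above a relative tolerance, zero the rest
    U, s, Vt = np.linalg.svd(K)
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    return Vt.T @ np.diag(s_inv) @ U.T

# singular "stiffness" matrix: rows sum to zero, so rank is deficient
K = np.array([[ 2.0, -1.0, -1.0],
              [-1.0,  2.0, -1.0],
              [-1.0, -1.0,  2.0]])
C = pinv_svd(K)                     # the "compliance" matrix
# Moore-Penrose defining properties: K C K == K and C K C == C
print(np.allclose(K @ C @ K, K) and np.allclose(C @ K @ C, C))
```

The SVD route is the most robust of the direct procedures but also among the most expensive, which is precisely the computational-time trade-off the paper compares.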

  13. High resolution CT of the lung

    Energy Technology Data Exchange (ETDEWEB)

    Itoh, Harumi (Kyoto Univ. (Japan). Faculty of Medicine)

    1991-02-01

    The emergence of computed tomography (CT) in the early 1970s greatly contributed to diagnostic radiology. The brain was the first organ examined with CT, followed by the abdomen. For the chest, CT also came into use shortly after its introduction, in the examination of the thoracic cavity and mediastinum. CT techniques were, however, of limited significance in the evaluation of pulmonary diseases, especially diffuse pulmonary diseases. High-resolution CT (HRCT) has been introduced in clinical investigations of the lung field. This article is designed to present chest radiographic and conventional tomographic interpretations and to introduce the corresponding HRCT findings for the same shadows, with a summary of the significance of HRCT and issues in diagnostic imaging. Materials outlined are tuberculosis, pneumoconiosis, bronchopneumonia, mycoplasma pneumonia, lymphangitic carcinomatosis, sarcoidosis, diffuse panbronchiolitis, interstitial pneumonia, and pulmonary emphysema. Finally, an overview of basic investigations evolved from HRCT is given. (N.K.) 140 refs.

  14. Constructing a WISE High Resolution Galaxy Atlas

    Science.gov (United States)

    Jarrett, T. H.; Masci, F.; Tsai, C. W.; Petty, S.; Cluver, M.; Assef, Roberto J.; Benford, D.; Blain, A.; Bridge, C.; Donoso, E.; et al.

    2012-01-01

    After eight months of continuous observations, the Wide-field Infrared Survey Explorer (WISE) mapped the entire sky at 3.4 micron, 4.6 micron, 12 micron, and 22 micron. We have begun a dedicated WISE High Resolution Galaxy Atlas project to fully characterize large, nearby galaxies and produce a legacy image atlas and source catalog. Here we summarize the deconvolution techniques used to significantly improve the spatial resolution of WISE imaging, specifically designed to study the internal anatomy of nearby galaxies. As a case study, we present results for the galaxy NGC 1566, comparing the WISE enhanced-resolution image processing to that of Spitzer, Galaxy Evolution Explorer, and ground-based imaging. This is the first paper in a two-part series; results for a larger sample of nearby galaxies are presented in the second paper.
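The WISE Atlas pipeline uses its own enhanced-resolution algorithms, which are not reproduced here. As a generic illustration of deconvolution-based resolution enhancement of the kind summarized above, the classic Richardson-Lucy iteration on synthetic data (PSF, scene and iteration count are all assumptions for the demo):

```python
import numpy as np

def make_psf(n, sigma):
    # Gaussian PSF on an n x n periodic grid, centred at pixel (0, 0)
    x = np.arange(n) - n // 2
    g = np.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / (2.0 * sigma ** 2))
    return np.fft.ifftshift(g / g.sum())

def richardson_lucy(blurred, psf, n_iter=50):
    # Classic Richardson-Lucy iteration with circular (FFT) convolutions;
    # the adjoint of convolution is multiplication by the conjugate spectrum
    H = np.fft.fft2(psf)
    conv = lambda a, spec: np.real(np.fft.ifft2(np.fft.fft2(a) * spec))
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (conv(est, H) + 1e-12)
        est = est * conv(ratio, np.conj(H))
    return est

truth = np.full((64, 64), 0.1)          # flat background ...
truth[30, 30] += 5.0                    # ... plus two blended point sources
truth[30, 36] += 5.0
psf = make_psf(64, sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)))
sharp = richardson_lucy(blurred, psf)
rms_blur = np.sqrt(np.mean((blurred - truth) ** 2))
rms_sharp = np.sqrt(np.mean((sharp - truth) ** 2))
print(rms_sharp < rms_blur)             # deconvolution moves us closer to truth
```

With real survey data the iteration count must be limited (or regularized), since Richardson-Lucy amplifies noise as it sharpens.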

  15. A high resolution jet analysis for LEP

    International Nuclear Information System (INIS)

    Hariri, S.

    1992-11-01

    A high-resolution multijet analysis of hadronic events produced in e+e− annihilation at a c.m.s. energy of 91.2 GeV is described. Hadronic events produced in e+e− annihilations are generated using the Monte Carlo program JETSET 7.3 with its two options: Matrix Element (M.E.) and Parton Showers (P.S.). The shower option is used with its default parameter values, while the M.E. option is used with an invariant-mass cut Y_CUT = 0.01 instead of 0.02. This choice ensures a better continuity in the evolution of the event shape variables. (K.A.) 3 refs.; 26 figs.; 1 tab

  16. High Resolution Displays Using NCAP Liquid Crystals

    Science.gov (United States)

    Macknick, A. Brian; Jones, Phil; White, Larry

    1989-07-01

    Nematic curvilinear aligned phase (NCAP) liquid crystals have been found useful for high information content video displays. NCAP materials are liquid crystals which have been encapsulated in a polymer matrix and which have a light transmission which is variable with applied electric fields. Because NCAP materials do not require polarizers, their on-state transmission is substantially better than twisted nematic cells. All dimensional tolerances are locked in during the encapsulation process and hence there are no critical sealing or spacing issues. By controlling the polymer/liquid crystal morphology, switching speeds of NCAP materials have been significantly improved over twisted nematic systems. Recent work has combined active matrix addressing with NCAP materials. Active matrices, such as thin film transistors, have given displays of high resolution. The paper will discuss the advantages of NCAP materials specifically designed for operation at video rates on transistor arrays; applications for both backlit and projection displays will be discussed.

  17. High resolution VUV facility at INDUS-1

    International Nuclear Information System (INIS)

    Krishnamurty, G.; Saraswathy, P.; Rao, P.M.R.; Mishra, A.P.; Kartha, V.B.

    1993-01-01

    Synchrotron radiation (SR) generated in electron storage rings is a unique source for the study of atomic and molecular spectroscopy, especially in the vacuum ultraviolet region. Realizing the potential of this light source, efforts are in progress to develop a beamline facility at INDUS-1 to carry out high-resolution atomic and molecular spectroscopy. This beamline consists of a fore-optic which is a combination of three cylindrical mirrors. The mirrors are so chosen that an SR beam having a 60 mrad (horizontal) x 6 mrad (vertical) divergence is focussed onto the slit of a 6.65-metre off-plane spectrometer in Eagle mount, equipped with a horizontal slit and vertical dispersion. The design of the various components of the beamline is completed. It was decided to build the spectrometer as per the requirements of the user community. Details of the various aspects of the beamline will be presented. (author). 3 figs

  18. High-resolution CT of airway reactivity

    International Nuclear Information System (INIS)

    Herold, C.J.; Brown, R.H.; Hirshman, C.A.; Mitzner, W.; Zerhouni, E.A.

    1990-01-01

    Assessment of airway reactivity has generally been limited to experimental nonimaging models. The authors of this paper used high-resolution CT (HRCT) to evaluate airway reactivity and to calculate airway resistance (Raw) compared with lung resistance (RL). Ten anesthetized and ventilated dogs were investigated with HRCT (10 contiguous 2-mm sections through the lower lung lobes) during the control state, following aerosol histamine challenge, and following post-histamine hyperinflation. The HRCT scans were digitized, and the areas of 10 airways per dog (diameter, 1-10 mm) were measured with a computer edging process. Changes in airway area and Raw (calculated as 1/[area]²) were measured. RL was assessed separately, following the same protocol. Data were analyzed by use of a paired t-test with significance at p < .05

  19. High resolution crystal calorimetry at LHC

    International Nuclear Information System (INIS)

    Schneegans, M.; Ferrere, D.; Lebeau, M.; Vivargent, M.

    1991-01-01

    The search for Higgs bosons above the LEP200 reach could be one of the main tasks of the future pp and e+e− colliders. In the intermediate-mass region, and in particular in the range 80-140 GeV/c², only the two-photon decay mode of a Higgs produced inclusively or in association with a W gives a good chance of observation. A 'dedicated' very high resolution calorimeter with photon-angle reconstruction and pion-identification capability should detect a Higgs signal with high probability. A crystal calorimeter can be considered a conservative approach to such a detector, since extensive design and operation experience already exists. The extensive R&D needed to find a dense, fast and radiation-hard crystal is under way. Guidelines for designing an optimum calorimeter for LHC are discussed and preliminary configurations are given. (author) 7 refs., 3 figs., 2 tabs

  20. High resolution tomography using analog coding

    International Nuclear Information System (INIS)

    Brownell, G.L.; Burnham, C.A.; Chesler, D.A.

    1985-01-01

    As part of a 30-year program in the development of positron instrumentation, the authors have developed a high-resolution bismuth germanate (BGO) ring tomograph (PCR) employing 360 detectors and 90 photomultiplier tubes in one plane. The detectors are shaped as trapezoids and are 4 mm wide at the front end. When assembled, they form an essentially continuous cylindrical detector. Light from a scintillation in the detector is viewed through a cylindrical light pipe by the photomultiplier tubes. By use of an analog coding scheme, the detector emitting light is identified from the phototube signals; in effect, each phototube can identify four crystals. PCR is designed as a static device and does not use interpolative motion, which results in a considerable advantage when performing dynamic studies. PCR is the positron-tomography analog of the γ-camera widely used in nuclear medicine