WorldWideScience

Sample records for high-resolution numerical methods

  1. Implementation and assessment of high-resolution numerical methods in TRACE

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Dean, E-mail: wangda@ornl.gov [Oak Ridge National Laboratory, 1 Bethel Valley RD 6167, Oak Ridge, TN 37831 (United States); Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G. [U.S. Nuclear Regulatory Commission, Washington, DC 20555 (United States)

    2013-10-15

Highlights: • Study and implementation of high-resolution numerical methods for two-phase flow. • They achieve better numerical accuracy than the 1st-order upwind scheme. • They show great numerical robustness and efficiency. • Well suited to BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations at steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes based on nonlinear flux limiters have been developed and successfully applied to the numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study of the implementation and assessment of six nonlinear flux limiters in TRACE: MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment focuses on the numerical stability, convergence, and accuracy of the flux limiters and their applicability to boiling water reactor (BWR) stability analysis. VA and MUSCL are found to work best among the six flux limiters: both not only achieve better numerical accuracy than the 1st-order upwind scheme but also preserve its robustness and efficiency.
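For reference, several of the limiters named above have standard textbook forms as functions φ(r) of the ratio r of consecutive solution gradients. The sketch below gives these classical formulas in plain NumPy (it is not TRACE's actual implementation); every limiter equals 1 at r = 1, as 2nd-order accuracy requires, and vanishes for r ≤ 0, which recovers 1st-order upwinding near extrema.

```python
import numpy as np

def van_leer(r):
    # Van Leer (VL): smooth, symmetric limiter
    return (r + np.abs(r)) / (1.0 + np.abs(r))

def van_albada(r):
    # Van Albada (VA)
    return np.maximum((r**2 + r) / (r**2 + 1.0), 0.0)

def ospre(r):
    # OSPRE
    return np.maximum(1.5 * (r**2 + r) / (r**2 + r + 1.0), 0.0)

def muscl(r):
    # MUSCL (monotonized central)
    return np.maximum(0.0, np.minimum(np.minimum(2.0 * r, 0.5 * (1.0 + r)), 2.0))
```

In a TVD update, φ(r) scales the anti-diffusive correction added to the 1st-order upwind flux, so φ = 0 falls back to pure upwinding where the solution is non-smooth.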

  2. Implementation and assessment of high-resolution numerical methods in TRACE

    International Nuclear Information System (INIS)

    Wang, Dean; Mahaffy, John H.; Staudenmeier, Joseph; Thurston, Carl G.

    2013-01-01

Highlights: • Study and implementation of high-resolution numerical methods for two-phase flow. • They achieve better numerical accuracy than the 1st-order upwind scheme. • They show great numerical robustness and efficiency. • Well suited to BWR stability analysis and boron injection. -- Abstract: The 1st-order upwind differencing scheme is widely employed to discretize the convective terms of the two-phase flow transport equations in reactor systems analysis codes such as TRACE and RELAP. While very robust and efficient, 1st-order upwinding leads to excessive numerical diffusion. Standard 2nd-order numerical methods (e.g., Lax–Wendroff and Beam–Warming) can effectively reduce numerical diffusion but often produce spurious oscillations at steep gradients. To overcome the difficulties with the standard higher-order schemes, high-resolution schemes based on nonlinear flux limiters have been developed and successfully applied to the numerical simulation of fluid-flow problems in recent years. The present work contains a detailed study of the implementation and assessment of six nonlinear flux limiters in TRACE: MUSCL, Van Leer (VL), OSPRE, Van Albada (VA), ENO, and Van Albada 2 (VA2). The assessment focuses on the numerical stability, convergence, and accuracy of the flux limiters and their applicability to boiling water reactor (BWR) stability analysis. VA and MUSCL are found to work best among the six flux limiters: both not only achieve better numerical accuracy than the 1st-order upwind scheme but also preserve its robustness and efficiency.

  3. Climate change and high-resolution whole-building numerical modelling

    NARCIS (Netherlands)

    Blocken, B.J.E.; Briggen, P.M.; Schellen, H.L.; Hensen, J.L.M.

    2010-01-01

This paper briefly discusses the need for high-resolution whole-building numerical modelling in the context of climate change. High-resolution whole-building numerical modelling can be used for detailed analysis of the potential consequences of climate change on buildings and to evaluate remedial

  4. Assessment of high-resolution methods for numerical simulations of compressible turbulence with shock waves

    International Nuclear Information System (INIS)

    Johnsen, Eric; Larsson, Johan; Bhagatwala, Ankit V.; Cabot, William H.; Moin, Parviz; Olson, Britton J.; Rawat, Pradeep S.; Shankar, Santhosh K.; Sjoegreen, Bjoern; Yee, H.C.; Zhong Xiaolin; Lele, Sanjiva K.

    2010-01-01

Flows in which shock waves and turbulence are present and interact dynamically occur in a wide range of applications, including inertial confinement fusion, supernova explosions, and scramjet propulsion. Accurate simulations of such problems are challenging because of the contradictory requirements of numerical methods used to simulate turbulence, which must minimize any numerical dissipation that would otherwise overwhelm the small scales, and shock-capturing schemes, which introduce numerical dissipation to stabilize the solution. The objective of the present work is to evaluate the performance of several numerical methods capable of simultaneously handling turbulence and shock waves. A comprehensive range of high-resolution methods (WENO, hybrid WENO/central difference, artificial diffusivity, adaptive characteristic-based filter, and shock fitting) and a suite of test cases (Taylor-Green vortex, Shu-Osher problem, shock-vorticity/entropy wave interaction, Noh problem, compressible isotropic turbulence) relevant to problems with shocks and turbulence are considered. The results indicate that the WENO methods provide sharp shock profiles, but their numerical dissipation overwhelms the physical dissipation. The hybrid method is minimally dissipative and leads to sharp shocks and well-resolved broadband turbulence, but relies on an appropriate shock sensor. Artificial diffusivity methods in which the artificial bulk viscosity is based on the magnitude of the strain-rate tensor resolve vortical structures well but damp dilatational modes in compressible turbulence; dilatation-based artificial bulk viscosity methods significantly improve this behavior. For well-defined shocks, the shock-fitting approach yields good results.
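As an illustration of the first family of schemes evaluated above, the classical fifth-order WENO reconstruction of Jiang and Shu combines three third-order candidate stencils with solution-adaptive weights that bias the reconstruction away from non-smooth stencils. This is a generic textbook sketch of the left-biased interface value, not the specific implementation assessed in the paper:

```python
import numpy as np

def weno5_reconstruct(v):
    """Left-biased 5th-order WENO value at the i+1/2 interface,
    given the five cell averages v = [v_{i-2}, ..., v_{i+2}]."""
    v0, v1, v2, v3, v4 = v
    eps = 1e-6
    # Jiang-Shu smoothness indicators for the three 3-cell stencils
    b0 = 13/12 * (v0 - 2*v1 + v2)**2 + 0.25 * (v0 - 4*v1 + 3*v2)**2
    b1 = 13/12 * (v1 - 2*v2 + v3)**2 + 0.25 * (v1 - v3)**2
    b2 = 13/12 * (v2 - 2*v3 + v4)**2 + 0.25 * (3*v2 - 4*v3 + v4)**2
    # 3rd-order candidate reconstructions on each stencil
    p0 = (2*v0 - 7*v1 + 11*v2) / 6
    p1 = (-v1 + 5*v2 + 2*v3) / 6
    p2 = (2*v2 + 5*v3 - v4) / 6
    # nonlinear weights: near a discontinuity, a large smoothness
    # indicator drives the corresponding weight toward zero
    a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
    w = a / a.sum()
    return w[0]*p0 + w[1]*p1 + w[2]*p2
```

In smooth regions the weights revert to the linear values (0.1, 0.6, 0.3) and the scheme attains fifth-order accuracy, which is also the source of the residual dissipation noted above.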

  5. Waterspout Forecasting Method Over the Eastern Adriatic Using a High-Resolution Numerical Weather Model

    Science.gov (United States)

    Renko, Tanja; Ivušić, Sarah; Telišman Prtenjak, Maja; Šoljan, Vinko; Horvat, Igor

    2018-03-01

In this study, a synoptic and mesoscale analysis was performed and Szilagyi's waterspout forecasting method was tested on ten waterspout events in the period of 2013-2016. Data regarding waterspout occurrences were collected from weather stations, an online survey at the official website of the National Meteorological and Hydrological Service of Croatia and eyewitness reports from newspapers and the internet. Synoptic weather conditions were analyzed using surface pressure fields, 500 hPa level synoptic charts, SYNOP reports and atmospheric soundings. For all observed waterspout events, a synoptic type was determined using the 500 hPa geopotential height chart. The occurrence of lightning activity was determined from the LINET lightning database, and waterspouts were divided into thunderstorm-related and "fair weather" ones. Mesoscale characteristics (with a focus on thermodynamic instability indices) were determined using the high-resolution (500 m grid length) mesoscale numerical weather model and model results were compared with the available observations. Because thermodynamic instability indices are usually insufficient for forecasting waterspout activity, the performance of the Szilagyi Waterspout Index (SWI) was tested using vertical atmospheric profiles provided by the mesoscale numerical model. The SWI successfully forecasted all waterspout events, even the winter events. This indicates that Szilagyi's waterspout prognostic method could be used as a valid prognostic tool for the eastern Adriatic.

  6. Applications of high-resolution spatial discretization scheme and Jacobian-free Newton–Krylov method in two-phase flow problems

    International Nuclear Information System (INIS)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2015-01-01

Highlights: • Use of a high-resolution spatial scheme in solving two-phase flow problems. • Fully implicit time integration schemes. • Jacobian-free Newton–Krylov method. • Analytical solution for the two-phase water faucet problem. - Abstract: The majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many nuclear thermal–hydraulics applications, it is desirable to use higher-order numerical schemes to reduce numerical errors. High-resolution spatial discretization schemes provide high-order spatial accuracy in smooth regions and capture sharp spatial discontinuities without nonphysical spatial oscillations. In this work, we adapted an existing high-resolution spatial discretization scheme on staggered grids to two-phase flow applications. Fully implicit time integration schemes were also implemented to reduce the numerical errors introduced by operator-splitting types of time integration. The resulting nonlinear system has been successfully solved using the Jacobian-free Newton–Krylov (JFNK) method. The high-resolution spatial discretization and high-order fully implicit time integration schemes were tested and numerically verified for several two-phase test problems, including a two-phase advection problem, a two-phase advection problem with phase appearance/disappearance, and the water faucet problem. Numerical results clearly demonstrated the advantages of using such high-resolution spatial and high-order temporal numerical schemes to significantly reduce numerical diffusion and thereby improve accuracy. Our study also demonstrated that the JFNK method is stable and robust in solving two-phase flow problems, even when phase appearance/disappearance exists.
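The JFNK idea referenced above — a Newton outer iteration whose linear solves use a Krylov method, with Jacobian-vector products approximated by finite differences so that no Jacobian matrix is ever formed — is available off the shelf in SciPy. A minimal sketch on a toy nonlinear boundary-value problem (not the authors' staggered-grid two-phase system; the problem and tolerances here are illustrative):

```python
import numpy as np
from scipy.optimize import newton_krylov

# Toy nonlinear BVP: u'' = u**2 - 1 on (0, 1), u(0) = u(1) = 0,
# discretized with 2nd-order central differences on n interior points.
n = 50
h = 1.0 / (n + 1)

def residual(u):
    up = np.concatenate(([0.0], u, [0.0]))               # Dirichlet boundaries
    return (up[:-2] - 2 * up[1:-1] + up[2:]) / h**2 - (u**2 - 1.0)

# Newton outer iteration + Krylov (LGMRES) inner solves; Jacobian-vector
# products are approximated by finite differences of the residual.
u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
```

The same residual-only interface is what makes JFNK attractive for systems codes: only the discretized equations need to be implemented, never their analytical derivatives.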

  7. Applying multi-resolution numerical methods to geodynamics

    Science.gov (United States)

    Davies, David Rhodri

    Computational models yield inaccurate results if the underlying numerical grid fails to provide the necessary resolution to capture a simulation's important features. For the large-scale problems regularly encountered in geodynamics, inadequate grid resolution is a major concern. The majority of models involve multi-scale dynamics, being characterized by fine-scale upwelling and downwelling activity in a more passive, large-scale background flow. Such configurations, when coupled to the complex geometries involved, present a serious challenge for computational methods. Current techniques are unable to resolve localized features and, hence, such models cannot be solved efficiently. This thesis demonstrates, through a series of papers and closely-coupled appendices, how multi-resolution finite-element methods from the forefront of computational engineering can provide a means to address these issues. The problems examined achieve multi-resolution through one of two methods. In two-dimensions (2-D), automatic, unstructured mesh refinement procedures are utilized. Such methods improve the solution quality of convection dominated problems by adapting the grid automatically around regions of high solution gradient, yielding enhanced resolution of the associated flow features. Thermal and thermo-chemical validation tests illustrate that the technique is robust and highly successful, improving solution accuracy whilst increasing computational efficiency. These points are reinforced when the technique is applied to geophysical simulations of mid-ocean ridge and subduction zone magmatism. To date, successful goal-orientated/error-guided grid adaptation techniques have not been utilized within the field of geodynamics. The work included herein is therefore the first geodynamical application of such methods. In view of the existing three-dimensional (3-D) spherical mantle dynamics codes, which are built upon a quasi-uniform discretization of the sphere and closely coupled

  8. High-resolution method for evolving complex interface networks

    Science.gov (United States)

    Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.

    2018-04-01

In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.

  9. The effect of high-resolution orography on numerical modelling of atmospheric flow: a preliminary experiment

    International Nuclear Information System (INIS)

    Scarani, C.; Tampieri, F.; Tibaldi, S.

    1983-01-01

The effect of increasing the resolution of the topography in models of numerical weather prediction is assessed. Different numerical experiments have been performed, referring to a case of cyclogenesis in the lee of the Alps. From the comparison, it appears that the lower atmospheric levels are better described by the model with higher-resolution topography; comparable horizontal-resolution runs with smoother topography appear less satisfactory in this respect. It also turns out that the vertical propagation of the signal due to the front-mountain interaction is faster in the high-resolution experiment.

  10. A new automated assign and analysing method for high-resolution rotationally resolved spectra using genetic algorithms

    NARCIS (Netherlands)

    Meerts, W.L.; Schmitt, M.

    2006-01-01

    This paper describes a numerical technique that has recently been developed to automatically assign and fit high-resolution spectra. The method makes use of genetic algorithms (GA). The current algorithm is compared with previously used analysing methods. The general features of the GA and its

  11. Eulerian and Lagrangian statistics from high resolution numerical simulations of weakly compressible turbulence

    NARCIS (Netherlands)

    Benzi, R.; Biferale, L.; Fisher, R.T.; Lamb, D.Q.; Toschi, F.

    2009-01-01

We report a detailed study of Eulerian and Lagrangian statistics from high-resolution Direct Numerical Simulations of isotropic, weakly compressible turbulence. The Reynolds number at the Taylor microscale is estimated to be around 600. Eulerian and Lagrangian statistics are evaluated over a huge data

  12. A method for generating high resolution satellite image time series

    Science.gov (United States)

    Guo, Tao

    2014-10-01

There is an increasing demand in many applications for satellite remote sensing data with both high spatial and high temporal resolution, but simultaneously improving spatial resolution and temporal frequency remains a challenge given the technical limits of current satellite observation systems. Years of R&D effort have led to successes in roughly two directions. Super-resolution, pan-sharpening and related methods can effectively enhance spatial resolution and produce good visual effects, but they rarely preserve spectral signatures and therefore have limited analytical value. Temporal interpolation, on the other hand, is a straightforward way to increase temporal frequency, but in fact it adds little informative content. In this paper we present a novel method that simulates high-resolution time series data by combining low-resolution time series data with only a very small number of high-resolution images. Our method starts with a pair of high- and low-resolution data sets, and performs a spatial registration by introducing an LDA model to map high- and low-resolution pixels to each other. Temporal change information is then captured by comparing the low-resolution time series data, projected onto the high-resolution data plane, and assigned to each high-resolution pixel according to the predefined temporal change patterns of each type of ground object. Finally, the simulated high-resolution data are generated. A preliminary experiment shows that our method can simulate high-resolution data with reasonable accuracy. The contribution of this method is to enable timely monitoring of temporal changes by analyzing a time sequence of low-resolution images only, so that the use of costly high-resolution data can be reduced as much as possible; it thus offers a highly effective way to build an economically operational monitoring solution for agriculture, forest and land use investigation.

  13. Stochastic porous media modeling and high-resolution schemes for numerical simulation of subsurface immiscible fluid flow transport

    Science.gov (United States)

    Brantson, Eric Thompson; Ju, Binshan; Wu, Dan; Gyan, Patricia Semwaah

    2018-04-01

This paper proposes stochastic petroleum porous media modeling for immiscible fluid flow simulation, using the Dykstra-Parsons coefficient (V DP) and autocorrelation lengths to generate 2D stochastic permeability fields, which were also used to generate porosity fields through a linear interpolation technique based on the Carman-Kozeny equation. The proposed method of permeability field generation was compared to the turning bands method (TBM) and the uniform sampling randomization method (USRM). Many studies have reported that upstream mobility weighting schemes, commonly used in conventional numerical reservoir simulators, do not accurately capture immiscible displacement shocks and discontinuities through stochastically generated porous media. This can be attributed to the high level of numerical smearing in first-order schemes, oftentimes misinterpreted as subsurface geological features. Therefore, this work employs the high-resolution schemes of the SUPERBEE flux limiter, the weighted essentially non-oscillatory scheme (WENO), and monotone upstream-centered schemes for conservation laws (MUSCL) to accurately capture immiscible fluid flow transport in stochastic porous media. The high-order schemes' results match the Buckley-Leverett (BL) analytical solution well, without spurious oscillations. The governing fluid flow equations were solved numerically using the simultaneous solution (SS) technique, the sequential solution (SEQ) technique and the iterative implicit pressure and explicit saturation (IMPES) technique, which produce acceptable numerical stability and convergence rates. A comparative numerical study of flow transport through the proposed method, TBM and USRM permeability fields revealed detailed subsurface instabilities and their corresponding ultimate recovery factors. Also, the impact of autocorrelation lengths on immiscible fluid flow transport was analyzed and quantified. A finite number of lines used in the TBM resulted in visual
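As a hedged illustration of the flux-limited approach described above, the sketch below applies the SUPERBEE limiter in a MUSCL-type upwind scheme to the classic Buckley-Leverett problem. The quadratic relative permeabilities, the mobility ratio M = 2, and the grid and time-step values are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def frac_flow(S, M=2.0):
    # Buckley-Leverett fractional flow with quadratic relative
    # permeabilities; M is an assumed water/oil mobility ratio
    return S**2 / (S**2 + M * (1.0 - S)**2)

def superbee(r):
    # SUPERBEE limiter: phi(r) = max(0, min(2r, 1), min(r, 2))
    return np.maximum.reduce([np.zeros_like(r),
                              np.minimum(2.0 * r, 1.0),
                              np.minimum(r, 2.0)])

def step(S, dt, dx):
    # MUSCL-type update with SUPERBEE-limited left states;
    # f is monotone increasing, so the wave always moves right
    dS = np.diff(S)
    r = np.zeros_like(S)
    r[1:-1] = dS[:-1] / (dS[1:] + 1e-12)       # ratio of consecutive slopes
    SL = S + 0.5 * superbee(r) * np.append(dS, 0.0)
    F = frac_flow(SL)                          # upwind flux at each right face
    Snew = S.copy()
    Snew[1:] -= dt / dx * (F[1:] - F[:-1])     # inlet cell held fixed
    return np.clip(Snew, 0.0, 1.0)

# water injection into an oil-filled 1D core: S = 1 at the inlet
x = np.linspace(0.0, 1.0, 200)
S = np.zeros_like(x)
S[0] = 1.0
for _ in range(300):
    S = step(S, dt=0.001, dx=x[1] - x[0])
```

The limited scheme keeps the BL displacement shock sharp without the smearing of first-order upwinding, which is exactly the artifact the abstract warns can be misread as geological features.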

  14. Ultra high resolution tomography

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, W.S.

    1994-11-15

Recent work and results on ultra high resolution three dimensional imaging with soft x-rays will be presented. This work is aimed at determining microscopic three dimensional structure of biological and material specimens. Three dimensional reconstructed images of a microscopic test object will be presented; the reconstruction has a resolution on the order of 1000 Å in all three dimensions. Preliminary work with biological samples will also be shown, and the experimental and numerical methods used will be discussed.

  15. Multi-GPU Accelerated Admittance Method for High-Resolution Human Exposure Evaluation.

    Science.gov (United States)

    Xiong, Zubiao; Feng, Shi; Kautz, Richard; Chandra, Sandeep; Altunyurt, Nevin; Chen, Ji

    2015-12-01

A multi-graphics processing unit (GPU) accelerated admittance method solver is presented for solving the induced electric field in high-resolution anatomical models of the human body exposed to external low-frequency magnetic fields. In the solver, the anatomical model is discretized as a three-dimensional network of admittances. The conjugate orthogonal conjugate gradient (COCG) iterative algorithm is employed to take advantage of the symmetric property of the complex-valued linear system of equations. Compared against the widely used biconjugate gradient stabilized method, the COCG algorithm reduces the solving time by a factor of 3.5 and the storage requirement by about 40%. The iterative algorithm is then accelerated further by using multiple NVIDIA GPUs. The computations and data transfers between GPUs are overlapped in time by using an asynchronous concurrent execution design. The communication overhead is well hidden, so that the acceleration is nearly linear in the number of GPU cards. Numerical examples show that our GPU implementation running on four NVIDIA Tesla K20c cards runs up to 90 times faster than the CPU implementation running on eight CPU cores (two Intel Xeon E5-2603 processors). The implemented solver is able to solve large dimensional problems efficiently. A whole adult body discretized at 1-mm resolution can be solved in just several minutes. The high efficiency achieved makes it practical to investigate human exposure involving a large number of cases with a high resolution that meets the requirements of international dosimetry guidelines.
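The COCG algorithm mentioned above is ordinary conjugate gradients with every conjugated inner product replaced by the unconjugated bilinear form xᵀy, which is the natural pairing for complex symmetric (A = Aᵀ, non-Hermitian) systems such as admittance networks. A dense-matrix sketch follows; the paper's solver is GPU-based and operates on a sparse admittance network, so the small random test matrix here is purely illustrative:

```python
import numpy as np

def cocg(A, b, tol=1e-10, maxiter=500):
    # COCG: the CG recurrence with unconjugated dot products (r @ r,
    # p @ Ap), valid when A is complex symmetric rather than Hermitian.
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rho = r @ r                         # note: no complex conjugation
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rho / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_next = r @ r
        p = r + (rho_next / rho) * p
        rho = rho_next
    return x

# illustrative complex-symmetric test system (a stand-in for an admittance matrix)
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20)) + 1j * rng.standard_normal((20, 20))
A = (M + M.T) / 2 + 30.0 * np.eye(20)   # symmetric and diagonally dominated
b = rng.standard_normal(20) + 1j * rng.standard_normal(20)
x = cocg(A, b)
```

Because only one matrix-vector product and two unconjugated dot products are needed per iteration, the recurrence maps directly onto the GPU kernels described in the abstract.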

  16. A new high-resolution electromagnetic method for subsurface imaging

    Science.gov (United States)

    Feng, Wanjie

For most electromagnetic (EM) geophysical systems, contamination of the secondary fields by the primary fields ultimately limits the capability of controlled-source EM methods. Null-coupling techniques were proposed to solve this problem, but small orientation errors in null-coupling systems greatly restrict their application. Another problem encountered by most EM systems is surface interference and geologic noise, which sometimes make a geophysical survey impossible to carry out. To address these problems, the alternating target antenna coupling (ATAC) method was introduced, which largely removes the influence of the primary field and reduces surface interference, but limits the maximum transmitter moment that can be used. The differential target antenna coupling (DTAC) method was proposed to allow much larger transmitter moments while maintaining the advantages of the ATAC method. In this dissertation, the theoretical DTAC calculations are first derived mathematically using Born and Wolf's complex magnetic vector. 1D layered and 2D blocked earth models are used to demonstrate that the DTAC method has no response to 1D and 2D structures. Analytical studies of a plate model in conductive and resistive backgrounds explain the physical principle behind the DTAC method: the magnetic fields of the subsurface targets must be frequency dependent. The advantages of the DTAC method, e.g., high resolution, reduced geologic noise, and insensitivity to surface interference, are then analyzed using surface and subsurface numerical examples in the EMGIMA software. Next, these theoretical advantages are verified by designing and developing a low-power (moment of 50 Am²) vertical-array DTAC system and testing it on controlled targets and scaled target coils.
At last, a

  17. Hybrid RANS-LES using high order numerical methods

    Science.gov (United States)

    Henry de Frahan, Marc; Yellapantula, Shashank; Vijayakumar, Ganesh; Knaus, Robert; Sprague, Michael

    2017-11-01

Understanding the impact of wind turbine wake dynamics on downstream turbines is particularly important for the design of efficient wind farms. Due to their tractable computational cost, hybrid RANS/LES models are an attractive framework for simulating separated flows such as the wake dynamics behind a wind turbine. High-order numerical methods can be computationally efficient and provide increased accuracy in simulating complex flows. In the context of LES, high-order numerical methods have shown some success in predictions of turbulent flows. However, the specifics of hybrid RANS-LES models, including the transition region between both modeling frameworks, pose unique challenges for high-order numerical methods. In this work, we study the effect of increasing the order of accuracy of the numerical scheme in simulations of canonical turbulent flows using RANS, LES, and hybrid RANS-LES models. We describe the interactions between filtering, model transition, and order of accuracy and their effect on turbulence quantities such as kinetic energy spectra, boundary layer evolution, and dissipation rate. This work was funded by the U.S. Department of Energy, Exascale Computing Project, under Contract No. DE-AC36-08-GO28308 with the National Renewable Energy Laboratory.

  18. A Numerical Method to Generate High Temporal Resolution Precipitation Time Series by Combining Weather Radar Measurements with a Nowcast Model

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2014-01-01

    The topic of this paper is temporal interpolation of precipitation observed by weather radars. Precipitation measurements with high spatial and temporal resolution are, in general, desired for urban drainage applications. An advection-based interpolation method is developed which uses methods...

  19. High-resolution X-ray crystal structure of bovine H-protein using the high-pressure cryocooling method

    International Nuclear Information System (INIS)

    Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi

    2013-01-01

Using the high-pressure cryocooling method, the high-resolution X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. This is the first ultra-high-resolution structure obtained from a high-pressure cryocooled crystal. Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.

  20. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin

    2012-08-21

Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related to the numerical methods that can accurately represent these processes at relevant scales. This paper presents an overview of the state of the art in algorithms for high-Rayleigh-number flows such as those in the Earth's mantle, and discusses their implementation in the Open Source code Aspect (Advanced Solver for Problems in Earth's ConvecTion). Specifically, we show how an interconnected set of methods for adaptive mesh refinement (AMR), higher-order spatial and temporal discretizations, advection stabilization and efficient linear solvers can provide high accuracy at a numerical cost unachievable with traditional methods, and how these methods can be designed in a way so that they scale to large numbers of processors on compute clusters. Aspect relies on the numerical software packages deal.II and Trilinos, enabling us to focus on high-level code and keeping our implementation compact. We present results from validation tests using widely used benchmarks for our code, as well as scaling results from parallel runs. © 2012 The Authors Geophysical Journal International © 2012 RAS.

  1. Solving phase appearance/disappearance two-phase flow problems with high resolution staggered grid and fully implicit schemes by the Jacobian-free Newton–Krylov Method

    Energy Technology Data Exchange (ETDEWEB)

    Zou, Ling; Zhao, Haihua; Zhang, Hongbin

    2016-04-01

The phase appearance/disappearance issue presents serious numerical challenges in two-phase flow simulations. Many existing reactor safety analysis codes use different kinds of treatments for the phase appearance/disappearance problem. However, to the best of our knowledge, there are no fully satisfactory solutions. Additionally, the majority of the existing reactor system analysis codes were developed using low-order numerical schemes in both space and time. In many situations, it is desirable to use high-resolution spatial discretization and fully implicit time integration schemes to reduce numerical errors. In this work, we adapted a high-resolution spatial discretization scheme on a staggered grid mesh and fully implicit time integration methods (such as BDF1 and BDF2) to solve two-phase flow problems. The discretized nonlinear system was solved by the Jacobian-free Newton–Krylov (JFNK) method, which does not require the derivation and implementation of an analytical Jacobian matrix. These methods were tested on several two-phase flow problems with phase appearance/disappearance phenomena, such as a linear advection problem, an oscillating manometer problem, and a sedimentation problem. The JFNK method demonstrated extremely robust and stable behavior in solving the two-phase flow problems with phase appearance/disappearance. No special treatments such as water level tracking or void fraction limiting were used. The high-resolution spatial discretization and second-order fully implicit method also demonstrated their capabilities in significantly reducing numerical errors.
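BDF1 (backward Euler) and BDF2, named above, are fully implicit multistep methods: each step requires a nonlinear solve, and BDF2 gains an order of accuracy over BDF1. A minimal sketch on a scalar linear test ODE (y' = -5y, with a generic nonlinear solver standing in for the JFNK iteration; the problem and step size are illustrative, not the paper's):

```python
import numpy as np
from scipy.optimize import fsolve

lam = 5.0
f = lambda y: -lam * y                 # test problem y' = -5y, y(0) = 1

def integrate(order, h, T=1.0):
    y = [1.0]
    for k in range(int(round(T / h))):
        if order == 1 or k == 0:       # BDF1 (backward Euler); also bootstraps BDF2
            g = lambda z: z - y[-1] - h * f(z)
        else:                          # BDF2: 3y_{n+1} - 4y_n + y_{n-1} = 2h f(y_{n+1})
            g = lambda z: 3.0 * z - 4.0 * y[-1] + y[-2] - 2.0 * h * f(z)
        y.append(fsolve(g, y[-1])[0])  # implicit step: solve g(y_{n+1}) = 0
    return y[-1]

exact = np.exp(-lam * 1.0)
e_bdf1 = abs(integrate(1, 0.01) - exact)   # first-order error
e_bdf2 = abs(integrate(2, 0.01) - exact)   # second-order error, much smaller
```

Both methods are A-stable (BDF2 only just), which is why they are favored for the stiff systems arising in two-phase flow; the error comparison below shows the accuracy gain of the second-order formula at the same step size.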

  2. Analysis and Application of High Resolution Numerical Perturbation Algorithm for Convective-Diffusion Equation

    International Nuclear Information System (INIS)

    Gao Zhi; Shen Yi-Qing

    2012-01-01

The high resolution numerical perturbation (NP) algorithm is analyzed and tested using various convective-diffusion equations. The NP algorithm is constructed by splitting the second-order central difference schemes of both the convective and diffusion terms of the convective-diffusion equation into upstream and downstream parts; the perturbation reconstruction functions of the convective coefficient are then determined using power series of the grid interval and by eliminating the truncation errors of the modified differential equation. The important property, i.e., upwind dominance, which is the basis for ensuring that the NP schemes are stable and essentially oscillation-free, is first presented and verified. Various numerical cases show that the NP schemes are efficient, robust, and more accurate than the original second-order central scheme.

  3. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    The velocity of fluid flow in underground porous media is 6–12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in such simulations, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accurate methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accurate finite difference method is confirmed to have a numerical error as low as 10⁻⁵%, while the high-accurate numerical integration method has a numerical error of essentially zero. Thus, the approach combining the high-accurate finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of a general full-tensor permeability, such as the maximum and minimum permeability components, the principal direction and the anisotropic ratio. Copyright © Global-Science Press 2016.
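
    The gap between "traditional" and "high-accurate" discretizations can be illustrated with textbook formulas (the test function and step sizes are arbitrary; this is not the paper's permeability solver): a 4th-order central difference vs. the 2nd-order one, and Simpson's rule vs. the trapezoidal rule.

    ```python
    import numpy as np

    f, df = np.sin, np.cos       # test function and its exact derivative
    x0, h = 0.7, 1e-2

    # 2nd-order vs 4th-order central differences for f'(x0)
    d2 = (f(x0 + h) - f(x0 - h)) / (2 * h)
    d4 = (-f(x0 + 2*h) + 8*f(x0 + h) - 8*f(x0 - h) + f(x0 - 2*h)) / (12 * h)
    e2, e4 = abs(d2 - df(x0)), abs(d4 - df(x0))

    # Trapezoidal vs Simpson's rule for the integral of sin on [0, pi] (exact: 2)
    xs = np.linspace(0.0, np.pi, 101)     # odd point count, as Simpson requires
    ys = f(xs)
    h_s = xs[1] - xs[0]
    trap = h_s * (ys[0] / 2 + ys[1:-1].sum() + ys[-1] / 2)
    simp = h_s / 3 * (ys[0] + ys[-1] + 4 * ys[1:-1:2].sum() + 2 * ys[2:-1:2].sum())
    et, es = abs(trap - 2.0), abs(simp - 2.0)
    ```

    At the same grid spacing the higher-order formulas reduce the error by several orders of magnitude, which is the effect the paper exploits to keep the permeability calculation accurate at very low flow velocities.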

  4. Hybrid methods for airframe noise numerical prediction

    Energy Technology Data Exchange (ETDEWEB)

    Terracol, M.; Manoha, E.; Herrero, C.; Labourasse, E.; Redonnet, S. [ONERA, Department of CFD and Aeroacoustics, BP 72, Chatillon (France); Sagaut, P. [Laboratoire de Modelisation en Mecanique - UPMC/CNRS, Paris (France)

    2005-07-01

    This paper describes some significant steps made towards the numerical simulation of the noise radiated by the high-lift devices of a plane. Since the full numerical simulation of such a configuration is still out of reach for present supercomputers, some hybrid strategies have been developed to reduce the overall cost of such simulations. The proposed strategy relies on the coupling of an unsteady near-field CFD solver with an acoustic propagation solver based on the resolution of the Euler equations for mid-field propagation in an inhomogeneous field, and the use of an integral solver for far-field acoustic predictions. In the first part of this paper, this CFD/CAA coupling strategy is presented. In particular, the numerical method used in the propagation solver is detailed, and two applications of this coupling method to the numerical prediction of the aerodynamic noise of an airfoil are presented. Then, a hybrid RANS/LES method is proposed in order to perform unsteady simulations of complex noise sources. This method allows for a significant reduction of the cost of such a simulation by considerably reducing the extent of the LES zone. The method is described, and some results of the numerical simulation of the three-dimensional unsteady flow in the slat cove of a high-lift profile are presented. While these results remain very difficult to validate against experiments on similar configurations, they represent, to date, the first 3D computations of this kind of flow. (orig.)

  5. High Resolution Numerical Simulations of Primary Atomization in Diesel Sprays with Single Component Reference Fuels

    Science.gov (United States)

    2015-09-01

    A high-resolution numerical simulation of jet breakup and spray formation from a complex diesel fuel injector at diesel engine type conditions has been performed. A full understanding of the primary atomization process in diesel fuel sprays […] for diesel liquid sprays the complexity is further compounded by the physical attributes present, including nozzle turbulence and large density ratios […]

  6. High-resolution X-ray crystal structure of bovine H-protein using the high-pressure cryocooling method.

    Science.gov (United States)

    Higashiura, Akifumi; Ohta, Kazunori; Masaki, Mika; Sato, Masaru; Inaka, Koji; Tanaka, Hiroaki; Nakagawa, Atsushi

    2013-11-01

    Recently, many technical improvements in macromolecular X-ray crystallography have increased the number of structures deposited in the Protein Data Bank and improved the resolution limit of protein structures. Almost all high-resolution structures have been determined using a synchrotron radiation source in conjunction with cryocooling techniques, which are required in order to minimize radiation damage. However, optimization of cryoprotectant conditions is a time-consuming and difficult step. To overcome this problem, the high-pressure cryocooling method was developed (Kim et al., 2005) and successfully applied to many protein-structure analyses. In this report, using the high-pressure cryocooling method, the X-ray crystal structure of bovine H-protein was determined at 0.86 Å resolution. Structural comparisons between high- and ambient-pressure cryocooled crystals at ultra-high resolution illustrate the versatility of this technique. This is the first ultra-high-resolution X-ray structure obtained using the high-pressure cryocooling method.

  7. Pyrosequencing™ : A one-step method for high resolution HLA typing

    Directory of Open Access Journals (Sweden)

    Marincola Francesco M

    2003-11-01

    While the use of high-resolution molecular typing in routine matching of human leukocyte antigens (HLA) is expected to improve unrelated donor selection and transplant outcome, the genetic complexity of HLA still makes the current methodology limited and laborious. Pyrosequencing™ is a gel-free, sequencing-by-synthesis method. In a Pyrosequencing reaction, nucleotide incorporation proceeds sequentially along each DNA template at a given nucleotide dispensation order (NDO) that is programmed into a pyrosequencer. Here we describe the design of an NDO that generates a pyrogram unique for any given allele or combination of alleles. We present examples of unique pyrograms generated from each of two heterozygous HLA templates, which would otherwise remain cis/trans ambiguous using the standard sequencing-based typing (SBT) method. In addition, we display representative data that demonstrate long reads and linear signal generation. These features are prerequisites for high-resolution typing and automated data analysis. In conclusion, Pyrosequencing is a one-step method for high-resolution DNA typing.

  8. Progress in high-resolution x-ray holographic microscopy

    International Nuclear Information System (INIS)

    Jacobsen, C.; Kirz, J.; Howells, M.; McQuaid, K.; Rothman, S.; Feder, R.; Sayre, D.

    1987-07-01

    Among the various types of x-ray microscopes that have been demonstrated, the holographic microscope has had the largest gap between promise and performance. The difficulties of fabricating x-ray optical elements have led some to view holography as the most attractive method for obtaining the ultimate in high resolution x-ray micrographs; however, we know of no investigations prior to 1987 that clearly demonstrated submicron resolution in reconstructed images. Previous efforts suffered from problems such as limited resolution and dynamic range in the recording media, low coherent x-ray flux, and aberrations and diffraction limits in visible light reconstruction. We have addressed the recording limitations through the use of an undulator x-ray source and high-resolution photoresist recording media. For improved results in the readout and reconstruction steps, we have employed metal shadowing and transmission electron microscopy, along with numerical reconstruction techniques. We believe that this approach will allow holography to emerge as a practical method of high-resolution x-ray microscopy. 30 refs., 4 figs
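
    One standard building block of the numerical reconstruction step mentioned above is free-space propagation of the recorded wavefield, commonly implemented with the angular-spectrum method. A hedged sketch (the wavelength, grid and propagation distance are made-up values, not the experiment's parameters):

    ```python
    import numpy as np

    # Angular-spectrum propagation of a scalar optical field over distance z.
    # All physical parameters below are invented for illustration.
    N, pitch, wl, z = 64, 1e-6, 500e-9, 50e-6   # grid, pixel pitch, wavelength, distance
    x = (np.arange(N) - N / 2) * pitch
    X, Y = np.meshgrid(x, x)
    field = np.exp(-(X**2 + Y**2) / (2 * (5 * pitch)**2))   # smooth Gaussian test field

    # Transfer function H(fx, fy) = exp(i kz z); evanescent components are cut off.
    fx = np.fft.fftfreq(N, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 / wl**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)

    prop = np.fft.ifft2(np.fft.fft2(field) * H)

    # For a band-limited field, |H| = 1 on all retained modes, so energy is conserved.
    energy_in = np.sum(np.abs(field)**2)
    energy_out = np.sum(np.abs(prop)**2)
    ```

    Because the transfer function is unitary on propagating modes, the method conserves the field energy, a useful sanity check on any numerical reconstruction pipeline.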

  10. Sinking, merging and stationary plumes in a coupled chemotaxis-fluid model: a high-resolution numerical approach

    KAUST Repository

    Chertock, A.

    2012-02-02

    Aquatic bacteria like Bacillus subtilis are heavier than water yet they are able to swim up an oxygen gradient and concentrate in a layer below the water surface, which will undergo Rayleigh-Taylor-type instabilities for sufficiently high concentrations. In the literature, a simplified chemotaxis-fluid system has been proposed as a model for bio-convection in modestly diluted cell suspensions. It couples a convective chemotaxis system for the oxygen-consuming and oxytactic bacteria with the incompressible Navier-Stokes equations subject to a gravitational force proportional to the relative surplus of the cell density compared to the water density. In this paper, we derive a high-resolution vorticity-based hybrid finite-volume finite-difference scheme, which allows us to investigate the nonlinear dynamics of a two-dimensional chemotaxis-fluid system with boundary conditions matching an experiment of Hillesdon et al. (Bull. Math. Biol., vol. 57, 1995, pp. 299-344). We present selected numerical examples, which illustrate (i) the formation of sinking plumes, (ii) the possible merging of neighbouring plumes and (iii) the convergence towards numerically stable stationary plumes. The examples with stable stationary plumes show how the surface-directed oxytaxis continuously feeds cells into a high-concentration layer near the surface, from where the fluid flow (recurring upwards in the space between the plumes) transports the cells into the plumes, where then gravity makes the cells sink and constitutes the driving force in maintaining the fluid convection and, thus, in shaping the plumes into (numerically) stable stationary states. Our numerical method is fully capable of solving the coupled chemotaxis-fluid system and enabling a full exploration of its dynamics, which cannot be done in a linearised framework. © 2012 Cambridge University Press.

  11. Numerical simulation of compressible two-phase flow using a diffuse interface method

    International Nuclear Information System (INIS)

    Ansari, M.R.; Daramizadeh, A.

    2013-01-01

    Highlights: ► Compressible two-phase gas–gas and gas–liquid flow simulations are conducted. ► Interface conditions contain shock waves and cavitation. ► A high-resolution diffuse interface method is investigated. ► The numerical results exhibit very good agreement with experimental results. -- Abstract: In this article, a high-resolution diffuse interface method is investigated for the simulation of compressible two-phase gas–gas and gas–liquid flows, both in the presence of shock waves and in flows with strong rarefaction waves similar to cavitation. A Godunov method with an HLLC Riemann solver is used for the discretization of the Kapila five-equation model, and a modified Schmidt equation of state (EOS) is used to simulate the cavitation regions. This method is applied successfully to several one- and two-dimensional compressible two-phase flows with interface conditions that contain shock waves and cavitation. The numerical results exhibit very good agreement with experimental results, as well as with previous numerical results presented by other researchers based on other numerical methods. In particular, the algorithm can capture the complex flow features of transient shocks, such as material discontinuities and interfacial instabilities, without any oscillation or additional diffusion. Numerical examples show that the results of the method presented here compare well with those of other sophisticated modeling methods like adaptive mesh refinement (AMR) and local mesh refinement (LMR) for one- and two-dimensional problems.
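
    The Godunov finite-volume framework used with the HLLC solver above can be sketched with the simpler Rusanov (local Lax-Friedrichs) flux on the single-phase Euler equations; the Sod shock-tube setup below is a generic test case, not one of the paper's two-phase problems.

    ```python
    import numpy as np

    # 1D Euler equations, Godunov-type finite-volume update with a Rusanov flux.
    gamma = 1.4
    N, t_end, cfl = 200, 0.1, 0.4
    dx = 1.0 / N

    def cons(rho, u, p):
        # primitive -> conserved [rho, momentum, total energy]
        return np.array([rho, rho * u, p / (gamma - 1) + 0.5 * rho * u**2])

    def primitives(U):
        rho, m, E = U
        u = m / rho
        p = (gamma - 1) * (E - 0.5 * rho * u**2)
        return rho, u, p

    def flux(U):
        rho, u, p = primitives(U)
        return np.array([rho * u, rho * u**2 + p, u * (U[2] + p)])

    x = (np.arange(N) + 0.5) * dx
    # Sod shock-tube initial data: diaphragm at x = 0.5
    U = np.where(x < 0.5, cons(1.0, 0.0, 1.0)[:, None], cons(0.125, 0.0, 0.1)[:, None])

    t = 0.0
    while t < t_end:
        Ug = np.concatenate([U[:, :1], U, U[:, -1:]], axis=1)   # transmissive ghosts
        rho, u, p = primitives(Ug)
        smax = np.abs(u) + np.sqrt(gamma * p / rho)             # local wave speeds
        dt = min(cfl * dx / smax.max(), t_end - t)
        UL, UR = Ug[:, :-1], Ug[:, 1:]
        s = np.maximum(smax[:-1], smax[1:])
        F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * s * (UR - UL)   # Rusanov flux
        U = U - dt / dx * (F[:, 1:] - F[:, :-1])
        t += dt

    rho_final = primitives(U)[0]
    mass0 = 1.0 * 0.5 + 0.125 * 0.5          # initial total mass
    mass = rho_final.sum() * dx              # conserved by the flux-difference update
    ```

    The conservative flux-difference update guarantees exact mass conservation (up to boundary fluxes), and the density stays within the initial bounds; an HLLC flux would sharpen the contact discontinuity that Rusanov smears.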

  12. Processing method for high resolution monochromator

    International Nuclear Information System (INIS)

    Kiriyama, Koji; Mitsui, Takaya

    2006-12-01

    A processing method for high-resolution monochromators (HRMs) has been developed at the Japan Atomic Energy Agency/Quantum Beam Science Directorate/Synchrotron Radiation Research Unit at SPring-8. For manufacturing an HRM, a sophisticated slicing machine and an X-ray diffractometer have been installed for shaping a crystal ingot and precisely orienting the surface of a crystal ingot, respectively. The specifications of the slicing machine are as follows. The maximum diamond blade size is φ350 mm in diameter, with a φ38.1 mm spindle diameter and 2 mm thickness. A large crystal, such as an ingot 100 mm in diameter and 200 mm in length, can be cut; thin crystal samples such as wafers can also be cut using another sample holder. The working distance of the main shaft, perpendicular to the working table, is 350 mm at maximum. The smallest resolution of the main shaft in the front-and-back and top-and-bottom directions is 0.001 mm, read by a digital encoder. A feed rate of 2 mm/min can be set for cutting samples in the forward direction. For orienting crystal faces relative to the blade direction, a one-circle goniometer and a two-circle segment are mounted on the working table; rotation and tilt of the stage are adjusted manually. The digital encoder on the rotation stage has an angular resolution of less than 0.01 degrees. In addition, a hand drill is provided as a supporting device for detailed processing of crystals. An ideal crystal face can thus be cut from crystal samples to within an accuracy of about 0.01 degrees. With these devices, a high-energy-resolution monochromator crystal for inelastic X-ray scattering and a beam collimator have been obtained and are expected to be used for nanotechnology studies. (author)

  13. Developing Local Scale, High Resolution, Data to Interface with Numerical Storm Models

    Science.gov (United States)

    Witkop, R.; Becker, A.; Stempel, P.

    2017-12-01

    High-resolution, physical storm models that can rapidly predict storm surge, inundation, rainfall, wind velocity and wave height at the intra-facility scale for any storm affecting Rhode Island have been developed by researchers at the University of Rhode Island's (URI's) Graduate School of Oceanography (GSO) (Ginis et al., 2017). At the same time, URI's Marine Affairs Department has developed methods that embed individual geographic points in GSO's models and enable the models to accurately incorporate local-scale, high-resolution data (Stempel et al., 2017). This combination allows URI's storm models to predict any storm's impacts on individual Rhode Island facilities in near real time. The research presented here determines how a coastal Rhode Island town's critical facility managers (FMs) perceive their assets as being vulnerable to quantifiable hurricane-related forces at the individual facility scale, and explores methods to elicit this information from FMs in a format usable for incorporation into URI's storm models.

  14. A postprocessing method based on high-resolution spectral estimation for FDTD calculation of phononic band structures

    Energy Technology Data Exchange (ETDEWEB)

    Su Xiaoxing, E-mail: xxsu@bjtu.edu.c [School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044 (China); Li Jianbao; Wang Yuesheng [Institute of Engineering Mechanics, Beijing Jiaotong University, Beijing 100044 (China)

    2010-05-15

    If the energy bands of a phononic crystal are calculated by the finite difference time domain (FDTD) method combined with the fast Fourier transform (FFT), good estimation of the eigenfrequencies can only be ensured by the postprocessing of sufficiently long time series generated by a large number of FDTD iterations. In this paper, a postprocessing method based on the high-resolution spectral estimation via the Yule-Walker method is proposed to overcome this difficulty. Numerical simulation results for three-dimensional acoustic and two-dimensional elastic systems show that, compared with the classic FFT-based postprocessing method, the proposed method can give much better estimation of the eigenfrequencies when the FDTD is run with relatively few iterations.
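
    The idea behind the Yule-Walker postprocessing above — fitting an autoregressive (AR) model to a short time series and reading eigenfrequencies off its high-resolution spectrum — can be sketched as follows. The signal, model order and frequencies are made-up values, not the paper's FDTD output:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # Short synthetic "field recording": two modes plus a little noise (invented values).
    f1, f2, N, p = 0.10, 0.20, 200, 10
    n = np.arange(N)
    x = np.cos(2*np.pi*f1*n) + 0.5*np.cos(2*np.pi*f2*n) + 0.01*rng.standard_normal(N)

    # Biased autocorrelation estimates r[0..p]
    r = np.array([np.dot(x[:N-k], x[k:]) / N for k in range(p + 1)])

    # Yule-Walker equations: solve the Toeplitz system R a = r[1..p]
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    a = np.linalg.solve(R, r[1:])

    # AR power spectrum  ~ 1 / |A(e^{-i 2 pi f})|^2  on a dense frequency grid
    freqs = np.linspace(0.0, 0.5, 2001)
    k = np.arange(1, p + 1)
    A = 1.0 - np.exp(-2j * np.pi * np.outer(freqs, k)) @ a
    psd = 1.0 / np.abs(A)**2

    # Pick the two strongest local maxima as the estimated eigenfrequencies
    locmax = np.where((psd[1:-1] > psd[:-2]) & (psd[1:-1] > psd[2:]))[0] + 1
    peaks = locmax[np.argsort(psd[locmax])][-2:]
    est = np.sort(freqs[peaks])
    ```

    Even on this short record the AR spectrum shows sharp peaks at the mode frequencies, whereas a 200-point FFT would have a frequency bin spacing of only 1/200, illustrating why the AR approach needs far fewer FDTD iterations.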

  16. Calibrating a numerical model's morphology using high-resolution spatial and temporal datasets from multithread channel flume experiments.

    Science.gov (United States)

    Javernick, L.; Bertoldi, W.; Redolfi, M.

    2017-12-01

    Accessing or acquiring high-quality, low-cost topographic data has never been easier due to recent developments in the photogrammetric technique of Structure-from-Motion (SfM). Researchers can acquire the necessary SfM imagery with various platforms, with the ability to capture millimetre resolution and accuracy, or large-scale areas with the help of unmanned platforms. Such datasets, in combination with numerical modelling, have opened up new opportunities to study the physical and ecological relationships of river environments. While a numerical model's overall predictive accuracy is most influenced by topography, proper model calibration requires hydraulic and morphological data; however, rich hydraulic and morphological datasets remain scarce. This lack of field and laboratory data has limited model advancement through the inability to properly calibrate, assess the sensitivity of, and validate model performance. However, new time-lapse imagery techniques have shown success in identifying instantaneous sediment transport in flume experiments and in improving hydraulic model calibration. With new capabilities to capture high-resolution spatial and temporal datasets of flume experiments, there is a need to further assess model performance. To address this demand, this research used braided river flume experiments and captured time-lapse observations of sediment transport and repeat SfM elevation surveys to provide unprecedented spatial and temporal datasets. Through newly created metrics that quantified observed and modeled activation, deactivation, and bank erosion rates, the numerical model Delft3D was calibrated. This increased temporal data, comprising both high-resolution time series and long-term temporal coverage, provided significantly improved calibration routines that refined calibration parameterization. Model results show that there is a trade-off between achieving quantitative statistical and qualitative morphological representations. Specifically, statistical

  17. Effect of fluid elasticity on the numerical stability of high-resolution schemes for high shearing contraction flows using OpenFOAM

    Directory of Open Access Journals (Sweden)

    T. Chourushi

    2017-01-01

    Viscoelastic fluids, due to their non-linear nature, play an important role in the process and polymer industries. These non-linear characteristics of the fluid influence the final outcome of the product. Such processes, though they look simple, are numerically challenging to study due to the loss of numerical stability. Over the years, various methodologies have been developed to overcome this numerical limitation. In spite of this, numerical solutions are considered far from accurate, as the first-order upwind-differencing scheme (UDS) is often employed to improve the stability of the algorithm. To elude this effect, some works have been reported in the past where high-resolution schemes (HRS) were employed and the Deborah number was varied. However, these works are limited to creeping flows and do not detail any information on the numerical stability of HRS. Hence, this article presents a numerical study of high shearing contraction flows, in which the stability of HRS is addressed in reference to fluid elasticity. Results suggest that all HRS show some order of undue oscillations in flow variable profiles, measured along vertical lines placed near the contraction region in the upstream section of the domain, at an elasticity number E≈5. Furthermore, by varying E, a clear relationship between the numerical stability of HRS and E was obtained, which states that the order of undue oscillations in flow variable profiles is directly proportional to E.

  18. Analytical method for high resolution liquid chromatography for quality control French Macaw

    International Nuclear Information System (INIS)

    Garcia Penna, Caridad M; Torres Amaro, Leonid; Menendez Castillo, Rosa; Sanchez, Esther; Martinez Espinosa, Vivian; Gonzalez, Maria Lidia; Rodriguez, Carlos

    2007-01-01

    An analytical high-resolution liquid chromatography method with ultraviolet detection at 340 nm, applicable to the quality control of the dried drug of French Macaw (Senna alata L. Roxb.), was developed and validated. The method, used to quantify sennosides A and B, the main components, proved to be specific, linear, precise and accurate. (Author)

  19. A NEW HIGH RESOLUTION OPTICAL METHOD FOR OBTAINING THE TOPOGRAPHY OF FRACTURE SURFACES IN ROCKS

    Directory of Open Access Journals (Sweden)

    Steven Ogilvie

    2011-05-01

    Surface roughness plays a major role in the movement of fluids through fracture systems. Fracture surface profiling is necessary to tune the properties of the numerical fractures required in fluid flow modelling to those of real rock fractures. This is achieved using a variety of (i) mechanical and (ii) optical techniques. Stylus profilometry is a popular mechanical method and can measure surface heights with high precision, but only gives good horizontal resolution in one direction on the fracture plane. This method is also expensive, and simultaneous coverage of the surface is not possible. Here, we describe the development of an optical method which images cast copies of rough rock fractures using in-house developed hardware and image analysis software (OptiProf™) that incorporates image improvement and noise suppression features. The technique images at high resolutions, 15-200 μm for imaged areas of 10 × 7.5 mm and 100 × 133 mm, respectively, and a similar vertical resolution (15 μm for a maximum topography of 4 mm). It is cheap and non-destructive, providing continuous coverage of the fracture surface. The fracture models are covered with dye and the fluid thicknesses above the rough surfaces converted into topographies using the Lambert-Beer law. The dye is calibrated using two devices with accurately known thicknesses: (i) a polycarbonate tile with wells of different depths and (ii) a wedge-shaped vial made from silica glass. The data from the two surfaces can be combined to provide an aperture map of the fracture for the scenario where the surfaces touch at a single point, or for any greater mean aperture. The topography and aperture maps provide data for the generation of synthetic fractures, tuned to the original fracture and used in numerical flow modelling.
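
    The Lambert-Beer conversion from dye attenuation to fluid thickness can be sketched numerically. In this toy version the absorptivity is calibrated by a least-squares fit to wells of known depth and then inverted for an unknown thickness; all numbers are invented and this is not the actual OptiProf™ calibration:

    ```python
    import numpy as np

    # Lambert-Beer law:  I = I0 * 10**(-k * d),  so  -log10(I / I0) = k * d
    k_true, I0 = 0.35, 200.0        # invented absorptivity (per mm) and source intensity

    # "Calibration tile": wells of known depth (mm) with simulated intensities
    d_cal = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])
    I_cal = I0 * 10.0 ** (-k_true * d_cal)

    # Fit k from calibration data: least-squares line through the origin
    absorbance = -np.log10(I_cal / I0)
    k_fit = np.dot(d_cal, absorbance) / np.dot(d_cal, d_cal)

    # Invert an "unknown" intensity measurement back to a dye thickness
    d_unknown = 2.7
    I_meas = I0 * 10.0 ** (-k_true * d_unknown)
    d_est = -np.log10(I_meas / I0) / k_fit
    ```

    Applied pixel-by-pixel to an image of the dye-covered cast, this inversion turns an intensity map directly into the topography map described in the abstract.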

  20. Estimating Hydraulic Resistance for Floodplain Mapping and Hydraulic Studies from High-Resolution Topography: Physical and Numerical Simulations

    Science.gov (United States)

    Minear, J. T.

    2017-12-01

    One of the primary unknown variables in hydraulic analyses is hydraulic resistance, values for which are typically set using broad assumptions or calibration, with very few methods available for independent and robust determination. A better understanding of hydraulic resistance would be highly useful for understanding floodplain processes, forecasting floods, advancing sediment transport and hydraulic coupling, and improving higher-dimensional flood modeling (2D+), as well as for correctly calculating flood discharges for floods that are not directly measured. The relationship of observed features to hydraulic resistance is difficult to objectively quantify in the field, partially because resistance occurs at a variety of scales (i.e. grain, unit and reach) and because individual resistance elements, such as trees, grass and sediment grains, are inherently difficult to measure. Like photogrammetric techniques, Terrestrial Laser Scanning (TLS, also known as ground-based LiDAR) has shown great ability to rapidly collect high-resolution topographic datasets for geomorphic and hydrodynamic studies and could be used to objectively quantify the features that collectively create hydraulic resistance in the field. Because of its speed of data collection and remote-sensing ability, TLS can be used both for pre-flood and post-flood studies that require relatively quick response in relatively dangerous settings. Using datasets collected from experimental flume runs and numerical simulations, as well as field studies of several rivers in California and post-flood rivers in Colorado, this study evaluates the use of high-resolution topography to estimate hydraulic resistance, particularly from grain-scale elements. Contrary to conventional practice, experimental laboratory runs with bed grain size held constant but with varying grain-scale protrusion create a nearly twenty-fold variation in measured hydraulic resistance.
The ideal application of this high-resolution topography

  1. High-resolution numerical modeling of mesoscale island wakes and sensitivity to static topographic relief data

    Directory of Open Access Journals (Sweden)

    C. G. Nunalee

    2015-08-01

    Recent decades have witnessed a drastic increase in the fidelity of numerical weather prediction (NWP) modeling. Currently, both research-grade and operational NWP models regularly perform simulations with horizontal grid spacings as fine as 1 km. This migration towards higher resolution potentially improves NWP model solutions by increasing the resolvability of mesoscale processes and reducing the dependency on empirical physics parameterizations. However, at the same time, the accuracy of high-resolution simulations, particularly in the atmospheric boundary layer (ABL), is also sensitive to orographic forcing, which can have significant variability on the same spatial scale as, or smaller than, NWP model grids. Despite this sensitivity, many high-resolution atmospheric simulations do not consider uncertainty with respect to the selection of the static terrain height data set. In this paper, we use the Weather Research and Forecasting (WRF) model to simulate realistic cases of lower-tropospheric flow over and downstream of mountainous islands using the default global 30-arcsecond United States Geological Survey terrain height data set (GTOPO30), the Shuttle Radar Topography Mission (SRTM) data set, and the Global Multi-resolution Terrain Elevation Data set (GMTED2010). While the differences between the SRTM-based and GMTED2010-based simulations are extremely small, the GTOPO30-based simulations differ significantly. Our results demonstrate cases where the differences between the source terrain data sets are significant enough to produce entirely different orographic wake mechanics, such as vortex shedding vs. no vortex shedding. These results are also compared to MODIS visible satellite imagery and ASCAT near-surface wind retrievals. Collectively, these results highlight the importance of utilizing accurate static orographic boundary conditions when running high-resolution mesoscale models.

  2. A METHOD TO CALIBRATE THE HIGH-RESOLUTION CATANIA ASTROPHYSICAL OBSERVATORY SPECTROPOLARIMETER

    Energy Technology Data Exchange (ETDEWEB)

    Leone, F.; Gangi, M.; Giarrusso, M.; Scalia, C. [Università di Catania, Dipartimento di Fisica e Astronomia, Sezione Astrofisica, Via S. Sofia 78, I-95123 Catania (Italy); Avila, G. [ESO, Karl-Schwarzschild-Straße 2, D-85748, Garching bei München (Germany); Bellassai, G.; Bruno, P.; Catalano, S.; Benedetto, R. Di; Stefano, A. Di; Greco, V.; Martinetti, E.; Miraglia, M.; Munari, M.; Pontoni, C.; Scuderi, S.; Spanó, P. [INAF—Osservatorio Astrofisico di Catania, Via S. Sofia 78, I-95123 Catania (Italy)

    2016-05-01

    The Catania Astrophysical Observatory Spectropolarimeter (CAOS) is a white-pupil cross-dispersed échelle spectrograph with a spectral resolution of up to R = 55,000 in the 375–1100 nm range in a single exposure, with complete coverage up to 856 nm. CAOS is linked to the 36-inch telescope at Mount Etna Observatory with a pair of 100 μm optical fibers, and it achieves a signal-to-noise ratio better than 60 for a V = 10 mag star in one hour. CAOS is thermally stabilized to within 0.01 K rms, so that radial velocities are measured with a precision better than 100 m s⁻¹ from a single spectral line. Linear and circular spectropolarimetric observations are possible by means of a Savart plate working in series with a half-wave and a quarter-wave retarder plate in the 376–850 nm range. As is usual for high-resolution spectropolarimeters, CAOS is suitable for measuring all Stokes parameters across spectral lines, but it cannot measure the absolute degree of polarization. Observations of unpolarized standard stars show that the instrumental polarization is generally zero at 550 nm and can increase up to 3% at other wavelengths. Since polarized and unpolarized standard stars are therefore of no use for calibration, we suggest a method to calibrate a high-resolution spectropolarimeter on the basis of the polarimetric properties of spectral lines formed in the presence of a magnetic field. As applied to CAOS, observations of magnetic chemically peculiar stars of the main sequence show that the cross-talk from linear to circular polarization is smaller than 0.4% and that conversion from circular to linear is less than 2.7%. The strength and wavelength dependences of the cross-talk can be entirely ascribed, via numerical simulations, to the incorrect retardance of the achromatic wave plates.

  3. Improved methods for high resolution electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.R.

    1987-04-01

    Existing methods of making support films for high-resolution transmission electron microscopy are investigated and novel methods are developed. Existing methods of fabricating fenestrated, metal-reinforced specimen supports (microgrids) are evaluated for their potential to reduce beam-induced movement of monolamellar crystals of C₄₄H₉₀ paraffin supported on thin carbon films. Improved methods of producing hydrophobic carbon films by vacuum evaporation, and improved methods of depositing well-ordered monolamellar paraffin crystals on carbon films, are developed. A novel technique for vacuum evaporation of metals is described which is used to reinforce microgrids. A technique is also developed to bond thin carbon films to microgrids with a polymer bonding agent. Unique biochemical methods are described to accomplish site-specific covalent modification of membrane proteins. Protocols are given which covalently convert the carboxy terminus of papain-cleaved bacteriorhodopsin to a free thiol. 53 refs., 19 figs., 1 tab.

  4. High-resolution pyrimidine- and ribose-specific 4D HCCH-COSY spectra of RNA using the filter diagonalization method

    International Nuclear Information System (INIS)

    Douglas, Justin T.; Latham, Michael P.; Armstrong, Geoffrey S.; Bendiak, Brad; Pardi, Arthur

    2008-01-01

The NMR spectra of nucleic acids suffer from severe peak overlap, which complicates resonance assignments. 4D NMR experiments can overcome much of the degeneracy in 2D and 3D spectra; however, the linear increase in acquisition time with each new dimension makes it impractical to acquire high-resolution 4D spectra using standard Fourier transform (FT) techniques. The filter diagonalization method (FDM) is a numerically efficient algorithm that fits the entire multi-dimensional time-domain data to a set of multi-dimensional oscillators. Selective 4D constant-time HCCH-COSY experiments that correlate the H5-C5-C6-H6 base spin systems of pyrimidines or the H1'-C1'-C2'-H2' spin systems of ribose sugars were acquired on the 13C-labeled iron responsive element (IRE) RNA. FDM processing of these 4D experiments, recorded with only 8 complex points in the indirect dimensions, yielded superior spectral resolution compared to FT-processed spectra. Practical aspects of obtaining optimal FDM-processed spectra are discussed. The results here demonstrate that FDM processing can be used to obtain high-resolution 4D spectra on a medium-sized RNA in a fraction of the acquisition time normally required for high-resolution, high-dimensional spectra.
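FDM itself fits the full multi-dimensional signal to a set of oscillators; the underlying idea of recovering line frequencies from a short time record beyond the Fourier resolution limit can be sketched in 1-D with a closely related harmonic-inversion technique, the matrix pencil method. All parameter values below are invented for illustration:

```python
import numpy as np

# Two damped complex oscillators, sampled with only 64 points: the FFT
# resolution here (~1.6 Hz) cannot separate lines 0.5 Hz apart.
dt = 0.01
n = np.arange(64)
freqs_true, damps = [3.0, 3.5], [0.5, 0.8]          # Hz, 1/s (assumed values)
signal = sum(np.exp((-d + 2j * np.pi * f) * dt) ** n
             for f, d in zip(freqs_true, damps))

# Matrix pencil: stack the signal in a Hankel matrix; the shift between
# adjacent columns carries the poles z_k = exp((-d_k + 2*pi*i*f_k)*dt).
P = 20                                              # pencil parameter
H = np.array([signal[i:i + P + 1] for i in range(len(n) - P)])
Y0, Y1 = H[:, :-1], H[:, 1:]
eigvals = np.linalg.eigvals(np.linalg.pinv(Y0) @ Y1)

# The two largest-modulus eigenvalues are the physical poles; the rest are
# spurious and cluster near zero for noiseless data.
poles = eigvals[np.argsort(-np.abs(eigvals))[:2]]
freqs_est = np.sort(np.angle(poles) / (2 * np.pi * dt))
print(freqs_est)                                    # close to [3.0, 3.5]
```

Like FDM, the fit is parametric, so the achievable resolution is set by the signal-to-noise ratio rather than by the record length.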

  5. High resolution numerical investigation on the effect of convective instability on long term CO2 storage in saline aquifers

    International Nuclear Information System (INIS)

    Lu, C; Lichtner, P C

    2007-01-01

CO2 sequestration (capture, separation, and long-term storage) in various geologic media, including depleted oil reservoirs, saline aquifers, and oceanic sediments, is being considered as a possible solution to reduce greenhouse gas emissions. Dissolution of supercritical CO2 in formation brines is considered an important storage mechanism that prevents possible leakage. Accurate prediction of the plume dissolution rate and migration is essential. Analytical analysis and numerical experiments have demonstrated that convective instability (Rayleigh instability) has a crucial effect on the dissolution behavior and subsequent mineralization reactions. Global stability analysis indicates that a certain grid resolution is needed to capture the features of density-driven fingering phenomena. For 3-D field-scale simulations, high resolution leads to large numbers of grid nodes, infeasible on a single workstation. In this study, we investigate the effects of convective instability on geologic sequestration of CO2 by taking advantage of parallel computing using the code PFLOTRAN, a massively parallel 3-D reservoir simulator for modeling subsurface multiphase, multicomponent reactive flow and transport based on continuum-scale mass and energy conservation equations. The onset, development, and long-term fate of a supercritical CO2 plume will be resolved with high-resolution numerical simulations to investigate the rate of plume dissolution caused by fingering phenomena.
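The onset criterion behind this fingering can be checked with the classical dimensionless number for convection in a porous layer: fingering is expected once the Rayleigh number exceeds the Horton-Rogers-Lapwood critical value 4π² ≈ 39.5. A back-of-the-envelope sketch with assumed, illustrative aquifer properties (not values from the study):

```python
import numpy as np

# Illustrative (assumed) properties of a CO2-brine storage formation.
k    = 1e-13   # permeability, m^2
drho = 10.0    # density increase of CO2-saturated brine, kg/m^3
g    = 9.81    # gravity, m/s^2
H    = 100.0   # formation thickness, m
phi  = 0.15    # porosity
mu   = 5e-4    # brine viscosity, Pa*s
D    = 2e-9    # effective diffusivity, m^2/s

Ra = k * drho * g * H / (phi * mu * D)   # porous-medium Rayleigh number
Ra_crit = 4 * np.pi ** 2                 # Horton-Rogers-Lapwood threshold
print(Ra, Ra > Ra_crit)                  # Ra ~ 6.5e3 >> 39.5: fingering expected
```

Rayleigh numbers of this magnitude also set the finger width, and hence the grid spacing a simulation must resolve, which is why field-scale runs require massively parallel codes such as PFLOTRAN.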

  6. Accessing High Spatial Resolution in Astronomy Using Interference Methods

    Science.gov (United States)

    Carbonel, Cyril; Grasset, Sébastien; Maysonnave, Jean

    2018-04-01

In astronomy, methods such as direct imaging or interferometry-based techniques (Michelson stellar interferometry, for example) are used for observations. A particular advantage of interferometry is that it permits greater spatial resolution than direct imaging with a single telescope, which is limited by diffraction owing to the aperture of the instrument, as shown by Rueckner et al. in a lecture demonstration. The focus of this paper, addressed to teachers and students in high schools and universities, is to highlight an application of interferometry in astronomy and to stress its benefit for resolution. To this end, very simple optical experiments are presented to explain all the concepts. We show how an interference pattern resulting from the combined signals of two telescopes allows us to measure the distance between two stars with a resolution beyond the diffraction limit. Finally, this work emphasizes the breathtaking resolution obtained in state-of-the-art instruments such as the VLTI (Very Large Telescope Interferometer).
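The resolution gain can be quantified in a few lines. For an equal-brightness binary, the fringe visibility of a two-telescope interferometer is |V(B)| = |cos(πBθ/λ)|, which first vanishes at baseline B = λ/(2θ); locating that null measures separations far below the single-aperture diffraction limit. A sketch with assumed values:

```python
import numpy as np

lam   = 550e-9               # observing wavelength, m (assumed)
theta = 1e-3 / 206265.0      # binary separation: 1 milliarcsecond, in radians
D     = 8.0                  # single-telescope aperture, m (assumed)

theta_single = 1.22 * lam / D          # Rayleigh criterion for one dish
B_null = lam / (2 * theta)             # baseline where fringe visibility vanishes
B = np.linspace(1.0, 100.0, 1000)
V = np.abs(np.cos(np.pi * B * theta / lam))   # visibility vs. baseline

print(theta_single * 206265 * 1e3)     # ~17 mas: the pair is unresolved by one dish
print(B_null)                          # ~57 m baseline resolves the 1 mas pair
```

Sweeping the baseline B and recording where the fringes wash out is exactly the measurement principle the simple tabletop experiments illustrate.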

  7. FFT-enhanced IHS transform method for fusing high-resolution satellite images

    Science.gov (United States)

    Ling, Y.; Ehlers, M.; Usery, E.L.; Madden, M.

    2007-01-01

Existing image fusion techniques such as the intensity-hue-saturation (IHS) transform and principal components analysis (PCA) methods may not be optimal for fusing the new generation of commercial high-resolution satellite images such as Ikonos and QuickBird. One problem is color distortion in the fused image, which causes visual changes as well as spectral differences between the original and fused images. In this paper, a fast Fourier transform (FFT)-enhanced IHS method is developed for fusing new generation high-resolution satellite images. This method combines a standard IHS transform with FFT filtering of both the panchromatic image and the intensity component of the original multispectral image. Ikonos and QuickBird data are used to assess the FFT-enhanced IHS transform method. Experimental results indicate that the FFT-enhanced IHS transform method may improve upon the standard IHS transform and the PCA methods in preserving spectral and spatial information. © 2006 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).
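A minimal sketch of the general idea, assuming a co-registered 3-band multispectral image and a panchromatic band of the same size, with a simple circular FFT mask standing in for the paper's actual filter design:

```python
import numpy as np

def fft_lowpass(img, cutoff):
    """Keep spatial frequencies below `cutoff` (cycles/pixel), zero the rest."""
    F = np.fft.fftshift(np.fft.fft2(img))
    fy = np.fft.fftshift(np.fft.fftfreq(img.shape[0]))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(img.shape[1]))[None, :]
    mask = np.hypot(fy, fx) <= cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def fft_ihs_fuse(ms_rgb, pan, cutoff=0.1):
    """Illustrative FFT-enhanced IHS fusion: low frequencies from the
    multispectral intensity, high frequencies from the panchromatic band."""
    intensity = ms_rgb.mean(axis=2)               # crude I of the IHS transform
    fused_i = fft_lowpass(intensity, cutoff) + (pan - fft_lowpass(pan, cutoff))
    # Substitute the fused intensity back, preserving hue/saturation ratios.
    scale = np.divide(fused_i, intensity,
                      out=np.ones_like(intensity), where=intensity > 0)
    return ms_rgb * scale[:, :, None]
```

Because the colors (low frequencies) come from the multispectral intensity and only the spatial detail (high frequencies) comes from the pan band, the spectral distortion of plain intensity substitution is reduced.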

  8. Steady-state transport equation resolution by particle methods, and numerical results

    International Nuclear Information System (INIS)

    Mercier, B.

    1985-10-01

A particle method for solving the steady-state transport equation is presented, and its principles are given. The method is studied in two different cases; estimates given by the theory are compared to numerical results. Results obtained in 1-D (spherical geometry) and in 2-D (axisymmetric geometry) are given [fr]

  9. A fast and automatic mosaic method for high-resolution satellite images

    Science.gov (United States)

    Chen, Hongshun; He, Hui; Xiao, Hongyu; Huang, Jing

    2015-12-01

We propose a fast and fully automatic mosaic method for high-resolution satellite images. First, the overlapped rectangle is computed according to the geographical locations of the reference and mosaic images, and feature points are extracted from the overlapped region only, on both images, by the scale-invariant feature transform (SIFT) algorithm. Then, the RANSAC method is used to match the feature points of both images. Finally, the two images are fused into a seamless panoramic image by simple linear weighted fusion or another blending method. The proposed method is implemented in C++ based on OpenCV and GDAL, and tested on WorldView-2 multispectral images with a spatial resolution of 2 meters. Results show that the proposed method can detect feature points efficiently and mosaic images automatically.
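The RANSAC step can be sketched in isolation. The toy below estimates a pure translation from noisy point correspondences contaminated by outliers; it is an illustrative stand-in for the full SIFT-plus-homography pipeline, and the function name and translation-only motion model are assumptions of the sketch:

```python
import numpy as np

def ransac_translation(pts_ref, pts_mov, n_iter=200, tol=2.0, seed=0):
    """Estimate the translation aligning matched points despite outliers."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, -1
    for _ in range(n_iter):
        i = rng.integers(len(pts_ref))            # minimal sample: one match
        t = pts_ref[i] - pts_mov[i]
        resid = np.linalg.norm(pts_ref - (pts_mov + t), axis=1)
        inliers = np.count_nonzero(resid < tol)
        if inliers > best_inliers:                # keep the largest consensus set
            best_t, best_inliers = t, inliers
    # Refit on the consensus set for the final estimate.
    resid = np.linalg.norm(pts_ref - (pts_mov + best_t), axis=1)
    mask = resid < tol
    return (pts_ref[mask] - pts_mov[mask]).mean(axis=0), mask
```

In the real mosaic the minimal sample would be four matches and the model a homography, but the consensus logic, sampling, scoring, and refitting on inliers, is identical.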

  10. A method for geological hazard extraction using high-resolution remote sensing

    International Nuclear Information System (INIS)

    Wang, Q J; Chen, Y; Bi, J T; Lin, Q Z; Li, M X

    2014-01-01

Taking Yingxiu, the epicentre of the Wenchuan earthquake, as the study area, a method for geological disaster extraction using high-resolution remote sensing imagery is proposed in this study. A high-resolution Digital Elevation Model (DEM) was used to create mask imagery that removes interfering factors such as buildings and water at low altitudes. Then, the mask imagery was tiled into several smaller parts to reduce inconsistency across the large image, and these tiles were used as the sources for classification. After that, vector conversion was performed on the classified imagery in ArcGIS. Finally, to ensure accuracy, other interfering factors, such as buildings at high altitudes, bare land, and land covered by sparse vegetation, were removed manually. Because the method can extract geological hazards in a short time, it is of great importance for decision-makers and rescuers who need to know the degree of damage in the disaster area, especially within 72 hours after an earthquake. Therefore, the method can play an important role in decision making, rescue, and disaster response planning.

  11. Towards High Resolution Numerical Algorithms for Wave Dominated Physical Phenomena

    Science.gov (United States)

    2009-01-30


  12. Mathematical modelling and numerical resolution of multi-phase compressible fluid flows problems

    International Nuclear Information System (INIS)

    Lagoutiere, Frederic

    2000-01-01

This work deals with Eulerian compressible multi-species fluid dynamics, the species being either mixed or separated (with interfaces). The document is composed of three parts. The first part is devoted to the numerical resolution of model problems: the advection equation, the Burgers equation, and the Euler equations, in one and two dimensions. The goal is to find an accurate method, especially for discontinuous initial conditions, and we develop non-dissipative algorithms based on a downwind finite-volume discretization under certain stability constraints. The second part treats the mathematical modelling of fluid mixtures. We construct and analyse a set of multi-temperature and multi-pressure models that are entropic, symmetrizable, and hyperbolic, though not always conservative. In the third part, we apply the ideas developed in the first part (downwind discretization) to the numerical resolution of the partial differential problems constructed for fluid mixtures in the second part. We present some numerical results in one and two dimensions. (author) [fr]

  13. Immersed boundary methods for high-resolution simulation of atmospheric boundary-layer flow over complex terrain

    Science.gov (United States)

    Lundquist, Katherine Ann

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  14. Immersed Boundary Methods for High-Resolution Simulation of Atmospheric Boundary-Layer Flow Over Complex Terrain

    Energy Technology Data Exchange (ETDEWEB)

    Lundquist, K A [Univ. of California, Berkeley, CA (United States)

    2010-05-12

    Mesoscale models, such as the Weather Research and Forecasting (WRF) model, are increasingly used for high resolution simulations, particularly in complex terrain, but errors associated with terrain-following coordinates degrade the accuracy of the solution. Use of an alternative Cartesian gridding technique, known as an immersed boundary method (IBM), alleviates coordinate transformation errors and eliminates restrictions on terrain slope which currently limit mesoscale models to slowly varying terrain. In this dissertation, an immersed boundary method is developed for use in numerical weather prediction. Use of the method facilitates explicit resolution of complex terrain, even urban terrain, in the WRF mesoscale model. First, the errors that arise in the WRF model when complex terrain is present are presented. This is accomplished using a scalar advection test case, and comparing the numerical solution to the analytical solution. Results are presented for different orders of advection schemes, grid resolutions and aspect ratios, as well as various degrees of terrain slope. For comparison, results from the same simulation are presented using the IBM. Both two-dimensional and three-dimensional immersed boundary methods are then described, along with details that are specific to the implementation of IBM in the WRF code. Our IBM is capable of imposing both Dirichlet and Neumann boundary conditions. Additionally, a method for coupling atmospheric physics parameterizations at the immersed boundary is presented, making IB methods much more functional in the context of numerical weather prediction models. The two-dimensional IB method is verified through comparisons of solutions for gentle terrain slopes when using IBM and terrain-following grids. The canonical case of flow over a Witch of Agnesi hill provides validation of the basic no-slip and zero gradient boundary conditions. 
Specified diurnal heating in a valley, producing anabatic winds, is used to validate the

  15. Highly sensitive high resolution Raman spectroscopy using resonant ionization methods

    International Nuclear Information System (INIS)

    Owyoung, A.; Esherick, P.

    1984-05-01

In recent years, the introduction of stimulated Raman methods has offered orders-of-magnitude improvement in spectral resolving power for gas-phase Raman studies. Nevertheless, the inherent weakness of the Raman process suggests the need for significantly more sensitive techniques in Raman spectroscopy. Here we describe a new approach to this problem. Our new technique, which we call ionization-detected stimulated Raman spectroscopy (IDSRS), combines high-resolution SRS with highly sensitive resonant laser ionization to achieve an increase in sensitivity of over three orders of magnitude. The excitation/detection process involves three sequential steps: (1) population of a vibrationally excited state via stimulated Raman pumping; (2) selective ionization of the vibrationally excited molecule with a tunable UV source; and (3) collection of the ionized species at biased electrodes, where they are detected as current in an external circuit.

  16. A numerical study of super-resolution through fast 3D wideband algorithm for scattering in highly-heterogeneous media

    KAUST Repository

    Létourneau, Pierre-David

    2016-09-19

We present a wideband fast algorithm capable of accurately computing the full numerical solution of the problem of acoustic scattering of waves by multiple finite-sized bodies, such as spherical scatterers, in three dimensions. By full solution, we mean that no assumption (e.g. Rayleigh scattering, geometrical optics, weak scattering, Born single scattering, etc.) is necessary regarding the properties of the scatterers, their distribution, or the background medium. The algorithm is also fast in the sense that it scales linearly with the number of unknowns. We use this algorithm to study the phenomenon of super-resolution in time-reversal refocusing in highly scattering media recently observed experimentally (Lemoult et al., 2011), and provide numerical evidence that this phenomenon can be explained through a homogenization theory.

  17. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    Science.gov (United States)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  18. High resolution time integration for SN radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2009-01-01

First-order, second-order, and high-resolution time discretization schemes are implemented and studied for the discrete ordinates (S_N) equations. The high-resolution method achieves a rate of convergence better than first order while also suppressing the artificial oscillations introduced by second-order schemes in hyperbolic partial differential equations. It achieves these properties by nonlinearly adapting the time stencil to use a first-order method in regions where oscillations could be created. We employ a quasi-linear solution scheme to solve the nonlinear equations that arise from the high-resolution method. All three methods were compared for accuracy and convergence rate. For non-absorbing problems, both the second-order and high-resolution schemes converged to the same solution as the first-order scheme, with better convergence rates. The high-resolution method is more accurate than first order and matches or exceeds the second-order method.
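The same limiting idea is easier to see in its more familiar spatial form for linear advection (shown here instead of the paper's time stencil): blend a first-order upwind flux with a limited second-order correction, so that steep fronts stay sharp without oscillating. A sketch, assuming a periodic grid and Courant number c in (0, 1]:

```python
import numpy as np

def advect(u0, c, steps, limiter=True):
    """Linear advection with a van Leer flux limiter; limiter=False gives
    plain 1st-order upwind (c = a*dt/dx is the Courant number, a > 0)."""
    u = u0.copy()
    for _ in range(steps):
        du = np.roll(u, -1) - u                      # forward differences
        if limiter:
            # Smoothness ratio r_i = (u_i - u_{i-1}) / (u_{i+1} - u_i).
            r = np.divide(np.roll(du, 1), du,
                          out=np.ones_like(u), where=np.abs(du) > 1e-12)
            phi = (r + np.abs(r)) / (1 + np.abs(r))  # van Leer limiter
        else:
            phi = 0.0
        # Face flux: upwind plus a limited anti-diffusive correction.
        f = u + 0.5 * phi * (1 - c) * du
        u = u - c * (f - np.roll(f, 1))
    return u
```

With `limiter=False` the scheme smears a square pulse badly; with the van Leer limiter it is TVD, so the pulse is transported with no over- or undershoot and much less numerical diffusion, which is exactly the trade-off the abstract describes for the time stencil.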

  19. High speed, High resolution terahertz spectrometers

    International Nuclear Information System (INIS)

    Kim, Youngchan; Yee, Dae Su; Yi, Miwoo; Ahn, Jaewook

    2008-01-01

A variety of sources and methods have been developed for terahertz spectroscopy over almost two decades. Terahertz time-domain spectroscopy (THz-TDS) has attracted particular attention as a basic measurement method in the fields of THz science and technology. Recently, asynchronous optical sampling (AOS) THz-TDS has been demonstrated, featuring rapid data acquisition and high spectral resolution. Terahertz frequency comb spectroscopy (TFCS) also possesses attractive features for high-precision terahertz spectroscopy. In this presentation, we report on these two types of terahertz spectrometer. Our high-speed, high-resolution terahertz spectrometer is demonstrated using two mode-locked femtosecond lasers with slightly different repetition frequencies, without a mechanical delay stage. The repetition frequencies of the two femtosecond lasers are stabilized by use of two phase-locked loops sharing the same reference oscillator. The time resolution of our terahertz spectrometer is measured, using the cross-correlation method, to be 270 fs. AOS THz-TDS is presented in Fig. 1, which shows a time-domain waveform rapidly acquired on a 10 ns time window. The inset shows a zoom into the signal with a 100 ps time window. The spectrum obtained by fast Fourier transformation (FFT) of the time-domain waveform has a frequency resolution of 100 MHz. The dependence of the signal-to-noise ratio (SNR) on the measurement time is also investigated.

  20. Detecting breast microcalcifications using super-resolution and wave-equation ultrasound imaging: a numerical phantom study

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Lianjie [Los Alamos National Laboratory; Simonetti, Francesco [IMPERIAL COLLEGE LONDON; Huthwaite, Peter [IMPERIAL COLLEGE LONDON; Rosenberg, Robert [UNM; Williamson, Michael [UNM

    2010-01-01

Ultrasound image resolution and quality need to be significantly improved for breast microcalcification detection. Super-resolution imaging with the factorization method has recently been developed as a promising tool to break through the resolution limit of conventional imaging. In addition, wave-equation reflection imaging has become an effective method to reduce image speckles by properly handling ultrasound scattering/diffraction from breast heterogeneities during image reconstruction. We explore the capabilities of a novel super-resolution ultrasound imaging method and a wave-equation reflection imaging scheme for detecting breast microcalcifications. Super-resolution imaging uses the singular value decomposition and a factorization scheme to achieve an image resolution that is not possible for conventional ultrasound imaging. Wave-equation reflection imaging employs a solution to the acoustic-wave equation in heterogeneous media to backpropagate ultrasound scattering/diffraction waves to scatterers and form images of heterogeneities. We construct numerical breast phantoms using in vivo breast images, and use a finite-difference wave-equation scheme to generate ultrasound data scattered from inclusions that mimic microcalcifications. We demonstrate that microcalcifications can be detected at full spatial resolution using the super-resolution ultrasound imaging and wave-equation reflection imaging methods.
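A toy version of the factorization idea, under strong simplifying assumptions (2-D geometry, Born approximation, a single point scatterer, noiseless data, all values invented): the multistatic response matrix is then rank-one, its leading singular vector recovers the scatterer's Green's vector, and correlating that vector against steering vectors localizes the scatterer:

```python
import numpy as np

lam = 1.0
k = 2 * np.pi / lam
rx = np.linspace(-8.0, 8.0, 33)                 # transducer x-positions at z = 0
scatterer = (1.3, 20.0)                         # true (x, z) of the inclusion

def green(p, q):
    """2-D free-space Green's function, up to constant factors."""
    r = np.hypot(p[0] - q[0], p[1] - q[1])
    return np.exp(1j * k * r) / np.sqrt(r)

g = np.array([green((x, 0.0), scatterer) for x in rx])
K = np.outer(g, g)                              # rank-1 multistatic response matrix

# Factorization step: the leading singular vector of K is the scatterer's
# Green's vector (up to a phase); correlate it against steering vectors.
u1 = np.linalg.svd(K)[0][:, 0]
xs = np.linspace(-4.0, 4.0, 81)
image = [abs(np.vdot(u1, [green((x, 0.0), (xt, 20.0)) for x in rx]))
         for xt in xs]
print(xs[int(np.argmax(image))])                # peak near x = 1.3
```

With multiple scatterers and noise, the practical methods work with the full singular spectrum rather than a single vector, but the matched-filter localization step is the same in spirit.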

  1. The potential of high resolution ultrasonic in-situ methods

    International Nuclear Information System (INIS)

    Schuster, K.

    2010-01-01

Document available in extended abstract form only. In the framework of the geomechanical assessment of final repository underground openings, knowledge of geophysical rock parameters is important. Ultrasonic methods have proved to be good geophysical tools for providing appropriately high-resolution parameters for the characterisation of rock. In this context, the detection and characterisation of rock heterogeneities at different scales, including Excavation Damaged/Disturbed Zone (EDZ/EdZ) features, play an important role. In particular, kinematic and dynamic parameters derived from ultrasonic measurements can be linked very closely to rock-mechanical investigations and interpretations. BGR uses high-resolution ultrasonic methods, starting with emitted frequencies of about 1 kHz (seismic) and going up to about 100 kHz. Method development is ongoing, and the corresponding research and investigations have been performed for many years at different European underground research laboratories related to radioactive waste disposal, in different potential host rocks. The most frequently used are: Mont Terri Rock Laboratory, Switzerland (Opalinus Clay, OPA); Underground Research Laboratory Meuse/Haute-Marne, France (Callovo-Oxfordian, COX); Underground Research Facility Mol, Belgium (Boom Clay, BC); Aespoe Hard Rock Laboratory, Sweden (granites); Rock Laboratory Grimsel, Switzerland (granites); and the Asse salt mine, Germany (rock salt). The methods can be grouped into borehole-based methods and noninvasive methods, such as refraction and reflection surveys, which are in general performed from the drift wall. Additionally, as a combination of these two approaches, a form of vertical seismic profiling (VSP) is applied. The best-qualified method, or combination of methods, has to be chosen according to the scientific questions and the local site conditions. The degree of spatial resolution of zones of interest, or of any kind of anomaly, depends strongly on the distance of these objects from the ultrasonic

  2. A high-resolution method for the localization of proanthocyanidins in plant tissues

    Directory of Open Access Journals (Sweden)

    Panter Stephen

    2011-05-01

Full Text Available. Background: Histochemical staining of plant tissues with 4-dimethylaminocinnamaldehyde (DMACA) or vanillin-HCl is widely used to characterize spatial patterns of proanthocyanidin accumulation in plant tissues. These methods are limited in their ability to allow high-resolution imaging of proanthocyanidin deposits. Results: Tissue embedding techniques were used in combination with DMACA staining to analyze the accumulation of proanthocyanidins in Lotus corniculatus (L.) and Trifolium repens (L.) tissues. Embedding of plant tissues in LR White or paraffin matrices, with or without DMACA staining, preserved the physical integrity of the plant tissues, allowing high-resolution imaging that facilitated cell-specific localization of proanthocyanidins. A brown coloration was seen in proanthocyanidin-producing cells when plant tissues were embedded without DMACA staining; this was likely due to non-enzymatic oxidation of proanthocyanidins and the formation of colored semiquinones and quinones. Conclusions: This paper presents a simple, high-resolution method for analysis of proanthocyanidin accumulation in organs, tissues and cells of two plant species with different patterns of proanthocyanidin accumulation, namely Lotus corniculatus (birdsfoot trefoil) and Trifolium repens (white clover). This technique was used to characterize cell-type-specific patterns of proanthocyanidin accumulation in white clover flowers at different stages of development.

  3. Numerical methods in software and analysis

    CERN Document Server

    Rice, John R

    1992-01-01

Numerical Methods, Software, and Analysis, Second Edition introduces science and engineering students to the methods, tools, and ideas of numerical computation. Introductory courses in numerical methods face a fundamental problem: there is too little time to learn too much. This text solves that problem by using high-quality mathematical software. In fact, the objective of the text is to present scientific problem solving using standard mathematical software. This book discusses numerous programs and software packages, focusing on the IMSL library (including the PROTRAN system) and ACM Algorithm

  4. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    Science.gov (United States)

    Li, Dan; Bou-Zeid, Elie

    2014-05-01

High-resolution numerical simulations of the urban heat island (UHI) effect with the widely used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup and the quality of the simulated fields as representations of the real world are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM, or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary-layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal setup of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014).

  5. Tailored high-resolution numerical weather forecasts for energy efficient predictive building control

    Science.gov (United States)

    Stauch, V. J.; Gwerder, M.; Gyalistras, D.; Oldewurtel, F.; Schubiger, F.; Steiner, P.

    2010-09-01

The high proportion of total primary energy consumed by buildings has increased public interest in the optimisation of building operation and is also driving the development of novel control approaches for the indoor climate. In this context, the use of weather forecasts presents an interesting and, thanks to advances in information and predictive control technologies and the continuous improvement of numerical weather prediction (NWP) models, an increasingly attractive option for improved building control. Within the research project OptiControl (www.opticontrol.ethz.ch), predictive control strategies for a wide range of buildings, heating, ventilation and air conditioning (HVAC) systems, and representative locations in Europe are being investigated with the aid of newly developed modelling and simulation tools. Grid-point predictions for radiation, temperature, and humidity from the high-resolution limited-area NWP model COSMO-7 (see www.cosmo-model.org) and local measurements are used as disturbances and inputs into the building system. The control task considered consists of minimizing energy consumption whilst maintaining occupant comfort. In this presentation, we use the simulation-based OptiControl methodology to investigate the impact of COSMO-7 forecasts on the performance of predictive building control and the resulting energy savings. For this, we have selected building cases that were shown to benefit from a prediction horizon of up to 3 days and are therefore particularly suitable for the use of numerical weather forecasts. We show that the controller performance is sensitive to the quality of the weather predictions, most importantly of the incident radiation on differently oriented façades. However, radiation is characterised by a high temporal and spatial variability, in part caused by small-scale and fast-changing cloud formation and dissolution processes that are only partially represented in the COSMO-7 grid-point predictions. On the

  6. Homogenization-based topology optimization for high-resolution manufacturable micro-structures

    DEFF Research Database (Denmark)

    Groen, Jeroen Peter; Sigmund, Ole

    2018-01-01

    This paper presents a projection method to obtain high-resolution, manufacturable structures from efficient and coarse-scale, homogenization-based topology optimization results. The presented approach bridges coarse and fine scale, such that the complex periodic micro-structures can be represented...... by a smooth and continuous lattice on the fine mesh. A heuristic methodology allows control of the projected topology, such that a minimum length-scale on both solid and void features is ensured in the final result. Numerical examples show excellent behavior of the method, where performances of the projected...

  7. Analysis of the impact of spatial resolution on land/water classifications using high-resolution aerial imagery

    Science.gov (United States)

    Enwright, Nicholas M.; Jones, William R.; Garber, Adrienne L.; Keller, Matthew J.

    2014-01-01

    Long-term monitoring efforts often use remote sensing to track trends in habitat or landscape conditions over time. To most appropriately compare observations over time, long-term monitoring efforts strive for consistency in methods. Thus, advances and changes in technology over time can present a challenge. For instance, modern camera technology has led to an increasing availability of very high-resolution imagery (i.e. submetre and metre) and a shift from analogue to digital photography. While numerous studies have shown that image resolution can impact the accuracy of classifications, most of these studies have focused on the impacts of comparing spatial resolution changes greater than 2 m. Thus, a knowledge gap exists on the impacts of minor changes in spatial resolution (i.e. submetre to about 1.5 m) in very high-resolution aerial imagery (i.e. 2 m resolution or less). This study compared the impact of spatial resolution on land/water classifications of an area dominated by coastal marsh vegetation in Louisiana, USA, using 1:12,000 scale colour-infrared analogue aerial photography (AAP) scanned at four different dot-per-inch resolutions simulating ground sample distances (GSDs) of 0.33, 0.54, 1, and 2 m. Analysis of the impact of spatial resolution on land/water classifications was conducted by exploring various spatial aspects of the classifications including density of waterbodies and frequency distributions in waterbody sizes. This study found that a small-magnitude change (1–1.5 m) in spatial resolution had little to no impact on the total amount of water classified (i.e. the difference in the percentage of water mapped was less than 1.5%), but had a significant impact on the mapping of very small waterbodies (i.e. waterbodies ≤ 250 m2). These findings should interest those using temporal image classifications derived from very high-resolution aerial photography as a component of long-term monitoring programs.
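The effect reported above, small waterbodies dropping out as ground sample distance grows while the total water fraction barely changes, can be illustrated with a toy experiment. The synthetic scene, coarsening factor, and 50% cover threshold below are illustrative assumptions, not the paper's processing chain; NumPy is assumed available.

```python
import numpy as np

def coarsen(mask, f):
    """Block-average a binary mask by factor f, then threshold at 50% cover."""
    h, w = mask.shape
    m = mask[:h - h % f, :w - w % f]
    blocks = m.reshape(h // f, f, w // f, f).mean(axis=(1, 3))
    return blocks >= 0.5

def count_components(mask):
    """Count 4-connected waterbodies via iterative flood fill."""
    seen = np.zeros_like(mask, dtype=bool)
    n = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and not seen[i, j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < mask.shape[0] and 0 <= b < mask.shape[1]
                            and mask[a, b] and not seen[a, b]):
                        seen[a, b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return n

# Synthetic scene: one 40x40 lake plus four isolated 2x2 ponds.
scene = np.zeros((120, 120), dtype=bool)
scene[10:50, 10:50] = True
for r, c in [(70, 20), (80, 90), (100, 40), (60, 100)]:
    scene[r:r + 2, c:c + 2] = True

fine_frac = scene.mean()
coarse = coarsen(scene, 6)        # simulate a ~6x coarser ground sample distance
coarse_frac = coarse.mean()
fine_n = count_components(scene)
coarse_n = count_components(coarse)
```

At the coarser resolution the large lake survives while every small pond falls below the per-cell cover threshold and vanishes, even though the mapped water fraction changes only slightly.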

  8. Numerical tsunami hazard assessment of the submarine volcano Kick 'em Jenny in high-resolution areas

    Science.gov (United States)

    Dondin, Frédéric; Dorville, Jean-Francois Marc; Robertson, Richard E. A.

    2016-04-01

    Landslide-generated tsunami are infrequent phenomena that can be highly hazardous for populations located in the near-field domain of the source. The Lesser Antilles volcanic arc is a curved 800 km chain of volcanic islands. At least 53 flank collapse episodes have been recognized along the arc. Several of these collapses have been associated with voluminous underwater deposits (volume > 1 km3). Due to their momentum, these events were likely capable of generating regional tsunami. No clear field evidence of tsunami associated with these voluminous events has been reported, but the occurrence of such an episode nowadays would certainly have catastrophic consequences. Kick 'em Jenny (KeJ) is the only active submarine volcano of the Lesser Antilles Arc (LAA), with a current edifice volume estimated at 1.5 km3. It is the southernmost edifice of the LAA with recognized associated volcanic landslide deposits. The volcano appears to have undergone three episodes of flank failure. Numerical simulations of one of these episodes, associated with a collapse volume of ca. 4.4 km3 and considering a single-pulse collapse, revealed that this episode would have produced a regional tsunami with an amplitude of 30 m. In the present study we applied a detailed hazard assessment to KeJ, from its collapse to the impact of the resulting waves on high-resolution coastal areas of selected islands of the LAA, in order to highlight needs for improved alert systems and risk mitigation. We present the assessment process of tsunami hazard related to shoreline surface elevation (i.e. run-up) and flood dynamics (i.e. duration, height, speed...) at the coasts of LAA islands in the case of a potential flank collapse scenario at KeJ. After quantification of potential initial volumes of collapse material using relative slope instability analysis (RSIA, VolcanoFit 2.0 & SSAP 4.5) based on seven geomechanical models, the tsunami source has been simulated by a St-Venant equations-based code

  9. Three-Dimensional Imaging and Numerical Reconstruction of Graphite/Epoxy Composite Microstructure Based on Ultra-High Resolution X-Ray Computed Tomography

    Science.gov (United States)

    Czabaj, M. W.; Riccio, M. L.; Whitacre, W. W.

    2014-01-01

    A combined experimental and computational study aimed at high-resolution 3D imaging, visualization, and numerical reconstruction of fiber-reinforced polymer microstructures at the fiber length scale is presented. To this end, a sample of graphite/epoxy composite was imaged at sub-micron resolution using a 3D X-ray computed tomography microscope. Next, a novel segmentation algorithm was developed, based on concepts adopted from computer vision and multi-target tracking, to detect and estimate, with high accuracy, the position of individual fibers in a volume of the imaged composite. In the current implementation, the segmentation algorithm was based on Global Nearest Neighbor data-association architecture, a Kalman filter estimator, and several novel algorithms for virtual-fiber stitching, smoothing, and overlap removal. The segmentation algorithm was used on a sub-volume of the imaged composite, detecting 508 individual fibers. The segmentation data were qualitatively compared to the tomographic data, demonstrating high accuracy of the numerical reconstruction. Moreover, the data were used to quantify a) the relative distribution of individual-fiber cross sections within the imaged sub-volume, and b) the local fiber misorientation relative to the global fiber axis. Finally, the segmentation data were converted using commercially available finite element (FE) software to generate a detailed FE mesh of the composite volume. The methodology described herein demonstrates the feasibility of realizing an FE-based, virtual-testing framework for graphite/fiber composites at the constituent level.
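The segmentation pipeline above pairs Global Nearest Neighbor data association with a Kalman filter estimator. A minimal sketch of the estimation step, a constant-velocity Kalman filter smoothing one fiber's centre position through successive cross-sections, is given below; the state model, noise covariances, and synthetic measurements are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def kalman_track(zs, q=1e-4, r=0.01):
    """Track a fiber centre through successive slices with a
    constant-velocity Kalman filter (state: [position, slope])."""
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition per slice
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                        # process noise covariance (assumed)
    R = np.array([[r]])                      # measurement noise covariance (assumed)
    x = np.array([zs[0], 0.0])
    P = np.eye(2)
    out = []
    for z in zs:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update with the detected centroid of this slice
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
truth = 5.0 + 0.02 * np.arange(200)          # fiber drifting across 200 slices
meas = truth + rng.normal(0.0, 0.1, 200)     # noisy centroid detections
est = kalman_track(meas)
```

After a burn-in period the filtered track follows the drifting fiber with substantially less noise than the raw detections.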

  10. Two-photon high-resolution measurement of partial pressure of oxygen in cerebral vasculature and tissue

    Science.gov (United States)

    Sakadžić, Sava; Roussakis, Emmanuel; Yaseen, Mohammad A.; Mandeville, Emiri T.; Srinivasan, Vivek J.; Arai, Ken; Ruvinskaya, Svetlana; Devor, Anna; Lo, Eng H.; Vinogradov, Sergei A.; Boas, David A.

    2010-01-01

    The ability to measure oxygen partial pressure (pO2) with high temporal and spatial resolution in three dimensions is crucial for understanding oxygen delivery and consumption in normal and diseased brain. Among existing pO2 measurement methods, phosphorescence quenching is optimally suited for the task. However, previous attempts to couple phosphorescence with two-photon laser scanning microscopy have faced substantial difficulties because of extremely low two-photon absorption cross-sections of conventional phosphorescent probes. Here, we report the first practical in vivo two-photon high-resolution pO2 measurements in small rodents’ cortical microvasculature and tissue, made possible by combining an optimized imaging system with a two-photon-enhanced phosphorescent nanoprobe. The method features a measurement depth of up to 250 µm, sub-second temporal resolution and requires low probe concentration. Most importantly, the properties of the probe allowed for the first direct high-resolution measurement of cortical extravascular (tissue) pO2, opening numerous possibilities for functional metabolic brain studies. PMID:20693997
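Phosphorescence quenching relates pO2 to the measured phosphorescence lifetime through the Stern-Volmer equation, tau0/tau = 1 + Kq * tau0 * pO2. A minimal sketch of that conversion follows; the lifetime tau0 and quenching constant Kq are illustrative placeholder values, not the calibration of the nanoprobe used in the study.

```python
TAU0 = 60e-6   # unquenched phosphorescence lifetime [s] -- assumed, illustrative
KQ = 250.0     # quenching constant [1/(mmHg*s)]         -- assumed, illustrative

def lifetime_from_po2(po2_mmhg, tau0=TAU0, kq=KQ):
    """Stern-Volmer: higher oxygen partial pressure shortens the lifetime."""
    return tau0 / (1.0 + kq * tau0 * po2_mmhg)

def po2_from_lifetime(tau_s, tau0=TAU0, kq=KQ):
    """Invert Stern-Volmer to recover pO2 from a measured lifetime."""
    return (tau0 / tau_s - 1.0) / (kq * tau0)
```

In practice tau is fitted from the phosphorescence decay at each scan point, and the calibration pair (tau0, Kq) is determined separately for the probe.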

  11. Advancement of compressible multiphase flows and sodium-water reaction analysis program SERAPHIM. Validation of a numerical method for the simulation of highly underexpanded jets

    International Nuclear Information System (INIS)

    Uchibori, Akihiro; Ohshima, Hiroyuki; Watanabe, Akira

    2010-01-01

    SERAPHIM is a computer program for the simulation of the compressible multiphase flow involving the sodium-water chemical reaction during a tube failure accident in a steam generator of sodium-cooled fast reactors. In this study, numerical analysis of highly underexpanded air jets into air or into water was performed as part of the validation of the SERAPHIM program. The multi-fluid model, the second-order TVD scheme and the HSMAC method considering compressibility were used in this analysis. Combining these numerical methods makes it possible to calculate multiphase flow including supersonic gaseous jets. In the case of the air jet into air, the calculated pressure, the shape of the jet and the location of the Mach disk agreed with the existing experimental results. The effect of the difference scheme and the mesh resolution on the prediction accuracy was clarified through these analyses. The behavior of the air jet into water was also reproduced successfully by the proposed numerical method. (author)

  12. A simple and rapid method for high-resolution visualization of single-ion tracks

    Directory of Open Access Journals (Sweden)

    Masaaki Omichi

    2014-11-01

    Full Text Available Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  13. Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image

    Science.gov (United States)

    Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.

    2018-04-01

    At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy detection tests and evaluates the accuracy of images based mostly on a set of testing points with the same accuracy and reliability. However, in areas where field measurement is difficult and high-accuracy reference data are scarce, it is difficult to obtain such a set of testing points, and hence difficult to test and evaluate the horizontal accuracy of the orthophoto image. This uncertainty in horizontal accuracy has become a bottleneck for the application of satellite-borne high-resolution remote sensing images and the expansion of their scope of service. Therefore, this paper proposes a new method to test the horizontal accuracy of orthophoto images using testing points of different accuracy and reliability, sourced from both high-accuracy reference data and field measurements. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for delivering reliable orthophoto images to users.
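One standard way to combine check points of unequal quality, in the spirit of the method above, is to weight each point's horizontal residual by the inverse of its a priori variance. The sketch below shows that inverse-variance weighting only; it is not necessarily the weighting the authors adopt, and NumPy is assumed.

```python
import numpy as np

def weighted_horizontal_rmse(dx, dy, sigma):
    """RMSE of horizontal residuals (dx, dy) with inverse-variance weights,
    so precise reference points count more than rough field measurements."""
    w = 1.0 / np.asarray(sigma, dtype=float) ** 2
    e2 = np.asarray(dx, dtype=float) ** 2 + np.asarray(dy, dtype=float) ** 2
    return float(np.sqrt(np.sum(w * e2) / np.sum(w)))

# Mixed set: two precise reference points (sigma 0.5 m) and two field points (sigma 2 m).
rmse = weighted_horizontal_rmse(dx=[0.3, -0.4, 1.5, -2.0],
                                dy=[0.2, 0.1, -1.0, 1.2],
                                sigma=[0.5, 0.5, 2.0, 2.0])
```

With equal sigmas the formula reduces to the ordinary horizontal RMSE.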

  14. A simple and rapid method for high-resolution visualization of single-ion tracks

    Energy Technology Data Exchange (ETDEWEB)

    Omichi, Masaaki [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Center for Collaborative Research, Anan National College of Technology, Anan, Tokushima 774-0017 (Japan); Choi, Wookjin; Sakamaki, Daisuke; Seki, Shu, E-mail: seki@chem.eng.osaka-u.ac.jp [Department of Applied Chemistry, Graduate School of Engineering, Osaka University, Osaka 565-0871 (Japan); Tsukuda, Satoshi [Institute of Multidisciplinary Research for Advanced Materials, Tohoku University, Sendai, Miyagi 980-8577 (Japan); Sugimoto, Masaki [Japan Atomic Energy Agency, Takasaki Advanced Radiation Research Institute, Gunma, Gunma 370-1292 (Japan)

    2014-11-15

    Prompt determination of spatial points of single-ion tracks plays a key role in high-energy particle induced-cancer therapy and gene/plant mutations. In this study, a simple method for the high-resolution visualization of single-ion tracks without etching was developed through the use of polyacrylic acid (PAA)-N, N’-methylene bisacrylamide (MBAAm) blend films. One of the steps of the proposed method includes exposure of the irradiated films to water vapor for several minutes. Water vapor was found to promote the cross-linking reaction of PAA and MBAAm to form a bulky cross-linked structure; the ion-track scars were detectable at a nanometer scale by atomic force microscopy. This study demonstrated that each scar is easily distinguishable, and the amount of generated radicals of the ion tracks can be estimated by measuring the height of the scars, even in highly dense ion tracks. This method is suitable for the visualization of the penumbra region in a single-ion track with a high spatial resolution of 50 nm, which is sufficiently small to confirm that a single ion hits a cell nucleus with a size ranging between 5 and 20 μm.

  15. High-resolution intravital microscopy.

    Directory of Open Access Journals (Sweden)

    Volker Andresen

    Full Text Available Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy, the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and

  16. High-Resolution Intravital Microscopy

    Science.gov (United States)

    Andresen, Volker; Pollok, Karolin; Rinnenthal, Jan-Leo; Oehme, Laura; Günther, Robert; Spiecker, Heinrich; Radbruch, Helena; Gerhard, Jenny; Sporbert, Anje; Cseresnyes, Zoltan; Hauser, Anja E.; Niesner, Raluca

    2012-01-01

    Cellular communication constitutes a fundamental mechanism of life, for instance by permitting transfer of information through synapses in the nervous system and by leading to activation of cells during the course of immune responses. Monitoring cell-cell interactions within living adult organisms is crucial in order to draw conclusions on their behavior with respect to the fate of cells, tissues and organs. Until now, there is no technology available that enables dynamic imaging deep within the tissue of living adult organisms at sub-cellular resolution, i.e. detection at the level of few protein molecules. Here we present a novel approach called multi-beam striped-illumination which applies for the first time the principle and advantages of structured-illumination, spatial modulation of the excitation pattern, to laser-scanning-microscopy. We use this approach in two-photon-microscopy - the most adequate optical deep-tissue imaging-technique. As compared to standard two-photon-microscopy, it achieves significant contrast enhancement and up to 3-fold improved axial resolution (optical sectioning) while photobleaching, photodamage and acquisition speed are similar. Its imaging depth is comparable to multifocal two-photon-microscopy and only slightly less than in standard single-beam two-photon-microscopy. Precisely, our studies within mouse lymph nodes demonstrated 216% improved axial and 23% improved lateral resolutions at a depth of 80 µm below the surface. Thus, we are for the first time able to visualize the dynamic interactions between B cells and immune complex deposits on follicular dendritic cells within germinal centers (GCs) of live mice. These interactions play a decisive role in the process of clonal selection, leading to affinity maturation of the humoral immune response. This novel high-resolution intravital microscopy method has a huge potential for numerous applications in neurosciences, immunology, cancer research and developmental biology

  17. Multi-group transport methods for high-resolution neutron activation analysis

    International Nuclear Information System (INIS)

    Burns, K. A.; Smith, L. E.; Gesh, C. J.; Shaver, M. W.

    2009-01-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of multi-group deterministic methods for the simulation of neutron activation problems. Central to this work is the development of a method for generating multi-group neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so that the key signatures in neutron activation analysis (i.e., the characteristic line energies) are preserved. The mechanics of the cross-section preparation method are described and contrasted with standard neutron-gamma cross-section sets. These custom cross-sections are then applied to several benchmark problems. Multi-group results for neutron and photon flux are compared to MCNP results. Finally, calculated responses of high-resolution spectrometers are compared. Preliminary findings show promising results when compared to MCNP. A detailed discussion of the potential benefits and shortcomings of the multi-group-based approach, in terms of accuracy and computational efficiency, is provided. (authors)
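The core of any multi-group preparation is a flux-weighted collapse of fine-group cross-sections into coarse groups. The toy numbers below illustrate that collapse step only; they say nothing about how the authors separate discrete and continuum photon emissions, and NumPy is assumed.

```python
import numpy as np

def collapse(sigma_fine, phi_fine, edges):
    """Flux-weighted collapse: coarse group g spans fine groups
    edges[g]..edges[g+1]-1, with sigma_g = sum(sigma*phi) / sum(phi) there."""
    return np.array([
        np.sum(sigma_fine[edges[g]:edges[g + 1]] * phi_fine[edges[g]:edges[g + 1]])
        / np.sum(phi_fine[edges[g]:edges[g + 1]])
        for g in range(len(edges) - 1)
    ])

sigma = np.array([2.0, 4.0, 10.0, 12.0])   # fine-group cross-sections (illustrative)
phi = np.array([1.0, 1.0, 2.0, 2.0])       # fine-group weighting flux (illustrative)
coarse = collapse(sigma, phi, [0, 2, 4])   # collapse four fine groups into two
```

The weighting preserves the group-wise reaction rate: the coarse cross-section times the coarse flux equals the summed fine-group reaction rate.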

  18. High resolution time integration for Sn radiation transport

    International Nuclear Information System (INIS)

    Thoreson, Greg; McClarren, Ryan G.; Chang, Jae H.

    2008-01-01

    First order, second order and high resolution time discretization schemes are implemented and studied for the Sn equations. The high-resolution method achieves a rate of convergence better than first order while also suppressing the artificial oscillations introduced by second-order schemes in hyperbolic differential equations. All three methods were compared for accuracy and convergence rates. For non-absorbing problems, both the second-order and high-resolution schemes converged to the same solution as the first-order scheme, with better convergence rates. The high-resolution method is more accurate than first order and matches or exceeds the second-order method. (authors)
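The behaviour described above, better-than-first-order accuracy without the oscillations of unlimited second-order schemes, can be sketched for linear advection with a minmod-limited MUSCL flux. The grid, CFL number, and square-wave test below are illustrative choices, not the Sn solver of the paper; NumPy is assumed.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller of the two slopes when signs agree, else zero."""
    return np.where(a * b > 0.0, np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def advect(u0, c, steps, limited=True):
    """Periodic linear advection at CFL number c (0 < c <= 1) using upwind
    fluxes plus a minmod-limited second-order (MUSCL) correction."""
    u = u0.astype(float).copy()
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u) if limited else 0.0 * u
        uface = u + 0.5 * (1.0 - c) * slope      # reconstructed value at the right face
        flux = c * uface
        u = u - (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square wave
exact = np.roll(u0, 50)                           # 100 steps at c = 0.5 -> 50 cells
u_lim = advect(u0, c=0.5, steps=100, limited=True)
u_up = advect(u0, c=0.5, steps=100, limited=False)
```

The limited solution stays within the initial bounds (no spurious over- or undershoots) while resolving the discontinuity more sharply than first-order upwind.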

  19. Delineation of wetland areas from high resolution WorldView-2 data by object-based method

    International Nuclear Information System (INIS)

    Hassan, N; Hamid, J R A; Adnan, N A; Jaafar, M

    2014-01-01

    Various classification methods are available that can be used to delineate land cover types. Object-based classification is one such method for delineating land cover from satellite imagery. This paper focuses on the digital image processing aspects of discriminating wetland areas via an object-based method using high-resolution multispectral WorldView-2 satellite image data taken over part of the Penang Island region. This research is an attempt to improve wetland area delineation in conjunction with a range of classification techniques which can be applied to satellite data with high spatial and spectral resolution such as WorldView-2. The intent is to determine a suitable approach to delineate and map these wetland areas more appropriately. There are common parameters to take into account that are pivotal in the object-based method, namely the spatial resolution and the range of spectral channels of the imaging sensor system. The preliminary results of the study showed that object-based analysis is capable of delineating the wetland region of interest with an accuracy that is acceptable to the required tolerance for land cover classification

  20. Visual quantification of diffuse emphysema with Sakai's method and high-resolution chest CT

    International Nuclear Information System (INIS)

    Feuerstein, I.M.; McElvaney, N.G.; Simon, T.R.; Hubbard, R.C.; Crystal, R.G.

    1990-01-01

    This paper determines the accuracy and efficacy of visual quantitation of a diffuse form of pulmonary emphysema with high-resolution CT (HRCT). Twenty-five adult patients with symptomatic emphysema due to α-antitrypsin deficiency prospectively underwent HRCT with 1.5-mm sections, a high-spatial-resolution algorithm, and targeted reconstruction. Photography was performed with narrow lung windows to accentuate diffuse emphysema. Emphysema was then scored using a modification of Sakai's extent and severity scoring method. The scans were all scored by the same blinded observer. Pulmonary function testing (PFT), including diffusing capacity measurement, was performed in all patients. Results were statistically correlated using regression analysis

  1. Application of the photoelastic experimental hybrid method with a new numerical method to high stress distributions

    International Nuclear Information System (INIS)

    Hawong, Jai Sug; Lee, Dong Hun; Lee, Dong Ha; Tche, Konstantin

    2004-01-01

    In this research, the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method has been developed. This method is more precise and stable than the photoelastic experimental hybrid method with the Newton-Raphson numerical method and Gaussian elimination. Using the photoelastic experimental hybrid method with the Hooke-Jeeves numerical method, stress components can be separated from isochromatics alone, and stress intensity factors and stress concentration factors can be determined. The photoelastic experimental hybrid method with Hooke-Jeeves is also better suited to full-field experiments than the photoelastic experimental hybrid method with Newton-Raphson and Gaussian elimination

  2. Application of the Oslo method to high resolution gamma spectra

    Science.gov (United States)

    Simon, A.; Guttormsen, M.; Larsen, A. C.; Beausang, C. W.; Humby, P.

    2015-10-01

    The Hauser-Feshbach (HF) statistical model is a widely used tool for calculation of reaction cross sections, in particular for astrophysical processes. The HF model requires as inputs an optical potential, a gamma-strength function (GSF) and a level density (LD) to properly model the statistical properties of the nucleus. The Oslo method is a well-established technique to extract GSFs and LDs from experimental data, typically used for gamma spectra obtained with scintillation detectors. Here, the first application of the Oslo method to high-resolution data obtained using the Ge detectors of the STARLITER setup at TAMU is discussed. The GSFs and LDs extracted from (p,d) and (p,t) reactions on 152,154Sm targets will be presented.

  3. A new method for high-resolution characterization of hydraulic conductivity

    Science.gov (United States)

    Liu, Gaisheng; Butler, J.J.; Bohling, Geoffrey C.; Reboulet, Ed; Knobbe, Steve; Hyndman, D.W.

    2009-01-01

    A new probe has been developed for high-resolution characterization of hydraulic conductivity (K) in shallow unconsolidated formations. The probe was recently applied at the Macrodispersion Experiment (MADE) site in Mississippi where K was rapidly characterized at a resolution as fine as 0.015 m, which has not previously been possible. Eleven profiles were obtained with K varying up to 7 orders of magnitude in individual profiles. Currently, high-resolution (0.015-m) profiling has an upper K limit of 10 m/d; lower-resolution (~0.4-m) mode is used in more permeable zones pending modifications. The probe presents a new means to help address unresolved issues of solute transport in heterogeneous systems. Copyright 2009 by the American Geophysical Union.

  4. Numerical Methods for Stochastic Computations A Spectral Method Approach

    CERN Document Server

    Xiu, Dongbin

    2010-01-01

    The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
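As a concrete instance of gPC, a random quantity Y = f(X) with X ~ N(0,1) can be expanded in probabilists' Hermite polynomials, with coefficients computed by Gauss-Hermite quadrature. The example below (f = exp, truncation order 8) is a minimal sketch using NumPy's hermite_e module, not an example drawn from the book.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e as He

def gpc_coeffs(f, order, quad_pts=24):
    """Coefficients c_k of f(X) = sum_k c_k He_k(X) for X ~ N(0,1), via
    c_k = E[f(X) He_k(X)] / k!  (using E[He_k(X)^2] = k!)."""
    x, w = He.hermegauss(quad_pts)       # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2.0 * np.pi)         # normalise to the standard normal pdf
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0.0] * k + [1.0]))
                     / factorial(k) for k in range(order + 1)])

c = gpc_coeffs(np.exp, order=8)
mean = c[0]                                               # E[exp(X)] = e^(1/2)
var = sum(factorial(k) * c[k] ** 2 for k in range(1, 9))  # variance from the spectrum
```

For f = exp the exact coefficients are c_k = e^(1/2)/k!, so the truncated mean and variance converge rapidly to e^(1/2) and e^2 - e; statistics of the output are read directly off the gPC spectrum without sampling.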

  5. Pixel-Wise Classification Method for High Resolution Remote Sensing Imagery Using Deep Neural Networks

    Directory of Open Access Journals (Sweden)

    Rui Guo

    2018-03-01

    Full Text Available Considering the classification of high spatial resolution remote sensing imagery, this paper presents a novel classification method for such imagery using deep neural networks. Deep learning methods, such as a fully convolutional network (FCN model, achieve state-of-the-art performance in natural image semantic segmentation when provided with large-scale datasets and respective labels. To use data efficiently in the training stage, we first pre-segment training images and their labels into small patches as supplements of training data using graph-based segmentation and the selective search method. Subsequently, FCN with atrous convolution is used to perform pixel-wise classification. In the testing stage, post-processing with fully connected conditional random fields (CRFs is used to refine results. Extensive experiments based on the Vaihingen dataset demonstrate that our method performs better than the reference state-of-the-art networks when applied to high-resolution remote sensing imagery classification.
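Atrous (dilated) convolution, used in the FCN above, spaces the kernel taps `rate` pixels apart so the receptive field grows without adding weights or downsampling. The NumPy sketch below shows the operation itself (as the cross-correlation convention used in deep learning, with zero padding); it is an illustration of the operator, not the paper's network.

```python
import numpy as np

def atrous_conv2d(img, kernel, rate):
    """2D atrous (dilated) cross-correlation: kernel taps are spread `rate`
    pixels apart; zero padding keeps the output the same size as the input."""
    kh, kw = kernel.shape
    pad = rate * (kh // 2)
    p = np.pad(img, pad)
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            oi, oj = i * rate, j * rate
            out += kernel[i, j] * p[oi:oi + img.shape[0], oj:oj + img.shape[1]]
    return out

img = np.arange(36.0).reshape(6, 6)
delta = np.zeros((3, 3)); delta[1, 1] = 1.0   # identity kernel
box = np.ones((3, 3))                          # summing kernel
```

With rate=1 this reduces to an ordinary 3x3 convolution; larger rates sample increasingly distant context at the same parameter count.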

  6. Time lens for high-resolution neutron time-of-flight spectrometers

    International Nuclear Information System (INIS)

    Baumann, K.; Gaehler, R.; Grigoriev, P.; Kats, E.I.

    2005-01-01

    We examine analytically and numerically the imaging effects of temporal neutron lenses created by traveling magnetic fields. For fields of parabolic shape we derive the imaging equations, investigate the time magnification, the evolution of the phase-space element, the gain factor, and the effect of finite beam size. The main aberration effects are calculated numerically. The system is technologically feasible and should convert neutron time-of-flight instruments from pinhole to imaging configuration in time, thus enhancing intensity and/or time resolution. Further fields of application for high-resolution spectrometry may be opened

  7. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    Science.gov (United States)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed the example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In this method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The result showed that the fine detail of the low-resolution video can be reproduced compared with bicubic interpolation and the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the processed image using bicubic interpolation, and the average PSNRs were higher than those obtained by the well-known Freeman’s patch-based super-resolution method. Compared with that of the Freeman’s patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
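PSNR, the figure of merit quoted above, is defined from the mean squared error against a peak signal value. The helper below is a standard sketch (an 8-bit peak of 255 is assumed); the ~6 dB gains reported correspond to roughly halving the RMS error, since 20*log10(2) is about 6.02 dB.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

ref = np.zeros((8, 8))
p16 = psnr(ref, ref + 16.0)   # uniform 16-level error
p8 = psnr(ref, ref + 8.0)     # halving the error adds ~6 dB
```

Identical images give infinite PSNR, so comparisons are only meaningful between reconstructions with non-zero error.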

  8. Comparison of four machine learning methods for object-oriented change detection in high-resolution satellite imagery

    Science.gov (United States)

    Bai, Ting; Sun, Kaimin; Deng, Shiquan; Chen, Yan

    2018-03-01

    High-resolution image change detection is one of the key technologies of remote sensing application, which is of great significance for resource survey, environmental monitoring, fine agriculture, military mapping and battlefield environment detection. In this paper, for high-resolution satellite imagery, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and Adaboost models were established to verify the possibility of different machine learning applications in change detection. In order to compare the detection accuracy of the four machine learning methods, we applied them to two high-resolution images. The results show that SVM has higher overall accuracy with small samples compared to RF, Adaboost, and DBN for binary and from-to change detection. With the increase in the number of samples, RF has higher overall accuracy compared to Adaboost, SVM and DBN.

  9. Methods to assess high-resolution subsurface gas concentrations and gas fluxes in wetland ecosystems

    DEFF Research Database (Denmark)

    Elberling, Bo; Kühl, Michael; Glud, Ronnie Nøhr

    2013-01-01

    The need for measurements of soil gas concentrations and surface fluxes of greenhouse gases at high temporal and spatial resolution in wetland ecosystems has led to the introduction of several new analytical techniques and methods. In addition to the automated flux chamber methodology for high-re...

  10. Accelerated high-resolution photoacoustic tomography via compressed sensing

    Science.gov (United States)

    Arridge, Simon; Beard, Paul; Betcke, Marta; Cox, Ben; Huynh, Nam; Lucka, Felix; Ogunlade, Olumide; Zhang, Edward

    2016-12-01

    Current 3D photoacoustic tomography (PAT) systems offer either high image quality or high frame rates but are not able to deliver high spatial and temporal resolution simultaneously, which limits their ability to image dynamic processes in living tissue (4D PAT). A particular example is the planar Fabry-Pérot (FP) photoacoustic scanner, which yields high-resolution 3D images but takes several minutes to sequentially map the incident photoacoustic field on the 2D sensor plane, point-by-point. However, as the spatio-temporal complexity of many absorbing tissue structures is rather low, the data recorded in such a conventional, regularly sampled fashion is often highly redundant. We demonstrate that combining model-based, variational image reconstruction methods using spatial sparsity constraints with the development of novel PAT acquisition systems capable of sub-sampling the acoustic wave field can dramatically increase the acquisition speed while maintaining a good spatial resolution: first, we describe and model two general spatial sub-sampling schemes. Then, we discuss how to implement them using the FP interferometer and demonstrate the potential of these novel compressed sensing PAT devices through simulated data from a realistic numerical phantom and through measured data from a dynamic experimental phantom as well as from in vivo experiments. Our results show that images with good spatial resolution and contrast can be obtained from highly sub-sampled PAT data if variational image reconstruction techniques that describe the tissue structures with suitable sparsity constraints are used. In particular, we examine the use of total variation (TV) regularization enhanced by Bregman iterations. These novel reconstruction strategies offer new opportunities to dramatically increase the acquisition speed of photoacoustic scanners that employ point-by-point sequential scanning as well as reducing the channel count of parallelized schemes that use detector arrays.
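    The core variational idea, fitting the sub-sampled measurements while penalizing total variation, can be sketched on a toy inpainting problem. This is a simplified smoothed-TV gradient descent, not the authors' Bregman-enhanced solver, and all images and parameter values are illustrative:

```python
import numpy as np

def tv_reconstruct(y, mask, lam=0.15, step=0.2, n_iter=300, eps=1e-2):
    """Recover an image from sub-sampled data y = mask * x_true by gradient
    descent on  0.5*||mask*(x - y)||^2 + lam * TV_eps(x),
    where TV_eps is a smoothed total-variation penalty."""
    x = y.copy()
    for _ in range(n_iter):
        g = mask * (x - y)                          # data-fidelity gradient
        dx = np.diff(x, axis=1, append=x[:, -1:])   # forward differences
        dy = np.diff(x, axis=0, append=x[-1:, :])
        mag = np.sqrt(dx**2 + dy**2 + eps)
        px, py = dx / mag, dy / mag                 # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        x = x - step * (g - lam * div)              # descent step
    return x

rng = np.random.default_rng(1)
truth = np.zeros((32, 32)); truth[8:24, 8:24] = 1.0   # piecewise-constant "structure"
mask = (rng.random(truth.shape) < 0.3).astype(float)  # keep ~30% of the samples
y = mask * truth
rec = tv_reconstruct(y, mask)
```

Because the TV prior favours piecewise-constant images, the unsampled pixels are filled in from their sampled neighbours, which is exactly why heavy sub-sampling is tolerable for structures of low spatial complexity.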

  11. Combining the Pixel-based and Object-based Methods for Building Change Detection Using High-resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    ZHANG Zhiqiang

    2018-01-01

    Full Text Available Timely and accurate change detection of buildings provides important information for urban planning and management. With the rapid development of satellite remote sensing technology, detecting building changes from high-resolution remote sensing images has received wide attention. Given that pixel-based change detection methods often yield low accuracy while object-based methods are complicated to use, this research proposes a method that combines pixel-based and object-based methods for detecting building changes from high-resolution remote sensing images. First, based on multiple features extracted from the high-resolution images, a random forest classifier is applied to detect changed buildings at the pixel level. Then, a segmentation method is applied to segment the post-phase remote sensing image into image objects. Finally, the pixel-level changed buildings and the post-phase image objects are fused to recognize the changed building objects. Multi-temporal QuickBird images are used as experimental data. The results indicate that the proposed method can reduce the influence of environmental differences, such as light intensity and view angle, on building change detection, and effectively improve the accuracy of building change detection.

  12. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel; Rietmann, Max; Galvez, Percy; Ampuero, Jean Paul

    2017-01-01

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture, e.g., steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step

  13. Multi-example feature-constrained back-projection method for image super-resolution

    Institute of Scientific and Technical Information of China (English)

    Junlei Zhang; Dianguang Gai; Xin Zhang; Xuemei Li

    2017-01-01

    Example-based super-resolution algorithms, which predict unknown high-resolution image information using a relationship model learnt from known high- and low-resolution image pairs, have attracted considerable interest in the field of image processing. In this paper, we propose a multi-example feature-constrained back-projection method for image super-resolution. Firstly, we take advantage of a feature-constrained polynomial interpolation method to enlarge the low-resolution image. Next, we consider low-frequency images of different resolutions to provide an example pair. Then, we use adaptive kNN search to find similar patches in the low-resolution image for every image patch in the high-resolution low-frequency image, from which a regression model between similar patches is learnt. The learnt model is applied to the low-resolution high-frequency image to produce high-resolution high-frequency information. An iterative back-projection algorithm is used as the final step to determine the final high-resolution image. Experimental results demonstrate that our method improves the visual quality of the high-resolution image.
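    The kNN patch search at the core of such example-based methods can be illustrated with a brute-force version; the image, patch size and k below are hypothetical stand-ins:

```python
import numpy as np

def extract_patches(img, size):
    """All overlapping size x size patches of img, flattened to rows."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(h - size + 1)
                     for j in range(w - size + 1)])

def knn_patches(query, patches, k):
    """Indices of the k patches closest to `query` in Euclidean distance."""
    d = np.sum((patches - query.ravel()) ** 2, axis=1)
    return np.argsort(d)[:k]

rng = np.random.default_rng(2)
low = rng.random((16, 16))               # stand-in low-resolution image
bank = extract_patches(low, 5)           # candidate patches (144 rows of 25 values)
query = low[3:8, 4:9]                    # a patch to be matched
nearest = knn_patches(query, bank, k=3)  # indices of the 3 most similar patches
```

In practice the search would be made "adaptive" (e.g. varying k or the search window per patch) and accelerated with a spatial index rather than the exhaustive scan shown here.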

  14. Adaptive optics with pupil tracking for high resolution retinal imaging.

    Science.gov (United States)

    Sahin, Betul; Lamory, Barbara; Levecq, Xavier; Harms, Fabrice; Dainty, Chris

    2012-02-01

    Adaptive optics, when integrated into retinal imaging systems, compensates for rapidly changing ocular aberrations in real time and results in improved high resolution images that reveal the photoreceptor mosaic. Imaging the retina at high resolution has numerous potential medical applications, and yet for the development of commercial products that can be used in the clinic, the complexity and high cost of the present research systems have to be addressed. We present a new method to control the deformable mirror in real time based on pupil tracking measurements which uses the default camera for the alignment of the eye in the retinal imaging system and requires no extra cost or hardware. We also present the first experiments done with a compact adaptive optics flood illumination fundus camera where it was possible to compensate for the higher order aberrations of a moving model eye and in vivo in real time based on pupil tracking measurements, without the real time contribution of a wavefront sensor. As an outcome of this research, we showed that pupil tracking can be effectively used as a low cost and practical adaptive optics tool for high resolution retinal imaging because eye movements constitute an important part of the ocular wavefront dynamics.

  15. High-resolution satellite image segmentation using Hölder exponents

    Indian Academy of Sciences (India)

    Keywords: high resolution image; texture analysis; segmentation; IKONOS; Hölder exponent; cluster.

  16. Quality and sensitivity of high-resolution numerical simulation of urban heat islands

    International Nuclear Information System (INIS)

    Li, Dan; Bou-Zeid, Elie

    2014-01-01

    High-resolution numerical simulations of the urban heat island (UHI) effect with the widely-used Weather Research and Forecasting (WRF) model are assessed. Both the sensitivity of the results to the simulation setup, and the quality of the simulated fields as representations of the real world, are investigated. Results indicate that the WRF-simulated surface temperatures are more sensitive to the planetary boundary layer (PBL) scheme choice during nighttime, and more sensitive to the surface thermal roughness length parameterization during daytime. The urban surface temperatures simulated by WRF are also highly sensitive to the urban canopy model (UCM) used. The implementation in this study of an improved UCM (the Princeton UCM or PUCM) that allows the simulation of heterogeneous urban facets and of key hydrological processes, together with the so-called CZ09 parameterization for the thermal roughness length, significantly reduces the bias (<1.5 °C) in the surface temperature fields as compared to satellite observations during daytime. The boundary layer potential temperature profiles are captured by WRF reasonably well at both urban and rural sites; the biases in these profiles relative to aircraft-mounted sensor measurements are on the order of 1.5 °C. Changing UCMs and PBL schemes does not significantly alter the performance of WRF in reproducing bulk boundary layer temperature profiles. The results illustrate the wide range of urban environmental conditions that various configurations of WRF can produce, and the significant biases that should be assessed before inferences are made based on WRF outputs. The optimal set-up of WRF-PUCM developed in this paper also paves the way for a confident exploration of the city-scale impacts of UHI mitigation strategies in the companion paper (Li et al 2014). (letter)

  17. High-resolution numerical simulation of summer wind field comparing WRF boundary-layer parametrizations over complex Arctic topography: case study from central Spitsbergen

    Czech Academy of Sciences Publication Activity Database

    Láska, K.; Chládová, Zuzana; Hošek, Jiří

    2017-01-01

    Roč. 26, č. 4 (2017), s. 391-408 ISSN 0941-2948 Institutional support: RVO:68378289 Keywords : surface wind field * model evaluation * topographic effect * circulation pattern * Svalbard Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 1.989, year: 2016 http://www.schweizerbart.de/papers/metz/detail/prepub/87659/High_resolution_numerical_simulation_of_summer_wind_field_comparing_WRF_boundary_layer_parametrizations_over_complex_Arctic_topography_case_study_from_central_Spitsbergen

  18. Methods of numerical relativity

    International Nuclear Information System (INIS)

    Piran, T.

    1983-01-01

    Numerical relativity is an alternative to analytical methods for obtaining solutions of the Einstein equations. Numerical methods are particularly useful for studying the generation of gravitational radiation by potentially strong sources. The author reviews the analytical background, the numerical analysis aspects and techniques, and some of the difficulties involved in numerical relativity. (Auth.)

  19. High resolution CT in diffuse lung disease

    International Nuclear Information System (INIS)

    Webb, W.R.

    1995-01-01

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and to define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.)

  20. High resolution CT in diffuse lung disease

    Energy Technology Data Exchange (ETDEWEB)

    Webb, W R [California Univ., San Francisco, CA (United States). Dept. of Radiology

    1996-12-31

    High resolution CT (computerized tomography) was discussed in detail. The conclusions were that HRCT is able to define lung anatomy at the secondary lobular level and to define a variety of abnormalities in patients with diffuse lung diseases. Evidence from numerous studies indicates that HRCT can play a major role in the assessment of diffuse infiltrative lung disease and is indicated clinically (95 refs.).

  1. Performance of the operational high-resolution numerical weather predictions of the Daphne project

    Science.gov (United States)

    Tegoulias, Ioannis; Pytharoulis, Ioannis; Karacostas, Theodore; Kartsios, Stergios; Kotsopoulos, Stelios; Bampzelis, Dimitrios

    2015-04-01

    In the framework of the DAPHNE project, the Department of Meteorology and Climatology (http://meteo.geo.auth.gr) of the Aristotle University of Thessaloniki, Greece, utilizes the nonhydrostatic Weather Research and Forecasting model with the Advanced Research dynamic solver (WRF-ARW) in order to produce high-resolution weather forecasts over Thessaly in central Greece. The aim of the DAPHNE project is to tackle the problem of drought in this area by means of weather modification. Cloud seeding assists convective clouds to produce rain more efficiently or reduces hailstone size in favour of raindrops. The most favourable conditions for such a weather modification programme in Thessaly occur in the period from March to October, when convective clouds are triggered more frequently. Three model domains, using 2-way telescoping nesting, cover: i) Europe, the Mediterranean Sea and northern Africa (D01), ii) Greece (D02) and iii) the wider region of Thessaly (D03; at selected periods) at horizontal grid-spacings of 15 km, 5 km and 1 km, respectively. This research work intends to describe the atmospheric model setup and analyse its performance during a selected period of the operational phase of the project. The statistical evaluation of the high-resolution operational forecasts is performed using surface observations, gridded fields and radar data. Well-established point verification methods, combined with novel object-based methods, provide an in-depth analysis of the model skill. Spatial characteristics are adequately captured but a variable time lag between forecast and observation is noted.
Acknowledgments: This research work has been co-financed by the European Union (European Regional Development Fund) and Greek national funds, through the action "COOPERATION 2011: Partnerships of Production and Research Institutions in Focused Research and Technology Sectors" (contract number 11SYN_8_1088 - DAPHNE) in the framework of the operational programme "Competitiveness

  2. A Saliency Guided Semi-Supervised Building Change Detection Method for High Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Bin Hou

    2016-08-01

    Full Text Available Characterization of up-to-date information on the Earth's surface is an important application providing insights for urban planning, resource monitoring and environmental studies. A large number of change detection (CD) methods have been developed to address it by utilizing remote sensing (RS) images. The advent of high resolution (HR) remote sensing images further poses challenges to traditional CD methods and offers opportunities to object-based CD methods. While several kinds of geospatial objects are recognized, this manuscript mainly focuses on buildings. Specifically, we propose a novel automatic approach combining pixel-based strategies with object-based ones for detecting building changes in HR remote sensing images. A multiresolution contextual morphological transformation called extended morphological attribute profiles (EMAPs) allows the extraction of geometrical features related to the structures within the scene at different scales. Pixel-based post-classification is executed on EMAPs using hierarchical fuzzy clustering. Subsequently, hierarchical fuzzy frequency vector histograms are formed based on the image objects acquired by simple linear iterative clustering (SLIC) segmentation. Then, saliency and the morphological building index (MBI) extracted from difference images are used to generate a pseudo training set. Ultimately, object-based semi-supervised classification is implemented on this training set by applying random forest (RF). Most of the important changes are detected by the proposed method in our experiments. Effectiveness was checked using visual and numerical evaluation.

  3. An introduction to numerical methods and analysis

    CERN Document Server

    Epperson, James F

    2013-01-01

    Praise for the First Edition: ". . . outstandingly appealing with regard to its style, contents, considerations of requirements of practice, choice of examples, and exercises." -Zentralblatt MATH; ". . . carefully structured with many detailed worked examples." -The Mathematical Gazette. The Second Edition of the highly regarded An Introduction to Numerical Methods and Analysis provides a fully revised guide to numerical approximation. The book continues to be accessible and expertly guides readers through the many available techniques of numerical methods and analysis. An Introduction to

  4. Experimental Investigation and High Resolution Simulation of In-Situ Combustion Processes

    Energy Technology Data Exchange (ETDEWEB)

    Margot Gerritsen; Tony Kovscek

    2008-04-30

    This final technical report describes work performed for the project 'Experimental Investigation and High Resolution Numerical Simulator of In-Situ Combustion Processes', DE-FC26-03NT15405. In summary, this work improved our understanding of in-situ combustion (ISC) process physics and oil recovery. This understanding was translated into improved conceptual models and a suite of software algorithms that extended predictive capabilities. We pursued experimental, theoretical, and numerical tasks during the performance period. The specific project objectives were (i) identification, experimentally, of chemical additives/injectants that improve combustion performance and delineation of the physics of improved performance, (ii) establishment of a benchmark one-dimensional, experimental data set for verification of in-situ combustion dynamics computed by simulators, (iii) develop improved numerical methods that can be used to describe in-situ combustion more accurately, and (iv) to lay the underpinnings of a highly efficient, 3D, in-situ combustion simulator using adaptive mesh refinement techniques and parallelization. We believe that project goals were met and exceeded as discussed.

  5. Evaluation of the CPU time for solving the radiative transfer equation with high-order resolution schemes applying the normalized weighting-factor method

    Science.gov (United States)

    Xamán, J.; Zavala-Guillén, I.; Hernández-López, I.; Uriarte-Flores, J.; Hernández-Pérez, I.; Macías-Melo, E. V.; Aguilar-Castro, K. M.

    2018-03-01

    In this paper, we evaluated the convergence rate (CPU time) of a new mathematical formulation for the numerical solution of the radiative transfer equation (RTE) with several High-Order (HO) and High-Resolution (HR) schemes. In computational fluid dynamics, this procedure is known as the Normalized Weighting-Factor (NWF) method and it is adopted here. The NWF method is used to incorporate the high-order resolution schemes in the discretized RTE. The NWF method is compared, in terms of computer time needed to obtain a converged solution, with the widely used deferred-correction (DC) technique for the calculations of a two-dimensional cavity with emitting-absorbing-scattering gray media using the discrete ordinates method. Six parameters, viz. the grid size, the order of quadrature, the absorption coefficient, the emissivity of the boundary surface, the under-relaxation factor, and the scattering albedo, are considered to evaluate ten schemes. The results showed that with the DC method, in general, the scheme with the lowest CPU time is SOU. In contrast with the results of the DC procedure, the CPU times for the DIAMOND and QUICK schemes using the NWF method are shown to be between 3.8 and 23.1% and between 12.6 and 56.1% faster, respectively. However, the other schemes are more time consuming when the NWF is used instead of the DC method. Additionally, a second test case was presented, and the results showed that depending on the problem under consideration, the NWF procedure may be computationally faster or slower than the DC method. As an example, the CPU times for the QUICK and SMART schemes are 61.8 and 203.7% slower, respectively, when the NWF formulation is used for the second test case. Finally, future research is required to explore the computational cost of the NWF method in more complex problems.

  6. A Coastal Bay Summer Breeze Study, Part 2: High-resolution Numerical Simulation of Sea-breeze Local Influences

    Science.gov (United States)

    Calmet, Isabelle; Mestayer, Patrice G.; van Eijk, Alexander M. J.; Herlédant, Olivier

    2018-04-01

    We complete the analysis of the data obtained during the experimental campaign around the semi-circular bay of Quiberon, France, during two weeks in June 2006 (see Part 1). A reanalysis of numerical simulations performed with the Advanced Regional Prediction System model is presented. Three nested computational domains with increasing horizontal resolution down to 100 m, and a vertical resolution of 10 m at the lowest level, are used to reproduce the local-scale variations of the breeze close to the water surface of the bay. The Weather Research and Forecasting mesoscale model is used to assimilate the meteorological data. Comparisons of the simulations with the experimental data obtained at three sites reveal a good agreement of the flow over the bay and around the Quiberon peninsula during the daytime periods of sea-breeze development and weakening. In conditions of offshore synoptic flow, the simulations demonstrate that the semi-circular shape of the bay induces a corresponding circular shape in the offshore zones of stagnant flow preceding the sea-breeze onset, which move further offshore thereafter. The higher-resolution simulations are successful in reproducing the small-scale impacts of the peninsula and local coasts (breeze deviations, wakes, flow divergences), and in demonstrating the complexity of the breeze fields close to the surface over the bay. Our reanalysis also provides guidance for numerical simulation strategies for analyzing the structure and evolution of the near-surface breeze over a semi-circular bay, and for forecasting important flow details for use in upcoming sailing competitions.

  7. Numerical methods using Matlab

    CERN Document Server

    Lindfield, George

    2012-01-01

    Numerical Methods using MATLAB, 3e, is an extensive reference offering hundreds of useful and important numerical algorithms that can be implemented into MATLAB for a graphical interpretation to help researchers analyze a particular outcome. Many worked examples are given together with exercises and solutions to illustrate how numerical methods can be used to study problems that have applications in the biosciences, chaos, optimization, engineering and science across the board.

  8. A Method of Spatial Mapping and Reclassification for High-Spatial-Resolution Remote Sensing Image Classification

    Directory of Open Access Journals (Sweden)

    Guizhou Wang

    2013-01-01

    Full Text Available This paper presents a new classification method for high-spatial-resolution remote sensing images based on a strategic mechanism of spatial mapping and reclassification. The proposed method includes four steps. First, the multispectral image is classified by a traditional pixel-based classification method (support vector machine. Second, the panchromatic image is subdivided by watershed segmentation. Third, the pixel-based multispectral image classification result is mapped to the panchromatic segmentation result based on a spatial mapping mechanism and the area dominant principle. During the mapping process, an area proportion threshold is set, and the regional property is defined as unclassified if the maximum area proportion does not surpass the threshold. Finally, unclassified regions are reclassified based on spectral information using the minimum distance to mean algorithm. Experimental results show that the classification method for high-spatial-resolution remote sensing images based on the spatial mapping mechanism and reclassification strategy can make use of both panchromatic and multispectral information, integrate the pixel- and object-based classification methods, and improve classification accuracy.
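    The spatial mapping step, assigning each watershed segment its majority pixel class under the area dominant principle and marking it unclassified when the dominant proportion does not surpass the threshold, can be sketched as follows (the label maps and threshold are illustrative, not the paper's data):

```python
import numpy as np

def map_to_segments(pixel_labels, segments, threshold=0.6, unclassified=-1):
    """Assign each segment its dominant pixel class (area dominant principle);
    if the dominant class's area proportion does not surpass `threshold`,
    the segment is left unclassified for later reclassification."""
    result = {}
    for seg in np.unique(segments):
        classes, counts = np.unique(pixel_labels[segments == seg],
                                    return_counts=True)
        best = int(np.argmax(counts))
        proportion = counts[best] / counts.sum()
        result[int(seg)] = (int(classes[best]) if proportion > threshold
                            else unclassified)
    return result

# Hypothetical pixel-based classes and watershed segment ids.
pixel_labels = np.array([[1, 1, 2],
                         [1, 2, 1],
                         [3, 3, 3]])
segments = np.array([[0, 0, 1],
                     [0, 1, 1],
                     [2, 2, 2]])
labels = map_to_segments(pixel_labels, segments)
```

Raising the threshold leaves more segments unclassified, which are then handled by the paper's spectral reclassification step (minimum distance to mean).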

  9. Numerical simulation study for atomic-resolution x-ray fluorescence holography

    International Nuclear Information System (INIS)

    Xie Honglan; Gao Hongyi; Chen Jianwen; Xiong Shisheng; Xu Zhizhan; Wang Junyue; Zhu Peiping; Xian Dingchang

    2003-01-01

    Based on the principle of x-ray fluorescence holography, an iron single crystal model of a body-centred cubic lattice is numerically simulated. From the fluorescence hologram produced numerically, the Fe atomic images were reconstructed. The atomic images of the (001), (100), (010) crystallographic planes were consistent with the corresponding atomic positions of the model. The result indicates that one can obtain internal structure images of single crystals at atomic-resolution by using x-ray fluorescence holography

  10. Calibration of high resolution digital camera based on different photogrammetric methods

    International Nuclear Information System (INIS)

    Hamid, N F A; Ahmad, A

    2014-01-01

    This paper presents a method of calibrating a high-resolution digital camera based on different configurations comprising stereo and convergent imaging. Both methods are performed in laboratory and in field calibration. Laboratory calibration is based on a 3D test field where a calibration plate of dimension 0.4 m × 0.4 m with a grid of targets at different heights is used. Field calibration uses the same concept of a 3D test field, comprising 81 target points located on flat ground over an area of 9 m × 9 m. In this study, a non-metric high-resolution digital camera, a Canon PowerShot SX230 HS, was calibrated in the laboratory and in the field using different configurations for data acquisition. The aim of the calibration is to investigate whether the internal digital camera parameters, such as focal length, principal point and other parameters, remain the same or vary. In the laboratory, a scale bar is placed in the test field for scaling the image, and approximate coordinates are used for the calibration process. A similar method is utilized in the field calibration. For both test fields, the digital images were acquired within a short period using stereo and convergent configurations. For field calibration, aerial digital images were acquired using an unmanned aerial vehicle (UAV) system. All the images were processed using photogrammetric calibration software. Different calibration results were obtained for the laboratory and field calibrations. The accuracy of the results is evaluated based on standard deviation. In general, for photogrammetric and other applications the digital camera must be calibrated to obtain accurate measurements or results. The best method of calibration depends on the type of application.
Finally, for most applications the digital camera is calibrated on site, hence, field calibration is the best method of calibration and could be employed for obtaining accurate

  11. A high-resolution neutron spectra unfolding method using the Genetic Algorithm technique

    CERN Document Server

    Mukherjee, B

    2002-01-01

    Bonner sphere spectrometers (BSS) are commonly used to determine the neutron spectra within various nuclear facilities. Sophisticated mathematical tools are used to unfold the neutron energy distribution from the output data of the BSS. This paper highlights a novel high-resolution neutron spectrum unfolding method using the Genetic Algorithm (GA) technique. The GA imitates the biological evolution process prevailing in nature to solve complex optimisation problems. The GA method was utilised to evaluate the neutron energy distribution, average energy, fluence and equivalent dose rates at important workplaces of a DIDO-class research reactor and a high-energy superconducting heavy-ion cyclotron. The spectrometer was calibrated with a ²⁴¹Am/Be (α,n) neutron standard source. The results of the GA method agreed satisfactorily with the results obtained by using the well-known BUNKI neutron spectrum unfolding code.
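    GA-based unfolding of this kind searches for a fluence vector whose folded detector response best matches the measured Bonner sphere readings. A toy sketch with a hypothetical 4-sphere response matrix and 3-bin spectrum (not the paper's BSS calibration data) and a simple elitist GA:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 4-sphere response matrix R (reading per unit fluence in each
# energy bin) and a "true" 3-bin spectrum phi; BSS readings are M = R @ phi.
R = np.array([[0.90, 0.30, 0.10],
              [0.40, 0.80, 0.30],
              [0.10, 0.50, 0.90],
              [0.05, 0.20, 0.70]])
phi_true = np.array([2.0, 1.0, 0.5])
M = R @ phi_true

def fitness(pop):
    """Negative squared residual between folded spectra and the measurements."""
    return -np.sum((pop @ R.T - M) ** 2, axis=1)

pop = rng.uniform(0.0, 5.0, size=(60, 3))               # random initial spectra
for _ in range(300):
    order = np.argsort(fitness(pop))[::-1]
    parents = pop[order[:20]]                           # elitist selection
    mates = parents[rng.integers(0, 20, size=(40, 2))]
    pick = rng.random((40, 3)) < 0.5                    # uniform crossover
    children = np.where(pick, mates[:, 0], mates[:, 1])
    children += rng.normal(0.0, 0.05, children.shape)   # Gaussian mutation
    children = np.clip(children, 0.0, None)             # fluence is non-negative
    pop = np.vstack([parents, children])

best = pop[np.argmax(fitness(pop))]                     # unfolded spectrum
```

Real unfolding uses many more energy bins than spheres, so the problem is under-determined and the GA's constraints (non-negativity, smoothness terms in the fitness) matter far more than in this over-determined toy.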

  12. New device based on the super spatial resolution (SSR) method

    International Nuclear Information System (INIS)

    Soluri, A.; Atzeni, G.; Ucci, A.; Bellone, T.; Cusanno, F.; Rodilossi, G.; Massari, R.

    2013-01-01

    It has recently been described that innovative methods, namely Super Spatial Resolution (SSR), can be used to improve scintigraphic imaging. The aim of SSR techniques is to enhance the resolution of an imaging system using information from several images. In this paper we describe a new experimental apparatus that could be used for molecular imaging and small-animal imaging. We present a new, completely automated device that uses the SSR method and provides images with better spatial resolution than the original. Preliminary small-animal imaging studies confirm the feasibility of a very high resolution scintigraphic imaging system and the possibility of gamma cameras using the SSR method for applications in functional imaging. -- Highlights: • Super spatial resolution produces a high-resolution image from scintigraphic images. • Resolution improvement depends on the signal-to-noise ratio of the original images. • The SSR shows significant improvement in spatial resolution in scintigraphic images. • The SSR method is potentially applicable to all scintigraphic devices

  13. Hourglass-ShapeNetwork Based Semantic Segmentation for High Resolution Aerial Imagery

    Directory of Open Access Journals (Sweden)

    Yu Liu

    2017-05-01

    Full Text Available A new convolutional neural network (CNN) architecture for semantic segmentation of high resolution aerial imagery is proposed in this paper. The proposed architecture follows an hourglass-shaped network (HSN) design, structured into encoding and decoding stages. By taking advantage of recent advances in CNN design, we use composed inception modules to replace common convolutional layers, providing the network with multi-scale receptive areas with rich context. Additionally, in order to reduce spatial ambiguities in the up-sampling stage, skip connections with residual units are also employed to feed encoding-stage information directly to the decoder. Moreover, overlap inference is employed to alleviate the boundary effects that occur when high resolution images are inferred from small-sized patches. Finally, we also propose a post-processing method based on weighted belief propagation to visually enhance the classification results. Extensive experiments on the Vaihingen and Potsdam datasets demonstrate that the proposed architecture outperforms three reference state-of-the-art network designs both numerically and visually.

  14. Numerical solution of Newton's cooling differential equation by the methods of Euler and Runge-Kutta

    Directory of Open Access Journals (Sweden)

    Andresa Pescador

    2016-04-01

    Full Text Available This article presents first-order differential equations, a very important branch of mathematics with wide applicability in physics, biology and economics as well as in mathematics itself. The objective of this study was to analyze the solution of the equation defining Newton's law of cooling and to verify its behavior using applications that can serve in the classroom as an auxiliary instrument for the teacher, answering students' questions and motivating them to build their own knowledge. The equation was solved by two numerical methods, the Euler method and the Runge-Kutta method. Finally, the approximate numerical solutions were compared with the exact analytical solution.
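
The comparison the article describes can be reproduced in a few lines. Newton's law of cooling is dT/dt = -k (T - T_env), with exact solution T(t) = T_env + (T0 - T_env) e^(-kt). The parameter values below are illustrative, not taken from the article.

```python
import math

def f(T, k=0.5, T_env=20.0):
    """Right-hand side of Newton's cooling law dT/dt = -k (T - T_env)."""
    return -k * (T - T_env)

def euler(T0, dt, steps):
    T = T0
    for _ in range(steps):
        T += dt * f(T)                       # explicit Euler update
    return T

def rk4(T0, dt, steps):
    T = T0
    for _ in range(steps):                   # classical 4th-order Runge-Kutta
        k1 = f(T)
        k2 = f(T + 0.5 * dt * k1)
        k3 = f(T + 0.5 * dt * k2)
        k4 = f(T + dt * k3)
        T += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return T

T0, t_end, n = 90.0, 4.0, 40
exact = 20.0 + (T0 - 20.0) * math.exp(-0.5 * t_end)
print(abs(euler(T0, t_end / n, n) - exact))  # Euler: visible error
print(abs(rk4(T0, t_end / n, n) - exact))    # RK4: far smaller error
```

With the same step size, the 4th-order method is several orders of magnitude more accurate, which is the point the article makes for classroom use.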

  15. Polycrystalline magma behaviour in dykes: Insights from high-resolution numerical models

    Science.gov (United States)

    Yamato, Philippe; Duretz, Thibault; Tartèse, Romain; May, Dave

    2013-04-01

    The presence of a crystalline load in magmas modifies their effective rheology and thus their flow behaviour. In dykes, for instance, the presence of crystals denser than the melt reduces the ascent velocity and changes the shape of the velocity profile from a Newtonian Poiseuille flow to a Bingham-type flow. Nevertheless, several issues remain poorly understood and need to be quantified: (1) What are the mechanisms controlling crystal segregation during magma ascent in dykes? (2) How does crystal transport within a melt depend on crystal concentration, geometry, size and density? (3) Do crystals evolve in isolation from each other or as clusters? (4) What is the influence of melt inertia within the system? In this study, we present numerical models following the setup previously used in Yamato et al. (2012). Our model setup simulates an effective pressure gradient between the base and the top of a channel (representing a dyke) by pushing a rigid piston into a magmatic mush that comprises crystals and melt and is perforated by a hole. The initial resolution of the models (401x1551 nodes) has been doubled in order to ensure that the smallest crystalline fractions are sufficiently well resolved. Results show that the melt phase can be squeezed out of a crystal-rich magma when subjected to a given range of pressure gradients, and that clustering of crystals might be an important parameter controlling their behaviour. This demonstrates that crystal-melt segregation in dykes during magma ascent constitutes a viable mechanism for magmatic differentiation of residual melts. These results also explain how isolated crystal clusters and melt pockets with different chemistry can be formed. In addition, we discuss the impact of taking inertia into account in our models. Reference: Yamato, P., Tartèse, R., Duretz, T., May, D.A., 2012. Numerical modelling of magma transport in dykes. Tectonophysics 526-529, 97-109.
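
The Poiseuille-to-Bingham transition mentioned in the abstract can be illustrated with the textbook analytic profiles for pressure-driven flow in a plane channel of half-width h: the Newtonian profile is parabolic, while a Bingham fluid with yield stress tau_y develops a flat, unyielded plug around the centreline and flows more slowly at the same pressure gradient. All parameter values below are illustrative assumptions, not values from the models in the paper.

```python
def poiseuille(y, G=1.0, mu=1.0, h=1.0):
    """Newtonian plane-Poiseuille velocity at distance y from the centreline."""
    return G / (2 * mu) * (h * h - y * y)

def bingham(y, G=1.0, mu=1.0, h=1.0, tau_y=0.3):
    """Bingham plane-channel velocity: unyielded plug for |y| < tau_y / G."""
    yp = tau_y / G                 # yield surface position
    ya = max(abs(y), yp)           # inside the plug the velocity is u(yp)
    return G / (2 * mu) * (h * h - ya * ya) - tau_y / mu * (h - ya)

ys = [i / 10 for i in range(-10, 11)]
newt = [poiseuille(y) for y in ys]
bing = [bingham(y) for y in ys]
print(max(newt), max(bing))            # Bingham is slower at the same gradient
print(bingham(0.0) == bingham(0.2))    # flat plug around the centreline
```

The plug is exactly the feature that lets crystal-poor melt shear near the walls while a crystal-rich core rides along, consistent with the segregation mechanism the study quantifies numerically.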

  16. Numerical analysis in electromagnetics the TLM method

    CERN Document Server

    Saguet, Pierre

    2013-01-01

    The aim of this book is to give a broad overview of the TLM (Transmission Line Matrix) method, which is one of the "time-domain numerical methods". These methods are known for their heavy demands on computer resources, but they have the advantage of being highly general. The TLM method has acquired a reputation as a powerful and effective tool among numerous teams and still benefits today from significant theoretical developments. In particular, in recent years, its ability to simulate various situations with excellent precision, including complex materials, has been

  17. High resolution through-the-wall radar image based on beamspace eigenstructure subspace methods

    Science.gov (United States)

    Yoon, Yeo-Sun; Amin, Moeness G.

    2008-04-01

    Through-the-wall imaging (TWI) is a challenging problem, even if the wall parameters and characteristics are known to the system operator. Proper target classification and correct imaging interpretation require the application of high resolution techniques using limited array size. In inverse synthetic aperture radar (ISAR), signal subspace methods such as Multiple Signal Classification (MUSIC) are used to obtain high resolution imaging. In this paper, we adopt signal subspace methods and apply them to the 2-D spectrum obtained from the delay-and-sum beamforming image. This is in contrast to ISAR, where raw data, in frequency and angle, is directly used to form the estimate of the covariance matrix and array response vector. Using beams rather than raw data has two main advantages: it improves the signal-to-noise ratio (SNR), and it can correctly image typical indoor extended targets, such as tables and cabinets, as well as point targets. The paper presents both simulated and experimental results using synthesized and real data, and compares the performance of beam-space MUSIC and the Capon beamformer. The experimental data were collected at the test facility of the Radar Imaging Laboratory, Villanova University.
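
The SNR advantage of working in beam space can be demonstrated with a toy array simulation: an M-element uniform linear array sums a plane-wave signal coherently while per-sensor noise adds incoherently, so delay-and-sum beamforming improves SNR by roughly a factor M. Array geometry, noise level, and snapshot count below are illustrative assumptions, not the paper's experimental parameters.

```python
import math, cmath, random

random.seed(0)
M, N = 8, 2000                  # sensors, snapshots
d, theta = 0.5, 0.3             # spacing (wavelengths), arrival angle (rad)
steer = [cmath.exp(2j * math.pi * d * m * math.sin(theta)) for m in range(M)]

def cgauss(sigma):
    """Circular complex Gaussian sample with variance sigma**2."""
    return complex(random.gauss(0, sigma / math.sqrt(2)),
                   random.gauss(0, sigma / math.sqrt(2)))

pn_sensor = pn_beam = 0.0
for _ in range(N):
    noise = [cgauss(2.0) for _ in range(M)]          # per-sensor noise
    # delay-and-sum beam steered at theta: noise adds incoherently
    beam_noise = sum(steer[m].conjugate() * noise[m] for m in range(M)) / M
    pn_sensor += sum(abs(n) ** 2 for n in noise) / M  # mean noise power/sensor
    pn_beam += abs(beam_noise) ** 2                   # noise power in the beam

gain = pn_sensor / pn_beam       # signal power is unchanged by the beamformer
print(round(gain, 1))            # close to M = 8
```

MUSIC applied to such beams then works on a covariance matrix with this improved SNR, which is the first of the two advantages claimed in the abstract.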

  18. Extraction Method for Earthquake-Collapsed Building Information Based on High-Resolution Remote Sensing

    International Nuclear Information System (INIS)

    Chen, Peng; Wu, Jian; Liu, Yaolin; Wang, Jing

    2014-01-01

    At present, the extraction of earthquake disaster information from remote sensing data relies on visual interpretation. However, this technique cannot effectively and quickly obtain precise and efficient information for earthquake relief and emergency management. Collapsed buildings in the town of Zipingpu after the Wenchuan earthquake were used as a case study to validate two rapid extraction methods for earthquake-collapsed building information, based on pixel-oriented and object-oriented theories. The pixel-oriented method is based on multi-layer regional segments that embody the core layers and segments of the object-oriented method. The key idea is to mask, layer by layer, all image information, including that on the collapsed buildings. Compared with traditional techniques, the pixel-oriented method is innovative because it allows considerably faster computer processing. As for the object-oriented method, a multi-scale segmentation algorithm was applied to build a three-layer hierarchy. By analyzing the spectrum, texture, shape, location, and context of individual object classes in different layers, a fuzzy decision rule system was established for the extraction of earthquake-collapsed building information. We compared the two sets of results using three criteria: precision assessment, visual effect, and principle. Both methods can extract earthquake-collapsed building information quickly and accurately. The object-oriented method successfully overcomes the salt-and-pepper noise caused by the spectral diversity of high-resolution remote sensing data and solves the problems of "same object, different spectra" and "same spectrum, different objects". With an overall accuracy of 90.38%, the method achieves more scientific and accurate results than the pixel-oriented method (76.84%). The object-oriented image analysis method can be extensively applied in the extraction of earthquake disaster information based on high-resolution remote sensing
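
Why object-based analysis suppresses salt-and-pepper noise can be shown with a toy binary mask: a 3x3 majority vote (a crude stand-in for grouping pixels into objects) removes isolated misclassified pixels while keeping the interior of a genuine collapsed-building region. The mask below is synthetic, not Zipingpu data.

```python
def majority_filter(grid):
    """3x3 majority vote over a binary grid; border cells left unchanged."""
    rows, cols = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            votes = sum(grid[a][b] for a in (i - 1, i, i + 1)
                                   for b in (j - 1, j, j + 1))
            out[i][j] = 1 if votes >= 5 else 0
    return out

mask = [[0, 0, 0, 0, 0, 0, 0],
        [0, 1, 0, 0, 0, 0, 0],   # isolated pixel: per-pixel noise
        [0, 0, 0, 0, 0, 0, 0],
        [0, 0, 0, 1, 1, 1, 0],
        [0, 0, 0, 1, 1, 1, 0],   # a building-sized region
        [0, 0, 0, 1, 1, 1, 0],
        [0, 0, 0, 0, 0, 0, 0]]
filtered = majority_filter(mask)
print(filtered[1][1], filtered[4][4])  # prints "0 1": noise removed, region kept
```

A real object-oriented workflow goes further (multi-scale segments, fuzzy rules over spectrum, texture and shape), but the spatial-support principle is the same.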

  19. Numerical modeling of permafrost dynamics in Alaska using a high spatial resolution dataset

    Directory of Open Access Journals (Sweden)

    E. E. Jafarov

    2012-06-01

    Full Text Available Climate projections for the 21st century indicate that there could be a pronounced warming and permafrost degradation in the Arctic and sub-Arctic regions. Climate warming is likely to cause permafrost thawing with subsequent effects on surface albedo, hydrology, soil organic matter storage and greenhouse gas emissions.

    To assess possible changes in the permafrost thermal state and active layer thickness, we implemented the GIPL2-MPI transient numerical model for the entire Alaska permafrost domain. The model input parameters are spatial datasets of mean monthly air temperature and precipitation, prescribed thermal properties of the multilayered soil column, and water content specific to each soil class and geographical location. As climate forcing, we used a composite of five IPCC General Circulation Models that has been downscaled to 2 by 2 km spatial resolution by the Scenarios Network for Alaska Planning (SNAP) group.

    In this paper, we present the modeling results based on input from the five-model composite with the A1B carbon emission scenario. The model has been calibrated against annual borehole temperature measurements for the State of Alaska. We also performed more detailed calibration for fifteen shallow borehole stations where high quality data are available on a daily basis. To validate the model performance, we compared simulated active layer thicknesses with observed data from Circumpolar Active Layer Monitoring (CALM) stations. The calibrated model was used to address possible ground temperature changes for the 21st century. The model simulation results show that widespread permafrost degradation in Alaska could begin between 2040 and 2099 within the vast area south of the Brooks Range, except for the high altitude regions of the Alaska Range and Wrangell Mountains.
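
The core physics a permafrost model like GIPL2 integrates is transient heat conduction in the ground; a minimal 1-D explicit finite-difference sketch below shows the seasonal surface wave being damped with depth (the real model additionally treats soil layering, moisture and the latent heat of phase change). All parameter values are illustrative assumptions.

```python
import math

alpha = 1.0e-6                  # thermal diffusivity, m^2/s (illustrative)
dz, nz = 0.1, 50                # 5 m soil column with 0.1 m cells
dt = 0.4 * dz * dz / alpha      # stable explicit step (< dz^2 / (2 alpha))
year = 365.0 * 24 * 3600
T = [-2.0] * nz                 # initial permafrost temperature, degC

dev_shallow = dev_deep = 0.0
for n in range(int(year / dt)):
    # seasonal surface forcing around a -2 degC mean
    T[0] = -2.0 + 10.0 * math.sin(2 * math.pi * n * dt / year)
    Tn = T[:]
    for i in range(1, nz - 1):  # interior FTCS update; bottom cell held fixed
        Tn[i] = T[i] + alpha * dt / (dz * dz) * (T[i - 1] - 2 * T[i] + T[i + 1])
    T = Tn
    dev_shallow = max(dev_shallow, abs(T[5] + 2.0))   # 0.5 m depth
    dev_deep = max(dev_deep, abs(T[40] + 2.0))        # 4.0 m depth

print(dev_shallow > dev_deep)   # the annual wave is damped with depth
```

This damping with depth is why deep borehole temperatures respond slowly to surface warming, and why multi-decade forcing is needed before degradation reaches the permafrost table.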

  20. Mathematical properties of numerical inversion for jet calibrations

    Energy Technology Data Exchange (ETDEWEB)

    Cukierman, Aviv [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Nachman, Benjamin, E-mail: bnachman@cern.ch [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94704 (United States)

    2017-06-21

    Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased and common approximations to the calibrated jet energy tend to over-estimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will be increasingly important to consider with degraded resolution at low jet energies due to a much higher instantaneous luminosity in the near future.
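
The procedure formalized in the paper can be sketched as follows: the average detector response <E_reco>(E_true) is measured (in simulation), and the calibration applied to data is its numerical inverse, so that calibrated jets close on average. The response curve below is an invented monotone illustration, not a real jet calibration.

```python
import math

def mean_reco(E_true):
    """Toy average response: low-energy jets are measured low (non-linear)."""
    return E_true * (0.7 + 0.25 * (1.0 - math.exp(-E_true / 50.0)))

def calibrate(E_reco, lo=1e-3, hi=1e4, tol=1e-9):
    """Numerically invert mean_reco by bisection (mean_reco is monotone)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_reco(mid) < E_reco:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

E_true = 100.0
print(round(calibrate(mean_reco(E_true)), 6))  # closure: recovers ~100.0
```

Closure holds exactly only for the mean; applying the inverse event by event to a fluctuating E_reco is biased when the response is non-linear, which is precisely the inherent bias the paper analyzes.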

  1. Approximate solutions for the two-dimensional integral transport equation. The critically mixed methods of resolution

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two part the first part (note CEA-N-2165) deals with the solution of complex two-dimensional transport problems, the second one treats the critically mixed methods of resolution. These methods are applied for one-dimensional geometries with highly anisotropic scattering. In order to simplify the set of integral equation provided by the integral transport equation, the integro-differential equation is used to obtain relations that allow to lower the number of integral equation to solve; a general mathematical and numerical study is presented [fr

  2. Ultra-high resolution coded wavefront sensor

    KAUST Repository

    Wang, Congli

    2017-06-08

    Wavefront sensors and more general phase retrieval methods have recently attracted a lot of attention in a host of application domains, ranging from astronomy to scientific imaging and microscopy. In this paper, we introduce a new class of sensor, the Coded Wavefront Sensor, which provides high spatio-temporal resolution using a simple masked sensor under white light illumination. Specifically, we demonstrate megapixel spatial resolution and phase accuracy better than 0.1 wavelengths at reconstruction rates of 50 Hz or more, thus opening up many new applications from high-resolution adaptive optics to real-time phase retrieval in microscopy.

  3. Accessing High Spatial Resolution in Astronomy Using Interference Methods

    Science.gov (United States)

    Carbonel, Cyril; Grasset, Sébastien; Maysonnave, Jean

    2018-01-01

    In astronomy, methods such as direct imaging or interferometry-based techniques (Michelson stellar interferometry for example) are used for observations. A particular advantage of interferometry is that it permits greater spatial resolution compared to direct imaging with a single telescope, which is limited by diffraction owing to the aperture of…

  4. Numerical Solutions for Nonlinear High Damping Rubber Bearing Isolators: Newmark's Method with Newton-Raphson Iteration Revisited

    Science.gov (United States)

    Markou, A. A.; Manolis, G. D.

    2018-03-01

    Numerical methods for the solution of dynamical problems in engineering go back to 1950. The most famous and widely used time-stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that has been used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base-isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low friction sliding bearings is described by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method, we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark's time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
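
The scheme the paper builds on can be sketched for a generic nonlinear single-degree-of-freedom oscillator: Newmark's average-acceleration method (beta = 1/4, gamma = 1/2) with a Newton-Raphson inner loop on the dynamic residual. This is a minimal sketch with a smooth nonlinear spring, not the trilinear hysteretic model itself, which needs additional branch-switching logic; the linear check below verifies the integrator against the exact period of an undamped oscillator.

```python
import math

def newmark_nr(m, c, fs, kt, x0, v0, dt, steps, beta=0.25, gamma=0.5):
    """Newmark time stepping with Newton-Raphson iteration per step.
    fs(x): restoring force, kt(x): tangent stiffness."""
    x, v = x0, v0
    a = (-c * v - fs(x)) / m                  # initial acceleration, no load
    for _ in range(steps):
        xn = x                                # predictor for x_{n+1}
        for _ in range(30):                   # Newton-Raphson iterations
            an = (xn - x - dt * v) / (beta * dt * dt) - (0.5 / beta - 1.0) * a
            vn = v + dt * ((1 - gamma) * a + gamma * an)
            r = m * an + c * vn + fs(xn)      # dynamic residual
            if abs(r) < 1e-10:
                break
            keff = kt(xn) + gamma * c / (beta * dt) + m / (beta * dt * dt)
            xn -= r / keff                    # Newton update
        x, v, a = xn, vn, an
    return x, v

# Linear check: undamped oscillator with omega = 2*pi returns to x0
# after one full period T = 1.
k = (2 * math.pi) ** 2
x_end, _ = newmark_nr(m=1.0, c=0.0,
                      fs=lambda x: k * x, kt=lambda x: k,
                      x0=1.0, v0=0.0, dt=0.001, steps=1000)
print(round(x_end, 4))  # ~1.0
```

For a linear spring the Newton loop converges in one iteration; for a trilinear or Duffing-type spring the same loop iterates on the changing tangent stiffness, which is the extension the paper formalizes.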

  5. Comparison of online and offline based merging methods for high resolution rainfall intensities

    Science.gov (United States)

    Shehu, Bora; Haberlandt, Uwe

    2016-04-01

    Accurate rainfall intensities with high spatial and temporal resolution are crucial for urban flow prediction. Commonly, raw or bias-corrected radar fields are used for forecasting, while different merging products are employed for simulation. The merging products have proven adequate for estimating rainfall intensities, but their application in forecasting is limited as they were developed for offline mode. This study aims at adapting and refining the offline merging techniques for online implementation, and at comparing the performance of these methods for high resolution rainfall data. Radar bias correction based on mean fields and quantile mapping are analyzed individually and also implemented in conditional merging. Special attention is given to the impact of different spatial and temporal filters on the predictive skill of all methods. Raw radar data and kriging interpolation of station data are used as a reference to check the benefit of the merged products. The methods are applied to several extreme events in the period 2006-2012 caused by different meteorological conditions, and their performance is evaluated by split sampling. The study area lies within the 112 km radius of the Hannover radar in Lower Saxony, Germany, and the data set consists of 80 recording stations with 5 min time steps. The results of this study reveal how the performance of the methods is affected by the adjustment of radar data, the choice of merging method and the selected event. Merging techniques can be used to improve the performance of online rainfall estimation, which opens the way for the application of merging products in forecasting.
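
Of the bias-correction schemes compared in the study, empirical quantile mapping is the easiest to sketch: a radar rain rate is mapped through the radar sample's empirical CDF onto the gauge sample's CDF, replacing it with the gauge value of equal rank. The data below are a synthetic illustration (a radar that underestimates by a factor of two), not the Hannover dataset.

```python
from bisect import bisect_left

def quantile_map(value, radar_sorted, gauge_sorted):
    """Replace a radar value by the gauge value of equal empirical rank.
    Assumes both sorted samples have the same length."""
    rank = min(bisect_left(radar_sorted, value), len(gauge_sorted) - 1)
    return gauge_sorted[rank]

gauge = sorted((0.2, 0.8, 1.5, 3.0, 6.0, 10.0))   # gauge intensities, mm/h
radar = sorted(0.5 * g for g in gauge)             # radar biased low by 2x
print(quantile_map(0.75, radar, gauge))            # prints 1.5: bias undone
```

In an online setting the two empirical distributions would be updated as new gauge and radar data arrive, which is the adaptation the study investigates.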

  6. Development and features of an X-ray detector with high spatial resolution

    International Nuclear Information System (INIS)

    Hartmann, H.

    1979-09-01

    A laboratory model of an X-ray detector with high spatial resolution was developed and constructed. It has no spectral resolution, but a spatial resolution of 20 μm, which is about ten times better than that of position-sensitive proportional counters and satisfies the requirements of the very best Wolter telescopes with regard to spatial resolution. The detector will be used for laboratory tests of the 80 cm Wolter telescope being developed for Spacelab flights. The theory of the wire grid detector and the physics of the photoelectric effect have been developed, and model calculations and numerical calculations have been carried out. (orig./WB) [de

  7. Use of a New High Resolution Melting Method for Genotyping Pathogenic Leptospira spp.

    Directory of Open Access Journals (Sweden)

    Florence Naze

    Full Text Available Leptospirosis is a worldwide zoonosis that is endemic in tropical areas, such as Reunion Island. The species Leptospira interrogans is the primary agent in human infections, but other pathogenic species, such as L. kirschneri and L. borgpetersenii, are also associated with human leptospirosis. In this study, a melting curve analysis of the products amplified with the primer pairs lfb1 F/R and G1/G2 facilitated an accurate species classification of Leptospira reference strains. Next, we combined an unsupervised high resolution melting (HRM) method with a new statistical approach, using primers that amplify two variable-number tandem-repeat (VNTR) loci, for typing at the subspecies level. The HRM analysis, performed with ScreenClust software, enabled the identification of genotypes at the serovar level with high resolving power (Hunter-Gaston index 0.984). This method was also applied to Leptospira DNA from blood samples obtained on Reunion Island after 1998. We were able to identify a unique genotype that is identical to that of the L. interrogans serovars Copenhageni and Icterohaemorrhagiae, suggesting that this genotype is the major cause of leptospirosis on Reunion Island. Our simple, rapid, and robust genotyping method enables the identification of Leptospira strains at the species and subspecies levels and supports the direct genotyping of Leptospira in biological samples, without requiring cultures.

  8. High-resolution seismic wave propagation using local time stepping

    KAUST Repository

    Peter, Daniel

    2017-03-13

    High-resolution seismic wave simulations often require local refinements in numerical meshes to accurately capture e.g. steep topography or complex fault geometry. Together with explicit time schemes, this dramatically reduces the global time step size for ground-motion simulations due to numerical stability conditions. To alleviate this problem, local time stepping (LTS) algorithms allow an explicit time stepping scheme to adapt the time step to the element size, allowing near-optimal time steps everywhere in the mesh. This can potentially lead to significantly faster simulation runtimes.
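
A back-of-the-envelope cost model shows where the speedup comes from: with a global explicit scheme every element advances with the smallest stable step, while LTS lets each element step near its own CFL limit (here rounded down to a power-of-two fraction of the largest step, a common LTS construction). The mesh and numbers are illustrative assumptions.

```python
import math

h = [10.0] * 900 + [1.0] * 100     # mostly coarse mesh, a few tiny elements
c, T = 1.0, 100.0                  # wave speed, simulated time
dt_elem = [hi / c for hi in h]     # per-element CFL-stable step

# Global time stepping: every element uses min(dt)
updates_global = len(h) * T / min(dt_elem)

# LTS: element step = dt_max / 2**k, smallest k that is stable
dt_max = max(dt_elem)
updates_lts = 0.0
for dte in dt_elem:
    k = math.ceil(math.log2(dt_max / dte)) if dte < dt_max else 0
    updates_lts += T / (dt_max / 2 ** k)

print(round(updates_global / updates_lts, 1))  # prints 4.0: the LTS speedup
```

The fewer and smaller the refined elements, the larger the gain, which is why LTS pays off precisely for the locally refined meshes the abstract describes.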

  9. Development of parallel implementation of adaptive numerical methods with industrial applications in fluid mechanics

    International Nuclear Information System (INIS)

    Laucoin, E.

    2008-10-01

    Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done for the design, implementation and validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method can deal with both mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar element formalism in order to extend the Finite Volume-Element method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods to ensure its efficiency in a parallel execution context. (author)

  10. NUMERICAL METHODS FOR THE SIMULATION OF HIGH INTENSITY HADRON SYNCHROTRONS.

    Energy Technology Data Exchange (ETDEWEB)

    LUCCIO, A.; D' IMPERIO, N.; MALITSKY, N.

    2005-09-12

    Numerical algorithms for PIC simulation of beam dynamics in a high intensity synchrotron on a parallel computer are presented. We introduce numerical solvers of the Laplace-Poisson equation in the presence of walls, and algorithms to compute tunes and Twiss functions in the presence of space charge forces. The working code for the simulations presented here is SIMBAD, which can be run standalone or as part of the UAL (Unified Accelerator Libraries) package.
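
The boundary-value problem at the heart of the space-charge solver can be sketched on a toy grid: a grounded rectangular boundary (phi = 0 on the walls) with one charged cell, relaxed with Jacobi iteration of the 5-point stencil. A production PIC code such as SIMBAD uses far faster solvers (FFT or multigrid); this is only a sketch, with illustrative grid size and charge.

```python
n = 17                               # grid points per side, walls at the edges
rho = [[0.0] * n for _ in range(n)]
rho[n // 2][n // 2] = 1.0            # one charged macro-particle (unit source)

phi = [[0.0] * n for _ in range(n)]  # walls stay at phi = 0 throughout
for _ in range(400):                 # Jacobi sweeps of the 5-point stencil
    new = [row[:] for row in phi]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (phi[i - 1][j] + phi[i + 1][j] +
                                phi[i][j - 1] + phi[i][j + 1] + rho[i][j])
    phi = new

centre = phi[n // 2][n // 2]
print(centre > phi[n // 2][n // 2 + 4] > 0.0)  # peaks at the charge, decays to walls
```

The conducting walls make the potential (and hence the space-charge force) depend on the beam's position in the pipe, which is why the solver must treat the walls explicitly rather than assume free space.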

  11. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    Science.gov (United States)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  12. Immersion Gratings for Infrared High-resolution Spectroscopy

    Science.gov (United States)

    Sarugaku, Yuki; Ikeda, Yuji; Kobayashi, Naoto; Kaji, Sayumi; Sukegawa, Takashi; Sugiyama, Shigeru; Nakagawa, Takao; Arasaki, Takayuki; Kondo, Sohei; Nakanishi, Kenshi; Yasui, Chikako; Kawakita, Hideyo

    2016-10-01

    High-resolution spectroscopy in the infrared wavelength range is essential for observations of minor isotopologues, such as HDO for water, and prebiotic organic molecules like hydrocarbons and P-bearing molecules, because numerous vibrational molecular bands (including those of non-polar molecules) are located in this wavelength range. High spectral resolution enables us to detect weak lines without spectral line confusion. This technique has been widely used in planetary sciences, e.g., for cometary comae (H2O, CO, and organic molecules), the martian atmosphere (CH4, CO2, H2O and HDO), and the upper atmospheres of gas giants (H3+ and organic molecules such as C2H6). Spectrographs with higher resolution (and higher sensitivity) still have the potential to provide plenty of findings. However, because the size of a spectrograph scales with its spectral resolution, this is difficult to realize. An immersion grating (IG), a diffraction grating whose diffraction surface is immersed in a material with a high refractive index (n > 2), provides n times higher spectral resolution than a reflective grating of the same size. Because an IG reduces the size of a spectrograph to 1/n of that of a spectrograph with the same spectral resolution using a conventional reflective grating, it is widely acknowledged as a key optical device for realizing compact spectrographs with high spectral resolution. Recently, we succeeded in fabricating a CdZnTe immersion grating with the theoretically predicted diffraction efficiency by a machining process using an ultrahigh-precision five-axis processing machine developed by Canon Inc. Using the same technique, we completed a practical germanium (Ge) immersion grating with both a reflection coating on the grating surface and an AR coating on the entrance surface. It is noteworthy that the wide wavelength range from 2 to 20 μm can be covered by the two immersion gratings. In this paper, we present the performances and the applications of the immersion
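
The size-for-resolution trade described above can be put in numbers with the standard Littrow resolving-power formula R = 2 n L sin(theta) / lambda: immersing a grating of the same physical length L in a medium of index n multiplies R by n. The blaze angle, grating length and index below are illustrative assumptions (Ge has n of roughly 4 in the mid-infrared).

```python
import math

def resolving_power(length_mm, wavelength_um, n=1.0, blaze_deg=70.0):
    """Littrow resolving power R = 2 n L sin(theta_blaze) / lambda."""
    L = length_mm * 1e-3
    lam = wavelength_um * 1e-6
    return n * 2.0 * L * math.sin(math.radians(blaze_deg)) / lam

R_reflective = resolving_power(50.0, 3.0)        # conventional 50 mm grating
R_immersed = resolving_power(50.0, 3.0, n=4.0)   # same size, Ge-immersed
print(round(R_immersed / R_reflective))          # prints 4: the factor-n gain
```

Equivalently, the immersed grating reaches the reflective grating's resolving power at 1/n the length, which is the compactness argument made in the abstract.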

  13. High resolution modelling of extreme precipitation events in urban areas

    Science.gov (United States)

    Siemerink, Martijn; Volp, Nicolette; Schuurmans, Wytze; Deckers, Dave

    2015-04-01

    Present-day society needs to adjust to the effects of climate change. More extreme weather conditions are expected, which can lead to longer periods of drought but also to more extreme precipitation events. Urban water systems are not designed for such extreme events. Most sewer systems are not able to drain the excess storm water, causing urban flooding and high economic damage. In order to take appropriate measures against extreme urban storms, detailed knowledge about the behaviour of the urban water system above and below the streets is required. To investigate the behaviour of urban water systems during extreme precipitation events, new assessment tools are necessary. These tools should provide a detailed and integral description of the flow in the full domain of overland runoff, sewer flow, surface water flow and groundwater flow. We developed a new assessment tool, called 3Di, which provides detailed insight into the urban water system. This tool is based on a new numerical methodology that can accurately deal with the interaction between overland runoff, sewer flow and surface water flow. A one-dimensional model for the sewer system and open channel flow is fully coupled to a two-dimensional depth-averaged model that simulates the overland flow. The tool uses a subgrid-based approach in order to take high resolution information of the sewer system and of the terrain into account [1, 2]. The combination of high resolution information and the subgrid-based approach results in an accurate and efficient modelling tool. It is now possible to simulate entire urban water systems using extremely high resolution (0.5 m x 0.5 m) terrain data in combination with a detailed sewer and surface water network representation. The new tool has been tested in several Dutch cities, such as Rotterdam, Amsterdam and The Hague. We will present the results of an extreme precipitation event in the city of Schiedam (The Netherlands). This city deals with

  14. Merging thermal and microwave satellite observations for a high-resolution soil moisture data product

    Science.gov (United States)

    Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16 km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...

  15. High resolution imaging of vadose zone transport using crosswell radar and seismic methods; TOPICAL

    International Nuclear Information System (INIS)

    Majer, Ernest L.; Williams, Kenneth H.; Peterson, John E.; Daley, Thomas E.

    2001-01-01

    The summary and conclusions are that, overall, the radar and seismic results were excellent. At the time the experiments were designed, we did not know how well these two methods could penetrate or resolve the moisture content and structure. It appears that the radar could easily reach 5, even 10 meters between boreholes at 200 MHz, and even farther (up to 20 to 40 m) at 50 MHz. The seismic results indicate that at several hundred hertz, propagation over 20 to 30 meters giving high resolution is possible. One of the most important results, however, is that together the seismic and radar methods are complementary in their property estimation: the radar is primarily sensitive to changes in moisture content, and the seismic primarily to porosity. Taken in a time-lapse sense, the radar can show the moisture content changes at high resolution, with the seismic showing high resolution lithology. The significant results for each method are: Radar: (1) Delineated geological layers 0.25 to 3.5 meters thick with 0.25 m resolution; (2) Delineated moisture movement and content with 0.25 m resolution; (3) Compared favorably with neutron probe measurements; and (4) Penetration up to 30 m. Radar results indicate that the transport of the river water is different from that of the heavier and more viscous sodium thiosulfate. It appears that the heavier fluids are not mixing readily with the in-situ fluids and that transport may be influenced by them. Seismic: (1) Delineated lithology at 0.25 m resolution; (2) Penetration over 20 meters, with a possibility of up to 30 or more meters; and (3) Maps porosity and density differences of the sediments. Overall, the seismic is mapping the porosity and density distribution. The results are consistent with the flow field mapped by the radar: there is a change in flow properties at the 10 to 11 meter depth in the flow cell. There also appears to be breakthrough when looking at the radar data with the denser sodium thiosulfate finally

  16. FBG Interrogation Method with High Resolution and Response Speed Based on a Reflective-Matched FBG Scheme

    Science.gov (United States)

    Cui, Jiwen; Hu, Yang; Feng, Kunpeng; Li, Junying; Tan, Jiubin

    2015-01-01

    In this paper, a high resolution and high response speed interrogation method based on a reflective-matched Fiber Bragg Grating (FBG) scheme is investigated in detail. The nonlinearity problem of the reflective-matched FBG sensing interrogation scheme is solved by establishing and optimizing a mathematical model. A mechanical adjustment that optimizes the interrogation method by tuning the central wavelength of the reference FBG, improving stability and resistance to temperature perturbation, is investigated. To satisfy the measurement requirements of optical and electrical signal processing, a well-designed acquisition circuit board is prepared, and experiments on the performance of the interrogation method are carried out. The experimental results indicate that the optical power resolution of the acquisition circuit board is better than 8 pW, and the stability of the interrogation method with the mechanical adjustment can reach 0.06%. Moreover, the nonlinearity of the interrogation method is 3.3% in the measurable range of 60 pm; the influence of temperature is significantly reduced to 9.5%; and the wavelength resolution and response speed reach 0.3 pm and 500 kHz, respectively. PMID:26184195
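
The principle behind matched-FBG interrogation can be sketched with idealized Gaussian line shapes: the sensing FBG's reflected spectrum is filtered by a reference FBG, so as strain shifts the sensing wavelength, the spectral overlap (hence the detected power) changes, converting a wavelength shift into an intensity reading; working on the slope of the overlap curve is also where the scheme's nonlinearity comes from. Line shapes, bandwidth and bias below are illustrative assumptions, not the paper's device parameters.

```python
import math

def overlap_power(d_lambda_pm, fwhm_pm=200.0):
    """Detected power (arbitrary units) vs sensor-reference detuning.
    Overlap integral of two identical Gaussian reflection spectra."""
    s = fwhm_pm / (2.0 * math.sqrt(2.0 * math.log(2.0)))  # FWHM -> sigma
    return math.exp(-d_lambda_pm ** 2 / (4.0 * s ** 2))

# Bias the reference half a bandwidth away so small shifts land on the slope.
bias = 100.0
powers = [overlap_power(bias + shift) for shift in (0.0, 20.0, 40.0, 60.0)]
print(all(p1 > p2 for p1, p2 in zip(powers, powers[1:])))  # monotonic slope
```

The slope is only approximately linear over a limited range, which is why the paper models and corrects the nonlinearity and tunes the reference FBG's central wavelength mechanically.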

  17. High-resolution analysis of the mechanical behavior of tissue

    Science.gov (United States)

    Hudnut, Alexa W.; Armani, Andrea M.

    2017-06-01

    The mechanical behavior and properties of biomaterials, such as tissue, have been directly and indirectly connected to numerous malignant physiological states. For example, an increase in the Young's Modulus of tissue can be indicative of cancer. Due to the heterogeneity of biomaterials, it is extremely important to perform these measurements using whole or unprocessed tissue because the tissue matrix contains important information about the intercellular interactions and the structure. Thus, developing high-resolution approaches that can accurately measure the elasticity of unprocessed tissue samples is of great interest. Unfortunately, conventional elastography methods such as atomic force microscopy, compression testing, and ultrasound elastography either require sample processing or have poor resolution. In the present work, we demonstrate the characterization of unprocessed salmon muscle using an optical polarimetric elastography system. We compare the results of compression testing within different samples of salmon skeletal muscle with different numbers of collagen membranes to characterize differences in heterogeneity. Using the intrinsic collagen membranes as markers, we determine the resolution of the system when testing biomaterials. The device reproducibly measures the stiffness of the tissues at variable strains. By analyzing the amount of energy lost by the sample during compression, collagen membranes that are 500 μm in size are detected.
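At its core, the compression measurement above reduces to estimating stiffness from a stress-strain curve. A minimal sketch with hypothetical data (the moduli are illustrative, not values from the paper):

```python
import numpy as np

def youngs_modulus(strain, stress):
    """Estimate Young's modulus (Pa) as the least-squares slope of the
    linear-elastic region of a compression stress-strain curve."""
    slope, _ = np.polyfit(strain, stress, 1)
    return float(slope)

# hypothetical stress-strain data; the moduli are illustrative only
strain = np.linspace(0.0, 0.05, 20)
stress_soft = 8.0e3 * strain       # ~8 kPa material
stress_stiff = 40.0e3 * strain     # ~40 kPa material (stiffer, e.g. a lesion)
E_soft = youngs_modulus(strain, stress_soft)
E_stiff = youngs_modulus(strain, stress_stiff)
```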

  18. Simulation of high-resolution X-ray microscopic images for improved alignment

    International Nuclear Information System (INIS)

    Song Xiangxia; Zhang Xiaobo; Liu Gang; Cheng Xianchao; Li Wenjie; Guan Yong; Liu Ying; Xiong Ying; Tian Yangchao

    2011-01-01

    The introduction of precision optical elements to X-ray microscopes necessitates fine realignment to achieve optimal high-resolution imaging. In this paper, we demonstrate a numerical method for simulating image formation that facilitates alignment of the source, condenser, objective lens, and CCD camera. This algorithm, based on ray-tracing and Rayleigh-Sommerfeld diffraction theory, is applied to simulate the X-ray microscope at beamline U7A of the National Synchrotron Radiation Laboratory (NSRL). The simulations and imaging experiments show that the algorithm is useful for guiding experimental adjustments. Our alignment simulation method is an essential tool for the transmission X-ray microscope (TXM) with optical elements and may also be useful for the alignment of optical components in other modes of microscopy.

  19. Abstracts of International Conference on Experimental and Computing Methods in High Resolution Diffraction Applied for Structure Characterization of Modern Materials - HREDAMM

    International Nuclear Information System (INIS)

    2004-01-01

    The conference addressed all aspects of high-resolution diffraction. The topics of the meeting included advanced experimental diffraction methods and computer data analysis for the characterization of modern materials, as well as progress and new achievements in high-resolution diffraction (X-ray, electron, neutron). Applications of these methods to the characterization of modern materials are widely represented among the invited, oral, and poster contributions

  20. Numerical study of the lateral resolution in electrostatic force microscopy for dielectric samples

    International Nuclear Information System (INIS)

    Riedel, C; Alegría, A; Colmenero, J; Schwartz, G A; Saenz, J J

    2011-01-01

    We present a study of the lateral resolution in electrostatic force microscopy for dielectric samples in both force and gradient modes. Whereas previous studies have reported expressions for metallic surfaces having potential heterogeneities (Kelvin probe force microscopy), in this work we take into account the presence of a dielectric medium. We introduce a definition of the lateral resolution based on the force due to a test particle being either a point charge or a polarizable particle on the dielectric surface. The behaviour has been studied over a wide range of typical experimental parameters: tip-sample distance (1-20) nm, sample thickness (0-5) μm and dielectric constant (1-20), using the numerical simulation of the equivalent charge method. For potential heterogeneities on metallic surfaces expressions are in agreement with the bibliography. The lateral resolution of samples having a dielectric constant of more than 10 tends to metallic behaviour. We found a characteristic thickness of 100 nm, above which the lateral resolution measured on the dielectric surface is close to that of an infinite medium. As previously reported, the lateral resolution is better in the gradient mode than in the force mode. Finally, we showed that for the same experimental conditions, the lateral resolution is better for a polarizable particle than for a charge, i.e. dielectric heterogeneities should always look 'sharper' (better resolved) than inhomogeneous charge distributions. This fact should be taken into account when interpreting images of heterogeneous samples.

  1. Numerical study of the lateral resolution in electrostatic force microscopy for dielectric samples

    Energy Technology Data Exchange (ETDEWEB)

    Riedel, C; Alegría, A; Colmenero, J [Departamento de Fisica de Materiales UPV/EHU, Facultad de Quimica, Apartado 1072, 20080 San Sebastian (Spain); Schwartz, G A [Centro de Fisica de Materiales CSIC-UPV/EHU, Paseo Manuel de Lardizabal 5, 20018 San Sebastian (Spain); Saenz, J J, E-mail: riedel@ies.univ-montp2.fr [Donostia International Physics Center, Paseo Manuel de Lardizabal 4, 20018 San Sebastian (Spain)

    2011-07-15

    We present a study of the lateral resolution in electrostatic force microscopy for dielectric samples in both force and gradient modes. Whereas previous studies have reported expressions for metallic surfaces having potential heterogeneities (Kelvin probe force microscopy), in this work we take into account the presence of a dielectric medium. We introduce a definition of the lateral resolution based on the force due to a test particle being either a point charge or a polarizable particle on the dielectric surface. The behaviour has been studied over a wide range of typical experimental parameters: tip-sample distance (1-20) nm, sample thickness (0-5) μm and dielectric constant (1-20), using the numerical simulation of the equivalent charge method. For potential heterogeneities on metallic surfaces expressions are in agreement with the bibliography. The lateral resolution of samples having a dielectric constant of more than 10 tends to metallic behaviour. We found a characteristic thickness of 100 nm, above which the lateral resolution measured on the dielectric surface is close to that of an infinite medium. As previously reported, the lateral resolution is better in the gradient mode than in the force mode. Finally, we showed that for the same experimental conditions, the lateral resolution is better for a polarizable particle than for a charge, i.e. dielectric heterogeneities should always look 'sharper' (better resolved) than inhomogeneous charge distributions. This fact should be taken into account when interpreting images of heterogeneous samples.

  2. Towards high resolution polarisation analysis using double polarisation and ellipsoidal analysers

    CERN Document Server

    Martin-Y-Marero, D

    2002-01-01

    Classical polarisation analysis methods lack the combination of high resolution and high count rate necessary to cope with the demand of modern condensed-matter experiments. In this work, we present a method to achieve high resolution polarisation analysis based on a double polarisation system. Coupling this method with an ellipsoidal wavelength analyser, a high count rate can be achieved whilst delivering a resolution of around 10 μeV. This method is ideally suited to pulsed sources, although it can be adapted to continuous sources as well. (orig.)

  3. Lagrangian numerical methods for ocean biogeochemical simulations

    Science.gov (United States)

    Paparella, Francesco; Popolizio, Marina

    2018-05-01

    We propose two closely-related Lagrangian numerical methods for the simulation of physical processes involving advection, reaction and diffusion. The methods are intended to be used in settings where the flow is nearly incompressible and the Péclet numbers are so high that resolving all the scales of motion is unfeasible. This is commonplace in ocean flows. Our methods consist of augmenting the method of characteristics, which is suitable for advection-reaction problems, with couplings among nearby particles, producing fluxes that mimic diffusion, or unresolved small-scale transport. The methods conserve mass, obey the maximum principle, and allow the strength of the diffusive terms to be tuned down to zero, while avoiding unwanted numerical dissipation effects.
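The particle-coupling idea can be illustrated in one dimension: particles advect along characteristics, and symmetric exchanges between neighbours mimic diffusion while conserving mass and respecting the maximum principle. This discretization is an illustrative sketch, not the authors' exact scheme:

```python
import numpy as np

def exchange_step(x, c, kappa, dt):
    """Symmetric pairwise exchange between neighbouring particles that
    mimics diffusion: it conserves the total of c exactly and, for small
    enough dt, respects the maximum principle."""
    order = np.argsort(x)
    xs, cs = x[order], c[order]
    flux = kappa * dt * np.diff(cs) / np.diff(xs) ** 2
    cs_new = cs.copy()
    cs_new[:-1] += flux          # gain from the right neighbour if it is richer
    cs_new[1:] -= flux           # the right neighbour loses the same amount
    out = np.empty_like(c)
    out[order] = cs_new
    return out

x = np.linspace(0.0, 1.0, 101)
c = np.where(np.abs(x - 0.5) < 0.051, 1.0, 0.0)   # initial blob of tracer
total0 = c.sum()
dt = 1e-3
for _ in range(100):
    x = x + 0.2 * dt             # advect along characteristics (uniform flow)
    c = exchange_step(x, c, kappa=1e-3, dt=dt)
```

Because every flux appears once with each sign, the total is conserved to round-off, and each update is a convex combination of neighbouring values for small dt, which is the maximum-principle property the abstract refers to.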

  4. Numerical methods and analysis of the nonlinear Vlasov equation on unstructured meshes of phase space

    International Nuclear Information System (INIS)

    Besse, Nicolas

    2003-01-01

    This work is dedicated to the mathematical and numerical studies of the Vlasov equation on phase-space unstructured meshes. In the first part, new semi-Lagrangian methods are developed to solve the Vlasov equation on unstructured meshes of phase space. As the Vlasov equation describes multi-scale phenomena, we also propose original methods based on a wavelet multi-resolution analysis. The resulting algorithm leads to an adaptive mesh-refinement strategy. The new massively-parallel computers make it possible to use these methods with several phase-space dimensions. In particular, these numerical schemes are applied to plasma physics and charged particle beams in the case of two-, three-, and four-dimensional Vlasov-Poisson systems. In the second part we prove the convergence and give error estimates for several numerical schemes applied to the Vlasov-Poisson system when strong and classical solutions are considered. First we show the convergence of a semi-Lagrangian scheme on an unstructured mesh of phase space, when the regularity hypotheses for the initial data are minimal. Then we demonstrate the convergence of classes of high-order semi-Lagrangian schemes in the framework of the regular classical solution. In order to reconstruct the distribution function, we consider symmetrical Lagrange polynomials, B-spline and wavelet bases. Finally we prove the convergence of a semi-Lagrangian scheme with propagation of gradients, yielding a high-order and stable reconstruction of the solution. (author)
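The semi-Lagrangian building block used by such schemes can be sketched for 1D advection: trace each grid node back along its characteristic and interpolate the old solution at the departure point. This minimal version uses linear interpolation rather than the Lagrange, B-spline or wavelet reconstructions discussed above:

```python
import numpy as np

def semi_lagrangian_step(f, x, u, dt, period=1.0):
    """One semi-Lagrangian step for f_t + u f_x = 0 on a periodic grid:
    trace each grid node back along its characteristic to the departure
    point x - u*dt and interpolate f there. The time step is not limited
    by a CFL condition, only by accuracy."""
    return np.interp(x - u * dt, x, f, period=period)

x = np.linspace(0.0, 1.0, 200, endpoint=False)
f = np.exp(-200.0 * (x - 0.3) ** 2)        # Gaussian pulse centred at x = 0.3
for _ in range(50):
    f = semi_lagrangian_step(f, x, u=1.0, dt=0.004)
# the pulse has been advected by 50 * 0.004 = 0.2, to around x = 0.5
```

Higher-order reconstructions (B-splines, as in the thesis) replace `np.interp` to reduce the numerical diffusion of the linear interpolant.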

  5. The new high resolution method of Godunov's type for 3D viscous flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yershov, S.V.; Rusanov, A.V. [Ukrainian National Academy of Sciences, Kharkov (Ukraine)

    1996-12-31

    A numerical method is suggested for the calculation of 3D viscous compressible flows described by the thin-layer Reynolds-averaged Navier-Stokes equations. The method is based on Godunov's finite-difference scheme and uses the ENO reconstruction suggested by Harten to achieve uniformly high-order accuracy. Computational efficiency is provided by a simplified multigrid approach and an implicit step written in δ-form. Turbulent effects are simulated with the Baldwin-Lomax turbulence model. The application package FlowER was developed to calculate 3D turbulent flows within complex-shape channels. The numerical results for the 3D flow around a cylinder and through complex-shaped channels show the accuracy and reliability of the suggested method. (author)
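A Godunov-type update with a limited linear reconstruction can be sketched for 1D linear advection; a minmod limiter is used here in place of the paper's ENO reconstruction, purely for brevity:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the less steep of two slopes, or zero at an extremum."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def godunov_muscl_step(q, u, dx, dt):
    """One Godunov-type step for q_t + u q_x = 0 (u > 0, periodic grid)
    using a limited linear reconstruction in each cell; the limiter keeps
    the second-order scheme free of spurious oscillations."""
    nu = u * dt / dx
    dq = minmod(q - np.roll(q, 1), np.roll(q, -1) - q)  # limited cell slope
    q_face = q + 0.5 * (1.0 - nu) * dq    # upwind cell's value at its right face
    flux = u * q_face
    return q - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
q = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)      # square wave
q0_sum = q.sum()
dx, dt, u = x[1] - x[0], 0.002, 1.0                # Courant number 0.4
for _ in range(100):
    q = godunov_muscl_step(q, u, dx, dt)
# the wave has advected by 0.2 without over- or undershoots
```

Replacing the flux computation with an approximate Riemann solver and the limited slope with an ENO stencil selection is what turns this sketch into the class of scheme the abstract describes.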

  6. The new high resolution method of Godunov's type for 3D viscous flow calculations

    Energy Technology Data Exchange (ETDEWEB)

    Yershov, S V; Rusanov, A V [Ukrainian National Academy of Sciences, Kharkov (Ukraine)

    1997-12-31

    A numerical method is suggested for the calculation of 3D viscous compressible flows described by the thin-layer Reynolds-averaged Navier-Stokes equations. The method is based on Godunov's finite-difference scheme and uses the ENO reconstruction suggested by Harten to achieve uniformly high-order accuracy. Computational efficiency is provided by a simplified multigrid approach and an implicit step written in δ-form. Turbulent effects are simulated with the Baldwin-Lomax turbulence model. The application package FlowER was developed to calculate 3D turbulent flows within complex-shape channels. The numerical results for the 3D flow around a cylinder and through complex-shaped channels show the accuracy and reliability of the suggested method. (author)

  7. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    International Nuclear Information System (INIS)

    Pazos, Enrique; Dorband, Ernst Nils; Nagar, Alessandro; Palenzuela, Carlos; Schnetter, Erik; Tiglio, Manuel

    2007-01-01

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M (larger than what is commonly used in state-of-the-art simulations), the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations.
For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors

  8. How far away is far enough for extracting numerical waveforms, and how much do they depend on the extraction method?

    Energy Technology Data Exchange (ETDEWEB)

    Pazos, Enrique [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Dorband, Ernst Nils [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Nagar, Alessandro [Dipartimento di Fisica, Politecnico di Torino, Corso Duca Degli Abruzzi 24, 10129 Torino (Italy); Palenzuela, Carlos [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Schnetter, Erik [Center for Computation and Technology, 216 Johnston Hall, Louisiana State University, Baton Rouge, LA 70803 (United States); Tiglio, Manuel [Department of Physics and Astronomy, 202 Nicholson Hall, Louisiana State University, Baton Rouge, LA 70803 (United States)

    2007-06-21

    We present a method for extracting gravitational waves from numerical spacetimes which generalizes and refines one of the standard methods based on the Regge-Wheeler-Zerilli perturbation formalism. At the analytical level, this generalization allows a much more general class of slicing conditions for the background geometry, and is thus not restricted to Schwarzschild-like coordinates. At the numerical level, our approach uses high-order multi-block methods, which improve both the accuracy of our simulations and of our extraction procedure. In particular, the latter is simplified since there is no need for interpolation, and we can afford to extract accurate waves at large radii with only little additional computational effort. We then present fully nonlinear three-dimensional numerical evolutions of a distorted Schwarzschild black hole in Kerr-Schild coordinates with an odd parity perturbation and analyse the improvement that we gain from our generalized wave extraction, comparing our new method to the standard one. In particular, we analyse in detail the quasinormal frequencies of the extracted waves, using both methods. We do so by comparing the extracted waves with one-dimensional high resolution solutions of the corresponding generalized Regge-Wheeler equation. We explicitly see that the errors in the waveforms extracted with the standard method at fixed, finite extraction radii do not converge to zero with increasing resolution. We find that even with observers as far out as R = 80M (larger than what is commonly used in state-of-the-art simulations), the assumption in the standard method that the background is close to having Schwarzschild-like coordinates increases the error in the extracted waves considerably. Furthermore, those errors are dominated by the extraction method itself and not by the accuracy of our simulations.
For extraction radii between 20M and 80M and for the resolutions that we use in this paper, our new method decreases the errors

  9. Numerical simulation of realistic high-temperature superconductors

    International Nuclear Information System (INIS)

    1997-01-01

    One of the main obstacles in the development of practical high-temperature superconducting (HTS) materials is dissipation, caused by the motion of magnetic flux quanta called vortices. Numerical simulations provide a promising new approach for studying these vortices. By exploiting the extraordinary memory and speed of massively parallel computers, researchers can obtain the extremely fine temporal and spatial resolution needed to model complex vortex behavior. The results may help identify new mechanisms to increase the current-carrying capabilities and to predict the performance characteristics of HTS materials intended for industrial applications

  10. Introduction to precise numerical methods

    CERN Document Server

    Aberth, Oliver

    2007-01-01

    Precise numerical analysis may be defined as the study of computer methods for solving mathematical problems either exactly or to prescribed accuracy. This book explains how precise numerical analysis is constructed. The book also provides exercises which illustrate points from the text and references for the methods presented. All disc-based content for this title is now available on the Web. Features include clearer, simpler descriptions and explanations of the various numerical methods, as well as two new types of numerical problems: accurately solving partial differential equations with the included software, and computing line integrals in the complex plane.

  11. Numerical Simulation of Flows about a Stationary and a Free-Falling Cylinder Using Immersed Boundary-Finite Difference Lattice Boltzmann Method

    Directory of Open Access Journals (Sweden)

    Roberto Rojas

    2013-03-01

    The applicability of the immersed boundary-finite difference lattice Boltzmann method (IB-FDLBM) to high Reynolds number flows about a circular cylinder is examined. Two-dimensional simulations of flows past a stationary circular cylinder are carried out for a wide range of the Reynolds number Re, i.e., 1 ≤ Re ≤ 1×10⁵. An immersed boundary-lattice Boltzmann method (IB-LBM) is also used for comparison. Free-falling circular cylinders are then simulated to demonstrate the feasibility of predicting moving particles at high Reynolds numbers. The main conclusions obtained are as follows: (1) steady and unsteady flows about a stationary cylinder are well predicted with IB-LBM and IB-FDLBM, provided that the spatial resolution is high enough to satisfy the conditions of numerical stability; (2) high spatial resolution is required for stable IB-LBM simulation of high Reynolds number flows; (3) IB-FDLBM can stably simulate flows at very high Reynolds numbers without increasing the spatial resolution; (4) IB-FDLBM gives reasonable predictions of the drag coefficient for 1 ≤ Re ≤ 1×10⁵; and (5) IB-FDLBM gives accurate predictions for the motion of free-falling cylinders at intermediate Reynolds numbers.

  12. Recent applications of gas chromatography with high-resolution mass spectrometry.

    Science.gov (United States)

    Špánik, Ivan; Machyňáková, Andrea

    2018-01-01

    Gas chromatography coupled to high-resolution mass spectrometry is a powerful analytical method that combines the excellent separation power of gas chromatography with improved identification based on accurate mass measurement. These features make gas chromatography with high-resolution mass spectrometry the first choice for identification and structure elucidation of unknown volatile and semi-volatile organic compounds. Quantitative analyses by gas chromatography with high-resolution mass spectrometry were previously focused on the determination of dioxins and related compounds using magnetic sector type analyzers, a standing requirement of many international standards. The introduction of the quadrupole high-resolution time-of-flight mass analyzer broadened interest in this method, and novel applications were developed, especially for multi-target screening purposes. This review is focused on the development and the most interesting applications of gas chromatography coupled to high-resolution mass spectrometry towards analysis of environmental matrices, biological fluids, and food safety since 2010. The main attention is paid to various approaches and applications of gas chromatography coupled to high-resolution mass spectrometry for non-target screening to identify contaminants and to characterize the chemical composition of environmental, food, and biological samples. The most interesting quantitative applications, where a significant contribution of gas chromatography with high-resolution mass spectrometry over the currently used methods is expected, are discussed as well. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
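The identification step such reviews describe rests on mass accuracy: candidate elemental formulas are ranked by their mass error in parts per million. A minimal sketch (the measured m/z and candidate exact masses below are illustrative numbers, not real compound data):

```python
def ppm_error(measured_mz, exact_mz):
    """Mass accuracy in parts per million: the figure of merit that lets
    high-resolution MS discriminate between candidate elemental formulas."""
    return (measured_mz - exact_mz) / exact_mz * 1e6

# hypothetical measurement and candidate formulas (illustrative values only)
measured = 285.0791
candidates = {
    "formula A": 285.0870,
    "formula B": 285.0720,
}
errors = {f: ppm_error(measured, m) for f, m in candidates.items()}
best = min(errors, key=lambda f: abs(errors[f]))   # smallest |ppm| wins
```

In practice a tolerance window (often a few ppm) is applied, and isotope patterns are used as an additional filter before a formula is accepted.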

  13. The streamline upwind Petrov-Galerkin stabilising method for the numerical solution of highly advective problems

    Directory of Open Access Journals (Sweden)

    Carlos Humberto Galeano Urueña

    2009-05-01

    This article describes the streamline upwind Petrov-Galerkin (SUPG) method as a stabilisation technique for solving the diffusion-advection-reaction equation by finite elements. The first part of the article briefly analyses the importance of this type of differential equation in modelling physical phenomena in multiple fields. A one-dimensional description of the SUPG method is then given, and this basis is extended to two and three dimensions. The outcome of a strongly advective experiment of high numerical complexity is presented. The results show how the implemented version of the SUPG technique allowed stabilised approximations in space, even for high Peclet numbers. Additional graphs of the numerical experiments presented here can be downloaded from www.gnum.unal.edu.co.
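The one-dimensional essence of SUPG can be sketched on the steady advection-diffusion model problem: on a uniform mesh the stabilisation is equivalent to adding streamline diffusion controlled by the classical optimal parameter τ. This is a generic textbook sketch under stated assumptions, not the authors' implementation:

```python
import numpy as np

def solve_adv_diff(n, a=1.0, eps=1e-3, supg=True):
    """Solve a*u' = eps*u'' on (0,1) with u(0)=0, u(1)=1 on n cells.
    Plain Galerkin with linear elements on a uniform mesh coincides with
    central differences and oscillates when the cell Peclet number
    Pe = a*h/(2*eps) exceeds 1; SUPG adds streamline diffusion a^2*tau
    with the classical 1D optimal tau = h/(2a) * (coth(Pe) - 1/Pe)."""
    h = 1.0 / n
    Pe = a * h / (2.0 * eps)
    tau = h / (2.0 * a) * (1.0 / np.tanh(Pe) - 1.0 / Pe) if supg else 0.0
    eps_eff = eps + a * a * tau
    main = 2.0 * eps_eff / h ** 2 * np.ones(n - 1)
    lower = (-eps_eff / h ** 2 - a / (2.0 * h)) * np.ones(n - 2)
    upper_val = -eps_eff / h ** 2 + a / (2.0 * h)
    A = np.diag(main) + np.diag(lower, -1) + np.diag(upper_val * np.ones(n - 2), 1)
    b = np.zeros(n - 1)
    b[-1] = -upper_val               # boundary condition u(1) = 1 moved to the RHS
    u = np.linalg.solve(A, b)
    return np.concatenate(([0.0], u, [1.0]))

u_galerkin = solve_adv_diff(20, supg=False)   # oscillates (Pe = 25)
u_supg = solve_adv_diff(20, supg=True)        # monotone boundary layer
```

The same mechanism, written as a weighted residual with test functions biased in the streamline direction, is what generalises to two and three dimensions.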

  14. An Object-Oriented Classification Method on High Resolution Satellite Data

    National Research Council Canada - National Science Library

    Xiaoxia, Sun; Jixian, Zhang; Zhengjun, Liu

    2004-01-01

    .... Thereby only the spectral information is used for the classification. High spatial resolution sensors involve a general increase of spatial information, and the accuracy of results may decrease on a per-pixel basis...

  15. Numerical Solutions for Nonlinear High Damping Rubber Bearing Isolators: Newmark’s Method with Newton-Raphson Iteration Revisited

    Directory of Open Access Journals (Sweden)

    Markou A.A.

    2018-03-01

    Numerical methods for the solution of dynamical problems in engineering go back to 1950. The most famous and widely-used time-stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that has been used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base-isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low-friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low-friction sliding bearings is modelled by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method, we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark’s time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
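The time-stepping loop described above can be sketched for a single-degree-of-freedom system: Newmark's average-acceleration update with a Newton-Raphson iteration on the restoring force. A linear spring is used below as a sanity check; the paper's trilinear hysteretic law would replace the hypothetical `fs` and `kt` callables:

```python
import numpy as np

def newmark_newton(m, c, fs, kt, p, dt, n_steps, beta=0.25, gamma=0.5, tol=1e-10):
    """Newmark time stepping (average acceleration) for m*a + c*v + fs(u) = p(t),
    with Newton-Raphson iteration on the nonlinear restoring force fs(u)
    whose tangent stiffness is kt(u). Starts from rest."""
    u, v = 0.0, 0.0
    a = (p(0.0) - c * v - fs(u)) / m
    hist = [u]
    for i in range(1, n_steps + 1):
        t = i * dt
        u_new = u                                 # predictor
        for _ in range(50):                       # Newton-Raphson iterations
            a_new = (u_new - u - dt * v) / (beta * dt ** 2) - (0.5 / beta - 1.0) * a
            v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
            r = m * a_new + c * v_new + fs(u_new) - p(t)   # dynamic residual
            if abs(r) < tol:
                break
            k_eff = m / (beta * dt ** 2) + gamma * c / (beta * dt) + kt(u_new)
            u_new -= r / k_eff                    # Newton correction
        u, v, a = u_new, v_new, a_new
        hist.append(u)
    return np.array(hist)

# sanity check: linear spring under a unit step load (Newton converges in
# one iteration); peak of the lightly damped response approaches 2/k
m, c, k = 1.0, 0.1, 4.0 * np.pi ** 2              # natural period T = 1 s
u_hist = newmark_newton(m, c, fs=lambda u: k * u, kt=lambda u: k,
                        p=lambda t: 1.0, dt=0.01, n_steps=200)
```

For a hysteretic `fs` the restoring force depends on the loading history, so the function would carry internal state; the outer Newmark/Newton structure stays the same.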

  16. High-resolution X-ray television and high-resolution video recorders

    International Nuclear Information System (INIS)

    Haendle, J.; Horbaschek, H.; Alexandrescu, M.

    1977-01-01

    The improved transmission properties of the high-resolution X-ray television chain described here make it possible to transmit more information per television image. The visually determined resolution in the fluoroscopic image depends on the dose rate and the inertia of the television pick-up tube; this relationship is discussed. In the last few years, video recorders have been increasingly used in X-ray diagnostics. The video recorder is a further quality-limiting element in X-ray television. The development of prototypes of high-resolution magnetic video recorders shows that this quality drop may be largely overcome. The influence of electrical bandwidth and the number of lines on the resolution of the stored X-ray television image is explained in more detail. (orig.)

  17. Resolution enhancement for ultrasonic echographic technique in non destructive testing with an adaptive deconvolution method

    International Nuclear Information System (INIS)

    Vivet, L.

    1989-01-01

    The ultrasonic echographic technique has specific advantages which make it essential in many non-destructive testing (NDT) investigations. However, the high acoustic power necessary to propagate through highly attenuating media can only be transmitted by resonant transducers, which severely limits the resolution of the received echograms. This resolution may be improved with deconvolution methods, but one-dimensional deconvolution methods run into problems in non-destructive testing when the investigated medium is highly anisotropic and inhomogeneous (e.g., austenitic steel). Numerous deconvolution techniques are well documented in the NDT literature, but they often come from other application fields (biomedical engineering, geophysics), and we show that they do not apply well to two specific NDT problems: frequency-dependent attenuation and the non-minimum phase of the emitted wavelet. We therefore introduce a new time-domain approach which takes the wavelet features into account. Our method treats deconvolution as an estimation problem and is performed in two steps: (i) a phase-correction step which takes into account the phase of the wavelet and estimates a phase-corrected echogram; the phase of the wavelet is due only to the transducer and is assumed time-invariant during propagation; and (ii) a band-equalization step which restores the spectral content of the ideal reflectivity. The two steps of the method are performed using fast Kalman filters, which allow a significant reduction of the computational effort. Synthetic and actual results are given to prove that this is a good approach for resolution improvement in attenuating media

  18. Coincidental match of numerical simulation and physics

    Science.gov (United States)

    Pierre, B.; Gudmundsson, J. S.

    2010-08-01

    Consequences of rapid pressure transients in pipelines range from increased fatigue to leakages and complete ruptures of the pipeline. Therefore, accurate prediction of rapid pressure transients in pipelines using numerical simulations is critical. State-of-the-art modelling of pressure transients in general, and water hammer in particular, includes unsteady friction in addition to the steady frictional pressure drop, and numerical simulations rely on the method of characteristics. Comparison of rapid pressure transient calculations by the method of characteristics and a selected high-resolution finite volume method highlights issues related to the modelling of pressure waves and illustrates that matches between numerical simulations and physics are purely coincidental.
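The method of characteristics mentioned above can be sketched on the classic water-hammer benchmark, sudden valve closure in a frictionless pipe; all parameters below are illustrative assumptions:

```python
import numpy as np

def water_hammer_moc(n=50, a=1000.0, L=500.0, H0=100.0, V0=1.0, t_end=2.0):
    """Method-of-characteristics solution of sudden valve closure in a
    frictionless pipe (reservoir at x=0, valve at x=L). On the characteristic
    grid dt = dx/a (Courant number 1) the scheme follows the pressure waves
    exactly. Returns the head history at the valve."""
    g = 9.81
    dx = L / n
    dt = dx / a
    B = a / g                        # characteristic impedance (head per velocity)
    H = np.full(n + 1, H0)           # frictionless steady state: flat head line
    V = np.full(n + 1, V0)
    valve_head = [H[-1]]
    for _ in range(int(round(t_end / dt))):
        Hn, Vn = H.copy(), V.copy()
        Cp = Hn[:-2] + B * Vn[:-2]   # C+ characteristic arriving from node i-1
        Cm = Hn[2:] - B * Vn[2:]     # C- characteristic arriving from node i+1
        H[1:-1] = 0.5 * (Cp + Cm)
        V[1:-1] = (Cp - Cm) / (2.0 * B)
        H[0] = H0                    # reservoir boundary: fixed head
        V[0] = (H0 - (Hn[1] - B * Vn[1])) / B
        V[-1] = 0.0                  # closed valve boundary: zero flow
        H[-1] = Hn[-2] + B * Vn[-2]
        valve_head.append(H[-1])
    return np.array(valve_head)

h = water_hammer_moc()
# the first surge at the valve is the Joukowsky head rise a*V0/g above static
```

With unsteady friction terms added along each characteristic, this is the structure of the state-of-the-art models the abstract compares against finite volume schemes.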

  19. A stable high-order perturbation of surfaces method for numerical simulation of diffraction problems in triply layered media

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Youngjoon, E-mail: hongy@uic.edu; Nicholls, David P., E-mail: davidn@uic.edu

    2017-02-01

    The accurate numerical simulation of linear waves interacting with periodic layered media is a crucial capability in engineering applications. In this contribution we study the stable and high-order accurate numerical simulation of the interaction of linear, time-harmonic waves with a periodic, triply layered medium with irregular interfaces. In contrast with volumetric approaches, High-Order Perturbation of Surfaces (HOPS) algorithms are inexpensive interfacial methods which rapidly and recursively estimate scattering returns by perturbation of the interface shape. In comparison with Boundary Integral/Element Methods, the stable HOPS algorithm we describe here does not require specialized quadrature rules, periodization strategies, or the solution of dense non-symmetric positive definite linear systems. In addition, the algorithm is provably stable as opposed to other classical HOPS approaches. With numerical experiments we show the remarkable efficiency, fidelity, and accuracy one can achieve with an implementation of this algorithm.

  20. Detector Motion Method to Increase Spatial Resolution in Photon-Counting Detectors

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Daehee; Park, Kyeongjin; Lim, Kyung Taek; Cho, Gyuseong [Korea Advanced Institute of Science and Technology, Daejon (Korea, Republic of)

    2017-03-15

    Medical imaging requires high spatial resolution to identify fine lesions. Photon-counting detectors in medical imaging have recently been rapidly replacing energy-integrating detectors due to the former's high spatial resolution, high efficiency and low noise. Spatial resolution in a photon-counting image is determined by the pixel size: the smaller the pixel, the higher the spatial resolution that can be obtained. However, reducing the pixel size requires a detector redesign, and an expensive fine process is required to integrate the signal processing unit into the reduced pixel. Furthermore, as the pixel size decreases, charge sharing severely deteriorates spatial resolution. To increase spatial resolution, we propose a detector motion method using a large-pixel detector that is less affected by charge sharing. To verify the proposed method, we utilized a UNO-XRI photon-counting detector (1-mm CdTe, Timepix chip) at a maximum X-ray tube voltage of 80 kVp. A spatial resolution similar to that of a 55-μm-pixel image was achieved by applying the proposed method to a 110-μm-pixel detector, with a higher signal-to-noise ratio. The proposed method could be a way to increase spatial resolution without a pixel redesign when pixels suffer severely from charge sharing as pixel size is reduced.
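
    The core idea, acquiring several exposures with the detector shifted by sub-pixel amounts and combining them on a finer grid, can be illustrated with a toy model. The half-pixel interleave below is purely illustrative and is not the authors' acquisition or reconstruction pipeline.

```python
import numpy as np

# Toy detector-motion reconstruction: four exposures of an (n, n)
# large-pixel detector, shifted by half a pixel in y and x, are
# interleaved onto a (2n, 2n) grid with twice the sampling.

def interleave_half_pixel(imgs):
    """imgs: dict keyed by (dy, dx) half-pixel shifts in {0, 1},
    each value an (n, n) array.  Returns the (2n, 2n) interleave."""
    n = imgs[(0, 0)].shape[0]
    out = np.empty((2 * n, 2 * n))
    for (dy, dx), im in imgs.items():
        out[dy::2, dx::2] = im   # each exposure fills one sub-lattice
    return out
```

    A real reconstruction would also deconvolve the large-pixel aperture; the interleave alone only restores the sampling, not the full modulation transfer.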

  1. Nonlinear ordinary differential equations analytical approximation and numerical methods

    CERN Document Server

    Hermann, Martin

    2016-01-01

    The book discusses the solutions of nonlinear ordinary differential equations (ODEs) using analytical and numerical approximation methods. Recently, analytical approximation methods have been widely used in solving linear and nonlinear lower-order ODEs. It also discusses using these methods to solve some strongly nonlinear ODEs. Two chapters are devoted to solving nonlinear ODEs using numerical methods, as in practice high-dimensional systems of nonlinear ODEs that cannot be solved by analytical approximate methods are common. Moreover, it studies analytical and numerical techniques for the treatment of parameter-dependent ODEs. The book explains various methods for solving nonlinear-oscillator and structural-system problems, including the energy balance method, harmonic balance method, amplitude frequency formulation, variational iteration method, homotopy perturbation method, iteration perturbation method, homotopy analysis method, simple and multiple shooting method, and the nonlinear stabilized march...
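
    Of the numerical methods listed, single shooting is the simplest to sketch: a two-point boundary value problem is reduced to root-finding on the unknown initial slope. The example below is a toy in the spirit of the book's shooting chapters, not code from the book; the problem y'' = -sin(y), y(0) = 0, y(1) = 1 is chosen only because it is bounded and well behaved.

```python
import math

# Single shooting for the nonlinear BVP  y'' = -sin(y), y(0)=0, y(1)=1:
# integrate the IVP with trial slope s = y'(0), then bisect on s until
# the terminal value y(1; s) hits the boundary condition.

def integrate(s, n=1000):
    """RK4 for y'' = -sin(y) on [0, 1] with y(0)=0, y'(0)=s; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, s
    f = lambda yy, vv: (vv, -math.sin(yy))
    for _ in range(n):
        k1y, k1v = f(y, v)
        k2y, k2v = f(y + 0.5*h*k1y, v + 0.5*h*k1v)
        k3y, k3v = f(y + 0.5*h*k2y, v + 0.5*h*k2v)
        k4y, k4v = f(y + h*k3y, v + h*k3v)
        y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6.0
        v += h * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return y

def shoot(target=1.0, lo=0.0, hi=3.0, tol=1e-12):
    """Bisect on the slope s until y(1; s) = target (y(1; s) is
    increasing in s over this bracket)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if integrate(mid) > target:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

    Multiple shooting and the stabilized march mentioned in the blurb address the ill-conditioning this simple version suffers on stiff or long intervals.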

  2. High resolution tsunami inversion for 2010 Chile earthquake

    Directory of Open Access Journals (Sweden)

    T.-R. Wu

    2011-12-01

    Full Text Available We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named "SUTIM" (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-squares inversion, this paper also enhances the inversion resolution with a Grid-Shifting method. A smoothness constraint is adopted for stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion. The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and with the waveforms at each gauge location, with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of seafloor displacement. The highly elevated vertical seafloor is mainly concentrated in two areas: one located north of the epicentre, between 34° S and 36° S; the other to the south, between 37° S and 38° S.

  3. High resolution tsunami inversion for 2010 Chile earthquake

    Science.gov (United States)

    Wu, T.-R.; Ho, T.-C.

    2011-12-01

    We investigate the feasibility of inverting high-resolution vertical seafloor displacement from tsunami waveforms. An inversion method named "SUTIM" (small unit tsunami inversion method) is developed to meet this goal. In addition to utilizing the conventional least-squares inversion, this paper also enhances the inversion resolution with a Grid-Shifting method. A smoothness constraint is adopted for stability. After a series of validation and performance tests, SUTIM is used to study the 2010 Chile earthquake. Based upon data quality and azimuthal distribution, we select tsunami waveforms from 6 GLOSS stations and 1 DART buoy record. In total, 157 sub-faults are utilized for the high-resolution inversion. The resolution reaches 10 sub-faults per wavelength. The result is compared with the distribution of the aftershocks and with the waveforms at each gauge location, with very good agreement. The inversion result shows that the source profile features a non-uniform distribution of seafloor displacement. The highly elevated vertical seafloor is mainly concentrated in two areas: one located north of the epicentre, between 34° S and 36° S; the other to the south, between 37° S and 38° S.
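
    The combination of least-squares inversion with a smoothness constraint can be sketched generically: minimize ||Gm - d||² + λ²||Lm||², where G maps sub-fault slip m to waveform samples d and L is a roughness operator. The matrices below are synthetic toys; this is not SUTIM itself.

```python
import numpy as np

# Smoothness-constrained (damped) least squares: stack the data
# equations G m = d with lam * L m = 0, where L is a second-difference
# roughness operator, and solve the augmented system in one lstsq call.

def smooth_lstsq(G, d, lam):
    """Return the model m minimizing ||G m - d||^2 + lam^2 ||L m||^2."""
    n = G.shape[1]
    L = (np.diag(-2.0 * np.ones(n)) +
         np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m
```

    In a real source inversion λ trades waveform misfit against slip roughness and is chosen by an L-curve or cross-validation.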

  4. The development of high-resolution spectroscopic methods and their use in atomic structure studies

    International Nuclear Information System (INIS)

    Poulsen, O.

    1984-01-01

    This thesis discusses work performed during the last nine years in the field of atomic spectroscopy. Several high-resolution techniques, ranging from quantum beats, level crossings, and rf-laser double resonances to nonlinear field-atom interactions, have been employed. In particular, these methods have been adapted and developed to deal with fast accelerated atomic or ionic beams, allowing studies of problems in atomic-structure theory. Fine- and hyperfine-structure determinations in the He I and Li I isoelectronic sequences, in ⁵¹V I, and in ²³⁵U I, II have permitted a detailed comparison with ab initio calculations, demonstrating how the problems change when going towards heavier elements or higher ionization stages. The last part of the thesis is concerned with the fundamental question of obtaining very high optical resolution in the interaction between a fast accelerated atom or ion beam and a laser field, this problem being the core of the continuing development of atomic spectroscopy necessary to challenge the ever more precise and sophisticated theories advanced. (Auth.)

  5. Multi-resolution voxel phantom modeling: a high-resolution eye model for computational dosimetry.

    Science.gov (United States)

    Caracappa, Peter F; Rhodes, Ashley; Fiedler, Derek

    2014-09-21

    Voxel models of the human body are commonly used for simulating radiation dose with a Monte Carlo radiation transport code. Due to memory limitations, the voxel resolution of these computational phantoms is typically too coarse to accurately represent the dimensions of small features such as the eye. The recently reduced recommended dose limit for the lens of the eye, a radiosensitive tissue with a significant concern for cataract formation, has lent increased importance to understanding the dose to this tissue. A high-resolution eye model is constructed using physiological data for the dimensions of radiosensitive tissues, and combined with an existing set of whole-body models to form a multi-resolution voxel phantom, which is used with the MCNPX code to calculate radiation dose from various exposure types. This phantom provides an accurate representation of the radiation transport through the structures of the eye. Two alternate methods of including a high-resolution eye model within an existing whole-body model are developed. The accuracy and performance of each method are compared against existing computational phantoms.

  6. High-resolution wavefront shaping with a photonic crystal fiber for multimode fiber imaging

    NARCIS (Netherlands)

    Amitonova, L. V.; Descloux, A.; Petschulat, J.; Frosz, M. H.; Ahmed, G.; Babic, F.; Jiang, X.; Mosk, A. P.; Russell, P. S. J.; Pinkse, P.W.H.

    2016-01-01

    We demonstrate that a high-numerical-aperture photonic crystal fiber allows lensless focusing at an unparalleled resolution by complex wavefront shaping. This paves the way toward high-resolution imaging exceeding the capabilities of imaging with multi-core single-mode optical fibers. We analyze

  7. FBG Interrogation Method with High Resolution and Response Speed Based on a Reflective-Matched FBG Scheme.

    Science.gov (United States)

    Cui, Jiwen; Hu, Yang; Feng, Kunpeng; Li, Junying; Tan, Jiubin

    2015-07-08

    In this paper, an interrogation method with high resolution and response speed, based on a reflective-matched Fiber Bragg Grating (FBG) scheme, is investigated in detail. The nonlinearity problem of the reflective-matched FBG sensing interrogation scheme is solved by establishing and optimizing a mathematical model. A mechanical adjustment that optimizes the interrogation method by tuning the central wavelength of the reference FBG, improving stability and anti-temperature-perturbation performance, is investigated. To satisfy the measurement requirements of optical and electrical signal processing, a well-designed acquisition circuit board is prepared, and experiments on the performance of the interrogation method are carried out. The experimental results indicate that the optical power resolution of the acquisition circuit board is better than 8 pW, and the stability of the interrogation method with the mechanical adjustment can reach 0.06%. Moreover, the nonlinearity of the interrogation method is 3.3% in the measurable range of 60 pm; the influence of temperature is significantly reduced, to 9.5%; and the wavelength resolution and response speed reach 0.3 pm and 500 kHz, respectively.

  8. Operator theory and numerical methods

    CERN Document Server

    Fujita, H; Suzuki, T

    2001-01-01

    In step with developments in computation, theoretical studies of numerical schemes are now fruitful and highly needed. In 1991 an article on the finite element method applied to evolutionary problems was published. Following that method, this book studies various schemes from operator-theoretical points of view. Many parts are devoted to the finite element method, but other schemes and problems (charge simulation method, domain decomposition method, nonlinear problems, and so forth) are also discussed, motivated by the observation that practically useful schemes have fine mathematical structures, and conversely. The book has the following chapters: 1. Boundary Value Problems and FEM. 2. Semigroup Theory and FEM. 3. Evolution Equations and FEM. 4. Other Methods in Time Discretization. 5. Other Methods in Space Discretization. 6. Nonlinear Problems. 7. Domain Decomposition Method.

  9. High-resolution numerical modeling of meteorological and hydrological conditions during May 2014 floods in Serbia

    Science.gov (United States)

    Vujadinovic, Mirjam; Vukovic, Ana; Cvetkovic, Bojan; Pejanovic, Goran; Nickovic, Slobodan; Djurdjevic, Vladimir; Rajkovic, Borivoj; Djordjevic, Marija

    2015-04-01

    In May 2014 the western Balkan region was affected by catastrophic floods in Serbia, Bosnia and Herzegovina, and eastern parts of Croatia. Observed precipitation amounts were extremely high, at many stations the largest ever recorded. In the period from 12 to 18 May, most of Serbia received between 50 and 100 mm of rainfall, while the western parts of the country, which were affected the most, had over 200 mm of rainfall, locally even more than 300 mm. This very intense precipitation came when the soil was already saturated after a very wet period during the second half of April and the beginning of May, when most of Serbia received between 120 and 170 mm of rainfall. New abundant precipitation on already saturated soil increased surface and underground water flow and caused floods, soil erosion and landslides. High water levels, most of them record-breaking, were measured on the Sava, Drina, Dunav, Kolubara, Ljig, Ub, Toplica, Tamnava, Jadar, Zapadna Morava, Velika Morava, Mlava and Pek rivers. Overall, two cities and 17 municipalities were severely affected by the floods; 32,000 people were evacuated from their homes, while 51 died. Material damage to infrastructure, the power system, crops, livestock and houses is estimated at more than 2 billion euro. Although the operational numerical weather forecast gave generally good precipitation predictions, flood forecasting in this case was done mainly through expert judgment rather than dynamic hydrological modeling. We applied an integrated atmospheric-hydrologic modelling system to some of the most affected catchments in order to simulate the hydrological response in a timely manner and examine its potential as a flood warning system. The system is based on the Non-hydrostatic Multiscale Model NMMB, a numerical weather prediction model that can be used on a broad range of spatial and temporal scales. Its non-hydrostatic module allows high horizontal resolution, resolving cloud systems as well as large

  10. High Resolution DNS of Turbulent Flows using an Adaptive, Finite Volume Method

    Science.gov (United States)

    Trebotich, David

    2014-11-01

    We present a new computational capability for high-resolution simulation of incompressible viscous flows. Our approach is based on cut-cell methods, where an irregular geometry such as a bluff body is intersected with a rectangular Cartesian grid, resulting in cut cells near the boundary. In the cut cells we use a conservative discretization based on a discrete form of the divergence theorem to approximate fluxes for the elliptic and hyperbolic terms in the Navier-Stokes equations. Away from the boundary the method reduces to a finite difference method. The algorithm is implemented in the Chombo software framework, which supports adaptive mesh refinement and massively parallel computations. The code is scalable to 200,000+ processor cores on DOE supercomputers, enabling DNS studies at unprecedented scale and resolution. For flow past a cylinder in transition (Re = 300) we observe a number of secondary structures in the far wake in 2D, where the wake is over 120 cylinder diameters in length. These are compared with the more regularized wake structures in 3D at the same scale. For flow past a sphere (Re = 600) we resolve an arrowhead structure in the velocity in the near wake. The effectiveness of AMR is further highlighted in a simulation of turbulent flow (Re = 6000) in the contraction of an oil well blowout preventer. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract Number DE-AC02-05-CH11231.
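
    The conservative update the abstract alludes to, a discrete divergence of interface fluxes, is easiest to see on a regular 1-D grid, before any cut cells are involved. The sketch below uses first-order upwind fluxes for linear advection with periodic boundaries; it is a cartoon of the discrete divergence theorem, not the cut-cell scheme itself.

```python
import numpy as np

# Conservative finite-volume update on a uniform 1-D grid:
#   u_i^{n+1} = u_i^n - (dt/dx) * (F_{i+1/2} - F_{i-1/2}),
# with upwind interface fluxes F_{i-1/2} = a * u_{i-1} for speed a > 0.
# Because every flux leaves one cell and enters its neighbour, the total
# of u is conserved to rounding error.

def fv_upwind_step(u, a, dt, dx):
    F = a * np.roll(u, 1)                      # F[i] = flux at left face of cell i
    return u - (dt / dx) * (np.roll(F, -1) - F)  # right-face minus left-face flux
```

    The cut-cell extension replaces dx by the irregular cell volume and sums fluxes over the (possibly cut) faces, preserving the same telescoping conservation property.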

  11. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.

  12. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    International Nuclear Information System (INIS)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P.A.; Schmid, Adrien W.

    2016-01-01

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
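
    The least-squares resolution step described in both records can be sketched generically: an observed spectrum is modeled as a linear combination of theoretical isotope-pattern spectra, one column per candidate species, and the abundances are recovered by a least-squares fit. The patterns below are invented for illustration; a real library would be computed from elemental compositions, and a non-negative solver would typically be preferred on noisy data.

```python
import numpy as np

# Least-squares spectral resolution: fit an observed spectrum as a
# linear combination of library isotope patterns (columns of P).

def resolve_spectrum(patterns, observed):
    """patterns: (n_mz, n_species) matrix of unit-area isotope patterns.
    Returns the per-species abundances from an ordinary lstsq fit."""
    coeffs, *_ = np.linalg.lstsq(patterns, observed, rcond=None)
    return coeffs
```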

  13. Ultra-high resolution HLA genotyping and allele discovery by highly multiplexed cDNA amplicon pyrosequencing

    Directory of Open Access Journals (Sweden)

    Lank Simon M

    2012-08-01

    Full Text Available Abstract Background High-resolution HLA genotyping is a critical diagnostic and research assay. Current methods rarely achieve unambiguous high-resolution typing without making population-specific frequency inferences, due to a lack of locus coverage and difficulty in exon-phase matching. Achieving high-resolution typing is also becoming more challenging with traditional methods as the database of known HLA alleles increases. Results We designed a cDNA amplicon-based pyrosequencing method to capture 94% of the HLA class I open reading frame with only two amplicons per sample, and an analogous method for class II HLA genes, with a primary focus on sequencing the DRB loci. We present a novel Galaxy server-based analysis workflow for determining genotype. During assay validation, we performed two GS Junior sequencing runs to determine the accuracy of the HLA class I amplicons and the DRB amplicon at different levels of multiplexing. When 116 amplicons were multiplexed, we unambiguously resolved 99% of class I alleles to four- or six-digit resolution, as well as 100% of DRB calls. The second experiment, with 271 multiplexed amplicons, missed some alleles but generated high-resolution, concordant typing for 93% of class I alleles and 96% of DRB1 alleles. In a third, preliminary experiment we attempted to sequence novel amplicons for other class II loci, with mixed success. Conclusions The presented assay is higher-throughput and higher-resolution than existing HLA genotyping methods, and suitable for allele discovery or large cohort sampling. The validated class I and DRB primers successfully generated unambiguous high-resolution genotypes, while further work is needed to validate additional class II genotyping amplicons.

  14. Object-based methods for individual tree identification and tree species classification from high-spatial resolution imagery

    Science.gov (United States)

    Wang, Le

    2003-10-01

    Modern forest management poses an increasing need for detailed knowledge of forest information at different spatial scales. At the forest level, information on tree species assemblage is desired, whereas at or below the stand level, individual-tree information is preferred. Remote sensing provides an effective tool to extract the above information at multiple spatial scales in the continuous time domain. To date, the increasing volume and ready availability of high-spatial-resolution data have led to much wider application of remotely sensed products. Nevertheless, to make effective use of the improving spatial resolution, conventional pixel-based classification methods are far from satisfactory. Correspondingly, developing object-based methods has become a central challenge for researchers in the field of remote sensing. This thesis focuses on the development of methods for accurate individual tree identification and tree species classification. We develop a method in which individual tree crown boundaries and treetop locations are derived under a unified framework. We apply a two-stage approach with edge detection followed by marker-controlled watershed segmentation. Treetops are modeled from radiometry and geometry aspects: specifically, treetops are assumed to be represented by local radiation maxima and to be located near the center of the tree crown. As a result, a marker image is created from the derived treetops to guide a watershed segmentation that further differentiates overlapping trees and produces a segmented image comprised of individual tree crowns. The image segmentation method developed achieves a promising result for a 256 x 256 CASI image. Further effort is then made to extend our methods to multiple scales constructed from a wavelet decomposition. Scale-consistency and geometric-consistency criteria are designed to examine the gradients along the scale-space for the purpose of separating the true crown boundary from unwanted
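
    The "treetops as local radiation maxima" assumption is the easiest piece of this pipeline to sketch. The toy below flags pixels that strictly exceed all eight neighbours; it stands in for the paper's treetop-marker step only, not the edge detection or watershed stages.

```python
import numpy as np

# Candidate treetop detection: strict 3x3 local maxima of a brightness
# image (interior pixels only).  Each detected pixel would seed one
# marker for a marker-controlled watershed segmentation.

def local_maxima(img):
    """Return a boolean mask of strict 3x3 local maxima."""
    c = img[1:-1, 1:-1]
    mask = np.ones_like(c, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            # compare the centre block against its shifted neighbour block
            mask &= c > img[1+dy:img.shape[0]-1+dy, 1+dx:img.shape[1]-1+dx]
    out = np.zeros(img.shape, dtype=bool)
    out[1:-1, 1:-1] = mask
    return out
```

    In practice the image is smoothed first so that one crown yields one maximum rather than several noisy peaks.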

  15. Refinement procedure for the image alignment in high-resolution electron tomography

    International Nuclear Information System (INIS)

    Houben, L.; Bar Sadan, M.

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is particularly the case for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A tomographic-reconstruction-based, marker-free method is proposed that uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt-angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with the reference marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. -- Highlights: → Alignment procedure for electron tomography based on iterative tomogram contrast optimisation. → Marker-free, independent of object, little user interaction. → Accuracy competitive with fiducial marker methods and suited for high-resolution tomography.

  16. High resolution optical DNA mapping

    Science.gov (United States)

    Baday, Murat

    Many types of diseases, including cancer and autism, are associated with copy-number variations in the genome. Most of these variations could not be identified with existing sequencing and optical DNA mapping methods. We have developed a multi-color super-resolution technique, with potential for high throughput and low cost, which allows us to recognize more of these variations. Our technique has made a 10-fold improvement in the resolution of optical DNA mapping. Using a 180 kb BAC clone as a model system, we resolved dense patterns from 108 fluorescent labels of two different colors representing two different sequence motifs. Overall, a detailed DNA map with 100 bp resolution was achieved, which has the potential to reveal detailed information about genetic variance and to facilitate medical diagnosis of genetic disease.

  17. Numerical method for analysis of temperature rises and thermal stresses around high level radioactive waste repository in granite

    International Nuclear Information System (INIS)

    Shimooka, Hiroshi

    1982-01-01

    The disposal of high-level radioactive waste results in temperature rises and thermal stresses which change the hydraulic conductivity of the rock around the repository. For safety analysis of the disposal of high-level radioactive waste in hard rock, it is necessary to find the temperature-rise and thermal-stress distributions around the repository. In this paper, these distribution changes are analyzed using the finite difference method. In advance of the numerical analysis, the shapes and properties of the repository and the rock must be simplified. Several kinds of numerical models are prepared, and the results of the analysis are examined. The waste disposal methods are then discussed from the standpoints of the temperature-rise and thermal-stress analysis. (author)
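
    The heat-conduction part of such a finite-difference analysis can be sketched in one dimension. The explicit FTCS step below models a rock slab with a volumetric heat source at the repository plane; geometry, properties and source strength are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Explicit finite-difference (FTCS) step for 1-D heat conduction with a
# volumetric source:  dT/dt = alpha * d2T/dx2 + q / (rho * c).
# Stability requires r = alpha * dt / dx^2 <= 1/2.

def heat_step(T, q, alpha, dt, dx, rho_c):
    """One step with fixed-temperature (Dirichlet) ends.
    T [K], q [W/m^3], alpha [m^2/s], rho_c = rho*c [J/(m^3 K)]."""
    Tn = T.copy()
    Tn[1:-1] = (T[1:-1]
                + alpha * dt / dx**2 * (T[2:] - 2.0*T[1:-1] + T[:-2])
                + dt * q[1:-1] / rho_c)
    return Tn
```

    A thermo-mechanical analysis would then feed the temperature field into a thermal-stress calculation on the same grid.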

  18. Automated aberration correction of arbitrary laser modes in high numerical aperture systems.

    Science.gov (United States)

    Hering, Julian; Waller, Erik H; Von Freymann, Georg

    2016-12-12

    Controlling the point-spread function in three-dimensional laser lithography is crucial for fabricating structures with the highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing to create well-defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator-based automated aberration compensation to optimize arbitrary laser modes in a high-numerical-aperture system. Using circularly polarized light for the measurement and first-guess initial conditions for the amplitude and phase of the pupil function, our scalar approach outperforms recent algorithms with vectorial corrections. Besides laser lithography, applications such as optical tweezers and microscopy might also benefit from the method presented.
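
    The textbook scalar Gerchberg-Saxton iteration that underlies such algorithms is short: alternate between the pupil and focal planes, linked by an FFT, imposing the known amplitude in each plane while keeping the evolving phase. This is the bare-bones classic algorithm only; the paper's modified version adds first-guess amplitude/phase initial conditions and aberration handling that are not reproduced here.

```python
import numpy as np

# Classic Gerchberg-Saxton: find a pupil phase mapping a known source
# amplitude onto a desired focal-plane amplitude (planes linked by FFT).

def gerchberg_saxton(source_amp, target_amp, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, source_amp.shape)
    field = source_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        far = np.fft.fft2(field)
        far = target_amp * np.exp(1j * np.angle(far))      # impose target amplitude
        field = np.fft.ifft2(far)
        field = source_amp * np.exp(1j * np.angle(field))  # impose source amplitude
    return np.angle(field)   # retrieved pupil phase
```

    The error between the achieved and desired focal amplitudes is non-increasing over the iterations, which is what makes the scheme usable for automated correction loops.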

  19. Final Progress Report: Collaborative Research: Decadal-to-Centennial Climate & Climate Change Studies with Enhanced Variable and Uniform Resolution GCMs Using Advanced Numerical Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fox-Rabinovitz, M; Cote, J

    2009-06-05

    The joint U.S.-Canadian project has been devoted to: (a) decadal climate studies using developed state-of-the-art GCMs (General Circulation Models) with enhanced variable and uniform resolution; (b) development and implementation of advanced numerical techniques; (c) research in parallel computing and associated numerical methods; (d) atmospheric chemistry experiments related to climate issues; (e) validation of regional climate modeling strategies for nested- and stretched-grid models. The variable-resolution stretched-grid (SG) GCMs produce accurate and cost-efficient regional climate simulations with mesoscale resolution. The advantage of the stretched-grid approach is that it preserves the high quality of both global and regional circulations while providing consistent interactions between global and regional scales and phenomena. The major accomplishment of the project has been the successful international SGMIP-1 and SGMIP-2 (Stretched-Grid Model Intercomparison Project, phases 1 and 2), based on these research developments and activities. The SGMIP provides unique high-resolution regional and global multi-model ensembles beneficial for the regional climate modeling and broader modeling communities. The U.S. SGMIP simulations have been produced using SciDAC ORNL supercomputers. Collaborations with the other international participants M. Deque (Meteo-France) and J. McGregor (CSIRO, Australia) and their centers and groups have been beneficial for the strong joint effort, especially for the SGMIP activities. The WMO/WCRP/WGNE endorsed the SGMIP activities in 2004-2008. This project reflects a trend in the modeling and broader communities to move towards regional and sub-regional assessments and applications important for U.S. and Canadian public, business and policy decision makers, as well as for international collaborations on regional, and especially climate-related, issues.

  20. Spot auto-focusing and spot auto-stigmation methods with high-definition auto-correlation function in high-resolution TEM.

    Science.gov (United States)

    Isakozawa, Shigeto; Fuse, Taishi; Amano, Junpei; Baba, Norio

    2018-04-01

    As alternatives to the diffractogram-based method in high-resolution transmission electron microscopy, a spot auto-focusing (AF) method and a spot auto-stigmation (AS) method are presented, built on a unique high-definition auto-correlation function (HD-ACF). The HD-ACF clearly resolves the ACF central peak region in small amorphous-thin-film images, reflecting the phase contrast transfer function. At 300-k magnification on a 120-kV transmission electron microscope, the smallest areas used are 64 × 64 pixels (~3 nm²) for the AF and 256 × 256 pixels for the AS. A useful advantage of these methods is that the AF function retains acceptable accuracy even for a low-s/n (~1.0) image. A reference database on the defocus dependency of the HD-ACF, built by pre-acquiring through-focus amorphous-thin-film images, must be prepared in order to use these methods. This can be very beneficial because the specimens are not limited to approximations of weak phase objects but can be extended to objects outside such approximations.
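
    For reference, the plain image auto-correlation function on which the HD-ACF builds is computed efficiently via the Wiener-Khinchin theorem: the ACF is the inverse FFT of the power spectrum. This sketch is the standard ACF only; the "high-definition" processing of the paper is not reproduced here.

```python
import numpy as np

# Auto-correlation of a 2-D image via the Wiener-Khinchin theorem:
# ACF = IFFT(|FFT(image)|^2), zero-centred and peak-normalized.

def acf(img):
    f = np.fft.fft2(img - img.mean())      # remove DC so the ACF decays
    a = np.fft.ifft2(np.abs(f) ** 2).real  # power spectrum -> ACF
    a = np.fft.fftshift(a)                 # move zero lag to the centre
    return a / a.max()
```

    The zero-lag peak is always the maximum (Cauchy-Schwarz), and its immediate neighbourhood is the region the HD-ACF resolves to read off defocus and astigmatism.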

  1. Resolution of the neutron transport equation by a three-dimensional least square method

    International Nuclear Information System (INIS)

    Varin, Elisabeth

    2001-01-01

    Knowledge of the space and time distribution of neutrons with a certain energy or speed allows the exploitation and control of a nuclear reactor and the assessment of the irradiation dose around an irradiated nuclear fuel storage site. The neutron density is described by a transport equation. The objective of this research thesis is to develop software for the resolution of this stationary equation in a three-dimensional Cartesian domain by means of a deterministic method. After a presentation of the transport equation, the author gives an overview of the different deterministic resolution approaches, identifies their benefits and drawbacks, and discusses the choice of the Ressel method. The least squares method is precisely described and then applied. Numerical benchmarks are reported for validation purposes.

  2. SPECTRA OF STRONG MAGNETOHYDRODYNAMIC TURBULENCE FROM HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Beresnyak, Andrey

    2014-01-01

    Magnetohydrodynamic (MHD) turbulence is present in a variety of solar and astrophysical environments. Solar wind fluctuations with frequencies lower than 0.1 Hz are believed to be mostly governed by Alfvénic turbulence, with particle transport depending on the power spectrum and the anisotropy of such turbulence. Recently, conflicting spectral slopes for the inertial range of MHD turbulence have been reported by different groups. Spectral shapes from earlier simulations showed that MHD turbulence is less scale-local than hydrodynamic turbulence, which is why higher-resolution simulations and careful, rigorous numerical analysis are especially needed in the MHD case. In this Letter, we present two groups of simulations with resolution up to 4096³, which are numerically well resolved and have been analyzed with an exact and well-tested method of scaling study. Our results from both simulation groups indicate that the asymptotic power spectral slope for all energy-related quantities, such as total energy and residual energy, is around -1.7, close to Kolmogorov's -5/3. This suggests that residual energy is a constant fraction of the total energy and that in the asymptotic regime of Alfvénic turbulence the magnetic and kinetic spectra have the same scaling. The -1.5 slope for energy and the -2 slope for residual energy, which have been suggested earlier, are incompatible with our numerics.
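
    Spectral slopes such as the reported -1.7 are conventionally measured as a least-squares fit in log-log space. A minimal sketch on synthetic Kolmogorov-like data (not the simulation output):

    ```python
    import math

    def fit_loglog_slope(k, E):
        """Least-squares slope of log E(k) versus log k."""
        logs = [(math.log(ki), math.log(Ei)) for ki, Ei in zip(k, E)]
        n = len(logs)
        mx = sum(x for x, _ in logs) / n
        my = sum(y for _, y in logs) / n
        num = sum((x - mx) * (y - my) for x, y in logs)
        den = sum((x - mx) ** 2 for x, _ in logs)
        return num / den

    # Synthetic inertial-range spectrum E(k) = C * k^(-5/3)
    k = [2 ** i for i in range(1, 11)]
    E = [3.0 * ki ** (-5.0 / 3.0) for ki in k]
    slope = fit_loglog_slope(k, E)
    assert abs(slope - (-5.0 / 3.0)) < 1e-9
    ```

    The scaling-study method mentioned in the abstract goes further than a single fit, comparing such slopes across resolutions, but the log-log regression is the basic building block.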

  3. Experimental and numerical studies of high-velocity impact fragmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kipp, M.E.; Grady, D.E.; Swegle, J.W.

    1993-08-01

    Developments are reported in both experimental and numerical capabilities for characterizing the debris spray produced in penetration events. We have performed a series of high-velocity experiments specifically designed to examine the fragmentation of the projectile during impact. High-strength, well-characterized steel spheres (6.35 mm diameter) were launched with a two-stage light-gas gun to velocities in the range of 3 to 5 km/s. Normal impact with PMMA plates, thicknesses of 0.6 to 11 mm, applied impulsive loads of various amplitudes and durations to the steel sphere. Multiple flash radiography diagnostics and recovery techniques were used to assess size, velocity, trajectory and statistics of the impact-induced fragment debris. Damage modes to the primary target plate (plastic) and to a secondary target plate (aluminum) were also evaluated. Dynamic fragmentation theories, based on energy-balance principles, were used to evaluate local material deformation and fracture state information from CTH, a three-dimensional Eulerian solid dynamics shock wave propagation code. The local fragment characterization of the material defines a weighted fragment size distribution, and the sum of these distributions provides a composite particle size distribution for the steel sphere. The calculated axial and radial velocity changes agree well with experimental data, and the calculated fragment sizes are in qualitative agreement with the radiographic data. A secondary effort involved the experimental and computational analyses of normal and oblique copper ball impacts on steel target plates. High-resolution radiography and witness plate diagnostics provided impact motion and statistical fragment size data. CTH simulations were performed to test computational models and numerical methods.
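
    Energy-balance fragmentation theories of the kind cited estimate a dominant fragment size from the competition between local kinetic energy and fracture energy. One commonly quoted Grady-type form for ductile metals is s = (24Γ / (ρ ε̇²))^(1/3); the material numbers below are purely illustrative and are not taken from the report:

    ```python
    # Hedged sketch of an energy-balance fragment-size estimate (Grady-type
    # form for ductile metals). All inputs are assumed, illustrative values.

    def grady_fragment_size(gamma, rho, strain_rate):
        """Characteristic fragment size [m] from the balance between local
        kinetic energy and fracture surface energy."""
        return (24.0 * gamma / (rho * strain_rate ** 2)) ** (1.0 / 3.0)

    s = grady_fragment_size(gamma=1.0e4,        # fracture energy [J/m^2] (assumed)
                            rho=7850.0,         # steel density [kg/m^3]
                            strain_rate=1.0e5)  # impact strain rate [1/s] (assumed)
    assert 1e-4 < s < 1e-2  # millimetre-scale fragments at these rates
    ```

    In the CTH-based procedure described above, a local size of this kind is evaluated per computational cell and the resulting distributions are summed into a composite distribution for the sphere.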

  4. Numerical computation of linear instability of detonations

    Science.gov (United States)

    Kabanov, Dmitry; Kasimov, Aslan

    2017-11-01

    We propose a method to study linear stability of detonations by direct numerical computation. The linearized governing equations together with the shock-evolution equation are solved in the shock-attached frame using a high-resolution numerical algorithm. The computed results are processed by the Dynamic Mode Decomposition technique to generate dispersion relations. The method is applied to the reactive Euler equations with simple-depletion chemistry as well as more complex multistep chemistry. The results are compared with those known from normal-mode analysis. We acknowledge financial support from King Abdullah University of Science and Technology.
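
    As a toy stand-in for the dispersion-relation post-processing step, the growth rate and oscillation frequency that make up one point of a dispersion relation can be read off a perturbation history; the signal below is synthetic, not detonation data:

    ```python
    # Recover a linear growth rate sigma and frequency omega from a sampled
    # signal exp(sigma*t)*cos(omega*t), using successive maxima of |x|.
    import math

    dt = 1e-3
    t = [i * dt for i in range(5000)]
    sigma_true, omega_true = 0.3, 2.0 * math.pi
    x = [math.exp(sigma_true * ti) * math.cos(omega_true * ti) for ti in t]

    # Local maxima of |x|: envelope samples spaced half a period apart.
    peaks = [(t[i], abs(x[i])) for i in range(1, len(x) - 1)
             if abs(x[i]) > abs(x[i - 1]) and abs(x[i]) >= abs(x[i + 1])]

    # Growth rate from the amplitude ratio of successive peaks;
    # frequency from their spacing (pi radians between maxima of |cos|).
    rates = [math.log(b / a) / (tb - ta)
             for (ta, a), (tb, b) in zip(peaks, peaks[1:])]
    freqs = [math.pi / (tb - ta) for (ta, _), (tb, _) in zip(peaks, peaks[1:])]
    sigma_est = sum(rates) / len(rates)
    omega_est = sum(freqs) / len(freqs)

    assert abs(sigma_est - sigma_true) < 0.01
    assert abs(omega_est - omega_true) < 0.1
    ```

    Dynamic Mode Decomposition generalizes this idea: instead of one scalar series, it decomposes full simulation snapshots into modes, each carrying its own complex growth rate.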

  5. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    OpenAIRE

    Rao, Yuhan; Zhu, Xiaolin; Chen, Jin; Wang, Jianmin

    2015-01-01

    Due to technical limitations, it is impossible for current NDVI datasets to have high resolution in both the spatial and temporal dimensions. Several methods have therefore been developed to produce NDVI time-series datasets with high spatial and temporal resolution, but these face limitations including high computational loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to achieve the goal of accurately and efficiently bl...

  6. ANL high resolution injector

    International Nuclear Information System (INIS)

    Minehara, E.; Kutschera, W.; Hartog, P.D.; Billquist, P.

    1985-01-01

    The ANL (Argonne National Laboratory) high-resolution injector has been installed to obtain higher mass resolution and higher preacceleration, and to utilize effectively the full mass range of ATLAS (Argonne Tandem Linac Accelerator System). Preliminary results of the first beam test are reported briefly. The design and performance, in particular a high-mass-resolution magnet with aberration compensation, are discussed. 7 refs., 5 figs., 2 tabs

  7. An Optofluidic Lens Array Microchip for High Resolution Stereo Microscopy

    Directory of Open Access Journals (Sweden)

    Mayurachat Ning Gulari

    2014-08-01

    We report the development of an add-on, chip-based optical module, termed the Microfluidic-based Oil-immersion Lenses (μOIL) chip, which transforms any stereo microscope into a high-resolution, large-field-of-view imaging platform. The μOIL chip consists of an array of ball mini-lenses assembled onto a microfluidic silicon chip. The mini-lenses are made of a high-refractive-index material (sapphire) and are half immersed in oil. These two key features enable submicron resolution and a maximum numerical aperture of ~1.2. The μOIL chip is reusable and easy to operate, as it can be placed directly on top of any biological sample. It improves the resolution of a stereo microscope by an order of magnitude without compromising the field of view; therefore, we believe it could become a versatile tool for use in various research studies and clinical applications.

  8. Reduced material model for closed cell metal foam infiltrated with phase change material based on high resolution numerical studies

    International Nuclear Information System (INIS)

    Ohsenbrügge, Christoph; Marth, Wieland; Navarro y de Sosa, Iñaki; Drossel, Welf-Guntram; Voigt, Axel

    2016-01-01

    Highlights: • Closed cell metal foam sandwich structures were investigated. • High resolution numerical studies were conducted using CT scan data. • A reduced model for use in commercial FE software reduces the needed degrees of freedom. • Thermal inertia is increased by a factor of about 4 to 5 in PCM-filled structures. • The reduced material model was verified using experimental data. - Abstract: The thermal behaviour of closed cell metal foam infiltrated with paraffin wax as a latent heat storage for application in high-precision machine tools was examined. Aluminium foam sandwiches with metallically bound cover layers were prepared in a powder metallurgical process, and cross-sectional images of the structures were generated with X-ray computed tomography. Based on the image data, a highly detailed three-dimensional model was derived and prepared for simulation with the adaptive FE library AMDiS. The pores were assumed to be filled with paraffin wax. The thermal conductivity and the transient thermal behaviour in the phase-change region were investigated. Based on the results of the highly detailed simulations, a reduced model for use in commercial FE software (ANSYS) was derived. It incorporates the properties of the matrix and the phase change material into a homogenized material. A sandwich structure with and without paraffin was investigated experimentally under constant thermal load. The results were used to verify the reduced material model in ANSYS.

  9. Excel spreadsheet in teaching numerical methods

    Science.gov (United States)

    Djamila, Harimi

    2017-09-01

    One of the important objectives in teaching numerical methods to undergraduate students is to bring them to an understanding of numerical-method algorithms. Although manual calculation is important for understanding a procedure, it is time consuming and prone to error, particularly for the iteration procedures used in many numerical methods. Many commercial programs, such as Matlab, Maple, and Mathematica, are useful in teaching numerical methods, but they are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming that can be used either on or off campus, and students are not distracted by having to write code. It must be emphasized that more general commercial software should still be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
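
    A minimal Python analogue of the spreadsheet exercise: each printed row mirrors one spreadsheet row of an iteration table, here Newton's method for √2 (the worked problem is chosen for illustration; the article does not prescribe a specific one):

    ```python
    # Spreadsheet-style iteration table: Newton's method for f(x) = x^2 - 2,
    # i.e. x_{n+1} = x_n - f(x_n)/f'(x_n). Columns: step, x_n, |f(x_n)|.
    def newton_sqrt2(x0, n_steps):
        rows = [(0, x0, abs(x0 * x0 - 2.0))]
        x = x0
        for n in range(1, n_steps + 1):
            x = x - (x * x - 2.0) / (2.0 * x)
            rows.append((n, x, abs(x * x - 2.0)))
        return rows

    table = newton_sqrt2(1.0, 5)
    for n, x, err in table:
        print(f"{n:2d}  {x:.10f}  {err:.2e}")
    assert abs(table[-1][1] - 2 ** 0.5) < 1e-12
    ```

    In Excel the same table is built by putting the update formula in one cell and filling it down a column, which is exactly the "initial level of programming" the article advocates.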

  10. High-resolution observations of the near-surface wind field over an isolated mountain and in a steep river canyon

    Science.gov (United States)

    B. W. Butler; N. S. Wagenbrenner; J. M. Forthofer; B. K. Lamb; K. S. Shannon; D. Finn; R. M. Eckman; K. Clawson; L. Bradshaw; P. Sopko; S. Beard; D. Jimenez; C. Wold; M. Vosburgh

    2015-01-01

    A number of numerical wind flow models have been developed for simulating wind flow at relatively fine spatial resolutions (e.g., 100 m); however, there are very limited observational data available for evaluating these high-resolution models. This study presents high-resolution surface wind data sets collected from an isolated mountain and a steep river canyon. The...

  11. Towards high-resolution positron emission tomography for small volumes

    International Nuclear Information System (INIS)

    McKee, B.T.A.

    1982-01-01

    Some arguments are made regarding the medical usefulness of high spatial resolution in positron imaging, even if limited to small imaged volumes. Then the intrinsic limitations to spatial resolution in positron imaging are discussed. The project to build a small-volume, high resolution animal research prototype (SHARP) positron imaging system is described. The components of the system, particularly the detectors, are presented and brief mention is made of data acquisition and image reconstruction methods. Finally, some preliminary imaging results are presented; a pair of isolated point sources and ¹⁸F in the bones of a rabbit. Although the detector system is not fully completed, these first results indicate that the goals of high sensitivity and high resolution (4 mm) have been realized. (Auth.)

  12. A method of incident angle estimation for high resolution spectral recovery in filter-array-based spectrometers

    Science.gov (United States)

    Kim, Cheolsun; Lee, Woong-Bi; Ju, Gun Wu; Cho, Jeonghoon; Kim, Seongmin; Oh, Jinkyung; Lim, Dongsung; Lee, Yong Tak; Lee, Heung-No

    2017-02-01

    In recent years, there has been increasing interest in miniature spectrometers for research and development. Filter-array-based spectrometers in particular have the advantages of low cost and portability, and can be applied in various fields such as biology, chemistry and the food industry. Miniaturization of optical filters degrades spectral resolution due to limitations on the spectral responses and the number of filters. Many studies have reported that filter-array-based spectrometers can achieve resolution improvements by using digital signal processing (DSP) techniques. The performance of DSP-based spectral recovery depends strongly on prior knowledge of the transmission functions (TFs) of the filters. The TFs vary with the incident angle of light onto the filter array. Conventionally, it is assumed that the incident angle of light on the filters is fixed and the TFs are known to the DSP. However, the incident angle varies with the environment and application, and thus the TFs also vary, which degrades the spectral recovery. In this paper, we propose a method of incident angle estimation (IAE) for high resolution spectral recovery in filter-array-based spectrometers. By exploiting sparse signal reconstruction with L1-norm minimization, IAE selects, among all candidate incident angles, the angle that minimizes the error of the reconstructed signal. Based on IAE, DSP effectively provides high resolution spectral recovery in filter-array-based spectrometers.
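
    The selection step of IAE can be sketched with toy numbers: one made-up transmission matrix per candidate angle, ordinary least squares standing in for the paper's L1-norm reconstruction, and the angle with the smallest residual wins:

    ```python
    # Minimal IAE sketch: 3 filters x 2 spectral bands per candidate angle.
    # Matrices are invented toy numbers, not real filter responses.

    def least_squares_residual(T, y):
        """Solve min_x ||T x - y||^2 for 2 unknowns via normal equations
        and return the squared residual."""
        a11 = sum(r[0] * r[0] for r in T)
        a12 = sum(r[0] * r[1] for r in T)
        a22 = sum(r[1] * r[1] for r in T)
        b1 = sum(r[0] * yi for r, yi in zip(T, y))
        b2 = sum(r[1] * yi for r, yi in zip(T, y))
        det = a11 * a22 - a12 * a12
        x = ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
        return sum((r[0] * x[0] + r[1] * x[1] - yi) ** 2 for r, yi in zip(T, y))

    # One toy transmission matrix per candidate incident angle (degrees).
    tf_by_angle = {
        0:  [[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]],
        15: [[0.8, 0.2], [0.5, 0.5], [0.3, 0.7]],
        30: [[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]],
    }

    true_angle, spectrum = 15, [2.0, 1.0]
    y = [r[0] * spectrum[0] + r[1] * spectrum[1] for r in tf_by_angle[true_angle]]

    estimated = min(tf_by_angle,
                    key=lambda a: least_squares_residual(tf_by_angle[a], y))
    assert estimated == 15
    ```

    The actual method replaces the tiny least-squares solve with sparse L1-norm reconstruction over many bands, but the argmin-over-angles structure is the same.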

  13. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  14. A cloud mask methodology for high resolution remote sensing data combining information from high and medium resolution optical sensors

    Science.gov (United States)

    Sedano, Fernando; Kempeneers, Pieter; Strobl, Peter; Kucera, Jan; Vogt, Peter; Seebach, Lucia; San-Miguel-Ayanz, Jesús

    2011-09-01

    This study presents a novel cloud masking approach for high resolution remote sensing images in the context of land cover mapping. As an advantage over traditional methods, the approach does not rely on thermal bands and is applicable to images from most high resolution earth observation remote sensing sensors. The methodology couples pixel-based seed identification and object-based region growing. The seed identification stage relies on pixel value comparison between the high resolution images and cloud-free composites at lower spatial resolution from almost simultaneously acquired dates. The methodology was tested taking SPOT4-HRVIR, SPOT5-HRG and IRS-LISS III as high resolution images and cloud-free MODIS composites as reference images. The selected scenes included a wide range of cloud types and surface features. The resulting cloud masks were evaluated through visual comparison. They were also compared with ad hoc independently generated cloud masks and with the automatic cloud cover assessment algorithm (ACCA). In general the results showed an agreement in detected clouds higher than 95% for clouds larger than 50 ha. The approach produced consistent results, identifying and mapping clouds of different type and size over various land surfaces including natural vegetation, agricultural land, built-up areas, water bodies and snow.
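
    The seed-then-grow coupling can be sketched on a toy grid; the thresholds and data are invented, and the real method compares high-resolution imagery against MODIS composites rather than a same-size reference array:

    ```python
    # Sketch of two-stage cloud masking: pixels much brighter than a
    # cloud-free reference become seeds, then seeds grow into neighbouring
    # pixels that are still moderately brighter (4-connected BFS).
    from collections import deque

    def cloud_mask(image, reference, seed_diff=0.5, grow_diff=0.2):
        rows, cols = len(image), len(image[0])
        diff = [[image[r][c] - reference[r][c] for c in range(cols)]
                for r in range(rows)]
        mask = [[diff[r][c] > seed_diff for c in range(cols)] for r in range(rows)]
        queue = deque((r, c) for r in range(rows) for c in range(cols) if mask[r][c])
        while queue:
            r, c = queue.popleft()
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if 0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc] \
                        and diff[nr][nc] > grow_diff:
                    mask[nr][nc] = True
                    queue.append((nr, nc))
        return mask

    reference = [[0.1] * 5 for _ in range(5)]
    image = [row[:] for row in reference]
    image[2][2] = 1.0   # bright cloud core -> seed
    image[2][3] = 0.4   # dimmer cloud edge -> grown, not seeded
    mask = cloud_mask(image, reference)
    assert mask[2][2] and mask[2][3] and not mask[0][0]
    ```

    Seeding keeps the false-alarm rate low (only clearly anomalous pixels start a cloud), while growing recovers the thin cloud edges that a single threshold would miss.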

  15. High-Resolution MRI in Rectal Cancer

    International Nuclear Information System (INIS)

    Dieguez, Adriana

    2010-01-01

    High-resolution MRI is the best method of assessing the relation of a rectal tumor to the potential circumferential resection margin (CRM), and it is therefore currently considered the method of choice for local staging of rectal cancer. The primary surgery for rectal cancer is total mesorectal excision (TME), whose plane of dissection is formed by the mesorectal fascia surrounding the mesorectal fat and rectum. This fascia determines the circumferential resection margin. At the same time, high resolution MRI allows adequate pre-operative identification of important prognostic risk factors, improving the selection and indication of therapy for each patient. This information includes, besides the circumferential resection margin, tumor and lymph node staging, extramural vascular invasion and the description of lower rectal tumors. All of these should be described in detail in the report and form part of the discussion in the multidisciplinary team, where the decisions involving the patient with rectal cancer are made. The aim of this study is to provide the information necessary to understand the use of high resolution MRI in the identification of prognostic risk factors in rectal cancer. The technical requirements and the standardized report for this study are described, as well as the anatomical landmarks of importance for TME, which, as noted, is the surgery of choice for rectal cancer. (authors)

  16. A multi-sample based method for identifying common CNVs in normal human genomic structure using high-resolution aCGH data.

    Directory of Open Access Journals (Sweden)

    Chihyun Park

    BACKGROUND: It is difficult to identify copy number variations (CNVs) in normal human genomic data due to noise and non-linear relationships between different genomic regions and signal intensity. A high-resolution array comparative genomic hybridization (aCGH) containing 42 million probes, which is very large compared to previous arrays, was recently published. Most existing CNV detection algorithms do not work well because of the noise associated with the large amount of input data and because most current methods were not designed to analyze normal human samples. Normal human genome analysis often requires a joint approach across multiple samples, yet the majority of existing methods can only identify CNVs from a single sample. METHODOLOGY AND PRINCIPAL FINDINGS: We developed a multi-sample-based genomic variations detector (MGVD) that uses segmentation to identify common breakpoints across multiple samples and a k-means-based clustering strategy. Unlike previous methods, MGVD simultaneously considers multiple samples with different genomic intensities and identifies CNVs and CNV zones (CNVZs); a CNVZ is a more precise measure of the location of a genomic variant than the CNV region (CNVR). CONCLUSIONS AND SIGNIFICANCE: We designed a specialized algorithm to detect common CNVs from extremely high-resolution multi-sample aCGH data. MGVD showed high sensitivity and a low false discovery rate for a simulated data set, and outperformed most current methods when real, high-resolution HapMap datasets were analyzed. MGVD also had the fastest runtime of the algorithms evaluated on actual high-resolution aCGH data. The CNVZs identified by MGVD can be used in association studies to reveal relationships between phenotypes and genomic aberrations. Our algorithm was developed in standard C++ using the STL and is available for Linux and MS Windows.
It is freely available at: http://embio.yonsei.ac.kr/~Park/mgvd.php.
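
    The multi-sample principle, keeping only breakpoints supported by several samples, can be sketched as follows; the thresholds and data are invented and this is not the published MGVD algorithm:

    ```python
    # Sketch of common-breakpoint detection across samples: a breakpoint is
    # kept only if a jump in probe intensity occurs at the same position in
    # at least `min_support` samples.

    def common_breakpoints(samples, jump=0.5, min_support=2):
        """Positions i where |x[i+1] - x[i]| > jump in >= min_support samples."""
        n = len(samples[0])
        support = [sum(1 for s in samples if abs(s[i + 1] - s[i]) > jump)
                   for i in range(n - 1)]
        return [i for i, c in enumerate(support) if c >= min_support]

    # Three samples; two share a copy-number step between probes 3 and 4.
    s1 = [0.0, 0.0, 0.1, 0.0, 1.0, 1.1, 1.0, 1.0]
    s2 = [0.1, 0.0, 0.0, 0.1, 1.1, 1.0, 1.0, 0.9]
    s3 = [0.0, 0.1, 0.0, 0.0, 0.1, 0.0, 0.1, 0.0]   # no variant here
    assert common_breakpoints([s1, s2, s3]) == [3]
    ```

    Requiring joint support is what suppresses the single-sample noise that defeats per-sample callers on 42-million-probe arrays; the real method then clusters the segment intensities (k-means) to call CNVs and CNVZs.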

  17. High resolution microphotonic needle for endoscopic imaging (Conference Presentation)

    Science.gov (United States)

    Tadayon, Mohammad Amin; Mohanty, Aseema; Roberts, Samantha P.; Barbosa, Felippe; Lipson, Michal

    2017-02-01

    GRIN (graded-index) lenses have revolutionized micro-endoscopy, enabling deep tissue imaging with high resolution. The challenges of traditional GRIN lenses are their large size (compared with the field of view) and their limited resolution, a consequence of the relatively low NA of standard graded-index lenses. Here we introduce a novel micro-needle platform for endoscopy with much higher resolution than traditional GRIN lenses and a FOV that corresponds to the whole cross section of the needle. The platform is based on a polymeric (SU-8) waveguide integrated with a microlens, microfabricated on a silicon substrate using a unique molding process. Due to the high index of refraction of the material, the NA of the needle is much higher than that of traditional GRIN lenses. We tested the probe in a fluorescent dye solution (19.6 µM Alexa Fluor 647) and measured a numerical aperture of 0.25, a focal length of about 175 µm and a minimal spot size of about 1.6 µm. We show that the platform can image a sample with a field of view corresponding to the cross-sectional area of the waveguide (80 × 100 µm²). The waveguide size can in principle be modified to vary the size of the imaging field of view. This demonstration, combined with our previous work demonstrating our ability to implant the high-NA needle in a live animal, shows that the proposed system can be used for deep tissue imaging with very high resolution and a large field of view.
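
    A quick consistency check of the reported numbers, assuming ~650 nm emission for Alexa Fluor 647 (the wavelength is not stated in the abstract): the Rayleigh-criterion spot for NA = 0.25 comes out close to the measured ~1.6 µm:

    ```python
    # Back-of-envelope diffraction-limited spot size, Rayleigh criterion
    # d = 0.61 * lambda / NA. Wavelength is an assumption, not from the text.
    wavelength_um = 0.65   # ~Alexa Fluor 647 emission (assumed)
    na = 0.25              # measured numerical aperture (from the abstract)
    spot_um = 0.61 * wavelength_um / na
    assert abs(spot_um - 1.6) < 0.1   # consistent with the measured ~1.6 um
    ```

    So the measured spot is essentially diffraction limited at this NA, which supports the claim that resolution is set by the achievable numerical aperture.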

  18. Numerical methods operational at the French Meteorologie Nationale for nuclear accident situation

    International Nuclear Information System (INIS)

    Marais, C.; Musson-Genon, L.

    1990-01-01

    Since the Chernobyl accident, the Meteorologie Nationale has developed new numerical simulation methods to assist the predictions provided as part of meteorological support to the public authorities in the event of a nuclear accident. The present paper describes these new tools, now operational at the Meteorologie Nationale. In the event of an accident, the first task of the forecaster is to anticipate the evolution of meteorological conditions at the site concerned. A fine-scale numerical forecasting model, PERIDOT, is used, covering Western Europe with a resolution of 35 × 35 km. A comparison between PERIDOT wind forecasts and measurements at French NPS sites is presented, which shows these forecasts to be of good overall quality, except for the Chooz and Gravelines NPSs, where the orographic complexity and the proximity of the sea require statistical corrections to be introduced. In all cases PERIDOT forecasts are clearly superior to those based on wind persistence. For accidents of any significance, the transport and dispersion of the atmospheric pollutants need to be evaluated as a matter of urgency. Again the forecaster has a vital role to play using numerical forecasting resources: in particular trajectory forecasts, available by fax within one hour of the meteorological Service Central d'Exploitation being alerted, and subsequently the Eulerian transport and diffusion code MEDIA, which can be interfaced with either PERIDOT or EMERAUDE, a model operating on global meteorological conditions with a resolution of 150 × 150 km. The latter model has been tested against the Chernobyl accident with good results; the output is available 4 to 5 hours after the alert, and work is in hand to reduce the response time. Further studies are now in progress to provide a much finer regional resolution (5-10 km) and an improved representation of wet and dry deposition at this resolution within MEDIA.

  19. Science with High Spatial Resolution Far-Infrared Data

    Science.gov (United States)

    Terebey, Susan (Editor); Mazzarella, Joseph M. (Editor)

    1994-01-01

    The goal of this workshop was to discuss new science and techniques relevant to high spatial resolution processing of far-infrared data, with particular focus on high resolution processing of IRAS data. Users of the maximum correlation method, maximum entropy, and other resolution enhancement algorithms applicable to far-infrared data gathered at the Infrared Processing and Analysis Center (IPAC) for two days in June 1993 to compare techniques and discuss new results. During a special session on the third day, interested astronomers were introduced to IRAS HIRES processing, which is IPAC's implementation of the maximum correlation method to the IRAS data. Topics discussed during the workshop included: (1) image reconstruction; (2) random noise; (3) imagery; (4) interacting galaxies; (5) spiral galaxies; (6) galactic dust and elliptical galaxies; (7) star formation in Seyfert galaxies; (8) wavelet analysis; and (9) supernova remnants.

  20. A novel typing method for Listeria monocytogenes using high-resolution melting analysis (HRMA) of tandem repeat regions.

    Science.gov (United States)

    Ohshima, Chihiro; Takahashi, Hajime; Iwakawa, Ai; Kuda, Takashi; Kimura, Bon

    2017-07-17

    Listeria monocytogenes, the bacterium responsible for the food-borne illness listeriosis, infects humans and animals. Widely distributed in the environment, it is known to contaminate food products after being transmitted to factories via raw materials. To minimize the contamination of products by food pathogens, it is critical to identify and eliminate factory entry routes and pathways for the causative bacteria. High resolution melting analysis (HRMA) is a method that takes advantage of differences in DNA sequences and PCR product lengths, which are reflected in the dissociation temperature. We have developed a multiple-locus variable-number tandem repeat analysis (MLVA) using HRMA as a simple and rapid method to differentiate L. monocytogenes isolates. In evaluating the developed method, MLVA-HRMA, MLVA using capillary electrophoresis, and multilocus sequence typing (MLST) were compared for their ability to discriminate between strains. The MLVA-HRMA method displayed greater discriminatory ability than MLST or MLVA using capillary electrophoresis, suggesting that variation in the number of repeat units, along with mutations within the DNA sequence, is accurately reflected in the HRMA melting curve. Rather than relying on DNA sequence analysis or high-resolution electrophoresis, the MLVA-HRMA method follows the same workflow as PCR up to the analysis step, combining speed with simplicity. Results from the MLVA-HRMA method can readily be shared between laboratories. There are high expectations that this method will be adopted for regular inspections at food processing facilities in the near future. Copyright © 2017. Published by Elsevier B.V.

  1. Gamma-line intensity difference method for ¹¹⁷ᵐSn at high resolution

    CERN Document Server

    Remeikis, V; Mazeika, K

    1998-01-01

    A method for detecting small differences in gamma-spectrum line intensity for a radionuclide in different environments has been developed for measurements at high resolution. The experiments were performed with a pure-germanium planar detector. Solution of the methodological problems made it possible to measure, from the difference in the gamma spectra, a relative difference ΔI_γ/I_γ = (3.4 ± 1.5) × 10⁻⁴ in the intensity of the 156.02 keV gamma line of ¹¹⁷ᵐSn for the radionuclide in SnO₂ with respect to SnS. The error of the result is caused mainly by the statistical accuracy, which is limited by the highest usable counting rate at sufficiently high energy resolution and by the relatively short half-life of ¹¹⁷ᵐSn. (author)

  2. Numerical methods for hydrodynamic stability problems

    International Nuclear Information System (INIS)

    Fujimura, Kaoru

    1985-11-01

    Numerical methods for solving the Orr-Sommerfeld equation, which is the fundamental equation of the hydrodynamic stability theory for various shear flows, are reviewed and typical numerical results are presented. The methods of asymptotic solution, finite difference methods, initial value methods and expansions in orthogonal functions are compared. (author)

  3. High resolution Neutron and Synchrotron Powder Diffraction

    International Nuclear Information System (INIS)

    Hewat, A.W.

    1986-01-01

    The use of high-resolution powder diffraction has grown rapidly in recent years, with the development of Rietveld (1967) methods of data analysis and of new high-resolution diffractometers and multidetectors. The number of publications in this area has increased from a handful per year until 1973 to 150 per year in 1984, with a ten-year total of over 1000. These papers cover a wide area of solid-state chemistry, physics and materials science, and have been grouped under 20 subject headings, ranging from catalysts to zeolites, and from battery electrode materials to pre-stressed superconducting wires. In 1985 two new high-resolution diffractometers are being commissioned, one at the SNS laboratory near Oxford, and one at the ILL in Grenoble. In different ways these machines represent perhaps the ultimate that can be achieved with neutrons and will permit refinement of complex structures with about 250 parameters and unit cell volumes of about 2500 Å³. The new European Synchrotron Facility will complement the Grenoble neutron diffractometers, and extend the role of high-resolution powder diffraction to the direct solution of crystal structures, pioneered in Sweden

  4. Setting up of a liquid chromatography-high resolution tandem mass spectrometry method for the detection of caseins in food. A comparison with ELISA method

    Directory of Open Access Journals (Sweden)

    Daniela Gastaldi

    2013-06-01

    Determination of caseins in food matrices is usually performed using the competitive enzyme-linked immunosorbent assay (ELISA) technique. However, this technique suffers from a number of limitations, among them applicability to a narrow concentration range, a non-linear (logarithmic) response, non-negligible cross-reactivity and a high cost per kit. At the time this study was completed, in the case of a positive ELISA result, the literature offered few reliable instrumental methods able to determine this class of substances both qualitatively and quantitatively. In the present study, a liquid chromatography-high resolution tandem mass spectrometry (HPLC-HRMS/MS) instrumental method was developed on a high resolution mass spectrometer (Orbitrap). Real samples of sausages in which caseins had been detected by the ELISA technique were analysed; a casein-free sample of ham was used as a blank. The analytical characteristics of the instrumental method were compared with those of a commercial ELISA test declared specific for α- and β-casein.

  5. An atlas of high-resolution IRAS maps of nearby galaxies

    Science.gov (United States)

    Rice, Walter

    1993-01-01

    An atlas of far-infrared IRAS maps with near 1 arcmin angular resolution of 30 optically large galaxies is presented. The high-resolution IRAS maps were produced with the Maximum Correlation Method (MCM) image construction and enhancement technique developed at IPAC. The MCM technique, which recovers the spatial information contained in the overlapping detector data samples of the IRAS all-sky survey scans, is outlined and tests to verify the structural reliability and photometric integrity of the high-resolution maps are presented. The infrared structure revealed in individual galaxies is discussed. The atlas complements the IRAS Nearby Galaxy High-Resolution Image Atlas, the high-resolution galaxy images encoded in FITS format, which is provided to the astronomical community as an IPAC product.

  6. Automated data processing of high-resolution mass spectra

    DEFF Research Database (Denmark)

    Hansen, Michael Adsetts Edberg; Smedsgaard, Jørn

    of the massive amounts of data. We present an automated data processing method to quantitatively compare large numbers of spectra from the analysis of complex mixtures, exploiting the full quality of high-resolution mass spectra. By projecting all detected ions - within defined intervals on both the time...... infusion of crude extracts into the source, taking advantage of the high sensitivity, high mass resolution and accuracy, and the limited fragmentation. Unfortunately, there has not been a comparable development in data processing techniques to fully exploit the gain in high resolution and accuracy...... infusion analyses of crude extracts to find the relationship between several species of the terverticillate Penicillia, and also that the ions responsible for the segregation can be identified. Furthermore, the method can automate the detection of unique species and unique metabolites....

  7. A Multi-stage Method to Extract Road from High Resolution Satellite Image

    International Nuclear Information System (INIS)

    Zhijian, Huang; Zhang, Jinfang; Xu, Fanjiang

    2014-01-01

    Extracting road information from high-resolution satellite images is complex and can hardly be achieved by exploiting only one or two modules. This paper presents a multi-stage method consisting of automatic information extraction and semi-automatic post-processing. A Multi-scale Enhancement algorithm enlarges the contrast between human-made structures and the background. Statistical Region Merging segments the images into regions, whose skeletons are extracted and pruned according to geometric shape information. Given start and end skeleton points, the shortest skeleton path is constructed as a road centre line. A Bidirectional Adaptive Smoothing technique smooths the road centre line and adjusts it to the correct position. With the smoothed line and its average width, a Buffer algorithm easily reconstructs the road region. The final results show that the proposed method eliminates redundant non-road regions, repairs incomplete occlusions, jumps over complete occlusions, and preserves accurate road centre lines and neat road regions. Only a few interactions are needed during the whole process.
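The shortest-skeleton-path step can be sketched with a plain breadth-first search over a binary skeleton grid. The hard-coded L-shaped line below stands in for the paper's pruned skeletons and interactively selected endpoints.

```python
from collections import deque

def shortest_skeleton_path(skel, start, end):
    """BFS over 8-connected skeleton pixels; returns a list of (row, col)."""
    rows, cols = len(skel), len(skel[0])
    prev = {start: None}
    q = deque([start])
    while q:
        r, c = q.popleft()
        if (r, c) == end:
            path = []
            node = end
            while node is not None:     # walk predecessors back to start
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and skel[nr][nc] and (nr, nc) not in prev):
                    prev[(nr, nc)] = (r, c)
                    q.append((nr, nc))
    return None

# Toy skeleton: an L-shaped road centre line on a 5x5 grid.
skel = [[0] * 5 for _ in range(5)]
for c in range(5):
    skel[0][c] = 1          # horizontal segment
for r in range(5):
    skel[r][4] = 1          # vertical segment
path = shortest_skeleton_path(skel, (0, 0), (4, 4))
```

Because BFS explores in hop order, the first arrival at the end point is a shortest path along the skeleton.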

  8. Ring artifact correction for high-resolution micro CT

    International Nuclear Information System (INIS)

    Kyriakou, Yiannis; Prell, Daniel; Kalender, Willi A

    2009-01-01

    In high-resolution micro CT using flat detectors (FD), imperfect or defective detector elements may cause concentric ring artifacts due to their consistent over- or underestimation of attenuation values, which often disturb image quality. We present a dedicated image-based ring artifact correction method for high-resolution micro CT, based on median filtering applied to a transformed version of the reconstructed images in polar coordinates. This post-processing method reduced ring artifacts in the reconstructed images and improved image quality for phantom and in vivo scans. Noise and artifacts were reduced both in transversal and in multi-planar reformations along the longitudinal axis. (note)
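A minimal sketch of the polar-coordinate idea, assuming the image has already been resampled to a (radius x angle) grid: rings become rows of constant offset, so the angular mean per radius minus its median-filtered (along radius) version estimates the artifact profile. The filter window and the synthetic test image are illustrative, not the authors' parameters.

```python
import numpy as np

def correct_rings_polar(img_polar, window=9):
    """Suppress ring artifacts in a (radius x angle) polar image.

    The angular mean per radius carries sharp spikes at artifact radii;
    median filtering along the radius removes the spikes, and the
    difference is subtracted from every row.
    """
    profile = img_polar.mean(axis=1)                 # mean over angle
    half = window // 2
    padded = np.pad(profile, half, mode="edge")
    smooth = np.array([np.median(padded[i:i + window])
                       for i in range(profile.size)])
    artifact = profile - smooth                      # per-radius offsets
    return img_polar - artifact[:, None]

# Synthetic polar image: smooth radial background plus two "bad rings".
r = np.linspace(0.0, 1.0, 64)
background = 100.0 * np.exp(-r)[:, None] * np.ones((1, 90))
img = background.copy()
img[20, :] += 15.0    # over-responding detector element
img[41, :] -= 10.0    # under-responding detector element
corrected = correct_rings_polar(img)
```

In a full pipeline this sits between a Cartesian-to-polar resampling of the reconstruction and the inverse transform back to Cartesian coordinates.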

  9. Thermophysical modeling for high-resolution digital terrain models

    Science.gov (United States)

    Pelivan, I.

    2018-04-01

    A method is presented for efficiently calculating surface temperatures for highly resolved celestial body shapes. A thorough investigation of the conditions required to reach model convergence shows that the speed of surface temperature convergence depends on factors such as the quality of the initial boundary conditions, thermal inertia, illumination conditions, and the resolution of the numerical depth grid. The optimization process to shorten the simulation time while increasing or maintaining the accuracy of the model results includes the introduction of facet-specific boundary conditions such as pre-computed temperature estimates and pre-evaluated simulation times. The individual facet treatment also allows other facet-specific properties, such as local thermal inertia, to be assigned. The approach outlined in this paper is particularly useful for very detailed digital terrain models in combination with unfavorable illumination conditions, such as little to no sunlight for a period of time, as experienced locally on comet 67P/Churyumov-Gerasimenko. Possible science applications include thermal analysis of highly resolved local (landing) sites experiencing seasonal, environmental and lander shadowing. In combination with an appropriate roughness model, the method is well suited for application to disk-integrated and disk-resolved data. Further applications are seen where the complexity of the task has led to severe shape or thermophysical model simplifications, such as in studying surface activity or thermal cracking.
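The coupling of a surface energy balance to a numerical depth grid can be sketched for a single facet with an explicit 1-D conduction model driven by a day/night insolation cycle. All material constants and the rotation period below are illustrative placeholders, not values for comet 67P.

```python
import numpy as np

def surface_temperature(days, T_init, n_z=40, dt=50.0):
    """Explicit 1-D heat conduction with a day/night surface flux balance.

    Returns the end-of-rotation surface temperature for each rotation.
    """
    sigma = 5.67e-8                      # Stefan-Boltzmann constant
    k, rho, cp = 0.01, 500.0, 500.0      # conductivity, density, heat capacity
    kappa = k / (rho * cp)
    period = 12.0 * 3600.0               # rotation period [s]
    S = 50.0                             # peak absorbed insolation [W m^-2]
    dz = np.sqrt(kappa * period / np.pi) / 4.0   # resolve diurnal skin depth
    steps = int(period / dt)
    T = np.full(n_z, T_init)
    history = []
    for _ in range(days):
        for s in range(steps):
            flux_in = max(0.0, S * np.sin(2.0 * np.pi * s / steps))
            T_new = T.copy()
            # interior nodes: explicit diffusion
            T_new[1:-1] += kappa * dt / dz**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])
            # surface node: absorbed sunlight - thermal emission + conduction
            T_new[0] += dt / (rho * cp * dz) * (
                flux_in - sigma * T[0]**4 + k * (T[1] - T[0]) / dz)
            T_new[-1] = T_new[-2]        # insulated lower boundary
            T = T_new
        history.append(T[0])
    return np.array(history)

temps = surface_temperature(days=30, T_init=120.0)
```

Convergence can be monitored exactly as the abstract describes: once the recorded surface temperature repeats from one rotation to the next, the influence of the initial boundary condition has faded, and a better per-facet initial guess shortens that spin-up.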

  10. Advances in Numerical Methods

    CERN Document Server

    Mastorakis, Nikos E

    2009-01-01

    Features contributions focused on significant aspects of current numerical methods and computational mathematics. The book carries chapters that present advanced methods and various variations on known techniques that can solve difficult scientific problems efficiently.

  11. Refinement procedure for the image alignment in high-resolution electron tomography.

    Science.gov (United States)

    Houben, L; Bar Sadan, M

    2011-01-01

    High-resolution electron tomography from a tilt series of transmission electron microscopy images requires an accurate image alignment procedure in order to maximise the resolution of the tomogram. This is the case in particular for ultra-high resolution, where even very small misalignments between individual images can dramatically reduce the fidelity of the resultant reconstruction. A reconstruction-based, marker-free method is proposed, which uses an iterative optimisation of the tomogram resolution. The method utilises a search algorithm that maximises the contrast in tomogram sub-volumes. Unlike conventional cross-correlation analysis, it provides the required correlation over a large tilt angle separation and guarantees a consistent alignment of images for the full range of object tilt angles. An assessment based on experimental reconstructions shows that the marker-free procedure is competitive with reference marker-based procedures at lower resolution and yields sub-pixel accuracy even for simulated high-resolution data. Copyright © 2011 Elsevier B.V. All rights reserved.

  12. 3D high-resolution radar imaging of small body interiors

    Science.gov (United States)

    Sava, Paul; Asphaug, Erik

    2017-10-01

    Answering fundamental questions about the origin and evolution of small planetary bodies hinges on our ability to image their interior structure in detail and at high resolution (Asphaug, 2009). We often infer internal structure from surface observations, e.g. that comet 67P/Churyumov-Gerasimenko is a primordial agglomeration of cometesimals (Massironi et al., 2015). However, the interior structure is not easily accessible without systematic imaging using, e.g., radar transmission and reflection data, as suggested by the CONSERT experiment on Rosetta. Interior imaging depends on observations from multiple viewpoints, as in medical tomography. We discuss radar imaging using methodology adapted from terrestrial exploration seismology (Sava et al., 2015). We primarily focus on full wavefield methods that facilitate high quality imaging of small body interiors characterized by complex structure and large contrasts of physical properties. We consider the case of a monostatic system (co-located transmitters and receivers) operated at two frequency bands, centered around 5 and 15 MHz, from a spacecraft in slow polar orbit around a spinning comet nucleus. Assuming that the spin period is significantly (e.g. 5x) faster than the orbital period, this configuration allows repeated views from multiple directions (Safaeinili et al., 2002). Using realistic numerical experiments, we argue that (1) the comet/asteroid imaging problem is intrinsically 3D and conventional SAR methodology does not satisfy imaging, sampling and resolution requirements; (2) imaging at different frequency bands can provide information about internal surfaces (through migration) and internal volumes (through tomography); (3) interior imaging can be accomplished progressively as data are acquired through successive orbits around the studied object; (4) imaging resolution can go beyond the apparent radar frequency band by deconvolution of the point-spread function characterizing the imaging system; and (5

  13. A high resolution interferometric method to measure local swelling due to CO2 exposure in coal and shale

    NARCIS (Netherlands)

    Pluymakers, A.; Liu, J.; Kohler, F.; Renard, F.; Dysthe, D.

    2018-01-01

    We present an experimental method to study time-dependent, CO2-induced, local topography changes in mm-sized composite samples, together with results showing heterogeneous swelling of coal and shale on the nano- to micrometre scale. These results were obtained using high-resolution interferometry.

  14. An ROI multi-resolution compression method for 3D-HEVC

    Science.gov (United States)

    Ti, Chunli; Guan, Yudong; Xu, Guodong; Teng, Yidan; Miao, Xinyuan

    2017-09-01

    3D High Efficiency Video Coding (3D-HEVC) offers significant potential for increasing the compression ratio of multi-view RGB-D videos. However, the bit rate still rises dramatically with video resolution, which challenges the transmission network, especially mobile networks. This paper proposes an ROI multi-resolution compression method for 3D-HEVC to better preserve the information in the ROI under limited bandwidth. This is realized primarily through ROI extraction and by compressing multi-resolution preprocessed video as alternative data according to the network conditions. First, semantic contours are detected by modified structured forests to restrain the color textures inside objects. The ROI is then determined from the contour neighborhood along with the face region and foreground area of the scene. Second, the RGB-D videos are divided into slices and compressed via 3D-HEVC under different resolutions for selection by audiences and applications. Afterwards, the reconstructed low-resolution videos from the 3D-HEVC encoder are directly up-sampled via Laplace transformation and used to replace the non-ROI areas of the high-resolution videos. Finally, the ROI multi-resolution compressed slices are obtained by compressing the ROI-preprocessed videos with 3D-HEVC. The temporal and spatial details of the non-ROI areas are reduced in the low-resolution videos, so the encoder automatically preserves the ROI better. Experiments indicate that the proposed method keeps the key high-frequency information with subjective significance while reducing the bit rate.
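The non-ROI replacement step reduces to a masked composite of two frames. A minimal sketch, with nearest-neighbour up-sampling standing in for the paper's Laplace-based up-sampling:

```python
import numpy as np

def composite_roi(high_res, low_res, roi_mask, scale):
    """Keep the ROI at full resolution; fill the rest from the low-res frame.

    high_res : (H, W) full-resolution frame
    low_res  : (H//scale, W//scale) decoded low-resolution frame
    roi_mask : (H, W) boolean, True inside the region of interest
    """
    # nearest-neighbour up-sampling back to the full-resolution grid
    up = np.repeat(np.repeat(low_res, scale, axis=0), scale, axis=1)
    return np.where(roi_mask, high_res, up)

high = np.arange(64, dtype=float).reshape(8, 8)
low = high[::2, ::2]                  # stand-in for a decoded low-res frame
mask = np.zeros((8, 8), dtype=bool)
mask[2:6, 2:6] = True                 # ROI kept at full resolution
out = composite_roi(high, low, mask, scale=2)
```

Feeding the composited frame to the encoder is what lets a standard 3D-HEVC encoder spend fewer bits on the smoothed non-ROI areas without any ROI-aware rate control.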

  15. Peculiar velocity effects in high-resolution microwave background experiments

    International Nuclear Information System (INIS)

    Challinor, Anthony; Leeuwen, Floor van

    2002-01-01

    We investigate the impact of peculiar velocity effects due to the motion of the solar system relative to the cosmic microwave background (CMB) on high resolution CMB experiments. It is well known that on the largest angular scales the combined effects of Doppler shifts and aberration are important; the lowest Legendre multipoles of total intensity receive power from the large CMB monopole in transforming from the CMB frame. On small angular scales aberration dominates and is shown here to lead to significant distortions of the total intensity and polarization multipoles in transforming from the rest frame of the CMB to the frame of the solar system. We provide convenient analytic results for the distortions as series expansions in the relative velocity of the two frames, but at the highest resolutions a numerical quadrature is required. Although many of the high resolution multipoles themselves are severely distorted by the frame transformations, we show that their statistical properties distort by only an insignificant amount. Therefore, the cosmological parameter estimation is insensitive to the transformation from the CMB frame (where theoretical predictions are calculated) to the rest frame of the experiment

  16. Using Adobe Acrobat to create high-resolution line art images.

    Science.gov (United States)

    Woo, Hyoun Sik; Lee, Jeong Min

    2009-08-01

    The purpose of this article is to introduce a method for using Adobe Acrobat to make high-resolution and high-quality line art images. High-resolution and high-quality line art images for radiology journal submission can be generated using Adobe Acrobat as a steppingstone, and the customized PDF conversion settings can be used for converting hybrid images, including both bitmap and vector components.

  17. A numerical method for a transient two-fluid model

    International Nuclear Information System (INIS)

    Le Coq, G.; Libmann, M.

    1978-01-01

    Transient boiling two-phase flow is studied. In nuclear reactors, the driving conditions for transient boiling are a pump power decay and/or an increase in heating power. The physical model adopted for the two-phase flow is the two-fluid model, with the assumption that the vapor remains at saturation. The numerical method for solving the thermohydraulic problems is a shooting method, which is highly implicit. A particular problem exists at the boiling and condensation front. A computer code using this numerical method allows the calculation of a transient boiling initiated from a steady state for a PWR or an LMFBR.
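The shooting idea itself (shown here on a generic linear two-point boundary value problem, not the authors' two-fluid equations) is: guess the unknown initial slope, integrate the ODE forward, and iterate on the endpoint mismatch. For y'' = -y with y(0) = 0 and y(1) = 1, the exact initial slope is 1/sin(1).

```python
def integrate(slope, n=200):
    """RK4 for y'' = -y on [0, 1] with y(0) = 0, y'(0) = slope; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, slope
    f = lambda y, v: (v, -y)                   # (y', v')
    for _ in range(n):
        k1 = f(y, v)
        k2 = f(y + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(y + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(y + h * k3[0], v + h * k3[1])
        y += h / 6.0 * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0])
        v += h / 6.0 * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1])
    return y

def shoot(target=1.0, s0=0.5, s1=2.0, tol=1e-10):
    """Secant iteration on the unknown initial slope until y(1) hits target."""
    r0, r1 = integrate(s0) - target, integrate(s1) - target
    while abs(r1) > tol:
        s0, s1 = s1, s1 - r1 * (s1 - s0) / (r1 - r0)
        r0, r1 = r1, integrate(s1) - target
    return s1

slope = shoot()
```

Because this toy problem is linear in the slope, the secant iteration lands on the answer in a single correction; nonlinear problems such as boiling channels need several iterations per transient step.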

  18. A highly accurate spectral method for the Navier–Stokes equations in a semi-infinite domain with flexible boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Matsushima, Toshiki; Ishioka, Keiichi, E-mail: matsushima@kugi.kyoto-u.ac.jp, E-mail: ishioka@gfd-dennou.org [Graduate School of Science, Kyoto University, Kitashirakawa Oiwake-cho, Sakyo-ku, Kyoto 606-8502 (Japan)

    2017-04-15

    This paper presents a spectral method for numerically solving the Navier–Stokes equations in a semi-infinite domain bounded by a flat plane: the aim is to obtain high accuracy with flexible boundary conditions. The proposed use is for numerical simulations of small-scale atmospheric phenomena near the ground. We introduce basis functions that fit the semi-infinite domain, and an integral condition for vorticity is used to reduce the computational cost when solving the partial differential equations that appear when the viscosity term is treated implicitly. Furthermore, in order to ensure high accuracy, two iteration techniques are applied when solving the system of linear equations and in determining boundary values. This significantly reduces numerical errors, and the proposed method enables high-resolution numerical experiments. This is demonstrated by numerical experiments showing the collision of a vortex ring into a wall; these were performed using numerical models based on the proposed method. It is shown that the time evolution of the flow field is successfully obtained not only near the boundary, but also in a region far from the boundary. The applicability of the proposed method and the integral condition is discussed. (paper)

  19. High-resolution regional climate model evaluation using variable-resolution CESM over California

    Science.gov (United States)

    Huang, X.; Rhoades, A.; Ullrich, P. A.; Zarzycki, C. M.

    2015-12-01

    Understanding the effect of climate change at regional scales remains a topic of intensive research. Though computational constraints remain a problem, high horizontal resolution is needed to represent topographic forcing, which is a significant driver of local climate variability. Although regional climate models (RCMs) have traditionally been used at these scales, variable-resolution global climate models (VRGCMs) have recently arisen as an alternative for studying regional weather and climate, allowing two-way interaction between these domains without the need for nudging. In this study, the recently developed variable-resolution option within the Community Earth System Model (CESM) is assessed for long-term regional climate modeling over California. Our variable-resolution simulations focus on relatively high resolutions for climate assessment, namely 28 km and 14 km regional resolution, which are much more typical for dynamically downscaled studies. For comparison with the more widely used RCM method, the Weather Research and Forecasting (WRF) model is used for simulations at 27 km and 9 km. All simulations use the AMIP (Atmospheric Model Intercomparison Project) protocols. The time period is from 1979-01-01 to 2005-12-31 (UTC), and year 1979 was discarded as spin-up time. The mean climatology across California's diverse climate zones, including temperature and precipitation, is analyzed and contrasted with the WRF model (as a traditional RCM), regional reanalysis, gridded observational datasets and uniform high-resolution CESM at 0.25 degree with the finite volume (FV) dynamical core. The results show that variable-resolution CESM is competitive in representing regional climatology on both annual and seasonal time scales. This assessment adds value to the use of VRGCMs for projecting climate change over the coming century and improves our understanding of both past and future regional climate related to fine

  20. A method to characterize structure and symmetry in low-resolution images of colloidal thin films

    International Nuclear Information System (INIS)

    McDonald, Matthew J; Yethiraj, Anand; Beaulieu, L Y

    2012-01-01

    A method is presented for characterizing particle centres, particle size and crystal symmetries with sub-pixel resolution from 8-bit digital images of colloidal thin films taken with a scanning electron microscope (SEM). Digital images are converted to xyz data points by converting colour contrast to a numerical intensity. The data are then passed through a modified form of a Savitzky–Golay filter which allows particle centres to be determined. A subsequent routine is presented that, by analysing the weighted standard deviation and average intensity of the pixels along shifting rings, improves the accuracy of the detected particle centres and provides the radius of each particle. Obtaining the particle centres allows the symmetry of each particle (with respect to its neighbours) along with the mean crystal orientation to be obtained, all in one cohesive package. A key advantage of the method presented here is that it is very robust and works with both low- and high-resolution images—enabling, for example, routine quantitative analysis of SEM images. Because of the low level of user input, the method can be used to process a batch of images in order to characterize the evolution of samples. (paper)
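A 1-D toy of the centre-finding idea, with Savitzky-Golay smoothing built from its least-squares definition and followed by local-maximum detection. The paper's routine operates on 2-D SEM images and adds the ring-based refinement; window, order, and the synthetic profile below are illustrative.

```python
import numpy as np

def savgol_coeffs(window, order):
    """Least-squares smoothing weights for a centred Savitzky-Golay window."""
    half = window // 2
    x = np.arange(-half, half + 1)
    A = np.vander(x, order + 1, increasing=True)
    return np.linalg.pinv(A)[0]      # evaluates the local polynomial fit at 0

def find_centres(signal, window=11, order=3, threshold=0.5):
    """Smooth a 1-D intensity profile and return indices of local maxima."""
    half = window // 2
    coeffs = savgol_coeffs(window, order)
    padded = np.pad(signal, half, mode="edge")
    smooth = np.convolve(padded, coeffs[::-1], mode="valid")
    return [i for i in range(1, smooth.size - 1)
            if smooth[i] > threshold
            and smooth[i] >= smooth[i - 1] and smooth[i] > smooth[i + 1]]

# Synthetic line profile: two "particles" plus noise.
rng = np.random.default_rng(1)
x = np.arange(200)
profile = (np.exp(-0.5 * ((x - 60.0) / 6.0) ** 2)
           + np.exp(-0.5 * ((x - 140.0) / 6.0) ** 2)
           + 0.05 * rng.standard_normal(200))
centres = find_centres(profile)
```

Fitting a low-order polynomial per window, rather than plain averaging, is what lets the smoothing suppress noise without flattening and shifting the peaks whose positions are being measured.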

  1. A multi-method high-resolution geophysical survey in the Machado de Castro museum, central Portugal

    International Nuclear Information System (INIS)

    Grangeia, Carlos; Matias, Manuel; Hermozilha, Hélder; Figueiredo, Fernando; Carvalho, Pedro; Silva, Ricardo

    2011-01-01

    Restoration of historical buildings is a delicate operation, as they are often built over more ancient and important structures. The Machado de Castro Museum, Coimbra, Central Portugal, has undergone several interventions in historical times and lies over the ancient Roman forum of Coimbra. The building went through a restoration project, preceded by an extensive geophysical survey that aimed to investigate the subsurface stratigraphy, including archeological remains, and the internal structure of the existing walls. Owing to the needs of the project, geophysical data interpretation required not only integration but also high resolution. The study consisted of data acquisition over perpendicular planes and at different levels, which required detailed survey planning and the integration of data from different locations into complementary images of the surveyed area. Therefore a multi-method high-resolution geophysical survey, combining resistivity imaging and 3D ground-probing radar (GPR), was carried out inside the museum. Herein, radargrams are compared with the revealed stratigraphy so that signatures are interpreted, characterized and assigned to archeological structures. Although resistivity and GPR have different resolution capabilities, their data are overlapped and compared, bearing in mind the specific characteristics of this survey. With the combined use and spatial integration of the GPR and resistivity imaging data, it was also possible to unravel the inner structure of the existing walls, to establish connections between walls and foundations, and to find older remains.

  2. The coupling of high-speed high resolution experimental data and LES through data assimilation techniques

    Science.gov (United States)

    Harris, S.; Labahn, J. W.; Frank, J. H.; Ihme, M.

    2017-11-01

    Data assimilation techniques can be integrated with time-resolved numerical simulations to improve predictions of transient phenomena. In this study, optimal interpolation and nudging are employed to assimilate high-speed high-resolution measurements of an inert jet into high-fidelity large-eddy simulations. This experimental data set was chosen because it provides both high spatial and temporal resolution for the three-component velocity field in the shear layer of the jet. Our first objective is to investigate the impact that data assimilation has on the resulting flow field for this inert jet. This is accomplished by determining the region influenced by the data assimilation and the corresponding effect on the instantaneous flow structures. The second objective is to determine optimal weightings for the two data assimilation techniques. The third objective is to investigate how the frequency at which the data are assimilated affects the overall predictions.
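Nudging, the simpler of the two techniques, adds a relaxation term that pulls the simulated state toward the measurements during time integration. A scalar toy sketch, with an invented biased model and truth trajectory standing in for the LES and the jet measurements:

```python
import numpy as np

def nudge_run(rhs, x0, obs, dt, tau, n_steps):
    """Forward-Euler integration with a nudging (relaxation) term.

    rhs : free-running model tendency dx/dt
    obs : observation available at every step (here a known function of t)
    tau : relaxation timescale; smaller tau means stronger nudging
    """
    x = x0
    traj = [x]
    for k in range(1, n_steps + 1):
        t = k * dt
        x = x + dt * (rhs(x) + (obs(t) - x) / tau)
        traj.append(x)
    return np.array(traj)

truth = lambda t: np.sin(t)          # hypothetical measured signal
model = lambda x: -0.5 * x           # deliberately biased model: pure decay
traj = nudge_run(model, x0=2.0, obs=truth, dt=0.01, tau=0.2, n_steps=1000)
```

The weighting question in the abstract corresponds to choosing tau (and, for optimal interpolation, the gain matrix): too weak and the model bias dominates, too strong and measurement noise is injected directly into the simulation.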

  3. A high resolution large dynamic range TDC circuit implementation

    International Nuclear Information System (INIS)

    Lei Wuhu; Liu Songqiu; Ye Weiguo; Han Hui; Li Pengyu

    2003-01-01

    Time measurement technology is widely used in nuclear experiments, and there are many methods of time measurement. Implementing Time-to-Digital Conversion (TDC) by means of electronics is a classical technique. The required range and resolution of a TDC differ according to its usage. A wide-range and high-resolution TDC circuit, including its theory and implementation, is introduced in this paper. Test results are also given. (authors)

  4. A high resolution large dynamic range TDC circuit implementation

    International Nuclear Information System (INIS)

    Lei Wuhu; Liu Songqiu; Li Pengyu; Han Hui; Ye Yanlin

    2005-01-01

    Time measurement technology is widely used in nuclear experiments, and there are many methods of time measurement. Implementing Time-to-Digital Conversion (TDC) by means of electronics is a classical technique. The required range and resolution of a TDC differ according to its usage. A wide-range and high-resolution TDC circuit, including its theory and implementation, is introduced in this paper. Test results are also given. (authors)

  5. A Numerical Matrix-Based method in Harmonic Studies in Wind Power Plants

    DEFF Research Database (Denmark)

    Dowlatabadi, Mohammadkazem Bakhshizadeh; Hjerrild, Jesper; Kocewiak, Łukasz Hubert

    2016-01-01

    In the low frequency range, there are couplings between the positive- and negative-sequence small-signal impedances of the power converter due to nonlinear and low-bandwidth control loops such as the synchronization loop. In this paper, a new numerical method which also considers...... these couplings will be presented. The numerical data are advantageous compared with parametric differential equations, because analysing the high-order and complex transfer functions is very difficult, so one finally resorts to numerical evaluation methods. This paper proposes a numerical matrix-based method, which...

  6. High resolution and high sensitivity methods for oligosaccharide mapping and characterization by normal phase high performance liquid chromatography following derivatization with highly fluorescent anthranilic acid.

    Science.gov (United States)

    Anumula, K R; Dhume, S T

    1998-07-01

    Facile labeling of oligosaccharides (acidic and neutral) in a nonselective manner was achieved with highly fluorescent anthranilic acid (AA, 2-aminobenzoic acid), giving more than twice the intensity of 2-aminobenzamide (AB), for specific detection at very high sensitivity. Quantitative labeling in acetate-borate buffered methanol (approximately pH 5.0) at 80 °C for 60 min resulted in negligible or no desialylation of the oligosaccharides. A high-resolution high performance liquid chromatographic method was developed for quantitative oligosaccharide mapping on a polymeric NH2-bonded (Astec) column operating under normal phase and anion-exchange (NP-HPAEC) conditions. For isolation of oligosaccharides from the map by simple evaporation, the chromatographic conditions developed use volatile acetic acid-triethylamine buffer (approximately pH 4.0) systems. The mapping and characterization technology was developed using well-characterized standard glycoproteins. The fluorescent oligosaccharide maps were similar to those obtained by high-pH anion-exchange chromatography with pulsed amperometric detection (HPAEC-PAD), except that the fluorescent maps contained more defined peaks. In the map, the oligosaccharides separated into groups based on charge, size, linkage, and overall structure in a manner similar to HPAEC-PAD, with the contribution of the -COOH function from the label, anthranilic acid. However, the selectivity of the column for sialic acid linkages was different. A second-dimension normal phase HPLC (NP-HPLC) method was developed on an amide column (TSK Gel amide-80) for separation of the AA-labeled neutral complex type and isomeric structures of high mannose type oligosaccharides. The oligosaccharides labeled with AA are compatible with biochemical and biophysical techniques, and the use of matrix-assisted laser desorption mass spectrometry for rapid determination of the oligosaccharide mass map of glycoproteins is demonstrated.
High resolution of NP-HPAEC and NP-HPLC methods

  7. Numerical simulation methods for Richtmyer-Meshkov instabilities

    International Nuclear Information System (INIS)

    Zhou Ning; Yu Yan; Tang Weijun

    2003-01-01

    Front tracking algorithms have generally assumed that the computational medium is divided into piecewise smooth subdomains bounded by interfaces and that strong wave interactions are solved via Riemann solutions. However, in multi-dimensional cases, the Riemann solutions of multiple shock wave interactions are far more complicated and still the subject of analytical study. For this reason, it is very desirable to be able to track contact discontinuities only. A new numerical algorithm to couple a tracked contact surface and an untracked strong shock wave is described. The new tracking algorithm reduces the complexity of the computation while maintaining sharp resolution of the contact surface. The numerical results are good. (authors)

  8. Geothermal-Related Thermo-Elastic Fracture Analysis by Numerical Manifold Method

    OpenAIRE

    Jun He; Quansheng Liu; Zhijun Wu; Yalong Jiang

    2018-01-01

    One significant factor influencing geothermal energy exploitation is the variation of the mechanical properties of rock in high temperature environments. Since rock is typically a heterogeneous granular material, thermal fracturing frequently occurs in the rock when the ambient temperature changes, which can greatly influence the geothermal energy exploitation. A numerical method based on the numerical manifold method (NMM) is developed in this study to simulate the thermo-elastic fracturing ...

  9. The development of high performance numerical simulation code for transient groundwater flow and reactive solute transport problems based on local discontinuous Galerkin method

    International Nuclear Information System (INIS)

    Suzuki, Shunichi; Motoshima, Takayuki; Naemura, Yumi; Kubo, Shin; Kanie, Shunji

    2009-01-01

    The authors develop a numerical code based on the local discontinuous Galerkin (LDG) method for transient groundwater flow and reactive solute transport problems, in order to make three-dimensional performance assessment of radioactive waste repositories possible at the earliest possible stage. The LDG method is a mixed finite element method that is more accurate than standard finite element methods. In this paper, the developed code is applied to several problems for which analytical solutions are available, in order to examine its accuracy and flexibility. The simulation results show that the new code gives highly accurate numerical solutions. (author)

  10. Classification of high resolution satellite images

    OpenAIRE

    Karlsson, Anders

    2003-01-01

    In this thesis the Support Vector Machine (SVM) is applied to classification of high resolution satellite images. Several different measures for classification, including texture measures, 1st order statistics, and simple contextual information, were evaluated. Additionally, the image was segmented using an enhanced watershed method in order to improve the classification accuracy.

  11. Operational High Resolution Chemical Kinetics Simulation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Numerical simulations of chemical kinetics are critical to addressing urgent issues in both the developed and developing world. Ongoing demand for higher resolution...

  12. Numerical methods: Analytical benchmarking in transport theory

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

    Numerical methods applied to reactor technology have reached a high degree of maturity. Certainly one- and two-dimensional neutron transport calculations have become routine, with several programs available on personal computer and the most widely used programs adapted to workstation and minicomputer computational environments. With the introduction of massive parallelism and as experience with multitasking increases, even more improvement in the development of transport algorithms can be expected. Benchmarking an algorithm is usually not a very pleasant experience for the code developer. Proper algorithmic verification by benchmarking involves the following considerations: (1) conservation of particles, (2) confirmation of intuitive physical behavior, and (3) reproduction of analytical benchmark results. By using today's computational advantages, new basic numerical methods have been developed that allow a wider class of benchmark problems to be considered

  13. P-CSI v1.0, an accelerated barotropic solver for the high-resolution ocean model component in the Community Earth System Model v2.0

    Directory of Open Access Journals (Sweden)

    X. Huang

    2016-11-01

    In the Community Earth System Model (CESM), the ocean model is computationally expensive for high-resolution grids and is often the least scalable component in high-resolution production experiments. The major bottleneck is that the barotropic solver scales poorly at high core counts. We design a new barotropic solver to accelerate high-resolution ocean simulation. The novel solver adopts a Chebyshev-type iterative method to reduce the global communication cost, in conjunction with an effective block preconditioner to further reduce the iterations. The algorithm and its computational complexity are theoretically analyzed and compared with other existing methods. We confirm the significant reduction of the global communication time with a competitive convergence rate using a series of idealized tests. Numerical experiments using the CESM 0.1° global ocean model show that the proposed approach results in a factor of 1.7 speed-up over the original method with no loss of accuracy, achieving 10.5 simulated years per wall-clock day on 16 875 cores.

  14. The clinical impact of high resolution computed tomography in patients with respiratory disease

    International Nuclear Information System (INIS)

    Screaton, Nicholas J.; Tasker, Angela D.; Flower, Christopher D.R.; Miller, Fiona N.A.C.; Patel, Bipen D.; Groves, Ashley; Lomas, David A.

    2011-01-01

    High resolution computed tomography is widely used to investigate patients with suspected diffuse lung disease. Numerous studies have assessed the diagnostic performance of this investigation, but the diagnostic and therapeutic impacts have received little attention. The diagnostic and therapeutic impacts of high resolution computed tomography in routine clinical practice were evaluated prospectively. All 507 referrals for high-resolution computed tomography over 12 months in two centres were included. Requesting clinicians completed questionnaires before and after the investigation detailing clinical indications, working diagnoses, confidence level in each diagnosis, planned investigations and treatments. Three hundred and fifty-four studies on 347 patients had complete data and were available for analysis. Following high-resolution computed tomography, a new leading diagnosis (the diagnosis with the highest confidence level) emerged in 204 (58%) studies; in 166 (47%) studies the new leading diagnosis was not in the original differential diagnosis. Mean confidence in the leading diagnosis increased from 6.7 to 8.5 out of 10 (p < 0.001). The invasiveness of planned investigations increased in 23 (7%) studies and decreased in 124 (35%) studies. The treatment plan was modified after 319 (90%) studies. Thoracic high-resolution computed tomography alters leading diagnosis, increases diagnostic confidence, and frequently changes investigation and management plans. (orig.)

  15. Cloud detection method for Chinese moderate high resolution satellite imagery (Conference Presentation)

    Science.gov (United States)

    Zhong, Bo; Chen, Wuhan; Wu, Shanlong; Liu, Qinhuo

    2016-10-01

    Cloud detection in satellite imagery is very important for quantitative remote sensing research and applications. However, many satellite sensors do not have enough bands for quick, accurate, and simple detection of clouds. In particular, the newly launched moderate-to-high spatial resolution satellite sensors of China, such as the charge-coupled device on board the Chinese Huan Jing 1 (HJ-1/CCD) and the wide field of view (WFV) sensor on board the Gao Fen 1 (GF-1), have only four available bands (blue, green, red, and near infrared), which falls far short of the requirements of most cloud detection methods. To solve this problem, an improved and automated cloud detection method for Chinese satellite sensors called OCM (Object-oriented Cloud and cloud-shadow Matching) is presented in this paper. It first modifies the Automatic Cloud Cover Assessment (ACCA) method, which was developed for Landsat-7 data, to obtain an initial cloud map. The modified ACCA method is mainly threshold-based, and different threshold settings produce different cloud maps: a strict threshold is used to produce a cloud map with high confidence but a large amount of omission, while a loose threshold is used to produce a cloud map with low confidence and a large amount of commission. Second, a corresponding cloud-shadow map is produced using a threshold on the near-infrared band. Third, the cloud maps and cloud-shadow map are converted to cloud objects and cloud-shadow objects. Because clouds and cloud shadows usually occur in pairs, the final cloud and cloud-shadow maps are made based on the relationship between cloud and cloud-shadow objects. The OCM method was tested using almost 200 HJ-1/CCD images across China, and the overall accuracy of cloud detection is close to 90%.
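
    The strict/loose two-threshold step can be sketched as a hysteresis mask on a single band: strict-threshold pixels act as high-confidence seeds, and loose-threshold pixels are kept only when 4-connected to a seed. The band values and thresholds below are illustrative assumptions; the real OCM method works on four bands and additionally matches cloud objects against cloud-shadow objects.

```python
import numpy as np
from collections import deque

def hysteresis_cloud_mask(band, strict_thresh, loose_thresh):
    """Keep loose-threshold (low-confidence) pixels only if their
    connected component contains at least one strict-threshold
    (high-confidence) seed pixel."""
    strict = band >= strict_thresh
    loose = band >= loose_thresh
    keep = np.zeros_like(loose)
    seen = np.zeros_like(loose)
    rows, cols = band.shape
    for seed in zip(*np.nonzero(strict)):
        if seen[seed]:
            continue
        # breadth-first flood fill over the loose mask from each seed
        q = deque([seed])
        seen[seed] = True
        while q:
            r, c = q.popleft()
            keep[r, c] = True
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < rows and 0 <= cc < cols
                        and loose[rr, cc] and not seen[rr, cc]):
                    seen[rr, cc] = True
                    q.append((rr, cc))
    return keep
```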

  16. Solutions on high-resolution multiple configuration system sensors

    Science.gov (United States)

    Liu, Hua; Ding, Quanxin; Guo, Chunjie; Zhou, Liwei

    2014-11-01

    To achieve improved resolution in the modern imaging domain, a continuous-zoom, multiple-configuration method built around a core optic is modeled using a novel principle of energy transfer and high-accuracy localization, by which the system resolution can be improved to the nanometer level. A comparative study of traditional and modern methods demonstrates the dialectical relationship among the merit function, the optimization algorithms, and the model parameterization, and the importance of balancing them. System evaluation criteria such as MTF, REA, and RMS qualitatively support these arguments.

  17. Ultra-high resolution protein crystallography

    International Nuclear Information System (INIS)

    Takeda, Kazuki; Hirano, Yu; Miki, Kunio

    2010-01-01

    Many protein structures have been determined by X-ray crystallography and deposited in the Protein Data Bank. However, structures at usual resolution (1.5 Å < d < 3.0 Å) are insufficient in precision and quantity for elucidating the molecular mechanisms of protein function directly from structural information. Several studies at ultra-high resolution (d < 0.8 Å) have been performed with synchrotron radiation in the last decade. The highest resolution achieved for a protein crystal was 0.54 Å, for the small protein crambin. In such high-resolution crystals, almost all hydrogen atoms of the protein and some hydrogen atoms of bound water molecules are observed experimentally. In addition, the outer-shell electrons of proteins can be analyzed by the multipole refinement procedure. However, the influence of X-rays should be precisely estimated in order to derive meaningful information from the crystallographic results. In this review, we summarize refinement procedures, current status, and perspectives for ultra-high resolution protein crystallography. (author)

  18. Resolution enhancement of tri-stereo remote sensing images by super resolution methods

    Science.gov (United States)

    Tuna, Caglayan; Akoguz, Alper; Unal, Gozde; Sertel, Elif

    2016-10-01

    Super resolution (SR) refers to the generation of a High Resolution (HR) image from a decimated, blurred, Low Resolution (LR) image set, which can be either a single frame or a multi-frame collection of several images acquired from slightly different views of the same observation area. In this study, we propose a novel application of tri-stereo Remote Sensing (RS) satellite images to the super resolution problem. Since the tri-stereo RS images of the same observation area are acquired from three different viewing angles along the flight path of the satellite, these RS images are well suited to an SR application. We first estimate the registration between the chosen reference LR image and the other LR images to calculate the subpixel shifts among the LR images. Then, the warping, blurring, and downsampling operators are created as sparse matrices to avoid the high memory and computational requirements that would otherwise make the RS-SR solution impractical. Finally, the overall system matrix, constructed from the obtained operator matrices, is used to obtain the estimated HR image in one step per iteration of the SR algorithm. Both the Laplacian and total variation regularizers are incorporated separately into our algorithm, and the results are presented to demonstrate improved quantitative performance against the standard interpolation method, as well as improved qualitative results according to expert evaluations.
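
    The system-matrix formulation can be shown on a toy 1-D problem: per-frame operators D (decimation), B (blur), and W_k (warp, here a circular shift) are stacked into one matrix A, and the HR signal is recovered by Tikhonov-regularized normal equations with a Laplacian penalty. Dense matrices and circulant operators are simplifying assumptions for illustration; the paper uses sparse matrices and 2-D images.

```python
import numpy as np

def shift_matrix(n, s):
    # circulant warp operator W_k: shifts a 1-D signal by s samples
    return np.roll(np.eye(n), s, axis=1)

def blur_matrix(n):
    # simple 3-tap moving-average point-spread function B
    B = np.zeros((n, n))
    for i in range(n):
        for d in (-1, 0, 1):
            B[i, (i + d) % n] += 1.0 / 3.0
    return B

def decimation_matrix(n, f):
    # D keeps every f-th sample of the HR grid
    D = np.zeros((n // f, n))
    for i in range(n // f):
        D[i, i * f] = 1.0
    return D

def super_resolve(lr_frames, shifts, n, f, lam=1e-2):
    """Multi-frame SR: stack the per-frame operators D @ B @ W_k into
    one system matrix and solve the normal equations with a
    (circulant) Laplacian regularizer of strength lam."""
    B, D = blur_matrix(n), decimation_matrix(n, f)
    A = np.vstack([D @ B @ shift_matrix(n, s) for s in shifts])
    y = np.concatenate(lr_frames)
    L = (np.roll(np.eye(n), 1, axis=1) - 2 * np.eye(n)
         + np.roll(np.eye(n), -1, axis=1))
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ y)
```

    Because the sub-pixel shifts make the stacked sampling cover all phases of the HR grid, the normal-equations matrix is well conditioned even for small lam.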

  19. Resolution-recovery-embedded image reconstruction for a high-resolution animal SPECT system.

    Science.gov (United States)

    Zeraatkar, Navid; Sajedi, Salar; Farahani, Mohammad Hossein; Arabi, Hossein; Sarkar, Saeed; Ghafarian, Pardis; Rahmim, Arman; Ay, Mohammad Reza

    2014-11-01

    The small-animal High-Resolution SPECT (HiReSPECT) is a dedicated dual-head gamma camera recently designed and developed in our laboratory for imaging of murine models. Each detector is composed of an array of 1.2 × 1.2 mm(2) (pitch) pixelated CsI(Na) crystals. Two position-sensitive photomultiplier tubes (H8500) are coupled to each head's crystal. In this paper, we report on a resolution-recovery-embedded image reconstruction code applicable to the system and present the experimental results achieved using different phantoms and mouse scans. Collimator-detector response functions (CDRFs) were measured via a pixel-driven method using capillary sources at finite distances from the head within the field of view (FOV). CDRFs were then fitted by independent Gaussian functions. Thereafter, linear interpolations were applied to the standard deviation (σ) values of the fitted Gaussians, yielding a continuous map of CDRF at varying distances from the head. A rotation-based maximum-likelihood expectation maximization (MLEM) method was used for reconstruction. A fast rotation algorithm was developed to rotate the image matrix according to the desired angle by means of pre-generated rotation maps. The experiments demonstrated improved resolution utilizing our resolution-recovery-embedded image reconstruction. While the full-width at half-maximum (FWHM) radial and tangential resolution measurements of the system were over 2 mm in nearly all positions within the FOV without resolution recovery, reaching around 2.5 mm in some locations, they fell below 1.8 mm everywhere within the FOV using the resolution-recovery algorithm. The noise performance of the system was also acceptable; the standard deviation of the average counts per voxel in the reconstructed images was 6.6% and 8.3% without and with resolution recovery, respectively. Copyright © 2014 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
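
    The CDRF interpolation step can be sketched as follows: Gaussian widths (sigma) fitted at a few calibration distances are linearly interpolated to give a continuous response model at any source-to-head distance. The calibration distances and widths below are made-up values for illustration, not HiReSPECT measurements.

```python
import numpy as np

def cdrf_sigma(dist, calib_dists, calib_sigmas):
    # linear interpolation of fitted Gaussian sigmas between the
    # distances at which capillary-source measurements were taken
    return np.interp(dist, calib_dists, calib_sigmas)

def cdrf(offsets, dist, calib_dists, calib_sigmas):
    """Normalized Gaussian collimator-detector response at a given
    distance from the head; offsets are detector-bin positions."""
    s = cdrf_sigma(dist, calib_dists, calib_sigmas)
    g = np.exp(-0.5 * (offsets / s) ** 2)
    return g / g.sum()
```

    In a resolution-recovery MLEM loop, this kernel would be applied in both the forward and back projections.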

  20. High-resolution subgrid models: background, grid generation, and implementation

    Science.gov (United States)

    Sehili, Aissa; Lang, Günther; Lippert, Christoph

    2014-04-01

    The basic idea of subgrid models is the use of available high-resolution bathymetric data at subgrid level in computations that are performed on relatively coarse grids allowing large time steps. For that purpose, an algorithm that correctly represents the precise mass balance in regions where wetting and drying occur was derived by Casulli (Int J Numer Method Fluids 60:391-408, 2009) and Casulli and Stelling (Int J Numer Method Fluids 67:441-449, 2010). Computational grid cells are permitted to be wet, partially wet, or dry, and no drying threshold is needed. Based on the subgrid technique, practical applications involving various scenarios were implemented including an operational forecast model for water level, salinity, and temperature of the Elbe Estuary in Germany. The grid generation procedure allows a detailed boundary fitting at subgrid level. The computational grid is made of flow-aligned quadrilaterals including few triangles where necessary. User-defined grid subdivision at subgrid level allows a correct representation of the volume up to measurement accuracy. Bottom friction requires a particular treatment. Based on the conveyance approach, an appropriate empirical correction was worked out. The aforementioned features make the subgrid technique very efficient, robust, and accurate. Comparison of predicted water levels with the comparatively highly resolved classical unstructured grid model shows very good agreement. The speedup in computational performance due to the use of the subgrid technique is about a factor of 20. A typical daily forecast can be carried out in less than 10 min on a standard PC-like hardware. The subgrid technique is therefore a promising framework to perform accurate temporal and spatial large-scale simulations of coastal and estuarine flow and transport processes at low computational cost.
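
    The mass-balance idea can be shown in one line of code: each subgrid pixel contributes max(eta − z_b, 0) times its area to the coarse cell's wet volume, so a cell can be wet, dry, or partially wet with no drying threshold. This is a minimal sketch of the volume computation only, not the Casulli scheme's full wetting-and-drying solver.

```python
import numpy as np

def cell_wet_volume(eta, bed_elev, pixel_area):
    """Wet volume of one coarse cell from subgrid bathymetry:
    eta is the water level, bed_elev holds the subgrid bed
    elevations, and each pixel contributes only where submerged."""
    depth = np.maximum(eta - bed_elev, 0.0)
    return depth.sum() * pixel_area
```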

  1. A novel approach for multiple mobile objects path planning: Parametrization method and conflict resolution strategy

    International Nuclear Information System (INIS)

    Ma, Yong; Wang, Hongwei; Zamirian, M.

    2012-01-01

    We present a new two-step approach to determine conflict-free paths for mobile objects in two and three dimensions with moving obstacles. Firstly, the shortest path of each object is set as the goal function, subject to the collision-avoidance criterion, path smoothness, and velocity and acceleration constraints. This problem is formulated as a calculus of variations problem (CVP). Using the parametrization method, the CVP is converted to time-varying nonlinear programming problems (TNLPP) and then solved. Secondly, the move sequence of the objects is assigned by a priority scheme, and conflicts are resolved by a multilevel conflict resolution strategy. The approach's efficiency is confirmed by numerical examples. -- Highlights: ► Approach combining the parametrization method and a conflict resolution strategy is proposed. ► Approach fits multi-object path planning in two and three dimensions. ► Single-object path planning and multi-object conflict resolution are applied in order. ► Path of each object is obtained with the parametrization method in the first phase. ► Conflict-free paths are obtained by multi-object conflict resolution in the second phase.
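
    A priority scheme of the kind described in the second step can be sketched with timed paths on a grid: objects are planned in priority order, and a lower-priority object is delayed until its path no longer occupies any (cell, time) slot reserved by a higher-priority object. This is a deliberately simplified stand-in for the paper's multilevel conflict resolution strategy, with hypothetical cell names.

```python
def resolve_conflicts(paths, priorities):
    """paths[i] is a list of cells, one per time step, for object i;
    priorities[i] is its priority (lower value = planned first).
    Returns the start delay assigned to each object."""
    order = sorted(range(len(paths)), key=lambda i: priorities[i])
    reserved, delays = set(), {}
    for i in order:
        delay = 0
        # push the whole timed path later until no reserved slot is hit
        while any((cell, t + delay) in reserved
                  for t, cell in enumerate(paths[i])):
            delay += 1
        reserved.update((cell, t + delay)
                        for t, cell in enumerate(paths[i]))
        delays[i] = delay
    return delays
```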

  2. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers possibility to make simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in analysis of particles spectrum. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

  3. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
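
    The reliability-guided ordering can be sketched with a priority queue: the not-yet-computed point with the highest correlation among neighbors of already-computed points is always expanded next, so well-matched points propagate initial guesses to harder ones. For illustration the correlation values are assumed precomputed; in the actual method the ZNCC is evaluated as each point is processed, layer by layer.

```python
import heapq

def reliability_guided_order(zncc, neighbors, seed):
    """Return the order in which calculation points would be
    processed under a reliability-guided strategy. zncc maps each
    point to its correlation value; neighbors maps each point to
    its adjacent points; seed is the initial (most reliable) point."""
    done, order = set(), []
    heap = [(-zncc[seed], seed)]  # max-heap via negated ZNCC
    while heap:
        _, p = heapq.heappop(heap)
        if p in done:
            continue
        done.add(p)
        order.append(p)
        for q in neighbors[p]:
            if q not in done:
                heapq.heappush(heap, (-zncc[q], q))
    return order
```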

  4. Exploring New Challenges of High-Resolution SWOT Satellite Altimetry with a Regional Model of the Solomon Sea

    Science.gov (United States)

    Brasseur, P.; Verron, J. A.; Djath, B.; Duran, M.; Gaultier, L.; Gourdeau, L.; Melet, A.; Molines, J. M.; Ubelmann, C.

    2014-12-01

    The upcoming high-resolution SWOT altimetry satellite will provide an unprecedented description of the ocean dynamic topography for studying sub- and meso-scale processes in the ocean. But there is still much uncertainty on the signal that will be observed. There are many scientific questions that are unresolved about the observability of altimetry at very high resolution and on the dynamical role of the ocean meso- and submesoscales. In addition, SWOT data will raise specific problems due to the size of the data flows. These issues will probably impact the data assimilation approaches for future scientific or operational oceanography applications. In this work, we propose to use a high-resolution numerical model of the Western Pacific Solomon Sea as a regional laboratory to explore such observability and dynamical issues, as well as new data assimilation challenges raised by SWOT. The Solomon Sea connects subtropical water masses to the equatorial ones through the low latitude western boundary currents and could potentially modulate the tropical Pacific climate. In the South Western Pacific, the Solomon Sea exhibits very intense eddy kinetic energy levels, while relatively little is known about the mesoscale and submesoscale activities in this region. The complex bathymetry of the region, complicated by the presence of narrow straits and numerous islands, raises specific challenges. So far, a Solomon Sea model configuration has been set up at 1/36° resolution. Numerical simulations have been performed to explore the meso- and submesoscale dynamics. The numerical solutions, which have been validated against available in situ data, show the development of small scale features, eddies, fronts and filaments. Spectral analysis reveals a behavior that is consistent with the SQG theory. There is clear evidence of an energy cascade from the small scales including the submesoscales, although those submesoscales are only partially resolved by the model. In parallel

  5. Methodology of high-resolution photography for mural condition database

    Science.gov (United States)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques to record the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because they make it possible to show general views of mural paintings and also detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to make digital heritage inventories without special instruments, based on our recent research project in the Üzümlü church in Cappadocia, Turkey. This method enables us to achieve a high-resolution image database at low cost, in a short time, and with limited human resources.

  6. Development of the numerical method for liquid metal magnetohydrodynamics (I). Investigation of the method and development of the 2D method

    International Nuclear Information System (INIS)

    Ohira, H.; Ara, K.

    2002-11-01

    Advanced electromagnetic components are being investigated in the Feasibility Studies on Commercialized FR Cycle System for application to the main cooling systems of liquid metal fast reactors. Although many experiments and numerical analyses have been carried out at both high Reynolds numbers and high magnetic Reynolds numbers, the complex phenomena could not be evaluated in detail. As the first step in developing numerical methods for liquid metal magnetohydrodynamics, we investigated methods that could be applied to electromagnetic components with both complex structures and highly turbulent magnetic fields. As a result, we selected the GSMAC (Generalized-Simplified MArker and Cell) method for calculating the liquid metal fluid dynamics, because it is easily applied to complex flow fields. We also selected the vector FEM for calculating the magnetic field of large components, because the method involves no interaction procedure. For highly turbulent magnetic fields, the dynamic SGS model is also promising for accurate estimation, because it calculates the field directly without any experimentally tuned constants. In order to verify the GSMAC and vector-FEM approaches, we developed 2D numerical models and calculated the magnetohydrodynamics in a large electromagnetic pump. It was estimated from these results that the methods are basically sound, because the calculated pressure differences showed tendencies similar to the experimental ones. (author)

  7. Automated Segmentation of High-Resolution Photospheric Images of Active Regions

    Science.gov (United States)

    Yang, Meng; Tian, Yu; Rao, Changhui

    2018-02-01

    Due to the development of ground-based, large-aperture solar telescopes with adaptive optics (AO) resulting in increasing resolving ability, more accurate sunspot identifications and characterizations are required. In this article, we have developed a set of automated segmentation methods for high-resolution solar photospheric images. Firstly, a local-intensity-clustering level-set method is applied to roughly separate solar granulation and sunspots. Then reinitialization-free level-set evolution is adopted to adjust the boundaries of the photospheric patch; an adaptive intensity threshold is used to discriminate between umbra and penumbra; light bridges are selected according to their regional properties from candidates produced by morphological operations. The proposed method is applied to the solar high-resolution TiO 705.7-nm images taken by the 151-element AO system and Ground-Layer Adaptive Optics prototype system at the 1-m New Vacuum Solar Telescope of the Yunnan Observatory. Experimental results show that the method achieves satisfactory robustness and efficiency with low computational cost on high-resolution images. The method could also be applied to full-disk images, and the calculated sunspot areas correlate well with the data given by the National Oceanic and Atmospheric Administration (NOAA).

  8. New high resolution Random Telegraph Noise (RTN) characterization method for resistive RAM

    Science.gov (United States)

    Maestro, M.; Diaz, J.; Crespo-Yepes, A.; Gonzalez, M. B.; Martin-Martinez, J.; Rodriguez, R.; Nafria, M.; Campabadal, F.; Aymerich, X.

    2016-01-01

    Random Telegraph Noise (RTN) is one of the main reliability problems of resistive switching-based memories. To understand the physics behind RTN, a complete and accurate RTN characterization is required. The standard equipment used to analyse RTN has a typical time resolution of ∼2 ms which prevents evaluating fast phenomena. In this work, a new RTN measurement procedure, which increases the measurement time resolution to 2 μs, is proposed. The experimental set-up, together with the recently proposed Weighted Time Lag (W-LT) method for the analysis of RTN signals, allows obtaining a more detailed and precise information about the RTN phenomenon.
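
    The Weighted Time Lag analysis mentioned above can be sketched as follows: each pair of consecutive samples (I_n, I_{n+1}) deposits a small 2-D Gaussian on a grid, so the discrete RTN levels appear as peaks on the diagonal of the resulting map. Grid range and Gaussian width are illustrative choices.

```python
import numpy as np

def weighted_time_lag(signal, grid, alpha):
    """Weighted Time Lag (W-LT) map of an RTN trace: a Gaussian of
    width alpha is centered at every (I_n, I_{n+1}) pair and summed
    over the grid; the map is normalized to a peak of 1."""
    x = signal[:-1][:, None, None]   # I_n
    y = signal[1:][:, None, None]    # I_{n+1}
    gx = grid[None, :, None]
    gy = grid[None, None, :]
    psi = np.exp(-((x - gx) ** 2 + (y - gy) ** 2)
                 / (2 * alpha ** 2)).sum(axis=0)
    return psi / psi.max()
```

    Diagonal peaks give the RTN levels; off-diagonal mass counts the up and down transitions between them.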

  9. High-Resolution Regional Reanalysis in China: Evaluation of 1 Year Period Experiments

    Science.gov (United States)

    Zhang, Qi; Pan, Yinong; Wang, Shuyu; Xu, Jianjun; Tang, Jianping

    2017-10-01

    Globally, reanalysis data sets are widely used in assessing climate change, validating numerical models, and understanding the interactions between the components of a climate system. However, due to their relatively coarse resolution, most global reanalysis data sets are not suitable for direct application at local and regional scales, with inadequate descriptions of mesoscale systems and climatic extreme events such as mesoscale convective systems, squall lines, tropical cyclones, regional droughts, and heat waves. In this study, using the Gridpoint Statistical Interpolation data assimilation system and the Weather Research and Forecasting mesoscale atmospheric model, we build a regional reanalysis system. This is a preliminary, first experimental attempt to construct a high-resolution reanalysis for mainland China. Four regional test bed data sets are generated for the year 2013 via three widely used methods (classical dynamical downscaling, spectral nudging, and data assimilation) and a hybrid method with data assimilation coupled with spectral nudging. Temperature at 2 m, precipitation, and upper-level atmospheric variables are evaluated by comparing against observations for the year-long tests. It can be concluded that the regional reanalyses with assimilation and nudging methods better reproduce atmospheric variables from the surface to upper levels, and regional extreme events such as heat waves, than classical dynamical downscaling. Compared to the ERA-Interim global reanalysis, the hybrid nudging method performs slightly better in reproducing upper-level temperature and low-level moisture over China, which improves regional reanalysis data quality.
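
    The spectral nudging technique can be sketched in a few lines: only the large-scale Fourier modes of a model field are relaxed toward the driving (reanalysis) field, leaving the small scales free to develop. The cutoff wavenumber and nudging strength below are illustrative, and real implementations nudge selected variables and levels inside the model time loop.

```python
import numpy as np

def spectral_nudge(model_field, driving_field, kmax, alpha):
    """One relaxation step of spectral nudging on a doubly periodic
    2-D field: modes with |kx| <= kmax and |ky| <= kmax are pulled
    toward the driving field with strength alpha in (0, 1]."""
    fm = np.fft.fft2(model_field)
    fd = np.fft.fft2(driving_field)
    ny, nx = model_field.shape
    ky = np.fft.fftfreq(ny) * ny          # integer wavenumbers
    kx = np.fft.fftfreq(nx) * nx
    mask = (np.abs(ky)[:, None] <= kmax) & (np.abs(kx)[None, :] <= kmax)
    fm[mask] += alpha * (fd[mask] - fm[mask])
    return np.real(np.fft.ifft2(fm))
```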

  10. Kinetic Energy from Supernova Feedback in High-resolution Galaxy Simulations

    Science.gov (United States)

    Simpson, Christine M.; Bryan, Greg L.; Hummels, Cameron; Ostriker, Jeremiah P.

    2015-08-01

    We describe a new method for adding a prescribed amount of kinetic energy to simulated gas modeled on a Cartesian grid by directly altering grid cells’ mass and velocity in a distributed fashion. The method is explored in the context of supernova (SN) feedback in high-resolution (˜10 pc) hydrodynamic simulations of galaxy formation. Resolution dependence is a primary consideration in our application of the method, and simulations of isolated explosions (performed at different resolutions) motivate a resolution-dependent scaling for the injected fraction of kinetic energy that we apply in cosmological simulations of a 109 M⊙ dwarf halo. We find that in high-density media (≳50 cm-3) with coarse resolution (≳4 pc per cell), results are sensitive to the initial kinetic energy fraction due to early and rapid cooling. In our galaxy simulations, the deposition of small amounts of SN energy in kinetic form (as little as 1%) has a dramatic impact on the evolution of the system, resulting in an order-of-magnitude suppression of stellar mass. The overall behavior of the galaxy in the two highest resolution simulations we perform appears to converge. We discuss the resulting distribution of stellar metallicities, an observable sensitive to galactic wind properties, and find that while the new method demonstrates increased agreement with observed systems, significant discrepancies remain, likely due to simplistic assumptions that neglect contributions from SNe Ia and stellar winds.
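
    The basic bookkeeping of injecting mass and a prescribed kinetic-energy fraction can be sketched as radial velocity kicks applied to neighboring cells, mixed in by momentum-conserving mass-weighted averaging. This is a toy version for illustration only: the function name and arguments are hypothetical, and the paper's scheme additionally makes the kinetic fraction resolution-dependent and handles thermal energy and geometry carefully.

```python
import numpy as np

def inject_sn_feedback(mass, vel, pos, center, e_sn, m_ej, f_kin):
    """Distribute ejecta mass m_ej equally over cells at positions
    pos around center, carrying kinetic energy f_kin * e_sn as
    radially directed kicks; cell velocities are updated by
    momentum-conserving mass-weighted mixing."""
    n = len(mass)
    dm = m_ej / n
    rhat = pos - center
    rhat = rhat / np.linalg.norm(rhat, axis=1, keepdims=True)
    # kick speed chosen so the summed ejecta kinetic energy
    # equals the prescribed budget: 0.5 * m_ej * v^2 = f_kin * e_sn
    v_kick = np.sqrt(2.0 * f_kin * e_sn / m_ej)
    new_mass = mass + dm
    new_vel = (mass[:, None] * vel
               + dm * v_kick * rhat) / new_mass[:, None]
    return new_mass, new_vel
```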

  11. Numerical method for the nonlinear Fokker-Planck equation

    International Nuclear Information System (INIS)

    Zhang, D.S.; Wei, G.W.; Kouri, D.J.; Hoffman, D.K.

    1997-01-01

    A practical method based on distributed approximating functionals (DAFs) is proposed for numerically solving a general class of nonlinear time-dependent Fokker-Planck equations. The method relies on a numerical scheme that couples the usual path-integral concept to the DAF idea. The high accuracy and reliability of the method are illustrated by applying it to an exactly solvable nonlinear Fokker-Planck equation, and the method is compared with the accurate K-point Stirling interpolation formula finite-difference method. The approach is also used successfully to solve a nonlinear self-consistent dynamic mean-field problem for which both the cumulant expansion and scaling theory have been found by Drozdov and Morillo [Phys. Rev. E 54, 931 (1996)] to be inadequate to describe the occurrence of a long-lived transient bimodality. The standard interpretation of the transient bimodality in terms of the flat region in the kinetic potential fails for the present case. An alternative analysis based on the effective potential of the Schroedinger-like Fokker-Planck equation is suggested. Our analysis of the transient bimodality is strongly supported by two examples that are numerically much more challenging than other examples that have been previously reported for this problem. copyright 1997 The American Physical Society

  12. A subspace approach to high-resolution spectroscopic imaging.

    Science.gov (United States)

    Lam, Fan; Liang, Zhi-Pei

    2014-04-01

    To accelerate spectroscopic imaging using sparse sampling of (k,t)-space and subspace (or low-rank) modeling to enable high-resolution metabolic imaging with good signal-to-noise ratio. The proposed method, called SPectroscopic Imaging by exploiting spatiospectral CorrElation, exploits a unique property known as partial separability of spectroscopic signals. This property indicates that high-dimensional spectroscopic signals reside in a very low-dimensional subspace and enables special data acquisition and image reconstruction strategies to be used to obtain high-resolution spatiospectral distributions with good signal-to-noise ratio. More specifically, a hybrid chemical shift imaging/echo-planar spectroscopic imaging pulse sequence is proposed for sparse sampling of (k,t)-space, and a low-rank model-based algorithm is proposed for subspace estimation and image reconstruction from sparse data with the capability to incorporate prior information and field inhomogeneity correction. The performance of the proposed method has been evaluated using both computer simulations and phantom studies, which produced very encouraging results. For two-dimensional spectroscopic imaging experiments on a metabolite phantom, a factor of 10 acceleration was achieved with a minimal loss in signal-to-noise ratio compared to the long chemical shift imaging experiments and with a significant gain in signal-to-noise ratio compared to the accelerated echo-planar spectroscopic imaging experiments. The proposed method, SPectroscopic Imaging by exploiting spatiospectral CorrElation, is able to significantly accelerate spectroscopic imaging experiments, making high-resolution metabolic imaging possible. Copyright © 2014 Wiley Periodicals, Inc.
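
    The partial separability property can be demonstrated on a Casorati (space x time) matrix: if the signal is a sum of a few separable terms u_l(r) v_l(t), the matrix is low rank and a truncated SVD recovers it exactly. This is a minimal illustration of the subspace model only, not the sparse (k,t)-space reconstruction itself.

```python
import numpy as np

def partial_separability_fit(casorati, rank):
    """Best rank-`rank` approximation of the Casorati matrix,
    i.e. a fit of the partially separable model
    x(r, t) = sum_l u_l(r) v_l(t) with `rank` terms."""
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    return (u[:, :rank] * s[:rank]) @ vt[:rank]
```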

  13. Numerical Investigation of Vertical Cavity Lasers With High-Contrast Gratings Using the Fourier Modal Method

    DEFF Research Database (Denmark)

    Taghizadeh, Alireza; Mørk, Jesper; Chung, Il-Sug

    2016-01-01

    We explore the use of a modal expansion technique, the Fourier modal method (FMM), for investigating the optical properties of vertical cavities employing high-contrast gratings (HCGs). Three techniques for determining the resonance frequency and quality factor (Q-factor) of a cavity mode are compared. ... The scattering losses of several HCG-based vertical cavities with in-plane heterostructures, which have promising prospects for fundamental physics studies and on-chip laser applications, are investigated. This type of parametric study of 3D structures would be numerically very demanding using spatial

  14. Ribbon scanning confocal for high-speed high-resolution volume imaging of brain.

    Directory of Open Access Journals (Sweden)

    Alan M Watson

    Full Text Available Whole-brain imaging is becoming a fundamental means of experimental insight; however, achieving subcellular-resolution imagery in a reasonable time window has not been possible. We describe the first application of multicolor ribbon scanning confocal methods to collect high-resolution volume images of chemically cleared brains. We demonstrate that ribbon scanning collects images over ten times faster than conventional high-speed confocal systems but with equivalent spectral and spatial resolution. Further, using this technology, we reconstruct large volumes of mouse brain infected with encephalitic alphaviruses and demonstrate that regions of the brain with abundant viral replication were inaccessible to vascular perfusion. This reveals that the destruction or collapse of large regions of brain microvasculature may contribute to the severe disease caused by Venezuelan equine encephalitis virus. Visualization of this fundamental impact of infection would not be possible without sampling at subcellular resolution within large brain volumes.

  15. Numerical Methods for Partial Differential Equations

    CERN Document Server

    Guo, Ben-yu

    1987-01-01

    These Proceedings of the first Chinese Conference on Numerical Methods for Partial Differential Equations cover topics such as difference methods, finite element methods, spectral methods, splitting methods, parallel algorithms, etc., their theoretical foundations, and applications to engineering. Numerical methods both for boundary value problems of elliptic equations and for initial-boundary value problems of evolution equations, such as hyperbolic systems and parabolic equations, are involved. The 16 papers of this volume present recent or new unpublished results and provide a good overview of current research being done in this field in China.

  16. Production of solar radiation bankable datasets from high-resolution solar irradiance derived with dynamical downscaling Numerical Weather prediction model

    Directory of Open Access Journals (Sweden)

    Yassine Charabi

    2016-11-01

    Full Text Available A bankable solar radiation database is required for the financial viability of solar energy projects. Accurate estimation of solar energy resources in a country is very important for proper siting, sizing, and life-cycle cost analysis of solar energy systems. During the last decade, important progress has been made in developing multiple solar irradiance databases (Global Horizontal Irradiance (GHI) and Direct Normal Irradiance (DNI)) using satellites of different resolutions and sophisticated models. This paper assesses the performance of high-resolution solar irradiance derived with a dynamically downscaled Numerical Weather Prediction model against a GIS topographical solar radiation model, satellite data, and ground measurements for the production of bankable solar radiation datasets. For this investigation, the NWP model of the Consortium for Small-scale Modeling (COSMO) is used for the dynamical downscaling of solar radiation. The obtained results increase confidence in the solar radiation database obtained from the dynamically downscaled NWP model. The mean bias of the dynamically downscaled NWP model is small, on the order of a few percent for GHI, so it can be ranked as a bankable dataset. These data are usually archived in the meteorological department and give a good idea of the hourly, monthly, and annual incident energy. Such short time-interval data are valuable in designing and operating solar energy facilities. A further advantage of the NWP model is that it can be used for solar radiation forecasting, since it estimates weather conditions over the next 72–120 hours. This gives a reasonable estimate of the solar radiation, which in turn can be used to forecast the electric power generated by a solar power plant.

  17. A flexible spatiotemporal method for fusing satellite images with different resolutions

    Science.gov (United States)

    Xiaolin Zhu; Eileen H. Helmer; Feng Gao; Desheng Liu; Jin Chen; Michael A. Lefsky

    2016-01-01

    Studies of land surface dynamics in heterogeneous landscapes often require remote sensing data with high acquisition frequency and high spatial resolution. However, no single sensor meets this requirement. This study presents a new spatiotemporal data fusion method, the Flexible Spatiotemporal DAta Fusion (FSDAF) method, to generate synthesized frequent high spatial...

  18. High resolution solar observations

    International Nuclear Information System (INIS)

    Title, A.

    1985-01-01

    Currently there is a world-wide effort to develop the optical technology required for large diffraction-limited telescopes that must operate with high optical fluxes. These developments can be used to significantly improve high-resolution solar telescopes both on the ground and in space. When considering the problem of high-resolution observations, it is essential to keep in mind that a diffraction-limited telescope is an interferometer. Even a 30 cm aperture telescope, which is small for high-resolution observations, is a big interferometer. Meter-class and larger diffraction-limited telescopes can be expected to be very unforgiving of inattention to detail. Unfortunately, even when an Earth-based telescope has perfect optics, there are still problems with the quality of its optical path. The optical path includes not only the interior of the telescope, but also the immediate interface between the telescope and the atmosphere, and finally the atmosphere itself.

  19. Computation of Nonlinear Backscattering Using a High-Order Numerical Method

    Science.gov (United States)

    Fibich, G.; Ilan, B.; Tsynkov, S.

    2001-01-01

    The nonlinear Schrödinger equation (NLS) is the standard model for propagation of intense laser beams in Kerr media. The NLS is derived from the nonlinear Helmholtz equation (NLH) by employing the paraxial approximation and neglecting the backscattered waves. In this study we use a fourth-order finite-difference method supplemented by special two-way artificial boundary conditions (ABCs) to solve the NLH as a boundary value problem. Our numerical methodology allows for a direct comparison of the NLH and NLS models and for an accurate quantitative assessment of the backscattered signal.
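As a hedged illustration of the kind of scheme mentioned above (a generic fourth-order central-difference stencil for a second derivative, not the authors' NLH solver or their two-way ABCs), the following sketch verifies fourth-order convergence on a periodic 1D grid:

```python
import numpy as np

def d2_fourth_order(u, h):
    """Fourth-order central difference for u'' on a periodic grid:
    u'' ~ (-u[i-2] + 16 u[i-1] - 30 u[i] + 16 u[i+1] - u[i+2]) / (12 h^2)."""
    return (-np.roll(u, 2) + 16 * np.roll(u, 1) - 30 * u
            + 16 * np.roll(u, -1) - np.roll(u, -2)) / (12 * h**2)

# Convergence check on u = sin(x), whose exact second derivative is -sin(x):
# halving h should divide the error by ~16.
errs = []
for n in (32, 64):
    x = np.linspace(0, 2 * np.pi, n, endpoint=False)
    h = x[1] - x[0]
    errs.append(np.max(np.abs(d2_fourth_order(np.sin(x), h) + np.sin(x))))
order = np.log2(errs[0] / errs[1])
print(f"observed order of accuracy: {order:.2f}")
```

The observed order is close to 4, which is the behavior that lets such schemes resolve the small backscattered signal without an impractically fine grid.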

  20. A numerical method to compute interior transmission eigenvalues

    International Nuclear Information System (INIS)

    Kleefeld, Andreas

    2013-01-01

    In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
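A minimal sketch of a contour-integral eigenvalue solver in this spirit (a Beyn-style algorithm applied to a toy linear problem T(z) = A − zI, rather than the paper's boundary integral formulation for the transmission problem) can be written as:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy problem: T(z) = A - z I; eigenvalues of A inside the contour are sought.
A = np.diag([1.0, 2.0, 5.0])
n, n_probe, n_quad = 3, 3, 64

def T(z):
    return A - z * np.eye(n)

# Circular contour enclosing the eigenvalues 1 and 2 but not 5.
center, radius = 1.5, 1.2
Vhat = rng.standard_normal((n, n_probe))

# Moments A0 = (1/2pi i) contour_int T(z)^-1 Vhat dz and A1 (with extra z),
# approximated by the trapezoid rule on the circle.
A0 = np.zeros((n, n_probe), dtype=complex)
A1 = np.zeros((n, n_probe), dtype=complex)
for j in range(n_quad):
    theta = 2 * np.pi * j / n_quad
    z = center + radius * np.exp(1j * theta)
    X = np.linalg.solve(T(z), Vhat)
    w = radius * np.exp(1j * theta) / n_quad   # dz / (2 pi i), trapezoid weight
    A0 += w * X
    A1 += w * z * X

# Rank-revealing SVD of A0, then a small k x k eigenproblem gives the
# eigenvalues enclosed by the contour.
V, s, Wh = np.linalg.svd(A0)
k = int(np.sum(s > 1e-8 * s[0]))
B = V[:, :k].conj().T @ A1 @ Wh[:k].conj().T / s[:k]
eigs = np.sort(np.linalg.eigvals(B).real)
print("eigenvalues found inside contour:", np.round(eigs, 6))
```

Because the integrand is analytic on the contour, the trapezoid rule converges exponentially in the number of quadrature nodes, which is one reason contour methods can deliver very accurate eigenvalues at modest cost.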

  1. Fluorescence photooxidation with eosin: a method for high resolution immunolocalization and in situ hybridization detection for light and electron microscopy

    Science.gov (United States)

    1994-01-01

    A simple method is described for high-resolution light and electron microscopic immunolocalization of proteins in cells and tissues by immunofluorescence and subsequent photooxidation of diaminobenzidine tetrahydrochloride into an insoluble osmiophilic polymer. By using eosin as the fluorescent marker, a substantial improvement in sensitivity is achieved in the photooxidation process over other conventional fluorescent compounds. The technique allows for precise correlative immunolocalization studies on the same sample using fluorescence, transmitted light and electron microscopy. Furthermore, because eosin is smaller in size than other conventional markers, this method results in improved penetration of labeling reagents compared to gold or enzyme based procedures. The improved penetration allows for three-dimensional immunolocalization using high voltage electron microscopy. Fluorescence photooxidation can also be used for high resolution light and electron microscopic localization of specific nucleic acid sequences by in situ hybridization utilizing biotinylated probes followed by an eosin-streptavidin conjugate. PMID:7519623

  2. Experimental High-Resolution Land Surface Prediction System for the Vancouver 2010 Winter Olympic Games

    Science.gov (United States)

    Belair, S.; Bernier, N.; Tong, L.; Mailhot, J.

    2008-05-01

    The 2010 Winter Olympic and Paralympic Games will take place in Vancouver, Canada, from 12 to 28 February 2010 and from 12 to 21 March 2010, respectively. In order to provide the best possible guidance achievable with current state-of-the-art science and technology, Environment Canada is currently setting up an experimental numerical prediction system for these special events. This system consists of a 1-km limited-area atmospheric model that will be integrated for 16 h, twice a day, with improved microphysics compared with the system currently operational at the Canadian Meteorological Centre. In addition, several new and original tools will be used to adapt and refine predictions near and at the surface. Very high-resolution two-dimensional surface systems, with 100-m and 20-m grid sizes, will cover the Vancouver Olympic area. Using adaptation methods to improve the forcing from the lower-resolution atmospheric models, these 2D surface models better represent surface processes and thus lead to better predictions of snow conditions and near-surface air temperature. Based on a similar strategy, a single-point model will be implemented to better predict surface characteristics at each station of an observing network especially installed for the 2010 events. The main advantage of this single-point system is that surface observations are used as forcing for the land surface models, and can even be assimilated (although this is not expected in the first version of this new tool) to improve initial conditions of surface variables such as snow depth and surface temperature. Another adaptation tool, based on 2D stationary solutions of a simple dynamical system, will be used to produce near-surface winds on the 100-m grid, coherent with the high-resolution orography. The configuration of the experimental numerical prediction system will be presented at the conference, together with preliminary results for winter 2007-2008.

  3. High-Resolution Remote Sensing Image Building Extraction Based on Markov Model

    Science.gov (United States)

    Zhao, W.; Yan, L.; Chang, Y.; Gong, L.

    2018-04-01

    With the increase of resolution, remote sensing images show increased information load, increased noise, and more complex geometric and textural information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a high-resolution remote sensing image building extraction method based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadows, vegetation, and other pseudo-building information; compared with traditional pixel-level information extraction, it performs better in building extraction precision, accuracy, and completeness.

  4. High-resolution 3D imaging of polymerized photonic crystals by lab-based x-ray nanotomography with 50-nm resolution

    Science.gov (United States)

    Yin, Leilei; Chen, Ying-Chieh; Gelb, Jeff; Stevenson, Darren M.; Braun, Paul A.

    2010-09-01

    High-resolution x-ray computed tomography is a powerful non-destructive 3-D imaging method. It can offer superior resolution on objects that are opaque or low-contrast for optical microscopy. Synchrotron-based x-ray computed tomography systems have been available for scientific research but remain difficult to access for broader users. This work introduces a lab-based high-resolution x-ray nanotomography system with 50 nm resolution in absorption and Zernike phase contrast modes. Using this system, we have demonstrated high-quality 3-D images of polymerized photonic crystals, which have been analyzed for band gap structures. The isotropic volumetric data show excellent consistency with other characterization results.

  5. Assessment of modern spectral analysis methods to improve wavenumber resolution of F-K spectra

    International Nuclear Information System (INIS)

    Shirley, T.E.; Laster, S.J.; Meek, R.A.

    1987-01-01

    The improvement in wavenumber spectra obtained by using high resolution spectral estimators is examined. Three modern spectral estimators were tested, namely the Autoregressive/Maximum Entropy (AR/ME) method, the Extended Prony method, and an eigenstructure method. They were combined with the conventional Fourier method by first transforming each trace with a Fast Fourier Transform (FFT). A high resolution spectral estimator was applied to the resulting complex spatial sequence for each frequency. The collection of wavenumber spectra thus computed comprises a hybrid f-k spectrum with high wavenumber resolution and less spectral ringing. Synthetic and real data records containing 25 traces were analyzed by using the hybrid f-k method. The results show an FFT-AR/ME f-k spectrum has noticeably better wavenumber resolution and more spectral dynamic range than conventional spectra when the number of channels is small. The observed improvement suggests the hybrid technique is potentially valuable in seismic data analysis
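A hedged sketch of the second stage of such a hybrid estimator (an AR spectrum obtained from the Yule-Walker equations, standing in for the paper's AR/ME code; all sizes are illustrative) applied to a single 25-sample complex spatial sequence, as would be produced for one frequency bin after the temporal FFT:

```python
import numpy as np

rng = np.random.default_rng(2)

# Short complex "spatial sequence" (one frequency bin across 25 traces)
# containing a single wavenumber plus noise.
n, k_true, p = 25, 0.2, 6          # samples, true normalized wavenumber, AR order
x = np.exp(2j * np.pi * k_true * np.arange(n)) + 0.1 * rng.standard_normal(n)

# Yule-Walker: sample autocorrelations, then solve R a = -r for AR coefficients.
r = np.array([np.vdot(x[:n - m], x[m:]) / (n - m) for m in range(p + 1)])
R = np.array([[r[i - j] if i >= j else np.conj(r[j - i]) for j in range(p)]
              for i in range(p)])
a = np.linalg.solve(R, -r[1:])

# AR spectrum: P(k) is proportional to 1 / |1 + sum_m a_m e^{-2 pi i k m}|^2,
# so the spectral peak is the minimum of |A(k)|.
ks = np.linspace(0, 0.5, 501)
E = np.abs(1 + sum(a[m] * np.exp(-2j * np.pi * ks * (m + 1)) for m in range(p)))
k_est = ks[np.argmin(E)]
print(f"estimated wavenumber: {k_est:.3f} (true {k_true})")
```

For such a short sequence the AR peak is much narrower than the corresponding FFT periodogram lobe, which is the resolution gain the hybrid f-k method exploits in the wavenumber direction.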

  6. High-resolution investigations of edge effects in neutron imaging

    International Nuclear Information System (INIS)

    Strobl, M.; Kardjilov, N.; Hilger, A.; Kuehne, G.; Frei, G.; Manke, I.

    2009-01-01

    Edge enhancement is the main effect measured by the so-called inline or propagation-based neutron phase contrast imaging method. The effect was originally explained by diffraction, and high spatial coherence has been claimed to be a necessary precondition. However, edge enhancement has also been found in conventional imaging with high resolution. In such cases the effects can produce artefacts and hinder quantification. In this letter the edge effects at cylindrically shaped samples and long straight edges have been studied in detail. The enhancement can be explained by refraction and total reflection. Using high-resolution imaging, where spatial resolutions better than 50 μm could be achieved, refraction and total reflection peaks - similar to diffraction patterns - could be separated and distinguished.

  7. High-resolution coherent three-dimensional spectroscopy of Br2.

    Science.gov (United States)

    Chen, Peter C; Wells, Thresa A; Strangfeld, Benjamin R

    2013-07-25

    In the past, high-resolution spectroscopy has been limited to small, simple molecules that yield relatively uncongested spectra. Larger and more complex molecules have a higher density of peaks and are susceptible to complications (e.g., effects from conical intersections) that can obscure the patterns needed to resolve and assign peaks. Recently, high-resolution coherent two-dimensional (2D) spectroscopy has been used to resolve and sort peaks into easily identifiable patterns for molecules where pattern recognition has been difficult. For very highly congested spectra, however, the ability to resolve peaks using coherent 2D spectroscopy is limited by the bandwidth of instrumentation. In this article, we introduce and investigate high-resolution coherent three-dimensional spectroscopy (HRC3D) as a method for dealing with heavily congested systems. The resulting patterns are unlike those in high-resolution coherent 2D spectra. Analysis of HRC3D spectra could provide a means for exploring the spectroscopy of large and complex molecules that have previously been considered too difficult to study.

  8. Beam-transport system for high-resolution heavy-ion spectroscopy

    International Nuclear Information System (INIS)

    Roussel, P.; Kashy, E.

    1980-01-01

    A method is given to adjust a beam-transport system to the requirements of high-energy-resolution heavy-ion spectroscopy. The results of a test experiment performed on an MP tandem with a 12C beam are shown. A drastic improvement in energy resolution is obtained for a kinematical factor K = (1/p)(dp/dθ) = 0.12.

  9. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging.

    Science.gov (United States)

    Yi, Tianzhu; He, Zhihua; He, Feng; Dong, Zhen; Wu, Manqing

    2017-11-07

    This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.

  10. Generalized Nonlinear Chirp Scaling Algorithm for High-Resolution Highly Squint SAR Imaging

    Directory of Open Access Journals (Sweden)

    Tianzhu Yi

    2017-11-01

    Full Text Available This paper presents a modified approach for high-resolution, highly squint synthetic aperture radar (SAR) data processing. Several nonlinear chirp scaling (NLCS) algorithms have been proposed to solve the azimuth variance of the frequency modulation rates caused by the linear range walk correction (LRWC). However, the azimuth depth of focusing (ADOF) is not handled well by these algorithms. The generalized nonlinear chirp scaling (GNLCS) algorithm proposed in this paper uses the method of series reversion (MSR) to improve the ADOF and focusing precision. It also introduces a high-order processing kernel to avoid range block processing. Simulation results show that the GNLCS algorithm can enlarge the ADOF and improve the focusing precision for high-resolution highly squint SAR data.

  11. High resolution x-ray CMT: Reconstruction methods

    Energy Technology Data Exchange (ETDEWEB)

    Brown, J.K.

    1997-02-01

    This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high-accuracy tomographic reconstruction codes.
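A tiny sketch of the iterative family described above (Kaczmarz's method, the row-action scheme underlying ART; the 4-pixel "image" and random projection matrix are illustrative, not a real CMT geometry):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy imaging model: projections b = A x of a 4-pixel image x.
x_true = np.array([1.0, 0.0, 2.0, 1.5])
A = rng.standard_normal((8, 4))          # 8 "ray sums" through 4 pixels
b = A @ x_true

# Kaczmarz / ART: sweep over rays, projecting the current estimate onto each
# hyperplane a_i . x = b_i; each sweep applies the full imaging model once.
x = np.zeros(4)
for sweep in range(200):
    for a_i, b_i in zip(A, b):
        x += (b_i - a_i @ x) / (a_i @ a_i) * a_i

err = np.linalg.norm(x - x_true)
print(f"reconstruction error after 200 sweeps: {err:.2e}")
```

The update never forms an inverse of the imaging operator, which is why the iterative framework generalizes to geometries where no analytic inversion formula exists.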

  12. High Spectral Resolution Lidar Based on a Potassium Faraday Dispersive Filter for Daytime Temperature Measurement

    Directory of Open Access Journals (Sweden)

    Abo Makoto

    2016-01-01

    Full Text Available In this paper, a new high-spectral-resolution lidar technique is proposed for measuring profiles of atmospheric temperature in the daytime. Based on the theory of high-resolution Rayleigh scattering, the feasibility and advantages of using potassium (K) Faraday dispersive optical filters as blocking filters for measuring atmospheric temperature are demonstrated with a numerical simulation. It was found that temperature profiles could be measured within 1 K error up to a height of 9 km, with a 500 m range resolution in 60 min, by using laser pulses of 1 mJ/pulse at 1 kHz and a 50 cm diameter telescope. Furthermore, we are developing a compact pulsed laser system for the temperature lidar transmitter.

  13. Numerical Methods and Turbulence Modeling for LES of Piston Engines: Impact on Flow Motion and Combustion

    Directory of Open Access Journals (Sweden)

    Misdariis A.

    2013-11-01

    Full Text Available In this article, Large Eddy Simulations (LES) of Spark Ignition (SI) engines are performed to evaluate the impact of the numerical set-up on the predicted flow motion and combustion process. Due to the high complexity and computational cost of such simulations, the classical set-up commonly includes "low"-order numerical schemes (typically first- or second-order accurate in time and space) as well as simple turbulence models, such as the well-known constant-coefficient Smagorinsky model (Smagorinsky J. (1963) Mon. Weather Rev. 91, 99-164). The scope of this paper is to evaluate the feasibility and the potential benefits of using high-precision methods for engine simulations, relying on higher-order numerical methods and state-of-the-art Sub-Grid-Scale (SGS) models. For this purpose, two high-order convection schemes from the Two-step Taylor Galerkin (TTG) family (Colin and Rudgyard (2000) J. Comput. Phys. 162, 338-371) and several SGS turbulence models, namely Dynamic Smagorinsky (Germano et al. (1991) Phys. Fluids 3, 1760-1765) and sigma (Baya Toda et al. (2010) Proc. Summer Program 2010, Stanford, Center for Turbulence Research, NASA Ames/Stanford Univ., pp. 193-202), are considered to improve the accuracy of the classically used Lax-Wendroff (LW) (Lax and Wendroff (1964) Commun. Pure Appl. Math. 17, 381-398) - Smagorinsky set-up. This evaluation is performed considering two different engine configurations from IFP Energies nouvelles. The first one is the naturally aspirated four-valve spark-ignited F7P engine, which benefits from an exhaustive experimental and numerical characterization. The second one, called Ecosural, is a highly supercharged spark-ignited engine. Unique realizations of engine cycles have been simulated for each set-up starting from the same initial conditions, and the comparison is made with experimental and previous numerical results for the F7P configuration. For the Ecosural engine, experimental results are not available yet and only…

  14. High-resolution electron microscopy and its applications.

    Science.gov (United States)

    Li, F H

    1987-12-01

    A review of research on high-resolution electron microscopy (HREM) carried out at the Institute of Physics, the Chinese Academy of Sciences, is presented. Apart from the direct observation of crystal and quasicrystal defects for some alloys, oxides, minerals, etc., and the structure determination for some minute crystals, an approximate image-contrast theory named pseudo-weak-phase object approximation (PWPOA), which shows the image contrast change with crystal thickness, is described. Within the framework of PWPOA, the image contrast of lithium ions in the crystal of R-Li2Ti3O7 has been observed. The usefulness of diffraction analysis techniques such as the direct method and Patterson method in HREM is discussed. Image deconvolution and resolution enhancement for weak-phase objects by use of the direct method are illustrated. In addition, preliminary results of image restoration for thick crystals are given.

  15. High-Resolution Sonars: What Resolution Do We Need for Target Recognition?

    Directory of Open Access Journals (Sweden)

    Pailhas Yan

    2010-01-01

    Full Text Available Target recognition in sonar imagery has long been an active research area in the maritime domain, especially in the mine-countermeasure context. Recently it has received even more attention as new sensors with increased resolution have been developed; new threats to critical maritime assets and a new paradigm for target recognition based on autonomous platforms have emerged. With the recent introduction of Synthetic Aperture Sonar systems and high-frequency sonars, sonar resolution has dramatically increased and noise levels have decreased. Sonar images are distance images, but at high resolution they tend to appear visually as optical images. Traditionally, algorithms have been developed specifically for imaging sonars because of their limited resolution and high noise levels. With high-resolution sonars, algorithms developed in the image processing field for natural images become applicable. However, the lack of large datasets has hampered the development of such algorithms. Here we present a fast and realistic sonar simulator enabling the development and evaluation of such algorithms. We develop a classifier and then analyse its performance using our simulated synthetic sonar images. Finally, we discuss sensor resolution requirements to achieve effective classification of various targets and demonstrate that with high-resolution sonars, target highlight analysis is the key to target recognition.

  16. A detailed survey of numerical methods for unconstrained minimization. Pt. 1

    International Nuclear Information System (INIS)

    Mika, K.; Chaves, T.

    1980-01-01

    A detailed description of numerical methods for unconstrained minimization is presented. This first part surveys in particular conjugate direction and gradient methods, whereas variable metric methods will be the subject of the second part. Among the results of special interest we quote the following. The conjugate direction methods of Powell, Zangwill and Sutti can be best interpreted if the Smith approach is adopted. The conditions for quadratic termination of Powell's first procedure are analyzed. Numerical results based on nonlinear least squares problems are presented for the following conjugate direction codes: VA04AD from the Harwell Subroutine Library and ZXPOW from IMSL, both implementations of Powell's second procedure, DFMND from IBM SL-MATH (Zangwill's method) and Brent's algorithm PRAXIS. VA04AD turns out to be superior in all cases; PRAXIS improves for high-dimensional problems. All codes clearly exhibit superlinear convergence. Akaike's result for the method of steepest descent is derived directly from a set of nonlinear recurrence relations. Numerical results obtained with the highly ill-conditioned Hilbert function confirm the theoretical predictions. Several properties of the conjugate gradient method are presented, and a new derivation of the equivalence of the steepest-descent PARTAN method and the CG method is given. A comparison of numerical results from the CG codes VA08AD (Fletcher-Reeves), DFMCG (the SSP version of the Fletcher-Reeves algorithm) and VA14AD (Powell's implementation of the Polak-Ribiere formula) reveals that VA14AD is clearly superior in all cases, but the convergence rate of these codes is only weakly superlinear, such that high-accuracy solutions require extremely large numbers of function calls. (orig.)
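For reference, the Fletcher-Reeves conjugate gradient scheme discussed above can be sketched in a few lines (a generic textbook version with Armijo backtracking, not any of the library codes compared in the survey):

```python
import numpy as np

def fletcher_reeves(f, grad, x0, iters=50):
    """Minimal nonlinear CG (Fletcher-Reeves) with Armijo backtracking."""
    x = x0.astype(float)
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break
        if g @ d >= 0:
            d = -g                       # safeguard: restart with steepest descent
        t = 1.0
        while f(x + t * d) > f(x) + 1e-4 * t * (g @ d):
            t *= 0.5                     # backtracking line search (Armijo)
        x = x + t * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)  # Fletcher-Reeves formula
        d = -g_new + beta * d
        g = g_new
    return x

# Quadratic test problem with minimum at (1, -2).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
xstar = np.array([1.0, -2.0])
f = lambda x: 0.5 * (x - xstar) @ Q @ (x - xstar)
grad = lambda x: Q @ (x - xstar)
x = fletcher_reeves(f, grad, np.zeros(2))
print("minimizer found:", np.round(x, 6))
```

Only the beta formula distinguishes Fletcher-Reeves from the Polak-Ribiere variant mentioned above (VA14AD), which uses g_new @ (g_new - g) / (g @ g) instead.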

  17. Development of numerical simulation technology for high resolution thermal hydraulic analysis

    International Nuclear Information System (INIS)

    Yoon, Han Young; Kim, K. D.; Kim, B. J.; Kim, J. T.; Park, I. K.; Bae, S. W.; Song, C. H.; Lee, S. W.; Lee, S. J.; Lee, J. R.; Chung, S. K.; Chung, B. D.; Cho, H. K.; Choi, S. K.; Ha, K. S.; Hwang, M. K.; Yun, B. J.; Jeong, J. J.; Sul, A. S.; Lee, H. D.; Kim, J. W.

    2012-04-01

    A realistic simulation of two-phase flows is essential for the advanced design and safe operation of a nuclear reactor system. The need for a multi-dimensional analysis of thermal hydraulics in nuclear reactor components is further increasing with advanced design features, such as a direct vessel injection system, a gravity-driven safety injection system, and a passive secondary cooling system. These features require more detailed analysis with enhanced accuracy. In this regard, KAERI has developed a three-dimensional thermal hydraulics code, CUPID, for the analysis of transient, multi-dimensional, two-phase flows in nuclear reactor components. The code was designed for use as a component-scale code, and/or a three-dimensional component, which can be coupled with a system code. This report presents an overview of the CUPID code development and preliminary assessment, mainly focusing on the numerical solution method and its verification and validation. It was shown that the CUPID code was successfully verified. The results of the validation calculations show that the CUPID code is very promising, but a systematic approach for the validation and improvement of the physical models is still needed.

  18. Temporal super resolution using variational methods

    DEFF Research Database (Denmark)

    Keller, Sune Høgild; Lauze, Francois Bernard; Nielsen, Mads

    2010-01-01

    Temporal super resolution (TSR) is the ability to convert video from one frame rate to another and is as such a key functionality in modern video processing systems. A higher frame rate than what is recorded is desired for high frame rate displays, for super slow-motion, and for video/film format… observed when watching video on large and bright displays, where the motion of high-contrast edges often seems jerky and unnatural. A novel motion-compensated (MC) TSR algorithm using variational methods for both optical flow calculation and the actual new frame interpolation is presented. The flow…

  19. A Residential Area Extraction Method for High Resolution Remote Sensing Imagery by Using Visual Saliency and Perceptual Organization

    Directory of Open Access Journals (Sweden)

    CHEN Yixiang

    2017-12-01

    Full Text Available Inspired by the human visual cognitive mechanism, a method of residential area extraction from high-resolution remote sensing images was proposed based on visual saliency and perceptual organization. Firstly, the data field theory of cognitive physics was introduced to model visual saliency, and candidate residential areas were produced by adaptive thresholding. Then, the exact residential areas were obtained and refined by perceptual organization based on the high-frequency features of a multi-scale wavelet transform. Finally, the validity of the proposed method was verified by experiments conducted on ZY-3 and QuickBird image data sets.

  20. An object-oriented classification method of high resolution imagery based on improved AdaTree

    International Nuclear Information System (INIS)

    Xiaohe, Zhang; Liang, Zhai; Jixian, Zhang; Huiyong, Sang

    2014-01-01

    With the popularity of applications using high spatial resolution remote sensing images, more and more studies have paid attention to object-oriented classification, covering image segmentation as well as automatic classification after segmentation. This paper proposes a fast method of object-oriented automatic classification. First, edge-based or FNEA-based segmentation was used to identify image objects, and the values of the attributes of the image objects most suitable for classification were calculated. Then a certain number of samples from the image objects were selected as training data for the improved AdaTree algorithm to obtain classification rules. Finally, the image objects could be classified easily using these rules. In the AdaTree, we mainly modified the final hypothesis to obtain the classification rules. In an experiment with a WorldView-2 image, the method based on AdaTree showed an obvious improvement in accuracy and efficiency compared with the method based on SVM, with the kappa coefficient reaching 0.9242.

  1. Retinal blood vessel segmentation in high resolution fundus photographs using automated feature parameter estimation

    Science.gov (United States)

    Orlando, José Ignacio; Fracchia, Marcos; del Río, Valeria; del Fresno, Mariana

    2017-11-01

    Several ophthalmological and systemic diseases are manifested through pathological changes in the properties and the distribution of the retinal blood vessels. The characterization of such alterations requires the segmentation of the vasculature, which is a tedious and time-consuming task that is infeasible to perform manually. Numerous attempts have been made to propose automated methods for segmenting the retinal vasculature from fundus photographs, although their application in real clinical scenarios is usually limited by their ability to deal with images taken at different resolutions. This is likely due to the large number of parameters that have to be properly calibrated according to each image scale. In this paper we propose to apply a novel strategy for automated feature parameter estimation, combined with a vessel segmentation method based on fully connected conditional random fields. The estimation model is learned by linear regression from structural properties of the images and known optimal configurations that were previously obtained for low resolution data sets. Our experiments in high resolution images show that this approach is able to estimate appropriate configurations that are suitable for performing the segmentation task without requiring parameters to be re-engineered. Furthermore, our combined approach reported state-of-the-art performance on the benchmark data set HRF, as measured in terms of the F1-score and the Matthews correlation coefficient.
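
    The parameter-estimation idea — regress optimal parameter values against structural image properties — can be sketched in a few lines. The feature (vessel calibre in pixels) and the "optimal" values are invented for illustration, not taken from the paper:

    ```python
    import numpy as np

    # Hypothetical training set: one structural feature per image (an estimate
    # of average vessel calibre in pixels) and the parameter value previously
    # found optimal for that image at low resolution.
    calibre = np.array([3.0, 4.5, 6.0, 8.0, 12.0])     # feature (px)
    best_sigma = np.array([1.1, 1.6, 2.1, 2.8, 4.1])   # optimal filter scale

    # Fit sigma ~ a * calibre + b by least squares.
    A = np.vstack([calibre, np.ones_like(calibre)]).T
    (a, b), *_ = np.linalg.lstsq(A, best_sigma, rcond=None)

    # Estimate a configuration for an unseen high-resolution image.
    sigma_hat = a * 10.0 + b
    ```

    In the paper the regression is learned once on low-resolution data sets and then extrapolated to high-resolution images, avoiding per-resolution recalibration.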

  2. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and "hidden" dimensions.

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D; Ridge, Clark; Shaka, A J

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to "reduced-dimensionality" strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the filter diagonalization method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths.

  3. 3D high spectral and spatial resolution imaging of ex vivo mouse brain

    International Nuclear Information System (INIS)

    Foxley, Sean; Karczmar, Gregory S.; Domowicz, Miriam; Schwartz, Nancy

    2015-01-01

    Purpose: Widely used MRI methods show brain morphology both in vivo and ex vivo at very high resolution. Many of these methods (e.g., T2*-weighted imaging, phase-sensitive imaging, or susceptibility-weighted imaging) are sensitive to local magnetic susceptibility gradients produced by subtle variations in tissue composition. However, the spectral resolution of commonly used methods is limited to maintain reasonable run-time combined with very high spatial resolution. Here, the authors report on data acquisition at increased spectral resolution, with 3-dimensional high spectral and spatial resolution MRI, in order to analyze subtle variations in water proton resonance frequency and lineshape that reflect local anatomy. The resulting information complements previous studies based on T2* and resonance frequency. Methods: The proton free induction decay was sampled at high resolution and Fourier transformed to produce a high-resolution water spectrum for each image voxel in a 3D volume. Data were acquired using a multigradient echo pulse sequence (i.e., echo-planar spectroscopic imaging) with a spatial resolution of 50 × 50 × 70 μm³ and a spectral resolution of 3.5 Hz. Data were analyzed in the spectral domain, and images were produced from the various Fourier components of the water resonance. This allowed precise measurement of local variations in water resonance frequency and lineshape, at the expense of significantly increased run time (16–24 h). Results: High contrast T2*-weighted images were produced from the peak of the water resonance (peak height image), revealing a high degree of anatomical detail, specifically in the hippocampus and cerebellum. In images produced from Fourier components of the water resonance at −7.0 Hz from the peak, the contrast between deep white matter tracts and the surrounding tissue is the reverse of the contrast in water peak height images. This indicates the presence of a shoulder in the water resonance that is not

  4. THE AGORA HIGH-RESOLUTION GALAXY SIMULATIONS COMPARISON PROJECT. II. ISOLATED DISK TEST

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ji-hoon [Kavli Institute for Particle Astrophysics and Cosmology, SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Agertz, Oscar [Department of Physics, University of Surrey, Guildford, Surrey, GU2 7XH (United Kingdom); Teyssier, Romain; Feldmann, Robert [Centre for Theoretical Astrophysics and Cosmology, Institute for Computational Science, University of Zurich, Zurich, 8057 (Switzerland); Butler, Michael J. [Max-Planck-Institut für Astronomie, D-69117 Heidelberg (Germany); Ceverino, Daniel [Zentrum für Astronomie der Universität Heidelberg, Institut für Theoretische Astrophysik, D-69120 Heidelberg (Germany); Choi, Jun-Hwan [Department of Astronomy, University of Texas, Austin, TX 78712 (United States); Keller, Ben W. [Department of Physics and Astronomy, McMaster University, Hamilton, ON L8S 4M1 (Canada); Lupi, Alessandro [Institut d’Astrophysique de Paris, Sorbonne Universites, UPMC Univ Paris 6 et CNRS, F-75014 Paris (France); Quinn, Thomas; Wallace, Spencer [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Revaz, Yves [Institute of Physics, Laboratoire d’Astrophysique, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne (Switzerland); Gnedin, Nickolay Y. [Particle Astrophysics Center, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leitner, Samuel N. [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); Shen, Sijing [Kavli Institute for Cosmology, University of Cambridge, Cambridge, CB3 0HA (United Kingdom); Smith, Britton D., E-mail: me@jihoonkim.org [Institute for Astronomy, University of Edinburgh, Royal Observatory, Edinburgh EH9 3HJ (United Kingdom); Collaboration: AGORA Collaboration; and others

    2016-12-20

    Using an isolated Milky Way-mass galaxy simulation, we compare results from nine state-of-the-art gravito-hydrodynamics codes widely used in the numerical community. We utilize the infrastructure we have built for the AGORA High-resolution Galaxy Simulations Comparison Project. This includes the common disk initial conditions, common physics models (e.g., radiative cooling and UV background by the standardized package Grackle) and common analysis toolkit yt, all of which are publicly available. Subgrid physics models such as Jeans pressure floor, star formation, supernova feedback energy, and metal production are carefully constrained across code platforms. With numerical accuracy that resolves the disk scale height, we find that the codes overall agree well with one another in many dimensions, including gas and stellar surface densities, rotation curves, velocity dispersions, density and temperature distribution functions, disk vertical heights, stellar clumps, star formation rates, and Kennicutt–Schmidt relations. Quantities such as velocity dispersions are very robust (agreement within a few tens of percent at all radii) while measures like newly formed stellar clump mass functions show more significant variation (difference by up to a factor of ∼3). Systematic differences exist, for example, between mesh-based and particle-based codes in the low-density region, and between more diffusive and less diffusive schemes in the high-density tail of the density distribution. Yet intrinsic code differences are generally small compared to the variations in numerical implementations of the common subgrid physics such as supernova feedback. Our experiment reassures us that, if adequately designed in accordance with our proposed common parameters, the results of a modern high-resolution galaxy formation simulation are more sensitive to input physics than to intrinsic differences in numerical schemes.

  5. Numerical method for two-phase flow with an unstable interface

    International Nuclear Information System (INIS)

    Glimm, J.; Marchesin, D.; McBryan, O.

    1981-01-01

    The random choice method is used to compute the oil-water interface for two-dimensional porous media equations. The equations used are a pair of coupled equations: the (elliptic) pressure equation and the (hyperbolic) saturation equation. The equations do not include the dispersive capillary pressure term, and the computation does not introduce numerical diffusion. The method resolves saturation discontinuities sharply. The main conclusion of this paper is that the random choice method is a correct numerical procedure for this problem even in the highly fingered case. Two methods of inducing fingers are considered: deterministically, through the choice of Cauchy data; and through heterogeneity, by maximizing the randomness of the random choice method.
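
    The random choice (Glimm) idea — advance by sampling exact local Riemann solutions at a random offset instead of averaging them — can be sketched for the scalar inviscid Burgers equation (a toy stand-in for the paper's coupled porous-media system). Note that the scheme keeps a shock perfectly sharp, with no numerical diffusion:

    ```python
    import numpy as np

    def riemann_burgers(ul, ur, xi):
        """Exact Riemann solution of u_t + (u^2/2)_x = 0 on the ray x/t = xi."""
        if ul > ur:                        # shock, Rankine-Hugoniot speed
            return ul if xi < 0.5 * (ul + ur) else ur
        if xi < ul:                        # rarefaction fan
            return ul
        return ur if xi > ur else xi

    def glimm_step(u, dt, dx, theta):
        """One random-choice step: sample the local Riemann solutions at
        offset theta*dx from each cell centre, theta in (-1/2, 1/2)."""
        unew = u.copy()
        for i in range(1, len(u) - 1):
            if theta < 0:   # sampling point lies in the left interface's fan
                unew[i] = riemann_burgers(u[i - 1], u[i], (theta + 0.5) * dx / dt)
            else:           # ... in the right interface's fan
                unew[i] = riemann_burgers(u[i], u[i + 1], (theta - 0.5) * dx / dt)
        return unew

    # Right-moving shock (ul=1, ur=0): the profile never smears.
    dx, dt = 1.0, 0.5                      # CFL = max|u|*dt/dx = 1/2
    u = np.where(np.arange(40) < 10, 1.0, 0.0)
    rng = np.random.default_rng(1)
    for _ in range(20):
        u = glimm_step(u, dt, dx, rng.uniform(-0.5, 0.5))
    ```

    Production implementations use an equidistributed (e.g. van der Corput) sequence for `theta` rather than pseudo-random draws, which improves the accuracy of the mean shock position.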

  6. Experimental demonstration of high resolution three-dimensional x-ray holography

    International Nuclear Information System (INIS)

    McNulty, I.; Trebes, J.E.; Brase, J.M.; Yorkey, T.J.; Levesque, R.; Szoke, H.; Anderson, E.H.; Jacobsen, C.

    1992-01-01

    Tomographic x-ray holography may make possible the imaging of biological objects at high resolution in three dimensions. We performed a demonstration experiment with soft x-rays to explore the feasibility of this technique. Coherent 3.2-nm undulator radiation was used to record Fourier transform holograms of a microfabricated test object from various illumination angles. The holograms were numerically reconstructed according to the principles of diffraction tomography, yielding images of the object that are well resolved in three dimensions.
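
    The reconstruction principle of a Fourier transform hologram can be shown in a 2-D toy sketch (synthetic object and reference point, not the experiment's data or its diffraction-tomography pipeline): the far-field intensity loses the phase, but an inverse FFT yields the field's autocorrelation, whose cross-term is an image of the object displaced by the object-reference separation.

    ```python
    import numpy as np

    # Object: small aperture; reference: off-axis point source.
    n = 128
    field = np.zeros((n, n), complex)
    field[60:68, 60:68] = 1.0            # 8x8 object support near the centre
    field[20, 20] = 4.0                  # reference point, well separated

    # Far-field intensity = Fourier-transform hologram (phases lost).
    holo = np.abs(np.fft.fft2(field)) ** 2

    # Numerical reconstruction: inverse FT of the hologram gives the
    # autocorrelation; the object image appears at offset (40, 40).
    recon = np.abs(np.fft.ifft2(holo))
    ```

    The central peak `recon[0, 0]` is the total intensity (64 + 16 = 80 here); the object copy and its twin sit at ±(40, 40), the separation between object and reference.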

  7. An outline review of numerical transport methods

    International Nuclear Information System (INIS)

    Budd, C.

    1981-01-01

    A brief review is presented of numerical methods for solving the neutron transport equation in the context of reactor physics. First the various forms of transport equation are given. Second, the various ways of classifying numerical transport methods are discussed. Finally each method (or class of methods) is outlined in turn. (U.K.)

  8. Isotope specific resolution recovery image reconstruction in high resolution PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Kotasidis, Fotis A. [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva, Switzerland and Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, M20 3LJ, Manchester (United Kingdom); Angelis, Georgios I. [Faculty of Health Sciences, Brain and Mind Research Institute, University of Sydney, NSW 2006, Sydney (Australia); Anton-Rodriguez, Jose; Matthews, Julian C. [Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Reader, Andrew J. [Montreal Neurological Institute, McGill University, Montreal QC H3A 2B4, Canada and Department of Biomedical Engineering, Division of Imaging Sciences and Biomedical Engineering, King' s College London, St. Thomas’ Hospital, London SE1 7EH (United Kingdom); Zaidi, Habib [Division of Nuclear Medicine and Molecular Imaging, Geneva University Hospital, CH-1211 Geneva (Switzerland); Geneva Neuroscience Centre, Geneva University, CH-1205 Geneva (Switzerland); Department of Nuclear Medicine and Molecular Imaging, University of Groningen, University Medical Center Groningen, PO Box 30 001, Groningen 9700 RB (Netherlands)

    2014-05-15

    Purpose: Measuring and incorporating a scanner-specific point spread function (PSF) within image reconstruction has been shown to improve spatial resolution in PET. However, due to the short half-life of clinically used isotopes, other long-lived isotopes not used in clinical practice are used to perform the PSF measurements. As such, non-optimal PSF models that do not correspond to those needed for the data to be reconstructed are used within resolution modeling (RM) image reconstruction, usually underestimating the true PSF owing to the difference in positron range. In high resolution brain and preclinical imaging, this effect is of particular importance since the PSFs become more positron range limited and isotope-specific PSFs can help maximize the performance benefit from using resolution recovery image reconstruction algorithms. Methods: In this work, the authors used a printing technique to simultaneously measure multiple point sources on the High Resolution Research Tomograph (HRRT), and the authors demonstrated the feasibility of deriving isotope-dependent system matrices from fluorine-18 and carbon-11 point sources. Furthermore, the authors evaluated the impact of incorporating them within RM image reconstruction, using carbon-11 phantom and clinical datasets on the HRRT. Results: The results obtained using these two isotopes illustrate that even small differences in positron range can result in different PSF maps, leading to further improvements in contrast recovery when used in image reconstruction. The difference is more pronounced in the centre of the field-of-view where the full width at half maximum (FWHM) from the positron range has a larger contribution to the overall FWHM compared to the edge where the parallax error dominates the overall FWHM. 
Conclusions: Based on the proposed methodology, measured isotope-specific and spatially variant PSFs can be reliably derived and used for improved spatial resolution and variance performance in resolution
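
    The benefit of matching the PSF model to the isotope can be illustrated with a 1-D MLEM toy (invented FWHM values standing in for the fluorine-18 and carbon-11 positron ranges; not the HRRT system model): reconstructing with an underestimated PSF leaves residual blur, while the matched model recovers the point sources.

    ```python
    import numpy as np

    def gaussian_psf_matrix(n, fwhm):
        """Toeplitz blur matrix for a Gaussian PSF of the given FWHM (pixels)."""
        sigma = fwhm / 2.355
        i = np.arange(n)
        K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / sigma) ** 2)
        return K / K.sum(axis=1, keepdims=True)

    def mlem(y, A, n_iter=200):
        """1D MLEM with a resolution-modelling system matrix A."""
        x = np.ones(A.shape[1])
        sens = A.sum(axis=0)                         # sensitivity image
        for _ in range(n_iter):
            proj = A @ x
            x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
        return x

    n = 64
    truth = np.zeros(n); truth[20] = 100.0; truth[40] = 100.0  # point sources
    A_true = gaussian_psf_matrix(n, fwhm=2.0)   # "true" isotope PSF (assumed)
    y = A_true @ truth                          # noise-free measurement

    x_matched = mlem(y, A_true)                              # isotope-specific PSF
    x_wrong = mlem(y, gaussian_psf_matrix(n, fwhm=1.0))      # PSF underestimated
    ```

    With the matched model the iteration deconvolves the full blur; with the narrower (underestimated) PSF the reconstruction converges to a residually blurred image with lower peak recovery, mirroring the contrast-recovery effect described in the abstract.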


  10. Automated aberration correction of arbitrary laser modes in high numerical aperture systems

    OpenAIRE

    Hering, Julian; Waller, Erik H.; Freymann, Georg von

    2016-01-01

    Controlling the point-spread-function in three-dimensional laser lithography is crucial for fabricating structures with the highest definition and resolution. In contrast to microscopy, aberrations have to be physically corrected prior to writing, to create well-defined doughnut modes, bottle beams or multi-foci modes. We report on a modified Gerchberg-Saxton algorithm for spatial-light-modulator based automated aberration compensation to optimize arbitrary laser modes in a high numerical aperture...

  11. High energy resolution off-resonant X-ray spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wojciech, Blachucki [Univ. of Fribourg (Switzerland). Dept. of Physics

    2015-10-16

    This work treats the high energy resolution off-resonant X-ray spectroscopy (HEROS) method of determining the density of unoccupied electronic states in the vicinity of the absorption edge. HEROS is an alternative to existing X-ray absorption spectroscopy (XAS) methods and opens the way for new studies not achievable before.

  12. Numerical Continuation Methods for Intrusive Uncertainty Quantification Studies

    Energy Technology Data Exchange (ETDEWEB)

    Safta, Cosmin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Najm, Habib N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Phipps, Eric Todd [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-09-01

    Rigorous modeling of engineering systems relies on efficient propagation of uncertainty from input parameters to model outputs. In recent years, there has been substantial development of probabilistic polynomial chaos (PC) Uncertainty Quantification (UQ) methods, enabling studies in expensive computational models. One approach, termed "intrusive", involving reformulation of the governing equations, has been found to have superior computational performance compared to non-intrusive sampling-based methods in relevant large-scale problems, particularly in the context of emerging architectures. However, the utility of intrusive methods has been severely limited due to detrimental numerical instabilities associated with strong nonlinear physics. Previous methods for stabilizing these constructions tend to add unacceptably high computational costs, particularly in problems with many uncertain parameters. In order to address these challenges, we propose to adapt and improve numerical continuation methods for the robust time integration of intrusive PC system dynamics. We propose adaptive methods, starting with a small uncertainty for which the model has stable behavior and gradually moving to larger uncertainty where the instabilities are rampant, in a manner that provides a suitable solution.
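
    The basic continuation idea — march a parameter in small steps, warm-starting the nonlinear solver from the previous solution — can be sketched on a scalar model problem with a fold, u = λe^u (an illustrative stand-in, not the intrusive PC systems of the report):

    ```python
    import numpy as np

    def newton(f, df, u0, tol=1e-12, max_iter=50):
        """Plain Newton iteration for a scalar equation f(u) = 0."""
        u = u0
        for _ in range(max_iter):
            step = f(u) / df(u)
            u -= step
            if abs(step) < tol:
                return u
        raise RuntimeError("Newton failed to converge")

    # Natural-parameter continuation along the lower branch of u = lam*exp(u):
    # increase lam gradually, reusing the previous solution as the initial guess.
    lam_values = np.linspace(0.0, 0.3, 31)
    u = 0.0
    branch = []
    for lam in lam_values:
        u = newton(lambda x: x - lam * np.exp(x),
                   lambda x: 1.0 - lam * np.exp(x), u)
        branch.append(u)
    ```

    Starting Newton cold at λ = 0.3 would risk converging to the upper branch or diverging; the gradual march keeps each solve inside the basin of attraction, which is the same rationale as the report's adaptive ramp from small to large uncertainty. Pseudo-arclength variants additionally track the branch around folds.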

  13. Efficient numerical methods for fluid- and electrodynamics on massively parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Zudrop, Jens

    2016-07-01

    In the last decade, computer technology has evolved rapidly. Modern high performance computing systems offer a tremendous amount of computing power, in the range of a few petaflops (10¹⁵ floating point operations per second). In contrast, numerical software development is much slower, and most existing simulation codes cannot exploit the full computing power of these systems. Partially, this is due to the numerical methods themselves, and partially it is related to bottlenecks within the parallelization concept and its data structures. The goal of this thesis is the development of numerical algorithms and corresponding data structures to remedy both kinds of parallelization bottlenecks. The approach is based on a co-design of the numerical schemes (including numerical analysis) and their realizations in algorithms and software. Various kinds of applications, from multicomponent flows (Lattice Boltzmann Method) to electrodynamics (Discontinuous Galerkin Method) to embedded geometries (Octree), are considered, and the efficiency of the developed approaches is demonstrated for large scale simulations.

  14. Numerical perturbative methods in the quantum theory of physical systems

    International Nuclear Information System (INIS)

    Adam, G.

    1980-01-01

    During the last two decades, the development of digital electronic computers has led to the deployment of new, distinct methods in theoretical physics. These methods, based on the advances of modern numerical analysis as well as on specific equations describing physical processes, have enabled precise calculations of high complexity which have completed and sometimes changed our image of many physical phenomena. Our efforts have concentrated on the development of numerical methods with such intrinsic performance as to allow a successful approach to some key issues in present theoretical physics on smaller computation systems. The basic principle of such methods is to translate, into the language of numerical analysis, the theory of perturbations, which is suited to numerical rather than analytical computation. This idea is illustrated by working out two problems arising from the time-independent Schroedinger equation in the non-relativistic approximation, for quantum systems with a small number of particles and for systems with a large number of particles, respectively. In the first case, we are led to the numerical solution of some quadratic ordinary differential equations (first section of the thesis) and, in the second case, to the solution of some secular equations in the Brillouin zone (second section). (author)

  15. A New Method to Solve Numeric Solution of Nonlinear Dynamic System

    Directory of Open Access Journals (Sweden)

    Min Hu

    2016-01-01

    Full Text Available It is well known that the cubic spline function has the advantages of simple form, good convergence, good approximation, and second-order smoothness. A particular class of cubic spline function is constructed, and an effective method to solve the numerical solution of a nonlinear dynamic system is proposed based on the cubic spline function. Compared with existing methods, this method not only has high approximation precision but also avoids the Runge phenomenon. The error analysis of several methods is given via two numeric examples, which shows that the proposed method is a much more feasible tool for engineering practice.
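
    The Runge phenomenon the abstract refers to is easy to demonstrate: on Runge's function 1/(1 + 25x²), a single degree-10 polynomial through 11 equispaced nodes oscillates wildly near the interval ends, while a cubic spline through the same nodes stays accurate (a generic spline illustration, not the paper's particular spline class):

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    f = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's function
    nodes = np.linspace(-1.0, 1.0, 11)
    x = np.linspace(-1.0, 1.0, 401)

    poly = np.polyfit(nodes, f(nodes), deg=10)   # degree-10 interpolant
    spline = CubicSpline(nodes, f(nodes))        # piecewise-cubic interpolant

    poly_err = np.max(np.abs(np.polyval(poly, x) - f(x)))
    spline_err = np.max(np.abs(spline(x) - f(x)))
    ```

    The polynomial's maximum error exceeds the function's own range, whereas the spline error stays small; this locality of piecewise-cubic interpolation is what the proposed method exploits.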

  16. Utilization of Short-Simulations for Tuning High-Resolution Climate Model

    Science.gov (United States)

    Lin, W.; Xie, S.; Ma, P. L.; Rasch, P. J.; Qian, Y.; Wan, H.; Ma, H. Y.; Klein, S. A.

    2016-12-01

    Many physical parameterizations in atmospheric models are sensitive to resolution. Tuning models that involve a multitude of parameters at high resolution is computationally expensive, particularly when relying primarily on multi-year simulations. This work describes a complementary set of strategies for tuning high-resolution atmospheric models, using ensembles of short simulations to reduce the computational cost and elapsed time. Specifically, we utilize the hindcast approach developed through the DOE Cloud Associated Parameterization Testbed (CAPT) project for high-resolution model tuning. Short hindcast tests have been found to be effective in numerous previous studies in identifying model biases due to parameterized fast physics, and we demonstrate that they are also useful for tuning. After the most egregious errors are addressed through an initial "rough" tuning phase, longer simulations are performed to "hone in" on model features that evolve over longer timescales. We explore these strategies to tune the DOE ACME (Accelerated Climate Modeling for Energy) model. For the ACME model at 0.25° resolution, it is confirmed that, given the same parameters, major biases in global mean statistics and many spatial features are consistent between Atmospheric Model Intercomparison Project (AMIP)-type simulations and CAPT-type hindcasts, with just a small number of short-term simulations for the latter over the corresponding season. The use of CAPT hindcasts to find parameter choices that reduce large model biases dramatically improves the turnaround time for tuning at high resolution. Improvement seen in CAPT hindcasts generally translates to improved AMIP-type simulations. An iterative CAPT-AMIP tuning approach is therefore adopted during each major tuning cycle, with the former used to survey the likely responses and narrow the parameter space, and the latter to verify the results in a climate context along with assessment in

  17. An Improved Method for Producing High Spatial-Resolution NDVI Time Series Datasets with Multi-Temporal MODIS NDVI Data and Landsat TM/ETM+ Images

    Directory of Open Access Journals (Sweden)

    Yuhan Rao

    2015-06-01

    Full Text Available Due to technical limitations, it is impossible for current NDVI datasets to have high resolution in both the spatial and temporal dimensions. Therefore, several methods have been developed to produce NDVI time-series datasets with high spatial and temporal resolution, but these face limitations including high computation loads and unreasonable assumptions. In this study, an unmixing-based method, the NDVI Linear Mixing Growth Model (NDVI-LMGM), is proposed to accurately and efficiently blend MODIS NDVI time-series data and multi-temporal Landsat TM/ETM+ images. The method first unmixes the NDVI temporal changes in the MODIS time-series into different land cover types and then uses the unmixed NDVI temporal changes to predict a Landsat-like NDVI dataset. A test over a forest site shows the high accuracy (average difference: −0.0070; average absolute difference: 0.0228; average absolute relative difference: 4.02%) and computation efficiency of NDVI-LMGM (31 seconds using a personal computer). Experiments over more complex landscapes and long-term time-series demonstrated that NDVI-LMGM performs well in each stage of the vegetation growing season and is robust in regions with contrasting spatial and temporal variations. Comparisons between NDVI-LMGM and current methods (i.e., the Spatial and Temporal Adaptive Reflectance Fusion Model (STARFM), Enhanced STARFM (ESTARFM) and the Weighted Linear Model (WLM)) show that NDVI-LMGM is more accurate and efficient than these methods. The proposed method will benefit land surface process research, which requires dense NDVI time-series datasets with high spatial resolution.
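
    The unmixing step at the heart of such methods can be sketched as a least-squares problem: each coarse pixel's NDVI change is a fraction-weighted mixture of per-class changes, and the class changes are recovered from many coarse pixels (the fractions, class changes and noise level below are synthetic, not the paper's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scene: 3 land-cover classes, 100 coarse (MODIS-like) pixels.
    n_pix, n_cls = 100, 3
    fractions = rng.dirichlet(np.ones(n_cls), size=n_pix)  # class fractions
    true_delta = np.array([0.12, -0.03, 0.30])             # per-class NDVI change

    # Observed coarse-pixel NDVI change = fraction-weighted mixture + noise.
    coarse_delta = fractions @ true_delta + rng.normal(0, 0.005, n_pix)

    # Unmix: least-squares estimate of the per-class temporal change.
    est_delta, *_ = np.linalg.lstsq(fractions, coarse_delta, rcond=None)

    # A fine-resolution pixel of class k is then advanced by est_delta[k].
    fine_ndvi_t0 = 0.5                    # e.g. a pixel of the third class
    fine_ndvi_t1 = fine_ndvi_t0 + est_delta[2]
    ```

    Applying the unmixed class-wise change to each fine (Landsat-like) pixel according to its class is what produces the dense high-resolution NDVI series.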

  18. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

    In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We choose to focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented, performing the resolution of these problems defined on different scales (for time and space). We name this approach the Local Quasi-Static method. We present here this new multi-scale approach and its implementation. The inherent details of amplitude and shape treatments are discussed and justified. Results and performances, compared to MINOS, are studied. They illustrate the improvement in the time/precision ratio for kinetics calculations. Furthermore, we open some new possibilities to parallelize computations with MINOS. For the future, we also introduce some improvement tracks with adaptive scales. (author)

  19. High Resolution Simulations of Future Climate in West Africa Using a Variable-Resolution Atmospheric Model

    Science.gov (United States)

    Adegoke, J. O.; Engelbrecht, F.; Vezhapparambu, S.

    2013-12-01

    In previous work we demonstrated the application of a variable-resolution global atmospheric model, the conformal-cubic atmospheric model (CCAM), across a wide range of spatial and time scales to investigate the ability of the model to provide realistic simulations of present-day climate and plausible projections of future climate change over sub-Saharan Africa. By applying the model in stretched-grid mode, we also explored the versatility of the model dynamics, numerical formulation and physical parameterizations to function across a range of length scales over the region of interest. We primarily used CCAM to illustrate the capability of the model to function as a flexible downscaling tool at the climate-change time scale. Here we report on additional long-term climate projection studies performed by downscaling at much higher resolutions (8 km) over an area that stretches from just south of the Sahara desert to the southern coast of the Niger Delta and into the Gulf of Guinea. To perform these simulations, CCAM was provided with synoptic-scale forcing of atmospheric circulation from the 2.5° resolution NCEP reanalysis at 6-hourly intervals, with SSTs from NCEP reanalysis data used as lower boundary forcing. The 60 km resolution CCAM run was downscaled to 8 km (Schmidt factor 24.75), and the 8 km simulation was in turn downscaled to 1 km (Schmidt factor 200) over an area approximately 50 km x 50 km in the southern Lake Chad Basin (LCB). Our intent in conducting these high-resolution model runs was to obtain a deeper understanding of the linkages between the projected future climate and the hydrological processes that control the surface water regime in this part of sub-Saharan Africa.

  20. Low-resolution ship detection from high-altitude aerial images

    Science.gov (United States)

    Qi, Shengxiang; Wu, Jianmin; Zhou, Qing; Kang, Minyang

    2018-02-01

    Ship detection from optical images taken by high-altitude aircraft such as unmanned long-endurance airships and unmanned aerial vehicles has broad applications in marine fishery management, ship monitoring and vessel salvage. However, the major challenge is the limited capability of information processing on unmanned high-altitude platforms. Furthermore, in order to guarantee a wide detection range, unmanned aircraft generally cruise at high altitudes, resulting in imagery with low-resolution targets and strong clutter from heavy clouds. In this paper, we propose a low-resolution ship detection method to extract ships from these high-altitude optical images. Inspired by recent research on visual saliency detection indicating that small salient signals can be well detected by a gradient enhancement operation combined with Gaussian smoothing, we propose facet kernel filtering to rapidly suppress cluttered backgrounds and delineate candidate target regions from the sea surface. Then, principal component analysis (PCA) is used to compute the orientation of the target axis, followed by a simplified histogram of oriented gradients (HOG) descriptor to characterize the ship shape. Finally, a support vector machine (SVM) is applied to discriminate real targets from false alarms. Experimental results show that the proposed method achieves high efficiency in low-resolution ship detection.

  1. Simulation and Prediction of Weather Radar Clutter Using a Wave Propagator on High Resolution NWP Data

    DEFF Research Database (Denmark)

    Benzon, Hans-Henrik; Bovith, Thomas

    2008-01-01

    Weather radars are essential sensors for observation of precipitation in the troposphere and play a major part in weather forecasting and hydrological modelling. Clutter caused by non-standard wave propagation is a common problem in weather radar applications, and in this paper a method for prediction of this type of weather radar clutter is presented. The method uses a wave propagator to identify areas of potential non-standard propagation. The wave propagator uses a three-dimensional refractivity field derived from the geophysical parameters temperature, humidity, and pressure obtained from a high-resolution Numerical Weather Prediction (NWP) model. The wave propagator is based on the parabolic equation approximation to the electromagnetic wave equation. The parabolic equation is solved using the well-known Fourier split-step method. Finally, the radar clutter prediction technique is used...
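
The Fourier split-step solution of the parabolic equation mentioned above can be sketched in a few lines. The snippet below propagates a field through free space only (no refractivity screen), and all grid and radar parameters are illustrative choices, not values from the paper:

```python
import numpy as np

# Minimal Fourier split-step sketch for a parabolic wave equation
# (free-space only; grid and wavelength are illustrative).
n, dx = 1024, 0.5              # transverse grid points and spacing (m)
wavelength = 0.03              # a 10 GHz radar: lambda = 3 cm
k0 = 2 * np.pi / wavelength
x = (np.arange(n) - n // 2) * dx
u = np.exp(-(x / 20.0) ** 2)   # Gaussian initial field, ~20 m wide

kx = 2 * np.pi * np.fft.fftfreq(n, dx)
dz = 50.0                      # range step (m)
# Each split step handles diffraction exactly in the spectral domain via
# the parabolic-approximation propagator exp(-i kx^2 dz / (2 k0)).
for _ in range(40):            # propagate 2 km in range
    u = np.fft.ifft(np.fft.fft(u) * np.exp(-1j * kx**2 * dz / (2 * k0)))

power = np.sum(np.abs(u) ** 2)  # conserved by the unitary spectral step
```

In a refractive atmosphere, a second phase screen built from the NWP-derived modified refractivity would be applied between spectral steps; the free-space version above keeps only the exactly unitary diffraction step, so total power is conserved to machine precision.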

  2. A robust object-based shadow detection method for cloud-free high resolution satellite images over urban areas and water bodies

    Science.gov (United States)

    Tatar, Nurollah; Saadatseresht, Mohammad; Arefi, Hossein; Hadavand, Ahmad

    2018-06-01

    Unwanted contrast in high resolution satellite images, such as shadow areas, directly affects the result of further processing of urban remote sensing images. Detecting and finding the precise position of shadows is critical in different remote sensing processing chains such as change detection, image classification and digital elevation model generation from stereo images. The spectral similarity between shadow areas, water bodies, and some dark asphalt roads makes the development of robust shadow detection algorithms challenging. In addition, most of the existing methods work at the pixel level and neglect the contextual information contained in neighboring pixels. In this paper, a new object-based shadow detection framework is introduced. In the proposed method a pixel-level shadow mask is built by extending established thresholding methods with a new C4 index, which resolves the ambiguity between shadows and water bodies. Then the pixel-based results are further processed in an object-based majority analysis to detect the final shadow objects. Four different high resolution satellite images are used to validate this new approach. The results show the superiority of the proposed method over several state-of-the-art shadow detection methods, with an average F-measure of 96%.
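
The two-stage pipeline described here (a pixel-level threshold followed by an object-based majority vote) can be illustrated with a toy example. The brightness values, threshold, and two-segment map below are all invented for illustration; the paper's C4 index and its segmentation step are not reproduced:

```python
import numpy as np

# Toy sketch: pixel-level shadow mask by thresholding a brightness channel,
# then an object-based majority vote over precomputed segments.
rng = np.random.default_rng(0)
brightness = np.full((8, 8), 0.7) + 0.02 * rng.standard_normal((8, 8))
brightness[:, :4] = 0.2 + 0.02 * rng.standard_normal((8, 4))  # shadowed half
brightness[3, 1] = 0.9       # a bright outlier inside the shadow segment

pixel_mask = brightness < 0.4                  # pixel-level threshold

segments = np.zeros((8, 8), dtype=int)         # assumed segmentation output:
segments[:, :4] = 1                            # segment 1 = shadow, 0 = lit

object_mask = np.zeros_like(pixel_mask)
for seg_id in np.unique(segments):
    region = segments == seg_id
    vote = pixel_mask[region].mean() > 0.5     # majority of the object's pixels
    object_mask[region] = vote
```

The bright outlier at (3, 1) is missed by the pixel-level mask but recovered by the object-level vote, which is exactly the contextual correction the object-based step is meant to provide.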

  3. High resolution present climate and surface mass balance (SMB) of Svalbard modelled by MAR and implementation of a new online SMB downscaling method

    Science.gov (United States)

    Lang, C.; Fettweis, X.; Kittel, C.; Erpicum, M.

    2017-12-01

    We present the results of high resolution simulations of the climate and SMB of Svalbard with the regional climate model MAR forced by ERA-40 and then ERA-Interim, as well as an online downscaling method allowing us to model the SMB and its components at a resolution twice as high (2.5 vs 5 km here) using only about 25% more CPU time. Spitsbergen, the largest island in Svalbard, has a very hilly topography, and a high spatial resolution is needed to correctly represent the local topography and the complex pattern of ice distribution and precipitation. However, high resolution runs with an RCM fully coupled to an energy balance module like MAR require a huge amount of computation time. The hydrostatic equilibrium hypothesis used in MAR also becomes less valid as the spatial resolution increases. We therefore developed in MAR a method to run the snow module at a resolution twice as high as the atmospheric module. Near-surface temperature and humidity are corrected on a grid with twice the resolution, as a function of their local gradients and the elevation difference between the corresponding pixels in the two grids. We compared the results of our 5 km runs, with SMB downscaled to 2.5 km, over 1960-2016 against previous 10 km runs. On Austfonna, where the slopes are gentle, the agreement between observations and the 5 km SMB is better than with the 10 km SMB. It is again improved at 2.5 km, but the gain is relatively small, showing the value of our method compared to running a time-consuming classic 2.5 km resolution simulation. On Spitsbergen, we show that a spatial resolution of 2.5 km is still not enough to represent the complex pattern of topography, precipitation and SMB. Due to a change in the summer atmospheric circulation, from a westerly flow over Svalbard to a northwesterly flow bringing colder air, the SMB of Svalbard was stable between 2006 and 2012, while several melt records were broken in Greenland, due to conditions more

  4. Structure from motion, a low cost, very high resolution method for surveying glaciers using GoPros and opportunistic helicopter flights

    Science.gov (United States)

    Girod, L.; Nuth, C.; Schellenberger, T.

    2014-12-01

    The capability of structure from motion techniques to survey glaciers with a very high spatial and temporal resolution is a promising tool for better understanding the dynamic changes of glaciers. Modern software and computing power allow us to produce accurate data sets from low cost surveys, thus improving the observational capabilities on a wider range of glaciers and glacial processes. In particular, highly accurate glacier volume change monitoring and 3D movement computations will be possible. Taking advantage of the helicopter flight needed to survey the ice stakes on Kronenbreen, NW Svalbard, we acquired high resolution photogrammetric data over the well-studied Midre Lovénbreen in September 2013. GoPro Hero 2 cameras were attached to the landing gear of the helicopter, acquiring two images per second. A C/A code based GPS was used for registering the stereoscopic model. Camera clock calibration is obtained by fitting together the shapes of the flight given by the GPS logger and by the relative orientation of the images. A DEM and an ortho-image are generated at 30 cm resolution from the 300 images collected. The comparison with a 2005 LiDAR DEM (5 m resolution) shows an absolute error in the direct registration of about 6±3 m in 3D, which could easily be reduced to 1.5±1 m by using fine point cloud alignment algorithms on stable ground. Due to the different nature of the acquisition method, it was not possible to use tie point based co-registration. A combination of the DEM and ortho-image is shown with the point cloud in the figure below. A second photogrammetric data set will be acquired in September 2014 to survey the annual volume change and movement. These measurements will then be compared to the annual resolution glaciological stake mass balance and velocity measurements to assess the precision of the method for monitoring at an annual resolution.

  5. Numerical Hydrodynamics in Special Relativity.

    Science.gov (United States)

    Martí, José Maria; Müller, Ewald

    2003-01-01

    This review is concerned with a discussion of numerical methods for the solution of the equations of special relativistic hydrodynamics (SRHD). Particular emphasis is put on a comprehensive review of the application of high-resolution shock-capturing methods in SRHD. Results of a set of demanding test bench simulations obtained with different numerical SRHD methods are compared. Three applications (astrophysical jets, gamma-ray bursts and heavy ion collisions) of relativistic flows are discussed. An evaluation of various SRHD methods is presented, and future developments in SRHD are analyzed involving extension to general relativistic hydrodynamics and relativistic magneto-hydrodynamics. The review further provides FORTRAN programs to compute the exact solution of a 1D relativistic Riemann problem with zero and nonzero tangential velocities, and to simulate 1D relativistic flows in Cartesian Eulerian coordinates using the exact SRHD Riemann solver and PPM reconstruction. Supplementary material is available for this article at 10.12942/lrr-2003-7 and is accessible for authorized users.
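
The high-resolution shock-capturing idea reviewed above can be shown on the simplest possible case: scalar linear advection with a minmod-limited MUSCL reconstruction. This is a generic textbook scheme, not code from the review's FORTRAN programs:

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller slope when signs agree, zero at extrema."""
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

def advect_muscl(u, c, steps):
    """MUSCL update for u_t + u_x = 0 on a periodic grid, CFL number c."""
    for _ in range(steps):
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        flux = u + 0.5 * (1.0 - c) * s  # interface value from the upwind cell
        u = u - c * (flux - np.roll(flux, 1))
    return u

# Advect a square wave exactly once around the periodic domain
n = 200
u0 = np.zeros(n)
u0[40:80] = 1.0
u1 = advect_muscl(u0.copy(), 0.5, 2 * n)  # 2n steps at c = 0.5 -> one period
```

Unlike an unlimited second-order scheme, the limited reconstruction keeps the advected square wave free of spurious oscillations while smearing it far less than first-order upwinding; because the update is in conservative flux form, the integral of u is preserved exactly.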

  6. High-resolution imaging methods in array signal processing

    DEFF Research Database (Denmark)

    Xenaki, Angeliki

    High-resolution imaging methods are applied in active sonar signal processing for detection and imaging of submerged oil contamination in sea water from a deep-water oil leak. The submerged oil field is modeled as a fluid medium exhibiting spatial perturbations in the acoustic parameters from their mean ambient values, which cause weak scattering of the incident acoustic energy. A high-frequency active sonar is selected to insonify the medium and receive the backscattered waves. High-frequency acoustic methods can both overcome the optical opacity of water (unlike methods based on electromagnetic waves) and resolve the small-scale structure of the submerged oil field (unlike low-frequency acoustic methods). The study shows that high-frequency acoustic methods are suitable not only for large-scale localization of the oil contamination in the water column but also for statistical characterization of the submerged oil field through inference...

  7. Assessment of vulnerability in karst aquifers using a quantitative integrated numerical model: catchment characterization and high resolution monitoring - Application to semi-arid regions- Lebanon.

    Science.gov (United States)

    Doummar, Joanna; Aoun, Michel; Andari, Fouad

    2016-04-01

    Karst aquifers are highly heterogeneous and characterized by a duality of recharge (concentrated, fast versus diffuse, slow) and a duality of flow which directly influences groundwater flow and spring responses. Given this heterogeneity in flow and infiltration, karst aquifers do not always obey standard hydraulic laws. Therefore, the assessment of their vulnerability proves challenging. Studies have shown that the vulnerability of aquifers is highly governed by recharge to groundwater. On the other hand, specific parameters appear to play a major role in the spatial and temporal distribution of infiltration on a karst system, thus greatly influencing the discharge rates observed at a karst spring, and consequently the vulnerability of the spring. This heterogeneity can only be depicted using an integrated numerical model to quantify recharge spatially and to assess the spatial and temporal vulnerability of a catchment to contamination. In the framework of a three-year PEER NSF/USAID funded project, the vulnerability of a karst catchment in Lebanon is assessed quantitatively using a numerical approach. The aim of the project is also to refine actual evapotranspiration rates and the spatial recharge distribution in a semi-arid environment. For this purpose, a monitoring network has been in place since July 2014 on two pilot karst catchments (drained by Qachqouch Spring and Assal Spring) to collect high resolution data for use in an integrated catchment numerical model (MIKE SHE, DHI) including climate, the unsaturated zone, and the saturated zone. Catchment characterization essential for the model included geological mapping and a survey of karst features (e.g., dolines), as they contribute to fast flow. Tracer experiments were performed under different flow conditions (snowmelt and low flow) to delineate the catchment area and reveal groundwater velocities and the response to snowmelt events. An assessment of spring response after precipitation events allowed the estimation of the

  8. 3D high spectral and spatial resolution imaging of ex vivo mouse brain

    Energy Technology Data Exchange (ETDEWEB)

    Foxley, Sean, E-mail: sean.foxley@ndcn.ox.ac.uk; Karczmar, Gregory S. [Department of Radiology, University of Chicago, Chicago, Illinois 60637 (United States); Domowicz, Miriam [Department of Pediatrics, University of Chicago, Chicago, Illinois 60637 (United States); Schwartz, Nancy [Department of Pediatrics, Department of Biochemistry and Molecular Biology, University of Chicago, Chicago, Illinois 60637 (United States)

    2015-03-15

    Purpose: Widely used MRI methods show brain morphology both in vivo and ex vivo at very high resolution. Many of these methods (e.g., T{sub 2}{sup *}-weighted imaging, phase-sensitive imaging, or susceptibility-weighted imaging) are sensitive to local magnetic susceptibility gradients produced by subtle variations in tissue composition. However, the spectral resolution of commonly used methods is limited to maintain reasonable run-time combined with very high spatial resolution. Here, the authors report on data acquisition at increased spectral resolution, with 3-dimensional high spectral and spatial resolution MRI, in order to analyze subtle variations in water proton resonance frequency and lineshape that reflect local anatomy. The resulting information complements previous studies based on T{sub 2}{sup *} and resonance frequency. Methods: The proton free induction decay was sampled at high resolution and Fourier transformed to produce a high-resolution water spectrum for each image voxel in a 3D volume. Data were acquired using a multigradient echo pulse sequence (i.e., echo-planar spectroscopic imaging) with a spatial resolution of 50 × 50 × 70 μm{sup 3} and spectral resolution of 3.5 Hz. Data were analyzed in the spectral domain, and images were produced from the various Fourier components of the water resonance. This allowed precise measurement of local variations in water resonance frequency and lineshape, at the expense of significantly increased run time (16–24 h). Results: High contrast T{sub 2}{sup *}-weighted images were produced from the peak of the water resonance (peak height image), revealing a high degree of anatomical detail, specifically in the hippocampus and cerebellum. In images produced from Fourier components of the water resonance at −7.0 Hz from the peak, the contrast between deep white matter tracts and the surrounding tissue is the reverse of the contrast in water peak height images. This indicates the presence of a shoulder in
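
The core spectral step described in the Methods (Fourier transforming a sampled free induction decay into a per-voxel water spectrum, then reading off the peak) can be sketched for a single voxel. The offset frequency, decay constant, and sampling parameters below are illustrative, not the study's acquisition values:

```python
import numpy as np

n, dt = 256, 1e-3                  # samples and dwell time (s): ~3.9 Hz/bin
t = np.arange(n) * dt
offset_hz, t2star = 12.0, 0.05     # hypothetical local frequency shift, decay

# Synthetic free induction decay for one voxel: a single water resonance
fid = np.exp(2j * np.pi * offset_hz * t) * np.exp(-t / t2star)

# Fourier transform the FID to obtain the water spectrum for this voxel
spectrum = np.fft.fftshift(np.fft.fft(fid))
freqs = np.fft.fftshift(np.fft.fftfreq(n, dt))

# "Peak height image" value for this voxel: magnitude at the spectral peak
peak_bin = int(np.argmax(np.abs(spectrum)))
peak_height = np.abs(spectrum)[peak_bin]
peak_freq = freqs[peak_bin]        # recovers the ~12 Hz local offset
```

In the study this is repeated for every voxel of the 3D volume: images formed from the peak height give T2*-like contrast, while images formed from off-peak Fourier components reveal lineshape structure such as the shoulder discussed in the Results.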

  9. High-resolution coded-aperture design for compressive X-ray tomography using low resolution detectors

    Science.gov (United States)

    Mojica, Edson; Pertuz, Said; Arguello, Henry

    2017-12-01

    One of the main challenges in Computed Tomography (CT) is obtaining accurate reconstructions of the imaged object while keeping a low radiation dose in the acquisition process. In order to solve this problem, several researchers have proposed the use of compressed sensing for reducing the amount of measurements required to perform CT. This paper tackles the problem of designing high-resolution coded apertures for compressed sensing computed tomography. In contrast to previous approaches, we aim at designing apertures to be used with low-resolution detectors in order to achieve super-resolution. The proposed method iteratively improves random coded apertures using a gradient descent algorithm subject to constraints in the coherence and homogeneity of the compressive sensing matrix induced by the coded aperture. Experiments with different test sets show consistent results for different transmittances, number of shots and super-resolution factors.

  10. Berkeley High-Resolution Ball

    International Nuclear Information System (INIS)

    Diamond, R.M.

    1984-10-01

    Criteria for a high-resolution γ-ray system are discussed. Desirable properties are high resolution, a good response function, and moderate solid angle so as to achieve not only double- but triple-coincidences with good statistics. The Berkeley High-Resolution Ball involved the first use of bismuth germanate (BGO) as an anti-Compton shield for Ge detectors. The resulting compact shield permitted rather close packing of 21 detectors around a target. In addition, a small central BGO ball gives the total γ-ray energy and multiplicity, as well as the angular pattern of the γ rays. The 21-detector array is nearly complete, and the central ball has been designed, but not yet constructed. First results taken with 9 detector modules are shown for the nucleus 156Er. The complex decay scheme indicates a transition from collective rotation (prolate shape) to single-particle states (possibly oblate) near spin 30 ℏ, and has other interesting features.

  11. Numerical computation of FCT equilibria by inverse equilibrium method

    International Nuclear Information System (INIS)

    Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki

    1986-11-01

    FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
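
The shooting approach named at the end of the abstract can be illustrated on a toy boundary value problem. The snippet solves u'' = 6x with u(0) = 0, u(1) = 1 (exact solution u = x³) by RK4 integration plus a secant iteration on the unknown initial slope; the quasi-linear equilibrium equations of the paper are of course far more involved:

```python
import numpy as np

def rhs(x, u, v):
    """Right-hand side of u'' = 6x written as the system u' = v, v' = 6x."""
    return v, 6.0 * x

def integrate(slope, n=200):
    """RK4 from x = 0 to 1 with u(0) = 0, u'(0) = slope; returns u(1)."""
    h = 1.0 / n
    x, u, v = 0.0, 0.0, slope
    for _ in range(n):
        k1u, k1v = rhs(x, u, v)
        k2u, k2v = rhs(x + h/2, u + h/2*k1u, v + h/2*k1v)
        k3u, k3v = rhs(x + h/2, u + h/2*k2u, v + h/2*k2v)
        k4u, k4v = rhs(x + h, u + h*k3u, v + h*k3v)
        u += h/6 * (k1u + 2*k2u + 2*k3u + k4u)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
        x += h
    return u

def shoot(target=1.0, tol=1e-9):
    """Secant iteration on the initial slope until u(1) hits the target."""
    s0, s1 = 0.0, 1.0
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):
        if abs(f1) <= tol:
            break
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, integrate(s1) - target
    return s1

slope = shoot()  # the exact solution u = x^3 has u'(0) = 0
```

Because u(1) depends affinely on the initial slope for this linear problem, the secant iteration converges in a single step; for the nonlinear quasi-linearized systems of the paper, several iterations would be needed.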

  12. Geothermal-Related Thermo-Elastic Fracture Analysis by Numerical Manifold Method

    Directory of Open Access Journals (Sweden)

    Jun He

    2018-05-01

    Full Text Available One significant factor influencing geothermal energy exploitation is the variation of the mechanical properties of rock in high temperature environments. Since rock is typically a heterogeneous granular material, thermal fracturing frequently occurs in the rock when the ambient temperature changes, which can greatly influence geothermal energy exploitation. A numerical method based on the numerical manifold method (NMM) is developed in this study to simulate the thermo-elastic fracturing of rocklike granular materials. The Voronoi tessellation is incorporated into the pre-processor of NMM to represent the grain structure. A contact-based heat transfer model is developed to reflect heat interaction among grains. Based on the model, the transient thermal conduction algorithm for granular materials is established. To simulate the cohesion effects among grains and the fracturing process between grains, a damage-based contact fracture model is developed to improve the contact algorithm of NMM. In the developed numerical method, the heat interaction among grains as well as the heat transfer inside each solid grain are both simulated. Additionally, as damage evolution and fracturing at grain interfaces are also considered, the developed numerical method is applicable to simulating the geothermal-related thermal fracturing process.

  13. Numerical solution of the Navier-Stokes equations by discontinuous Galerkin method

    Science.gov (United States)

    Krasnov, M. M.; Kuchugov, P. A.; E Ladonkina, M.; E Lutsky, A.; Tishkin, V. F.

    2017-02-01

    Detailed unstructured grids and numerical methods of high accuracy are frequently used in the numerical simulation of gasdynamic flows in areas with complex geometry. The Galerkin method with discontinuous basis functions, or Discontinuous Galerkin Method (DGM), works well in dealing with such problems. This approach offers a number of advantages inherent to both finite-element and finite-difference approximations. Moreover, the present paper shows that DGM schemes can be viewed as an extension of the Godunov method to piecewise-polynomial functions. As is known, DGM involves significant computational complexity, and this brings up the question of ensuring the most effective use of all the computational capacity available. In order to speed up the calculations, an operator programming method has been applied while creating the computational module. This approach makes possible compact encoding of mathematical formulas and facilitates the porting of programs to parallel architectures, such as NVidia CUDA and Intel Xeon Phi. With the software package based on DGM, numerical simulations of supersonic flow past solid bodies have been carried out. The numerical results are in good agreement with the experimental ones.

  14. Methylation-Sensitive High Resolution Melting (MS-HRM).

    Science.gov (United States)

    Hussmann, Dianna; Hansen, Lise Lotte

    2018-01-01

    Methylation-Sensitive High Resolution Melting (MS-HRM) is an in-tube, PCR-based method to detect methylation levels at specific loci of interest. A unique primer design facilitates a high sensitivity of the assays, enabling detection of down to 0.1-1% methylated alleles in an unmethylated background. Primers for MS-HRM assays are designed to be complementary to the methylated allele, and a specific annealing temperature enables these primers to anneal both to the methylated and the unmethylated alleles, thereby increasing the sensitivity of the assays. Bisulfite treatment of the DNA prior to performing MS-HRM ensures a different base composition between methylated and unmethylated DNA, which is used to separate the resulting amplicons by high resolution melting. The high sensitivity of MS-HRM has proven useful for detecting cancer biomarkers in a noninvasive manner in urine from bladder cancer patients, in stool from colorectal cancer patients, and in buccal mucosa from breast cancer patients. MS-HRM is a fast method to diagnose imprinted diseases and to clinically validate results from whole-epigenome studies. The ability to detect few copies of methylated DNA makes MS-HRM a key player in the quest for establishing links between environmental exposure, epigenetic changes, and disease.

  15. Application of high resolution synchrotron micro-CT radiation in dental implant osseointegration

    DEFF Research Database (Denmark)

    Neldam, Camilla Albeck; Lauridsen, Torsten; Rack, Alexander

    2015-01-01

    The purpose of this study was to describe a refined method using high-resolution synchrotron radiation microtomography (SRmicro-CT) to evaluate osseointegration and peri-implant bone volume fraction after titanium dental implant insertion. SRmicro-CT is considered the gold standard for evaluating bone microarchitecture. Its high resolution, high contrast, and excellent signal-to-noise ratio all contribute to the highest spatial resolutions achievable today. Using SRmicro-CT at a voxel size of 5 μm in an experimental goat mandible model, the peri-implant bone volume fraction was found to quickly increase...

  16. High-resolution computed tomography findings in pulmonary Langerhans cell histiocytosis

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Rosana Souza [Universidade Federal do Rio de Janeiro (HUCFF/UFRJ), RJ (Brazil). Hospital Universitario Clementino Fraga Filho. Unit of Radiology; Capone, Domenico; Ferreira Neto, Armando Leao [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil)

    2011-07-15

    Objective: The present study was aimed at characterizing the main lung changes observed in pulmonary Langerhans cell histiocytosis by means of high-resolution computed tomography. Materials and Methods: High-resolution computed tomography findings in eight patients with proven disease diagnosed by open lung biopsy, immunohistochemistry studies and/or extrapulmonary manifestations were retrospectively evaluated. Results: Small rounded, thin-walled cystic lesions were observed in the lungs of all the patients. Nodules with predominantly peripheral distribution over the lung parenchyma were observed in 75% of the patients. The lesions were diffusely distributed, predominantly in the upper and middle lung fields in all of the cases, but involvement of the costophrenic angles was observed in 25% of the patients. Conclusion: Comparative analysis of high-resolution computed tomography and chest radiography findings demonstrated that thin-walled cysts and small nodules cannot be satisfactorily evaluated by conventional radiography. Because of its capacity to detect and characterize lung cysts and nodules, high-resolution computed tomography increases the probability of diagnosing pulmonary Langerhans cell histiocytosis. (author)

  17. Numerical methods and modelling for engineering

    CERN Document Server

    Khoury, Richard

    2016-01-01

    This textbook provides a step-by-step approach to numerical methods in engineering modelling. The authors provide a consistent treatment of the topic, from the ground up, to reinforce for students that numerical methods are a set of mathematical modelling tools which allow engineers to represent real-world systems and compute features of these systems with a predictable error rate. Each method presented addresses a specific type of problem, namely root-finding, optimization, integral, derivative, initial value problem, or boundary value problem, and each one encompasses a set of algorithms to solve the problem given some information and to a known error bound. The authors demonstrate that after developing a proper model and understanding of the engineering situation they are working on, engineers can break down a model into a set of specific mathematical problems, and then implement the appropriate numerical methods to solve these problems. Uses a “building-block” approach, starting with simpler mathemati...

  18. Proposing New Methods to Enhance the Low-Resolution Simulated GPR Responses in the Frequency and Wavelet Domains

    Directory of Open Access Journals (Sweden)

    Reza Ahmadi

    2014-12-01

    Full Text Available To date, a number of numerical methods, including the popular Finite-Difference Time Domain (FDTD) technique, have been proposed to simulate Ground-Penetrating Radar (GPR) responses. Despite having a number of advantages, the finite-difference method also has pitfalls, such as being very time consuming when simulating the most common case of media with high dielectric permittivity, which makes the forward modelling process very long even with modern high-speed computers. In the present study the well-known hyperbolic pattern response of horizontal cylinders, usually found in GPR B-Scan images, is used as a basic model to examine the possibility of reducing the forward modelling execution time. In general, the simulated GPR traces of common reflected objects are time shifted, as with the Normal Moveout (NMO) traces encountered in seismic reflection responses. This suggests applying the Fourier transform to the GPR traces and employing the time-shifting property of the transformation to interpolate new traces between adjacent traces in the frequency domain (FD). Therefore, in the present study two post-processing algorithms have been adopted to increase the speed of forward modelling while maintaining the required precision. The first approach is based on linear interpolation in the Fourier domain, allowing an increased lateral trace-to-trace interval at an appropriate sampling frequency of the signal, preventing any aliasing. In the second approach, a super-resolution algorithm based on the 2D wavelet transform is developed to increase both the vertical and horizontal resolution of the GPR B-Scan images while preserving the scale and shape of hidden hyperbola features. Comparing the outputs from both methods with the corresponding actual high-resolution forward response shows that both approaches perform satisfactorily, although the wavelet-based approach outperforms the frequency-domain approach noticeably, both in amplitude and
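
The time-shifting property underlying the first (frequency-domain) approach is easy to demonstrate: delaying a trace by tau multiplies its spectrum by exp(-2*pi*i*f*tau). The snippet below applies this to a Ricker wavelet standing in for a GPR trace; the sampling interval, dominant frequency, and delay are illustrative values:

```python
import numpy as np

n, dt = 512, 1e-10                 # samples and 0.1 ns sampling (illustrative)
t = np.arange(n) * dt
f0 = 5e8                           # assumed 500 MHz dominant frequency

def pulse(t0):
    """Ricker wavelet centred at time t0, a stand-in for a GPR trace."""
    a = (np.pi * f0 * (t - t0)) ** 2
    return (1.0 - 2.0 * a) * np.exp(-a)

# Delay a trace by tau in the frequency domain (FFT time-shift theorem)
tau = 37 * dt
freqs = np.fft.fftfreq(n, dt)
shifted = np.fft.ifft(np.fft.fft(pulse(20e-9))
                      * np.exp(-2j * np.pi * freqs * tau)).real

# The frequency-domain shift matches a directly generated delayed pulse
err = np.max(np.abs(shifted - pulse(20e-9 + tau)))
```

An interpolated trace between two recorded ones can be synthesized the same way, by applying the fractional time shift predicted by the hyperbolic moveout to a neighbouring trace's spectrum.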

  19. High-resolution observation of phase contrast at 1 MeV. Amorphous or crystalline objects

    International Nuclear Information System (INIS)

    Bourret, A.; Desseaux, J.

    1975-01-01

    Many authors have stressed the potential of high voltage to improve resolution, but owing to numerous experimental difficulties the resolution limit at 1 MeV, which lies around 1 Å for conventional lenses, has so far been unattainable. Thus phase contrast at 1 MeV has not been studied on evaporated objects. On the other hand, the fringes of crystal planes have been observed at 1 MeV. The CEN-G microscope having been considerably modified, it has been possible to observe the phase contrast of amorphous or crystalline objects [fr

  20. Improving PET spatial resolution and detectability for prostate cancer imaging

    International Nuclear Information System (INIS)

    Bal, H; Guerin, L; Casey, M E; Conti, M; Eriksson, L; Michel, C; Fanti, S; Pettinato, C; Adler, S; Choyke, P

    2014-01-01

    Prostate cancer, one of the most common forms of cancer among men, can benefit from recent improvements in positron emission tomography (PET) technology. In particular, better spatial resolution, lower noise and higher detectability of small lesions could be greatly beneficial for early diagnosis and could provide strong support for guiding biopsy and surgery. In this article, the impact of improved PET instrumentation with superior spatial resolution and high sensitivity is discussed, together with the latest developments in PET technology: resolution recovery and time-of-flight reconstruction. Using simulated cancer lesions, inserted in clinical PET images obtained with conventional protocols, we show that visual identification of the lesions and detectability via numerical observers can already be improved using state-of-the-art PET reconstruction methods. This was achieved using both resolution recovery and time-of-flight reconstruction, and a high resolution image with 2 mm pixel size. Channelized Hotelling numerical observers showed an increase in the area under the LROC curve from 0.52 to 0.58. In addition, a relationship between the simulated input activity and the area under the LROC curve showed that the minimum detectable activity was reduced by more than 23%. (paper)
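
The numerical-observer detectability mentioned above can be sketched with the plain (non-channelized) Hotelling observer on 1D white-noise data; the lesion profile, noise model, and trial counts are invented for illustration and are unrelated to the paper's clinical images:

```python
import numpy as np

# Minimal Hotelling observer sketch for lesion detectability.
rng = np.random.default_rng(1)
npix, ntrial = 64, 2000
signal = np.zeros(npix)
signal[28:36] = 0.5                       # small "lesion" profile (assumed)
cov = np.eye(npix)                        # white noise for simplicity

w = np.linalg.solve(cov, signal)          # Hotelling template C^-1 s
t_absent = rng.standard_normal((ntrial, npix)) @ w
t_present = (rng.standard_normal((ntrial, npix)) + signal) @ w

d2 = signal @ w                           # detectability d'^2 = s^T C^-1 s
# Empirical AUC: fraction of (present, absent) pairs ranked correctly
auc_emp = (t_present[:, None] > t_absent[None, :]).mean()
```

For d'² = 2, Gaussian theory predicts AUC = Φ(d'/√2) ≈ 0.84, which the empirical estimate reproduces; a channelized Hotelling observer, as used in the paper, would first project the images onto a small set of channels before forming the template.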

  1. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects in these images display rich texture, structure, shape and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, the object-based remote sensing analysis methodology has become widely accepted and applied in high resolution remote sensing image processing. A classification method based on geo-ontology and conditional random fields is presented in this paper. The proposed method is made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by segmentation with the mean-shift algorithm, which yields boundary-preserved and spectrally homogeneous over-segmentation regions; (3) the relations between the hierarchical ground-object semantics and the over-segmentation regions are defined within a conditional random fields framework; (4) the hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, high-resolution remotely sensed image data (GeoEye) is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  2. FPscope: a field-portable high-resolution microscope using a cellphone lens.

    Science.gov (United States)

    Dong, Siyuan; Guo, Kaikai; Nanda, Pariksheet; Shiradkar, Radhika; Zheng, Guoan

    2014-10-01

    The large consumer market has made cellphone lens modules available at low cost and in high quality. In a conventional cellphone camera, the lens module demagnifies the scene onto the image plane of the camera, where the image sensor is located. In this work, we report a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in reverse. In our platform, we replace the image sensor with the sample specimen and use the cellphone lens to project a magnified image onto the detector. To surpass the diffraction limit of the lens module, we use an LED array to illuminate the sample from different incident angles and synthesize the acquired images using the Fourier ptychographic algorithm. As a demonstration, we use the reported platform to acquire high-resolution images of a resolution target and biological specimens, with a maximum synthetic numerical aperture (NA) of 0.5. We also show that the depth of focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA. The reported platform may enable healthcare access in low-resource settings. It can also be used to demonstrate the concept of computational optics for educational purposes.

  3. Analytical method by high resolution liquid chromatography for the stability study of cloratidine syrup 0.1 %

    International Nuclear Information System (INIS)

    Torres Amaro, Leonid; Garcia Penna, Caridad M; Pardo Ruiz, Zenia

    2007-01-01

    A high resolution liquid chromatography method was validated to study the stability of cloratidine syrup 0.1 %. The calibration curve in the range from 13.6 to 3.36 μg/mL was linear, with a correlation coefficient of 0.99975. The statistical tests on the intercept and slope were not significant. The recovery obtained was 100.2 % in the concentration range studied, and the Cochran and Student (t) test results were not significant. The coefficient of variation in the repeatability study was 0.41 % for the 10 replications assayed, whereas in the reproducibility study the Fisher and Student tests were not significant. The method proved to be specific, linear, precise, and accurate. (Author)

  4. Development of realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight, and application of models to radio-frequency electromagnetic-field dosimetry

    International Nuclear Information System (INIS)

    Nagaoka, Tomoaki; Watanabe, Soichi; Sakurai, Kiyoko; Kunieda, Etsuo; Watanabe, Satoshi; Taki, Masao; Yamanaka, Yukio

    2004-01-01

    With advances in computer performance, the use of high-resolution voxel models of the entire human body has become more frequent in numerical dosimetry of electromagnetic waves. Using magnetic resonance imaging, we have developed realistic high-resolution whole-body voxel models of Japanese adult males and females of average height and weight. The developed models consist of cubic voxels of 2 mm on each side, segmented into 51 anatomic regions. The adult female model is the first of its kind in the world, and both are the first Asian voxel models (representing average Japanese adults) that enable numerical evaluation of electromagnetic dosimetry at high frequencies of up to 3 GHz. In this paper, we also describe the basic SAR characteristics of the developed models for the VHF/UHF bands, calculated using the finite-difference time-domain method.

  5. High-Resolution PET Detector. Final report

    International Nuclear Information System (INIS)

    Karp, Joel

    2014-01-01

    The objective of this project was to develop an understanding of the limits of performance of a high resolution PET detector using an approach based on continuous scintillation crystals rather than pixelated crystals. The overall goal was to design a high-resolution detector, which requires both high spatial resolution and high sensitivity for 511 keV gammas. Continuous scintillation detectors (Anger cameras) have been used extensively for both single-photon and PET scanners; however, these instruments were based on NaI(Tl) scintillators with relatively large, individual photomultipliers. In this project we investigated the potential of this type of detector technology to achieve higher spatial resolution through the use of improved scintillator materials and photosensors, and modification of the detector surface to optimize the light response function. We achieved an average spatial resolution of 3 mm for a 25-mm thick continuous LYSO detector using a maximum likelihood positioning algorithm and shallow slots cut into the entrance surface

  6. Method for local temperature measurement in a nanoreactor for in situ high-resolution electron microscopy.

    Science.gov (United States)

    Vendelbo, S B; Kooyman, P J; Creemer, J F; Morana, B; Mele, L; Dona, P; Nelissen, B J; Helveg, S

    2013-10-01

    In situ high-resolution transmission electron microscopy (TEM) of solids under reactive gas conditions can be facilitated by microelectromechanical system devices called nanoreactors. These nanoreactors are windowed cells containing nanoliter volumes of gas at ambient pressures and elevated temperatures. However, due to the high spatial confinement of the reaction environment, traditional methods for measuring process parameters, such as the local temperature, are difficult to apply. To address this issue, we devise an electron energy loss spectroscopy (EELS) method that probes the local temperature of the reaction volume under inspection by the electron beam. The local gas density, as measured using quantitative EELS, is combined with the inherent relation between gas density and temperature, as described by the ideal gas law, to obtain the local temperature. Using this method we determined the temperature gradient in a nanoreactor in situ, while the average, global temperature was monitored by a traditional measurement of the electrical resistivity of the heater. The local gas temperatures had a maximum of 56 °C deviation from the global heater values under the applied conditions. The local temperatures, obtained with the proposed method, are in good agreement with predictions from an analytical model. Copyright © 2013 Elsevier B.V. All rights reserved.
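
    The core of the temperature-estimation principle described above is simply inverting the ideal gas law: the locally measured gas (number) density, together with the roughly uniform cell pressure, yields the local temperature. A minimal sketch, where the pressure and EELS-derived density values are illustrative assumptions rather than data from the paper:

    ```python
    # Ideal gas law P = n * k_B * T, inverted to get temperature from the
    # locally measured number density n and the known cell pressure P.
    K_B = 1.380649e-23  # Boltzmann constant, J/K

    def local_temperature(pressure_pa, number_density_m3):
        """Invert the ideal gas law: T = P / (n * k_B)."""
        return pressure_pa / (number_density_m3 * K_B)

    # Example: ~1 bar cell pressure; density at one spot (assumed EELS value).
    p = 1.0e5    # Pa
    n = 1.45e25  # molecules per m^3
    t = local_temperature(p, n)
    print(f"local temperature: {t:.0f} K")
    ```

    Mapping the EELS-derived density over the cell in this way gives the in situ temperature gradient, while the heater resistivity gives only a global average.
    
    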

  7. Three dimensional numerical modeling for ground penetrating radar using finite difference time domain (FDTD) method; Jikan ryoiki yugen sabunho ni yoru chika radar no sanjigen suchi modeling

    Energy Technology Data Exchange (ETDEWEB)

    Sanada, Y; Ashida, Y; Sassa, K [Kyoto University, Kyoto (Japan)

    1996-10-01

    3-D numerical modeling by the FDTD method was studied for ground penetrating radar. Radar radiates an electromagnetic wave and determines the existence and distance of objects from the reflected wave. Ground penetrating radar applies these functions to underground surveys; however, its resolution and the accuracy of its velocity analysis are problems. In particular, the propagation characteristics of electromagnetic waves in media such as heterogeneous and anisotropic soil and rock are essential. The behavior of electromagnetic waves in the ground can be precisely reproduced by 3-D numerical modeling using the FDTD method. The FDTD method makes precise time-domain analysis of the electric and magnetic fields possible by sequentially evaluating the finite-difference form of Maxwell's equations. Because of the high computational efficiency of the FDTD method, even more precise and complicated analyses can be expected with the latest advanced computers. A numerical model and a calculation example are illustrated for a surface-type electromagnetic pulse ground penetrating radar, assuming the survey of steel pipes 1 m deep. 4 refs., 3 figs., 1 tab.
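
    The leapfrog update at the heart of FDTD can be shown in one dimension: Maxwell's curl equations are discretized on a staggered grid and the E and H fields are advanced alternately in time. This is a minimal free-space sketch in normalized units; the grid size, Courant number of 0.5, and Gaussian source are illustrative assumptions (a GPR model like the one above would be 3-D with lossy, heterogeneous soil).

    ```python
    # Minimal 1-D FDTD: staggered-grid leapfrog update of Maxwell's equations.
    import numpy as np

    nz, nt = 200, 300
    ez = np.zeros(nz)  # electric field at integer grid points
    hy = np.zeros(nz)  # magnetic field, staggered half a cell ahead

    for t in range(nt):
        # Update H from the spatial difference of E (first curl equation).
        hy[:-1] += 0.5 * (ez[1:] - ez[:-1])
        # Update E from the spatial difference of H (second curl equation).
        ez[1:] += 0.5 * (hy[1:] - hy[:-1])
        # Soft Gaussian pulse injected at the grid centre.
        ez[nz // 2] += np.exp(-((t - 30) / 10.0) ** 2)

    print("peak |Ez|:", float(np.abs(ez).max()))
    ```

    The factor 0.5 is the Courant number; keeping it at or below the stability limit is what makes the sequential difference update stable over many time steps.
    
    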

  8. European Workshop on High Order Nonlinear Numerical Schemes for Evolutionary PDEs

    CERN Document Server

    Beaugendre, Héloïse; Congedo, Pietro; Dobrzynski, Cécile; Perrier, Vincent; Ricchiuto, Mario

    2014-01-01

    This book collects papers presented during the European Workshop on High Order Nonlinear Numerical Methods for Evolutionary PDEs (HONOM 2013), held at INRIA Bordeaux Sud-Ouest, Talence, France, in March 2013. The central topic is high order methods for compressible fluid dynamics. In the workshop, and in these proceedings, greater emphasis is placed on the numerical than on the theoretical aspects of this scientific field. The range of topics is broad, extending through algorithm design, accuracy, large scale computing, complex geometries, discontinuous Galerkin methods, finite element methods, Lagrangian hydrodynamics, finite difference methods, applications, and uncertainty quantification. These techniques find practical applications in such fields as fluid mechanics, magnetohydrodynamics, nonlinear solid mechanics, and others for which genuinely nonlinear methods are needed.

  9. Low resolution spectroscopic investigation of Am stars using Automated method

    Science.gov (United States)

    Sharma, Kaushal; Joshi, Santosh; Singh, Harinder P.

    2018-04-01

    The automated method of full spectrum fitting gives reliable estimates of the stellar atmospheric parameters (Teff, log g, and [Fe/H]) for late A, F, G, and early K type stars. Recently, the technique was further improved in the cooler regime and its validity range was extended to a spectral type of M6-M7 (Teff ≈ 2900 K). The present study aims to explore the application of this method to the low-resolution spectra of Am stars, a class of chemically peculiar stars, to examine its robustness for these objects. We use ULySS with the Medium-resolution INT Library of Empirical Spectra (MILES) V2 spectral interpolator for parameter determination. The determined Teff and log g values are found to be in good agreement with those obtained from high-resolution spectroscopy.

  10. Yeast expression proteomics by high-resolution mass spectrometry

    DEFF Research Database (Denmark)

    Walther, Tobias C; Olsen, Jesper Velgaard; Mann, Matthias

    2010-01-01

    …-translational controls contribute substantially to the regulation of protein abundance, for example in the heat shock stress response. The development of new sample preparation methods, high-resolution mass spectrometry, and novel bioinformatic tools closes this gap and allows global quantitation of the yeast proteome under different…

  11. Development of an improved high resolution mass spectrometry based multi-residue method for veterinary drugs in various food matrices.

    Science.gov (United States)

    Kaufmann, A; Butcher, P; Maden, K; Walker, S; Widmer, M

    2011-08-26

    Multi-residue methods for veterinary drugs or pesticides in food are increasingly often based on ultra performance liquid chromatography (UPLC) coupled to high resolution mass spectrometry (HRMS). Previously available time-of-flight (TOF) technologies, with resolutions up to 15,000 full width at half maximum (FWHM), were not sufficiently selective for monitoring low residue concentrations in difficult matrices (e.g. hormones in tissue or antibiotics in honey). The approach proposed in this paper is based on a single-stage Orbitrap mass spectrometer operated at 50,000 FWHM. Extracts (liver and kidney) produced according to a validated multi-residue method (based on time-of-flight detection) could not be analyzed by Orbitrap because of extensive signal suppression. This required the improvement of established extraction and clean-up procedures. The introduced, more extensive deproteinization steps and dedicated instrumental settings successfully eliminated these detrimental suppression effects. The reported method, covering more than 100 different veterinary drugs, was validated according to EU Commission Decision 2002/657/EEC. Validated matrices include muscle, kidney, liver, fish and honey. Significantly better performance parameters (e.g. linearity, reproducibility and detection limits) were obtained when comparing the new method with the older, TOF-based method. These improvements are attributed to the higher resolution (50,000 versus 12,000 FWHM) and the superior mass stability of the Orbitrap over the previously utilized TOF instrument. Copyright © 2010 Elsevier B.V. All rights reserved.
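
    A quick numeric illustration of what the quoted resolving powers mean in practice: with the FWHM definition of resolution R, a peak at mass-to-charge m/z has width Δm = m / R, so higher R means narrower peaks and better separation from isobaric matrix ions. The example m/z value is an illustrative assumption.

    ```python
    # Peak width at a given resolving power: Δm = m / R (FWHM definition).
    def peak_fwhm(mz, resolving_power):
        """Peak width (FWHM, in m/z units) at a given resolving power."""
        return mz / resolving_power

    for r in (12_000, 50_000):  # TOF vs Orbitrap settings cited above
        print(f"R={r}: FWHM at m/z 400 = {peak_fwhm(400.0, r):.4f}")
    ```

    At m/z 400 the Orbitrap setting narrows the peak from roughly 0.033 to 0.008 m/z units, which is the selectivity gain the abstract attributes to the higher resolution.
    
    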

  12. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    Directory of Open Access Journals (Sweden)

    Wei Feng

    2016-03-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture such rapid phenomena at high speed and high resolution. In this paper, we account for the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype based on a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. In principle, this approach can increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution using a 25 fps camera.

  13. Theoretical and applied aerodynamics and related numerical methods

    CERN Document Server

    Chattot, J J

    2015-01-01

    This book covers classical and modern aerodynamics, theories and related numerical methods, for senior and first-year graduate engineering students, including: - The classical potential (incompressible) flow theories for low speed aerodynamics of thin airfoils and high and low aspect ratio wings. - The linearized theories for compressible subsonic and supersonic aerodynamics. - The nonlinear transonic small disturbance potential flow theory, including supercritical wing sections, the extended transonic area rule with lift effect, transonic lifting line and swept or oblique wings to minimize wave drag. Unsteady flow is also briefly discussed. Numerical simulations based on relaxation mixed-finite difference methods are presented and explained. - Boundary layer theory for all Mach number regimes and viscous/inviscid interaction procedures used in practical aerodynamics calculations. There are also four chapters covering special topics, including wind turbines and propellers, airplane design, flow analogies and h...

  14. Ultra high resolution soft x-ray tomography

    International Nuclear Information System (INIS)

    Haddad, W.S.; Trebes, J.E.; Goodman, D.M.

    1995-01-01

    Ultra high resolution three dimensional images of a microscopic test object were made with soft x-rays using a scanning transmission x-ray microscope. The test object consisted of two different patterns of gold bars on silicon nitride windows that were separated by ∼5 μm. A series of nine 2-D images of the object were recorded at angles from -50 to +55 degrees with respect to the beam axis. The projections were then combined tomographically to form a 3-D image by means of an algebraic reconstruction technique (ART) algorithm. A transverse resolution of ∼1000 Angstrom was observed. Artifacts in the reconstruction limited the overall depth resolution to ∼6000 Angstrom; however, some features were clearly reconstructed with a depth resolution of ∼1000 Angstrom. A specially modified ART algorithm and a constrained conjugate gradient (CCG) code were also developed as improvements over the standard ART algorithm. Both of these methods made significant improvements in the overall depth resolution, bringing it down to ∼1200 Angstrom overall. Preliminary projection data sets were also recorded with both dry and re-hydrated human sperm cells over a similar angular range

  15. Ultra high resolution soft x-ray tomography

    International Nuclear Information System (INIS)

    Haddad, W.S.; Trebes, J.E.; Goodman, D.M.; Lee, H.R.; McNulty, I.; Zalensky, A.O.

    1995-01-01

    Ultra high resolution three dimensional images of a microscopic test object were made with soft x-rays using a scanning transmission x-ray microscope. The test object consisted of two different patterns of gold bars on silicon nitride windows that were separated by ∼5 μm. A series of nine 2-D images of the object were recorded at angles from -50 to +55 degrees with respect to the beam axis. The projections were then combined tomographically to form a 3-D image by means of an algebraic reconstruction technique (ART) algorithm. A transverse resolution of ∼1,000 angstrom was observed. Artifacts in the reconstruction limited the overall depth resolution to ∼6,000 angstrom; however, some features were clearly reconstructed with a depth resolution of ∼1,000 angstrom. A specially modified ART algorithm and a constrained conjugate gradient (CCG) code were also developed as improvements over the standard ART algorithm. Both of these methods made significant improvements in the overall depth resolution, bringing it down to ∼1,200 angstrom overall. Preliminary projection data sets were also recorded with both dry and re-hydrated human sperm cells over a similar angular range
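
    The standard ART algorithm named in the two records above is a row-action (Kaczmarz) iteration: the current image estimate is projected in turn onto each measurement constraint a_i · x = b_i. A minimal sketch on a tiny 2x2 "image" with row- and column-sum projections; the system and iteration counts are illustrative assumptions, not the paper's setup.

    ```python
    # Kaczmarz/ART iteration for a consistent linear system a @ x = b.
    import numpy as np

    def art(a, b, n_iters=50, relax=1.0):
        """Cyclically project the estimate onto each row constraint."""
        x = np.zeros(a.shape[1])
        for _ in range(n_iters):
            for ai, bi in zip(a, b):
                x += relax * (bi - ai @ x) / (ai @ ai) * ai
        return x

    # 2x2 image (flattened), "rays" summing rows and columns (4 projections).
    truth = np.array([1.0, 2.0, 3.0, 4.0])
    a = np.array([[1, 1, 0, 0],   # row sums
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],   # column sums
                  [0, 1, 0, 1]], dtype=float)
    b = a @ truth
    x = art(a, b)
    print("reconstruction:", np.round(x, 3))
    ```

    Starting from zero, Kaczmarz converges to the minimum-norm solution consistent with the projections; the relaxation parameter and constraints (e.g. non-negativity) are the usual knobs modified variants adjust.
    
    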

  16. Comparison of elastic-viscous-plastic and viscous-plastic dynamics models using a high resolution Arctic sea ice model

    Energy Technology Data Exchange (ETDEWEB)

    Hunke, E.C. [Los Alamos National Lab., NM (United States); Zhang, Y. [Naval Postgraduate School, Monterey, CA (United States)

    1997-12-31

    A nonlinear viscous-plastic (VP) rheology proposed by Hibler (1979) has been demonstrated to be the most suitable of the rheologies commonly used for modeling sea ice dynamics. However, the presence of a huge range of effective viscosities hinders numerical implementations of this model, particularly on high resolution grids or when the ice model is coupled to an ocean or atmosphere model. Hunke and Dukowicz (1997) have modified the VP model by including elastic waves as a numerical regularization in the case of zero strain rate. This modification (EVP) allows an efficient, fully explicit discretization that adapts well to parallel architectures. The authors present a comparison of EVP and VP dynamics model results from two 5-year simulations of Arctic sea ice, obtained with a high resolution sea ice model. The purpose of the comparison is to determine how differently the two dynamics models behave, and to decide whether the elastic-viscous-plastic model is preferable for high resolution climate simulations, considering its high efficiency in parallel computation. Results from the first year of this experiment (1990) are discussed in detail in Hunke and Zhang (1997).

  17. Towards numerical simulations of supersonic liquid jets using ghost fluid method

    International Nuclear Information System (INIS)

    Majidi, Sahand; Afshari, Asghar

    2015-01-01

    Highlights: • A ghost fluid method based solver is developed for numerical simulation of compressible multiphase flows. • The performance of the numerical tool is validated via several benchmark problems. • The emergence of supersonic liquid jets into a quiescent gaseous environment is simulated using the ghost fluid method for the first time. • Bow-shock formation ahead of the liquid jet is clearly observed in the obtained numerical results. • The radiation of Mach waves from the phase interface, witnessed experimentally, is captured in our numerical simulations. - Abstract: A computational tool based on the ghost fluid method (GFM) is developed to study supersonic liquid jets involving strong shocks and contact discontinuities with high density ratios. The solver utilizes a constrained reinitialization method and is capable of switching between exact and approximate Riemann solvers to increase robustness. The numerical methodology is validated through several benchmark test problems, including the one-dimensional multiphase shock tube problem, shock-bubble interaction, air cavity collapse in water, and underwater explosion. A comparison between our results and prior numerical and experimental observations indicates that the developed solver performs well on these problems. The code is then used to simulate the emergence of a supersonic liquid jet into a quiescent gaseous medium, studied here with a ghost fluid method for the very first time. The results of the simulations are in good agreement with experimental investigations. Some well-known flow characteristics, such as the propagation of pressure waves from the liquid jet interface and the dependence of the Mach cone structure on the inlet Mach number, are also reproduced numerically. The numerical simulations conducted here suggest that the ghost fluid method is an affordable and reliable scheme for studying complicated interfacial evolutions in complex multiphase systems such as supersonic liquid jets

  18. High-efficient method for spectrometric data real time processing with increased resolution of a measuring channel

    International Nuclear Information System (INIS)

    Ashkinaze, S.I.; Voronov, V.A.; Nechaev, Yu.I.

    1988-01-01

    The solution of the reduction problem as a means of increasing the resolution of a spectrometric channel, realized with a modified digit-by-digit method and a special strategy that significantly reduces processing time, is considered. The results presented confirm that the combination of a measurement channel plus a microcomputer is equivalent to using a channel with higher resolution, and that the modified digit-by-digit method permits processing of spectrometric information in real time

  19. Numerical Verification Methods for Spherical $t$-Designs

    OpenAIRE

    Chen, Xiaojun

    2009-01-01

    The construction of spherical $t$-designs with $(t+1)^2$ points on the unit sphere $S^2$ in $\mathbb{R}^3$ can be reformulated as an underdetermined system of nonlinear equations. This system is highly nonlinear and involves the evaluation of a degree $t$ polynomial in $(t+1)^4$ arguments. This paper reviews numerical verification methods, using the Brouwer fixed point theorem and the Krawczyk interval operator, for solutions of the underdetermined system of nonlinear equations...

  20. High-Resolution Near Real-Time Drought Monitoring in South Asia

    Science.gov (United States)

    Aadhar, S.; Mishra, V.

    2017-12-01

    Droughts in South Asia affect food and water security and pose challenges for millions of people. For policy-making, planning, and management of water resources at the sub-basin or administrative level, high-resolution datasets of precipitation and air temperature are required in near real time. Here we develop high-resolution (0.05 degree) bias-corrected precipitation and temperature data that can be used to monitor near real-time drought conditions over South Asia. Moreover, the dataset can be used to monitor climatic extremes (heat waves, cold waves, and dry and wet anomalies) in South Asia. A distribution mapping method was applied to correct bias in precipitation and air temperature (maximum and minimum), which performed well compared to another bias correction method based on linear scaling. The bias-corrected precipitation and temperature data were used to estimate the Standardized Precipitation Index (SPI) and the Standardized Precipitation Evapotranspiration Index (SPEI) to assess historical and current drought conditions in South Asia. We evaluated drought severity and extent against satellite-based Normalized Difference Vegetation Index (NDVI) anomalies and the satellite-driven Drought Severity Index (DSI) at 0.05 degree. We find that the bias-corrected high-resolution data can effectively capture observed drought conditions as shown by the satellite-based drought estimates. The high-resolution near real-time dataset can provide valuable information for decision-making at district and sub-basin levels.
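
    The SPI mentioned above is computed by fitting a distribution (commonly a gamma) to accumulated precipitation and mapping each value's cumulative probability to a standard-normal quantile. A minimal sketch on synthetic monthly totals; zero-rainfall handling and multi-month accumulation windows, which an operational SPI needs, are omitted, and the gamma parameters below are assumptions for illustration.

    ```python
    # SPI sketch: gamma fit to precipitation, then probability-integral
    # transform to standard-normal quantiles (negative SPI = drier than normal).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    precip = rng.gamma(shape=2.0, scale=40.0, size=360)  # synthetic monthly totals, mm

    # Fit a gamma distribution (location fixed at 0) to the record.
    a, loc, scale = stats.gamma.fit(precip, floc=0)
    # Map each month's cumulative probability to a standard-normal quantile.
    spi = stats.norm.ppf(stats.gamma.cdf(precip, a, loc=loc, scale=scale))

    print("SPI mean:", round(float(spi.mean()), 2))
    print("driest month SPI:", round(float(spi.min()), 2))
    ```

    SPEI follows the same transform but is applied to the precipitation-minus-evapotranspiration balance rather than precipitation alone.
    
    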

  1. Evaluation of different shadow detection and restoration methods and their impact on vegetation indices using UAV high-resolution imageries over vineyards

    Science.gov (United States)

    Aboutalebi, M.; Torres-Rua, A. F.; McKee, M.; Kustas, W. P.; Nieto, H.

    2017-12-01

    Shadows are an unavoidable component of high-resolution imagery. Although shadows can be a useful source of information about terrestrial features, they are a hindrance for image processing and lead to misclassification errors and increased uncertainty in defining surface reflectance properties. In precision agriculture activities, shadows may affect the performance of vegetation indices at pixel and plant scales. Thus, it becomes necessary to evaluate existing shadow detection and restoration methods, especially for applications that make direct use of pixel information to estimate vegetation biomass, leaf area index (LAI), plant water use and stress, and chlorophyll content, to name a few. In this study, four high-resolution image sets captured by the Utah State University AggieAir Unmanned Aerial Vehicle (UAV) system, flown in 2014, 2015, and 2016 over a commercial vineyard located in California for the USDA-Agricultural Research Service Grape Remote sensing Atmospheric Profile and Evapotranspiration Experiment (GRAPEX) Program, are used for shadow detection and restoration. Four different methods for shadow detection are compared: (1) unsupervised classification, (2) supervised classification, (3) an index-based method, and (4) a physically-based method. Also, two different shadow restoration methods are evaluated: (1) linear correlation correction, and (2) gamma correction. The models' performance is evaluated on two vegetation indices, the normalized difference vegetation index (NDVI) and LAI, for both sunlit and shadowed pixels. Histograms and analysis of variance (ANOVA) are used as performance indicators. Results indicated that the supervised classification and the index-based method perform better than the other methods. In addition, there is a statistical difference between the average NDVI and LAI of sunlit and shadowed pixels. Among the shadow restoration methods, gamma correction visually works better than linear correlation correction
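
    The gamma-correction restoration evaluated above amounts to brightening pixels flagged as shadow with a power-law transform of normalized reflectance. A minimal sketch; the shadow mask, gamma value, and synthetic band are illustrative assumptions, whereas the study derives them from the UAV imagery itself.

    ```python
    # Gamma-correction shadow restoration: out = in ** (1/gamma) on a [0, 1]
    # band, applied only where the shadow mask is True.
    import numpy as np

    def restore_shadows(band, shadow_mask, gamma=2.2):
        """Apply gamma correction to the shadowed pixels of a [0, 1] band."""
        out = band.copy()
        out[shadow_mask] = out[shadow_mask] ** (1.0 / gamma)
        return out

    band = np.array([[0.04, 0.40],
                     [0.09, 0.55]])   # normalized reflectance (assumed values)
    mask = np.array([[True, False],
                     [True, False]])  # detected shadow pixels
    restored = restore_shadows(band, mask)
    print(np.round(restored, 3))
    ```

    Because the power law lifts dark values much more than bright ones, shadowed pixels move toward the sunlit distribution while unmasked pixels are untouched, which is what the NDVI/LAI comparison above evaluates.
    
    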

  2. Employing Tropospheric Numerical Weather Prediction Model for High-Precision GNSS Positioning

    Science.gov (United States)

    Alves, Daniele; Gouveia, Tayna; Abreu, Pedro; Magário, Jackes

    2014-05-01

    In the past few years the need for high accuracy positioning has been increasing, and spatial technologies have been widely used to meet it. GNSS (Global Navigation Satellite Systems) have revolutionized geodetic positioning activities. Among the existing methods one can emphasize Precise Point Positioning (PPP) and network-based positioning. To obtain high accuracy with these methods, especially in real time, it is indispensable to model the atmosphere (ionosphere and troposphere) appropriately. For the troposphere there are empirical models (for example Saastamoinen and Hopfield), but when highly accurate results (errors of a few centimeters) are desired, these models may not be appropriate for the Brazilian reality. To overcome this limitation, NWP (Numerical Weather Prediction) models can be used. In Brazil, CPTEC/INPE (Center for Weather Prediction and Climate Studies / Brazilian Institute for Spatial Researches) provides a regional NWP model, currently used to produce Zenithal Tropospheric Delay (ZTD) predictions (http://satelite.cptec.inpe.br/zenital/). The current version, called the eta15km model, has a spatial resolution of 15 km and a temporal resolution of 3 hours. In this paper the main goal is to carry out experiments and analyses concerning the use of the tropospheric NWP model (eta15km model) in PPP and network-based positioning. Concerning PPP, we used data from dozens of stations over the Brazilian territory, including the Amazon forest. The results obtained with the NWP model were compared with the Hopfield one. The NWP model presented the best results in all experiments. For network-based positioning we used data from the GNSS/SP Network in São Paulo State, Brazil. This network presents the best configuration in the country for this kind of positioning. Currently the network is composed of twenty stations (http://www.fct.unesp.br/#!/pesquisa/grupos-de-estudo-e-pesquisa/gege//gnss-sp-network2789/). The
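
    For context on the empirical baseline the NWP-derived delays are compared against, the hydrostatic part of the Saastamoinen model has a simple closed form. This is a sketch of one common formulation (coefficients vary slightly across references); the pressure, latitude, and height inputs are illustrative assumptions.

    ```python
    # Saastamoinen-type zenith hydrostatic delay (ZHD) in metres,
    # from surface pressure (hPa), latitude (deg), and height (km).
    import math

    def saastamoinen_zhd(pressure_hpa, lat_deg, height_km):
        """ZHD = 0.0022768 * P / (1 - 0.00266 cos(2*lat) - 0.00028 * h)."""
        f = 1.0 - 0.00266 * math.cos(2.0 * math.radians(lat_deg)) \
                - 0.00028 * height_km
        return 0.0022768 * pressure_hpa / f

    # Standard sea-level pressure at an assumed mid-latitude site.
    zhd = saastamoinen_zhd(1013.25, -23.0, 0.43)
    print(f"ZHD = {zhd:.3f} m")
    ```

    Typical zenith hydrostatic delays are around 2.3 m at sea level; the few-centimetre wet component is what empirical models capture poorly and where NWP-derived ZTD can help.
    
    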

  3. Enhanced spectral resolution by high-dimensional NMR using the filter diagonalization method and “hidden” dimensions

    Science.gov (United States)

    Meng, Xi; Nguyen, Bao D.; Ridge, Clark; Shaka, A. J.

    2009-01-01

    High-dimensional (HD) NMR spectra have poorer digital resolution than low-dimensional (LD) spectra, for a fixed amount of experiment time. This has led to “reduced-dimensionality” strategies, in which several LD projections of the HD NMR spectrum are acquired, each with higher digital resolution; an approximate HD spectrum is then inferred by some means. We propose a strategy that moves in the opposite direction, by adding more time dimensions to increase the information content of the data set, even if only a very sparse time grid is used in each dimension. The full HD time-domain data can be analyzed by the Filter Diagonalization Method (FDM), yielding very narrow resonances along all of the frequency axes, even those with sparse sampling. Integrating over the added dimensions of HD FDM NMR spectra reconstitutes LD spectra with enhanced resolution, often more quickly than direct acquisition of the LD spectrum with a larger number of grid points in each of the fewer dimensions. If the extra dimensions do not appear in the final spectrum, and are used solely to boost information content, we propose the moniker hidden-dimension NMR. This work shows that HD peaks have unmistakable frequency signatures that can be detected as single HD objects by an appropriate algorithm, even though their patterns would be tricky for a human operator to visualize or recognize, and even if digital resolution in an HD FT spectrum is very coarse compared with natural line widths. PMID:18926747

  4. Numerical Methods for Radiation Magnetohydrodynamics in Astrophysics

    Energy Technology Data Exchange (ETDEWEB)

    Klein, R I; Stone, J M

    2007-11-20

    We describe numerical methods for solving the equations of radiation magnetohydrodynamics (MHD) for astrophysical fluid flow. Such methods are essential for the investigation of the time-dependent and multidimensional dynamics of a variety of astrophysical systems, although our particular interest is motivated by problems in star formation. Over the past few years, the authors have been members of two parallel code development efforts, and this review reflects that organization. In particular, we discuss numerical methods for MHD as implemented in the Athena code, and numerical methods for radiation hydrodynamics as implemented in the Orion code. We discuss the challenges introduced by the use of adaptive mesh refinement in both codes, as well as the most promising directions for future developments.

  5. Numerical Methods for Radiation Magnetohydrodynamics in Astrophysics

    International Nuclear Information System (INIS)

    Klein, R I; Stone, J M

    2007-01-01

    We describe numerical methods for solving the equations of radiation magnetohydrodynamics (MHD) for astrophysical fluid flow. Such methods are essential for the investigation of the time-dependent and multidimensional dynamics of a variety of astrophysical systems, although our particular interest is motivated by problems in star formation. Over the past few years, the authors have been members of two parallel code development efforts, and this review reflects that organization. In particular, we discuss numerical methods for MHD as implemented in the Athena code, and numerical methods for radiation hydrodynamics as implemented in the Orion code. We discuss the challenges introduced by the use of adaptive mesh refinement in both codes, as well as the most promising directions for future developments

  6. High resolution ultrasonic densitometer

    International Nuclear Information System (INIS)

    Dress, W.B.

    1983-01-01

    The velocity of torsional stress pulses in an ultrasonic waveguide of non-circular cross section is affected by the temperature and density of the surrounding medium. Measurements of the transit times of acoustic echoes from the ends of a sensor section are interpreted as the level, density, and temperature of the fluid environment surrounding that section. This paper examines methods of making these measurements to obtain high-resolution, temperature-corrected absolute and relative density and level determinations of the fluid. Possible applications include on-line process monitoring, a hand-held density probe for battery charge-state indication, and precise inventory control for such diverse fluids as uranium salt solutions in accountability storage and gasoline in service station storage tanks.

  7. SAGA GIS based processing of spatial high resolution temperature data

    International Nuclear Information System (INIS)

    Gerlitz, Lars; Bechtel, Benjamin; Kawohl, Tobias; Boehner, Juergen; Zaksek, Klemen

    2013-01-01

    Many climate change impact studies require surface and near-surface temperature data with high spatial and temporal resolution. The resolution of state-of-the-art climate models and remote sensing data is often far too coarse to represent the meso- and microscale distinctions of temperatures. This is particularly the case for regions with great topoclimatic variability, such as mountainous or urban areas. Statistical downscaling techniques are promising methods for refining gridded temperature data of limited spatial resolution, particularly because of their low computational demands. This paper presents two downscaling approaches - one for climate model output and one for remote sensing data. Both are methodically based on the FOSS-GIS platform SAGA. (orig.)
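
The simplest form of such statistical downscaling is an elevation regression, which is cheap enough to evaluate on very fine grids. A hedged sketch (synthetic data, not the SAGA implementation): fit a lapse-rate relation T = a + b·z on coarse-grid samples, then apply the fitted relation on a fine-resolution DEM.

```python
import numpy as np

# Synthetic coarse-grid samples: temperature falls with elevation at a lapse
# rate of -6.5 K/km plus noise (all values invented for illustration)
rng = np.random.default_rng(0)
z_coarse = rng.uniform(0.0, 3000.0, 100)                      # elevation (m)
t_coarse = 15.0 - 0.0065 * z_coarse + rng.normal(0.0, 0.3, 100)

# Fit T = a + b*z on the coarse data ...
b, a = np.polyfit(z_coarse, t_coarse, 1)

# ... and evaluate the fitted relation on a fine-resolution DEM
z_fine = np.linspace(0.0, 3000.0, 10001)
t_fine = a + b * z_fine
```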

  8. Developing Teaching Material Software Assisted for Numerical Methods

    Science.gov (United States)

    Handayani, A. D.; Herman, T.; Fatimah, S.

    2017-09-01

    The NCTM vision highlights two priorities for school mathematics: knowing the mathematics of the 21st century, and continuing to improve mathematics education to answer the challenges of a changing world. One of the competencies associated with these challenges is the use of aids and tools (including IT), such as knowing the various tools available for mathematical activity. One of the significant challenges in mathematics learning is how to teach students abstract concepts. Here, technology in the form of mathematics learning software can be used more widely to ground abstract concepts in mathematics. In mathematics learning, such software can make high-level mathematical activity easier for students to accept. Technology can strengthen student learning by delivering numerical, graphic, and symbolic content without spending time on complex computations performed manually. The purpose of this research is to design and develop software-assisted teaching materials for numerical methods. Development starts with the defining step; the learning material is then designed on the basis of information obtained from the preliminary analysis of learners, materials, and supporting tasks; the final step is development itself. The resulting software-assisted teaching materials for numerical methods are valid in content, and the validator assessment rates them as good and usable with little revision.

  9. Development of high-energy resolution inverse photoemission technique

    International Nuclear Information System (INIS)

    Asakura, D.; Fujii, Y.; Mizokawa, T.

    2005-01-01

    We developed a new inverse photoemission (IPES) machine based on a new idea to improve the energy resolution: off-plane Eagle mounting of the optical system in combination with dispersion matching between incoming electron and outgoing photon. In order to achieve dispersion matching, we have employed a parallel plate electron source and have investigated whether the electron beam is obtained as expected. In this paper, we present the principle and design of the new IPES method and report the current status of the high-energy resolution IPES machine

  10. Numerical differentiation methods for the logarithmic derivative technique used in dielectric spectroscopy

    Directory of Open Access Journals (Sweden)

    Henrik Haspel

    2010-06-01

    In dielectric relaxation spectroscopy, the conduction contribution often hampers the evaluation of dielectric spectra, especially in the low-frequency regime. To overcome this, the logarithmic derivative technique can be used, which requires calculating the logarithmic derivative of the real part of the complex permittivity function. Since broadband dielectric measurement provides a discrete permittivity function, numerical differentiation has to be used. The applicability of the Savitzky-Golay convolution method to this derivative analysis is examined, and a detailed investigation of the influential parameters (frequency, spectrum resolution, peak shape) is presented on synthetic dielectric data.
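
The calculation can be sketched as follows (a minimal example, assuming a uniformly log-spaced frequency grid; the Debye parameters are invented). With its `delta` argument set to the ln ω grid spacing, the Savitzky-Golay filter returns dε′/d ln ω directly, and the conduction-free loss estimate of the logarithmic-derivative technique is ε″_der = −(π/2)·dε′/d ln ω.

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic Debye relaxation (tau = 1 s, delta_eps = 1, eps_inf = 1),
# sampled uniformly in log10(angular frequency) as in broadband measurements
tau = 1.0
logw = np.linspace(-2.0, 2.0, 201)                 # log10(omega)
w = 10.0 ** logw
eps_real = 1.0 + 1.0 / (1.0 + (w * tau) ** 2)

# Savitzky-Golay differentiation: with delta set to the ln(omega) grid spacing,
# the filter returns d(eps')/d(ln omega) directly (window 11, cubic fit)
dln = np.log(10.0) * (logw[1] - logw[0])
deriv = savgol_filter(eps_real, window_length=11, polyorder=3, deriv=1, delta=dln)

# Conduction-free loss estimate of the logarithmic-derivative technique
eps_loss_der = -(np.pi / 2.0) * deriv
```

For a Debye process this estimate peaks at ωτ = 1 with height π/4, which makes the sketch easy to check against the analytic result.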

  11. Potential of high resolution protein mapping as a method of monitoring the human immune system

    International Nuclear Information System (INIS)

    Anderson, N.L.; Anderson, N.G.

    1980-01-01

    Immunology traditionally deals with complex cellular systems and heterogeneous mixtures of effector molecules (primarily antibodies). Some sense has emerged from this chaos through the use of functional assays. Such an approach, however, naturally leaves a great deal undiscovered, since the assays are simple and the assayed objects are complex. In this chapter some experimental approaches to immunological problems are described using high-resolution two-dimensional electrophoresis, a method that can resolve thousands of proteins and can thus begin to treat immunological entities at their appropriate level of complexity. In addition, the possible application of this work to the problem of monitoring events in the individual human immune system is discussed

  12. Assessment of engineered surfaces roughness by high-resolution 3D SEM photogrammetry

    Energy Technology Data Exchange (ETDEWEB)

    Gontard, L.C., E-mail: lionelcg@gmail.com [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Universidad de Cádiz, Puerto Real 11510 (Spain); López-Castro, J.D.; González-Rovira, L. [Departamento de Ciencia de los Materiales e Ingeniería Metalúrgica y Química Inorgánica, Escuela Superior de Ingeniería, Laboratorio de Corrosión, Universidad de Cádiz, Puerto Real 11519 (Spain); Vázquez-Martínez, J.M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); Varela-Feria, F.M. [Servicio de Microscopía Centro de Investigación, Tecnología e Innovación (CITIUS), Universidad de Sevilla, Av. Reina Mercedes 4b, 41012 Sevilla (Spain); Marcos, M. [Departamento de Ingeniería Mecánica y Diseño Industrial, Escuela Superior de Ingeniería, Universidad de Cádiz, Puerto Real 11519 (Spain); and others

    2017-06-15

    Highlights: • We describe a method to acquire a high-angle tilt series of SEM images that is symmetrical with respect to the zero tilt of the sample stage; the method can be applied in any SEM microscope. • Using the method, high-resolution 3D SEM photogrammetry can be applied to planar surfaces. • 3D models of three surfaces patterned with grooves are reconstructed with high resolution using multi-view freeware photogrammetry software, as described in L.C. Gontard et al., Ultramicroscopy, 2016. • From the 3D models, roughness parameters are measured. • 3D SEM high-resolution photogrammetry is compared with two conventional methods used for roughness characterization: stereophotogrammetry and contact profilometry. • It provides three-dimensional information with high resolution that is out of reach for any other metrological technique. - Abstract: We describe a methodology to obtain three-dimensional models of engineered surfaces using scanning electron microscopy and multi-view photogrammetry (3DSEM). For the reconstruction of the 3D models of the surfaces we used freeware available in the cloud. The method was applied to study the surface roughness of metallic samples patterned with parallel grooves by means of a laser. The results are compared with measurements obtained using stylus profilometry (PR) and SEM stereo-photogrammetry (SP). The application of 3DSEM is more time demanding than PR or SP, but it provides a more accurate representation of the surfaces. The results obtained with the three techniques are compared by investigating the influence of the sampling step on the roughness parameters.
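
Once a 3D height map has been reconstructed, areal roughness parameters follow directly from the height deviations. A minimal sketch (ISO 25178-style Sa, Sq, and Sz evaluated on an invented sinusoidal groove profile; not the authors' pipeline):

```python
import numpy as np

def roughness(z):
    """Areal roughness parameters from a 2D height map z (heights in metres)."""
    dev = z - z.mean()
    sa = np.abs(dev).mean()            # Sa: arithmetical mean height
    sq = np.sqrt((dev ** 2).mean())    # Sq: root-mean-square height
    sz = z.max() - z.min()             # Sz: maximum height range
    return sa, sq, sz

# Hypothetical grooved surface: sinusoidal grooves, 2 um amplitude, 20 um pitch
x = np.linspace(0.0, 200e-6, 400)                  # 200 um field of view
profile = 2e-6 * np.sin(2 * np.pi * x / 20e-6)
z2d = np.tile(profile, (400, 1))
sa, sq, sz = roughness(z2d)
```

For a pure sinusoid of amplitude A, the expected values are Sa = 2A/π, Sq = A/√2, and Sz = 2A, which provides a quick sanity check on any implementation.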

  13. High resolution sequence stratigraphy in China

    International Nuclear Information System (INIS)

    Zhang Shangfeng; Zhang Changmin; Yin Yanshi; Yin Taiju

    2008-01-01

    Since high resolution sequence stratigraphy was introduced into China by DENG Hong-wen in 1995, it has passed through two development stages in China: a beginning stage of theoretical research, followed by a stage of combined theoretical development and application; it is now entering a stage of theoretical maturity and wide application. Practice has proved that high resolution sequence stratigraphy plays an increasingly important role in the exploration and development of oil and gas in Chinese continental oil-bearing basins, and the research field has spread to the exploration of coal, uranium and other stratified deposits. However, the theory of high resolution sequence stratigraphy still has some shortcomings and should be improved in many respects. The authors point out that high resolution sequence stratigraphy should be characterized quantitatively and modeled with computer techniques. (authors)

  14. Development of AMS high resolution injector system

    International Nuclear Information System (INIS)

    Bao Yiwen; Guan Xialing; Hu Yueming

    2008-01-01

    The Beijing HI-13 tandem accelerator AMS high resolution injector system was developed. The high-resolution, energy-achromatic system consists of an electrostatic analyzer and a magnetic analyzer; its mass resolution reaches 600 and its transmission is better than 80%. (authors)

  15. Improvements in the energy resolution and high-count-rate performance of bismuth germanate

    International Nuclear Information System (INIS)

    Koehler, P.E.; Wender, S.A.; Kapustinsky, J.S.

    1985-01-01

    Several methods for improving the energy resolution of bismuth germanate (BGO) have been investigated. It is shown that some of these methods resulted in a substantial improvement in the energy resolution. In addition, a method to improve the performance of BGO at high counting rates has been systematically studied. The results of this study are presented and discussed

  16. Resolution enhancement of low quality videos using a high-resolution frame

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.J.; Schutte, K.

    2006-01-01

    This paper proposes an example-based Super-Resolution (SR) algorithm of compressed videos in the Discrete Cosine Transform (DCT) domain. Input to the system is a Low-Resolution (LR) compressed video together with a High-Resolution (HR) still image of similar content. Using a training set of

  17. DSM GENERATION FROM HIGH RESOLUTION COSMO-SKYMED IMAGERY WITH RADARGRAMMETRIC MODEL

    OpenAIRE

    P. Capaldo; M. Crespi; F. Fratarcangeli; A. Nascetti; F. Pieralice

    2012-01-01

    The availability of new high resolution radar spaceborne sensors offers new interesting potentialities for geomatics applications: spatial and temporal change detection, feature extraction, and generation of Digital Surface Models (DSMs). As regards DSM generation from new high resolution data (such as SpotLight imagery), the development and accuracy assessment of methods based on the radargrammetric approach are topics of great interest and relevance. The aim of this investigation is the DSM generat...

  18. High resolution, high speed ultrahigh vacuum microscopy

    International Nuclear Information System (INIS)

    Poppa, Helmut

    2004-01-01

    The history and future of transmission electron microscopy (TEM) are discussed as they refer to the eventual development of instruments and techniques applicable to the real time in situ investigation of surface processes with high resolution. To reach this objective, it was necessary to transform conventional high resolution instruments so that an ultrahigh vacuum (UHV) environment at the sample site was created, that access to the sample by various in situ sample modification procedures was provided, and that in situ sample exchanges with other integrated surface analytical systems became possible. Furthermore, high resolution image acquisition systems had to be developed to take advantage of the high speed imaging capabilities of projection imaging microscopes. These changes to conventional electron microscopy and its uses were slowly realized in a few international laboratories over a period of almost 40 years by a relatively small number of researchers crucially interested in advancing the state of the art of electron microscopy and its applications to diverse areas of interest, often concentrating on the nucleation, growth, and properties of thin films on well defined material surfaces. A part of this review is dedicated to the recognition of the major contributions to surface and thin film science by these pioneers. Finally, some of the important current developments in aberration corrected electron optics and eventual adaptations to in situ UHV microscopy are discussed. As a result of all the path-breaking developments that have led to today's highly sophisticated UHV-TEM systems, integrated fundamental studies are now possible that combine many traditional surface science approaches. Combined investigations to date have involved in situ and ex situ surface microscopies such as scanning tunneling microscopy/atomic force microscopy, scanning Auger microscopy, and photoemission electron microscopy, and area-integrating techniques such as x-ray photoelectron

  19. New numerical method for solving the solute transport equation

    International Nuclear Information System (INIS)

    Ross, B.; Koplik, C.M.

    1978-01-01

    The solute transport equation can be solved numerically by approximating the water flow field by a network of stream tubes and using a Green's function solution within each stream tube. Compared to previous methods, this approach permits greater computational efficiency and easier representation of small discontinuities, and the results are easier to interpret physically. The method has been used to study hypothetical sites for disposal of high-level radioactive waste
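
The building block of such a scheme is the 1D advection-dispersion Green's function within a single stream tube. A hedged sketch (uniform velocity and dispersion coefficient with invented values; the actual method superimposes such kernels over a network of stream tubes):

```python
import numpy as np

def pulse_concentration(x, t, v, D, mass=1.0):
    """1D advection-dispersion Green's function: concentration at (x, t) due to
    an instantaneous pulse of unit mass released at x = 0, t = 0 and carried
    along one stream tube with velocity v and dispersion coefficient D."""
    return (mass / np.sqrt(4.0 * np.pi * D * t)
            * np.exp(-(x - v * t) ** 2 / (4.0 * D * t)))

# Invented values: v = 2 m/yr, D = 1 m^2/yr, profile after 50 years
x = np.linspace(0.0, 200.0, 4001)
c = pulse_concentration(x, t=50.0, v=2.0, D=1.0)
```

The kernel conserves mass and peaks at x = v·t, which is what makes a Green's-function superposition both efficient and easy to interpret physically.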

  20. A rapid numerical method for solving Serre-Green-Naghdi equations describing long free surface gravity waves

    Science.gov (United States)

    Favrie, N.; Gavrilyuk, S.

    2017-07-01

    A new numerical method for solving the Serre-Green-Naghdi (SGN) equations describing dispersive waves on shallow water is proposed. From the mathematical point of view, the SGN equations are the Euler-Lagrange equations for a ‘master’ lagrangian submitted to a differential constraint which is the mass conservation law. One major numerical challenge in solving the SGN equations is the resolution of an elliptic problem at each time instant. This is the most time-consuming part of the numerical method. The idea is to replace the ‘master’ lagrangian by a one-parameter family of ‘augmented’ lagrangians, depending on a greater number of variables, for which the corresponding Euler-Lagrange equations are hyperbolic. In such an approach, the ‘master’ lagrangian is recovered by the augmented lagrangian in some limit (for example, when the corresponding parameter is large). The choice of such a family of augmented lagrangians is proposed and discussed. The corresponding hyperbolic system is numerically solved by a Godunov type method. Numerical solutions are compared with exact solutions to the SGN equations. It appears that the computational time in solving the hyperbolic system is much lower than in the case where the elliptic operator is inverted. The new method is applied, in particular, to the study of ‘Favre waves’ representing non-stationary undular bores produced after reflection of the fluid flow with a free surface at an immobile wall.
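
As a flavor of the Godunov-type machinery involved, here is a first-order Godunov scheme with an HLL approximate Riemann solver for the dispersionless shallow-water (Saint-Venant) limit, applied to a dam-break problem. This is a generic sketch, not the authors' augmented-Lagrangian SGN system:

```python
import numpy as np

g = 9.81  # gravity (m/s^2)

def flux(h, hu):
    """Physical flux of the 1D shallow-water (Saint-Venant) system."""
    return np.array([hu, hu ** 2 / h + 0.5 * g * h ** 2])

def hll_flux(hl, hul, hr, hur):
    """HLL approximate Riemann solver at one cell interface."""
    ul, ur = hul / hl, hur / hr
    cl, cr = np.sqrt(g * hl), np.sqrt(g * hr)
    sl = min(ul - cl, ur - cr)
    sr = max(ul + cl, ur + cr)
    if sl >= 0.0:
        return flux(hl, hul)
    if sr <= 0.0:
        return flux(hr, hur)
    dU = np.array([hr - hl, hur - hul])
    return (sr * flux(hl, hul) - sl * flux(hr, hur) + sl * sr * dU) / (sr - sl)

def godunov_step(h, hu, dx, dt):
    """One first-order Godunov update; boundary cells are held fixed."""
    F = np.array([hll_flux(h[i], hu[i], h[i + 1], hu[i + 1])
                  for i in range(len(h) - 1)])
    h[1:-1] -= dt / dx * (F[1:, 0] - F[:-1, 0])
    hu[1:-1] -= dt / dx * (F[1:, 1] - F[:-1, 1])
    return h, hu

# Dam-break test: still water, depth 2 m on the left, 1 m on the right
N, dx, dt = 200, 0.05, 0.004
x = dx * (np.arange(N) + 0.5)
h = np.where(x < 5.0, 2.0, 1.0)
hu = np.zeros(N)
for _ in range(125):                 # advance to t = 0.5 s
    h, hu = godunov_step(h, hu, dx, dt)
```

The conservative update keeps the total mass exact (the flux differences telescope), and the solution develops the classic rarefaction/shock pair with an intermediate plateau between the initial depths.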

  1. Drainage network extraction from a high-resolution DEM using parallel programming in the .NET Framework

    Science.gov (United States)

    Du, Chao; Ye, Aizhong; Gan, Yanjun; You, Jinjun; Duan, Qinyun; Ma, Feng; Hou, Jingwen

    2017-12-01

    High-resolution Digital Elevation Models (DEMs) can be used to extract high-accuracy prerequisite drainage networks. A higher resolution represents a larger number of grids. With an increase in the number of grids, the flow direction determination will require substantial computer resources and computing time. Parallel computing is a feasible method with which to resolve this problem. In this paper, we proposed a parallel programming method within the .NET Framework with a C# Compiler in a Windows environment. The basin is divided into sub-basins, and subsequently the different sub-basins operate on multiple threads concurrently to calculate flow directions. The method was applied to calculate the flow direction of the Yellow River basin from 3 arc-second resolution SRTM DEM. Drainage networks were extracted and compared with HydroSHEDS river network to assess their accuracy. The results demonstrate that this method can calculate the flow direction from high-resolution DEMs efficiently and extract high-precision continuous drainage networks.
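
The paper's parallelization is written in C# on the .NET Framework; the same idea can be sketched in Python, with thread-parallel row strips standing in for the paper's sub-basins (each worker writes a disjoint slice of the shared output array). The D8 direction codes follow the common ESRI convention:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# D8 neighbour offsets (dr, dc) and direction codes (ESRI convention:
# E=1, SE=2, S=4, SW=8, W=16, NW=32, N=64, NE=128)
NEIGHBOURS = [((0, 1), 1), ((1, 1), 2), ((1, 0), 4), ((1, -1), 8),
              ((0, -1), 16), ((-1, -1), 32), ((-1, 0), 64), ((-1, 1), 128)]

def d8_strip(dem, out, r0, r1):
    """Write D8 flow directions for rows r0..r1-1 into the shared array out."""
    rows, cols = dem.shape
    for r in range(r0, r1):
        for c in range(cols):
            best_drop, best_code = 0.0, 0    # 0 = pit / no downhill neighbour
            for (dr, dc), code in NEIGHBOURS:
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    dist = 1.4142135 if dr != 0 and dc != 0 else 1.0
                    drop = (dem[r, c] - dem[rr, cc]) / dist
                    if drop > best_drop:
                        best_drop, best_code = drop, code
            out[r, c] = best_code

def d8_parallel(dem, workers=4):
    """Compute D8 flow direction, processing row strips concurrently."""
    out = np.zeros(dem.shape, dtype=np.int32)
    bounds = np.linspace(0, dem.shape[0], workers + 1, dtype=int)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(d8_strip, dem, out, r0, r1)
                   for r0, r1 in zip(bounds[:-1], bounds[1:])]
        for f in futures:
            f.result()               # propagate any worker exception
    return out

# Tilted plane sloping down to the east: every interior cell drains east
dem = np.tile(np.arange(10.0, 0.0, -1.0), (6, 1))
flow = d8_parallel(dem, workers=3)
```

The strips are independent because each worker only reads the shared DEM and writes its own rows; a production version would also need the pit-filling and tie-breaking rules used by real flow-direction tools.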

  2. New numerical method for iterative or perturbative solution of quantum field theory

    International Nuclear Information System (INIS)

    Hahn, S.C.; Guralnik, G.S.

    1999-01-01

    A new computational idea for continuum quantum field theories is outlined. This approach is based on the lattice source Galerkin methods developed by Garcia, Guralnik and Lawson. The method has many promising features, including treating fermions on a relatively symmetric footing with bosons. As a spin-off of the technology developed for 'exact' solutions, the numerical methods used have a special-case application to perturbation theory. We are in the process of developing an entirely numerical approach to evaluating graphs to high perturbative order. (authors)

  3. A high resolution solar atlas for fluorescence calculations

    Science.gov (United States)

    Hearn, M. F.; Ohlmacher, J. T.; Schleicher, D. G.

    1983-01-01

    The characteristics required of a solar atlas to be used for studying the fluorescence process in comets are examined. Several sources of low resolution data were combined to provide an absolutely calibrated spectrum from 2250 Å to 7000 Å. Three different sources of high resolution data were also used to cover this same spectral range. The low resolution data were then used to put each high resolution spectrum on an absolute scale. The three high resolution spectra were then combined in their overlap regions to produce a single, absolutely calibrated high resolution spectrum over the entire spectral range.
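
The scaling step can be sketched as follows (synthetic spectra with an invented spectral shape and an invented relative scale factor): resample the high-resolution spectrum onto the low-resolution grid, take the ratio against the absolutely calibrated data, and apply the mean ratio as the calibration factor.

```python
import numpy as np

# Absolutely calibrated low-resolution spectrum and an uncalibrated
# high-resolution spectrum of the same region (synthetic shapes; the relative
# scale factor 3.7 is the unknown to be recovered)
wl_hi = np.linspace(4000.0, 5000.0, 20001)            # wavelength (Angstrom)
true_flux = 1.0 + 0.1 * np.sin(wl_hi / 50.0)
flux_hi_rel = 3.7 * true_flux                          # relative units only

wl_lo = np.linspace(4000.0, 5000.0, 51)
flux_lo_abs = 1.0 + 0.1 * np.sin(wl_lo / 50.0)         # absolute units

# Resample the high-resolution spectrum onto the low-resolution grid and use
# the mean ratio to put the high-resolution spectrum on the absolute scale
flux_hi_resampled = np.interp(wl_lo, wl_hi, flux_hi_rel)
scale = float(np.mean(flux_lo_abs / flux_hi_resampled))
flux_hi_abs = scale * flux_hi_rel
```

A real reduction would first degrade the high-resolution spectrum to the instrumental resolution of the low-resolution data before taking the ratio; the plain resampling here is the simplest stand-in for that step.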

  4. HIGH-RESOLUTION ATMOSPHERIC ENSEMBLE MODELING AT SRNL

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R.; Werth, D.; Chiswell, S.; Etherton, B.

    2011-05-10

    The High-Resolution Mid-Atlantic Forecasting Ensemble (HME) is a federated effort to improve operational forecasts related to precipitation, convection and boundary layer evolution, and fire weather utilizing data and computing resources from a diverse group of cooperating institutions in order to create a mesoscale ensemble from independent members. Collaborating organizations involved in the project include universities, National Weather Service offices, and national laboratories, including the Savannah River National Laboratory (SRNL). The ensemble system is produced from an overlapping numerical weather prediction model domain and parameter subsets provided by each contributing member. The coordination, synthesis, and dissemination of the ensemble information are performed by the Renaissance Computing Institute (RENCI) at the University of North Carolina-Chapel Hill. This paper discusses background related to the HME effort, SRNL participation, and example results available from the RENCI website.

  5. Application of high resolution synchrotron micro-CT radiation in dental implant osseointegration.

    Science.gov (United States)

    Neldam, Camilla Albeck; Lauridsen, Torsten; Rack, Alexander; Lefolii, Tore Tranberg; Jørgensen, Niklas Rye; Feidenhans'l, Robert; Pinholt, Else Marie

    2015-06-01

    The purpose of this study was to describe a refined method using high-resolution synchrotron radiation microtomography (SRmicro-CT) to evaluate osseointegration and peri-implant bone volume fraction after titanium dental implant insertion. SRmicro-CT is considered the gold standard for evaluating bone microarchitecture. Its high resolution, high contrast, and excellent signal-to-noise ratio all contribute to the highest spatial resolutions achievable today. Using SRmicro-CT at a voxel size of 5 μm in an experimental goat mandible model, the peri-implant bone volume fraction was found to quickly increase to 50% as the radial distance from the implant surface increased, and levelled out at approximately 80% at a distance of 400 μm. This method has been successful in depicting the bone and cavities in three dimensions, thereby enabling a more precise estimate of the bone-to-implant contact fraction than previous methods. Copyright © 2015 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
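
The radial analysis described above amounts to computing the bone volume fraction (BV/TV) in concentric shells around the implant axis of a segmented volume. A hedged sketch on a synthetic binary volume (all geometry and probabilities are invented to mimic the reported ~50% to ~80% trend):

```python
import numpy as np

# Synthetic segmented micro-CT volume: 1 = bone voxel. A cylindrical implant of
# radius 50 voxels runs along the z axis through the centre; bone probability
# rises with radial distance, mimicking the reported trend
rng = np.random.default_rng(1)
n = 128
yy, xx = np.mgrid[0:n, 0:n]
r = np.sqrt((xx - n / 2) ** 2 + (yy - n / 2) ** 2)   # radial distance (voxels)
p_bone = np.clip(0.5 + 0.3 * (r - 50.0) / 30.0, 0.0, 0.8)
bone = (rng.random((n, n, n)) < p_bone[None, :, :]).astype(np.uint8)

def bvtv_by_shell(bone, r, r_implant=50.0, shell=10.0, nshell=4):
    """Bone volume fraction (BV/TV) in concentric shells around the implant."""
    fractions = []
    for k in range(nshell):
        mask = (r >= r_implant + k * shell) & (r < r_implant + (k + 1) * shell)
        fractions.append(float(bone[:, mask].mean()))
    return fractions

fractions = bvtv_by_shell(bone, r)
```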

  6. High resolution capacitance detection circuit for rotor micro-gyroscope

    Directory of Open Access Journals (Sweden)

    Ming-Yuan Ren

    2014-03-01

    Conventional methods for rotor position detection in micro-gyroscopes include common exciting electrodes (single frequency) and common sensing electrodes (frequency multiplex), but both encounter problems. We therefore present a high-resolution, low-noise pick-off circuit for micro-gyroscopes that utilizes the time-multiplex method. The detecting circuit adopts a continuous-time current sensing circuit for capacitance measurement, and a noise analysis of its charge amplifier is presented. The equivalent output noise power spectral density of the phase-sensitive demodulation is 120 nV/√Hz. Tests revealed that the whole circuit has a relative capacitance resolution of 1 × 10⁻⁸.

  7. THE INFLUENCE OF SPATIAL RESOLUTION ON NONLINEAR FORCE-FREE MODELING

    Energy Technology Data Exchange (ETDEWEB)

    DeRosa, M. L.; Schrijver, C. J. [Lockheed Martin Solar and Astrophysics Laboratory, 3251 Hanover St. B/252, Palo Alto, CA 94304 (United States); Wheatland, M. S.; Gilchrist, S. A. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, Sydney, NSW 2006 (Australia); Leka, K. D.; Barnes, G. [NorthWest Research Associates, 3380 Mitchell Ln., Boulder, CO 80301 (United States); Amari, T.; Canou, A. [CNRS, Centre de Physique Théorique de l’École Polytechnique, F-91128, Palaiseau Cedex (France); Thalmann, J. K. [Institute of Physics/IGAM, University of Graz, Universitätsplatz 5, A-8010 Graz (Austria); Valori, G. [Mullard Space Science Laboratory, University College London, Holmbury St. Mary, Dorking, Surrey, RH5 6NT (United Kingdom); Wiegelmann, T. [Max-Planck-Institut für Sonnensystemforschung, Justus-von-Liebig-Weg 3, D-37077, Göttingen (Germany); Malanushenko, A. [Department of Physics, Montana State University, Bozeman, MT 59717 (United States); Sun, X. [W. W. Hansen Experimental Physics Laboratory, Stanford University, Stanford, CA 94305 (United States); Régnier, S. [Department of Mathematics and Information Sciences, Faculty of Engineering and Environment, Northumbria University, Newcastle-Upon-Tyne, NE1 8ST (United Kingdom)

    2015-10-01

    The nonlinear force-free field (NLFFF) model is often used to describe the solar coronal magnetic field; however, a series of earlier studies revealed difficulties in the numerical solution of the model in application to photospheric boundary data. We investigate the sensitivity of the modeling to the spatial resolution of the boundary data, by applying multiple codes that numerically solve the NLFFF model to a sequence of vector magnetogram data at different resolutions, prepared from a single Hinode/Solar Optical Telescope Spectro-Polarimeter scan of NOAA Active Region 10978 on 2007 December 13. We analyze the resulting energies and relative magnetic helicities, employ a Helmholtz decomposition to characterize divergence errors, and quantify changes made by the codes to the vector magnetogram boundary data in order to be compatible with the force-free model. This study shows that NLFFF modeling results depend quantitatively on the spatial resolution of the input boundary data, and that using more highly resolved boundary data yields more self-consistent results. The free energies of the resulting solutions generally trend higher with increasing resolution, while relative magnetic helicity values vary significantly between resolutions for all methods. All methods require changing the horizontal components, and for some methods also the vertical components, of the vector magnetogram boundary field in excess of nominal uncertainties in the data. The solutions produced by the various methods are significantly different at each resolution level. We continue to recommend verifying agreement between the modeled field lines and corresponding coronal loop images before any NLFFF model is used in a scientific setting.

  8. Analysis of numerical methods

    CERN Document Server

    Isaacson, Eugene

    1994-01-01

    This excellent text for advanced undergraduates and graduate students covers norms, numerical solution of linear systems and matrix factoring, iterative solutions of nonlinear equations, eigenvalues and eigenvectors, polynomial approximation, and other topics. It offers a careful analysis and stresses techniques for developing new methods, plus many examples and problems. 1966 edition.

  9. Arc arrays: studies of high resolution techniques for multibeam bathymetric applications

    Digital Repository Service at National Institute of Oceanography (India)

    Chakraborty, B.; Schenke, H.W.

    This geometry is tested using the Bartlett method for varying arc and linear arrays of 30 elements. We also examine 'high resolution techniques' such as the Maximum Likelihood (ML) method and the Maximum Entropy (ME) methods (different orders), for 16-element...
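
The Bartlett method referred to here is the classical delay-and-sum beamformer, P(θ) = aᴴ(θ) R a(θ), scanned over look angles. A minimal sketch for a 30-element uniform linear array (half-wavelength spacing and a single simulated plane wave; the arc-array geometry of the paper is not reproduced):

```python
import numpy as np

# Bartlett (delay-and-sum) beamformer for a 30-element uniform linear array at
# half-wavelength spacing; one simulated plane wave arriving from 20 degrees
M, d = 30, 0.5                                   # elements, spacing (wavelengths)
theta0 = np.deg2rad(20.0)

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
K = 200                                          # snapshots
s = rng.normal(size=K) + 1j * rng.normal(size=K)             # source signal
X = np.outer(steering(theta0), s)                            # array snapshots
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))
R = X @ X.conj().T / K                                       # sample covariance

# Scan the Bartlett spectrum P(theta) = a(theta)^H R a(theta)
angles = np.deg2rad(np.linspace(-90.0, 90.0, 721))
P = np.array([np.real(steering(t).conj() @ R @ steering(t)) for t in angles])
estimate = float(np.degrees(angles[np.argmax(P)]))
```

The high-resolution ML and ME methods mentioned in the record replace this quadratic form with covariance-inverse-based estimators, trading robustness for narrower peaks.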

  10. A Method for Retrieving Daily Land Surface Albedo from Space at 30-m Resolution

    Directory of Open Access Journals (Sweden)

    Bo Gao

    2015-08-01

    Land surface albedo data with high spatio-temporal resolution are increasingly important for scientific studies addressing spatially and/or temporally small-scale phenomena, such as urban heat islands and the urban land surface energy balance. Our previous study derived albedo data with 2–4-day temporal and 30-m spatial resolution, which have better spatio-temporal resolution than existing albedo data but do not completely satisfy the requirements for monitoring high-frequency land surface changes at the small scale. Downscaling technology provides a chance to further improve the spatio-temporal resolution and accuracy of albedo data. This paper introduces a method that combines downscaling technology for land surface reflectance with an empirical method of deriving land surface albedo. First, daily MODIS land surface reflectance data (MOD09GA) are downscaled from 500 m to 30 m on the basis of HJ-1A/B BRDF data with 2–4-day temporal and 30-m spatial resolution; this is the key step in the improved method. Subsequently, the daily 30-m land surface albedo data are derived by an empirical method combining prior knowledge from the MODIS BRDF product with the downscaled daily 30-m reflectance. Validation of albedo data obtained using the proposed method shows that the new method has both improved spatio-temporal resolution and good accuracy (a total absolute accuracy of 0.022 and a total root mean squared error at six sites of 0.028).

  11. Individual tree detection based on densities of high points of high resolution airborne lidar

    NARCIS (Netherlands)

    Abd Rahman, M.Z.; Gorte, B.G.H.

    2008-01-01

    The retrieval of individual tree location from Airborne LiDAR has focused largely on utilizing canopy height. However, high resolution Airborne LiDAR offers another source of information for tree detection. This paper presents a new method for tree detection based on high points’ densities from a

  12. High-Resolution Graphene Films for Electrochemical Sensing via Inkjet Maskless Lithography.

    Science.gov (United States)

    Hondred, John A; Stromberg, Loreen R; Mosher, Curtis L; Claussen, Jonathan C

    2017-10-24

    Solution-phase printing of nanomaterial-based graphene inks is rapidly gaining interest for fabrication of flexible electronics. However, scalable manufacturing techniques for high-resolution printed graphene circuits are still lacking. Here, we report a patterning technique [i.e., inkjet maskless lithography (IML)] to form high-resolution, flexible, graphene films (line widths down to 20 μm) that significantly exceed the current inkjet printing resolution of graphene (line widths ∼60 μm). IML uses an inkjet printed polymer lacquer as a sacrificial pattern, viscous spin-coated graphene, and a subsequent graphene lift-off to pattern films without the need for prefabricated stencils, templates, or cleanroom technology (e.g., photolithography). Laser annealing is employed to increase conductivity on thermally sensitive, flexible substrates [polyethylene terephthalate (PET)]. Laser annealing and subsequent platinum nanoparticle deposition substantially increase the electroactive nature of graphene as illustrated by electrochemical hydrogen peroxide (H2O2) sensing [rapid response (5 s), broad linear sensing range (0.1-550 μM), high sensitivity (0.21 μM/μA), and low detection limit (0.21 μM)]. Moreover, high-resolution, complex graphene circuits [i.e., interdigitated electrodes (IDE) with varying finger width and spacing] were created with IML and characterized via potassium chloride (KCl) electrochemical impedance spectroscopy (EIS). Results indicated that sensitivity directly correlates to electrode feature size, as the IDE with the smallest finger width and spacing (50 and 50 μm) displayed the largest response to changes in KCl concentration (∼21 kΩ). These results indicate that the developed IML patterning technique is well-suited for rapid, solution-phase graphene film prototyping on flexible substrates for numerous applications including electrochemical sensing.

  13. High-resolution gas chromatography/mass spectrometry method for characterization and quantitative analysis of ginkgolic acids in Ginkgo biloba plants, extracts, and dietary supplements

    Science.gov (United States)

    A high resolution GC/MS with Selected Ion Monitor (SIM) method focusing on the characterization and quantitative analysis of ginkgolic acids (GAs) in Ginkgo biloba L. plant materials, extracts and commercial products was developed and validated. The method involved sample extraction with (1:1) meth...

  14. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
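
    The two-dimensional Fourier transform approach mentioned above can be illustrated in one dimension: the grid acts as a carrier whose sideband is isolated in Fourier space, and the residual phase encodes displacement. The following is an illustrative sketch (not the authors' code), with the carrier frequency and filter band chosen arbitrarily:

```python
import numpy as np

def fringe_phase(signal, carrier_bin):
    """Extract the slowly varying phase of a fringe signal by
    isolating its positive-frequency carrier sideband (Takeda-style)."""
    n = len(signal)
    spec = np.fft.fft(signal - signal.mean())
    # keep a band around the positive carrier frequency only
    mask = np.zeros(n)
    lo, hi = carrier_bin - carrier_bin // 2, carrier_bin + carrier_bin // 2
    mask[lo:hi + 1] = 1.0
    analytic = np.fft.ifft(spec * mask)
    # remove the carrier ramp, leaving the displacement-induced phase
    x = np.arange(n)
    return np.angle(analytic * np.exp(-2j * np.pi * carrier_bin * x / n))

# synthetic grid: 16-cycle carrier plus a smooth phase modulation
n = 512
x = np.arange(n)
true_phase = 0.8 * np.sin(2 * np.pi * x / n)
signal = np.cos(2 * np.pi * 16 * x / n + true_phase)
recovered = fringe_phase(signal, 16)
err = np.max(np.abs(recovered[32:-32] - true_phase[32:-32]))
print(err < 0.05)
```

In a real moire analysis the same filtering is done on the 2D spectrum of the grid image, and the phase maps directly to in-plane displacement through the grid pitch.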

  15. Numerical simulation of pulse-tube refrigerators

    NARCIS (Netherlands)

    Lyulina, I.A.; Mattheij, R.M.M.; Tijsseling, A.S.; Waele, de A.T.A.M.

    2004-01-01

    A new numerical model has been introduced to study steady oscillatory heat and mass transfer in the tube section of a pulse-tube refrigerator. Conservation equations describing compressible gas flow in the tube are solved numerically, using high resolution schemes. The equation of conservation of
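
    The "high resolution schemes" referred to here are typically flux-limited upwind schemes of the kind assessed in record 1 above. As an illustrative sketch (not the authors' model), a Van Leer-limited advection step can be compared against first-order upwind on a square wave:

```python
import numpy as np

def advect(u, c, nsteps, limiter=True):
    """Advance u_t + a u_x = 0 (a > 0, periodic) with a flux-limited
    upwind/Lax-Wendroff blend; c is the CFL number a*dt/dx."""
    u = u.copy()
    for _ in range(nsteps):
        du = np.roll(u, -1) - u        # u[i+1] - u[i]
        du_up = u - np.roll(u, 1)      # u[i]   - u[i-1]
        if limiter:
            # Van Leer limiter: phi(r) = (r + |r|) / (1 + |r|)
            r = du_up / np.where(np.abs(du) > 1e-12, du, 1e-12)
            phi = (r + np.abs(r)) / (1.0 + np.abs(r))
        else:
            phi = np.zeros_like(u)     # pure 1st-order upwind
        # face flux at i+1/2: upwind plus limited anti-diffusive part
        f = u + 0.5 * (1.0 - c) * phi * du
        u = u - c * (f - np.roll(f, 1))
    return u

n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)   # square wave
c = 0.5
steps = int(round(n / c))                        # one full period
u_lim = advect(u0, c, steps, limiter=True)
u_up = advect(u0, c, steps, limiter=False)
# the limited scheme smears the discontinuities far less
print(np.abs(u_lim - u0).sum() < np.abs(u_up - u0).sum())
```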

  16. A Method for the Extraction of Long-Term Deformation Characteristics of Long-Span High-Speed Railway Bridges Using High-Resolution SAR Images

    Science.gov (United States)

    Jia, H. G.; Liu, L. Y.

    2016-06-01

    Natural causes and high-speed train loads result in structural deformation of long-span bridges, which greatly influences the safe operation of high-speed railways. Hence it is necessary to conduct deformation monitoring and regular status assessment for long-span bridges. However, traditional control-point-based surveying techniques require substantial human and material resources to perform long-term monitoring of a whole bridge. In this study we detected the long-term bridge deformation time series by the persistent scatterer interferometric synthetic aperture radar (PSInSAR) technique, using high-resolution SAR images and an external digital elevation model. A test area in Nanjing, China was chosen, and TerraSAR-X and TanDEM-X images of this area were used, with the Dashengguan high-speed railway bridge in this area as the study object for evaluating the method. Experimental results indicate that the proposed method can effectively extract the long-term deformation of a long-span high-speed railway bridge with high accuracy.
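
    For background, PSInSAR converts unwrapped interferometric phase to line-of-sight displacement through d = -λΔφ/(4π). A minimal sketch, assuming an X-band (TerraSAR-X-like) wavelength of roughly 31 mm:

```python
import numpy as np

# Line-of-sight displacement from unwrapped interferometric phase:
# d = -lambda * dphi / (4*pi). The X-band wavelength (~31 mm) is an
# assumed, illustrative value here.
WAVELENGTH_M = 0.031

def phase_to_los_mm(unwrapped_phase_rad):
    return -WAVELENGTH_M * unwrapped_phase_rad / (4.0 * np.pi) * 1000.0

# one full fringe (2*pi of phase) corresponds to lambda/2 of LOS motion
d = phase_to_los_mm(np.array([0.0, 2.0 * np.pi]))
print(round(float(d[1]), 2))   # -15.5 mm, i.e. half a wavelength
```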

  17. A METHOD FOR THE EXTRACTION OF LONG-TERM DEFORMATION CHARACTERISTICS OF LONG-SPAN HIGH-SPEED RAILWAY BRIDGES USING HIGH-RESOLUTION SAR IMAGES

    Directory of Open Access Journals (Sweden)

    H. G. Jia

    2016-06-01

    Full Text Available Natural causes and high-speed train loads result in structural deformation of long-span bridges, which greatly influences the safe operation of high-speed railways. Hence it is necessary to conduct deformation monitoring and regular status assessment for long-span bridges. However, traditional control-point-based surveying techniques require substantial human and material resources to perform long-term monitoring of a whole bridge. In this study we detected the long-term bridge deformation time series by the persistent scatterer interferometric synthetic aperture radar (PSInSAR) technique, using high-resolution SAR images and an external digital elevation model. A test area in Nanjing, China was chosen, and TerraSAR-X and TanDEM-X images of this area were used, with the Dashengguan high-speed railway bridge in this area as the study object for evaluating the method. Experimental results indicate that the proposed method can effectively extract the long-term deformation of a long-span high-speed railway bridge with high accuracy.

  18. Nanometric depth resolution from multi-focal images in microscopy.

    Science.gov (United States)

    Dalgarno, Heather I C; Dalgarno, Paul A; Dada, Adetunmise C; Towers, Catherine E; Gibson, Gavin J; Parton, Richard M; Davis, Ilan; Warburton, Richard J; Greenaway, Alan H

    2011-07-06

    We describe a method for tracking the position of small features in three dimensions from images recorded on a standard microscope with an inexpensive attachment between the microscope and the camera. The depth-measurement accuracy of this method is tested experimentally on a wide-field, inverted microscope and is shown to give approximately 8 nm depth resolution, over a specimen depth of approximately 6 µm, when using a 12-bit charge-coupled device (CCD) camera and very bright but unresolved particles. To assess low-flux limitations a theoretical model is used to derive an analytical expression for the minimum variance bound. The approximations used in the analytical treatment are tested using numerical simulations. It is concluded that approximately 14 nm depth resolution is achievable with flux levels available when tracking fluorescent sources in three dimensions in live-cell biology and that the method is suitable for three-dimensional photo-activated localization microscopy resolution. Sub-nanometre resolution could be achieved with photon-counting techniques at high flux levels.

  19. Numerical resolution of the Navier-Stokes equations for a low Mach number by a spectral method

    International Nuclear Information System (INIS)

    Frohlich, Jochen

    1990-01-01

    The low Mach number approximation of the Navier-Stokes equations, also called isobaric, is less restrictive than the Boussinesq approximation: it permits strong density variations while neglecting acoustic phenomena. We present a numerical method to solve these equations in the unsteady, two-dimensional case with one direction of periodicity. The discretization uses a semi-implicit finite difference scheme in time and a Fourier-Chebyshev pseudo-spectral method in space. The solution of the equations of motion is based on an iterative algorithm of Uzawa type. In the Boussinesq limit we obtain a direct method. A first application concerns natural convection in the Rayleigh-Bénard setting. We compare the results of the low Mach number equations with those of the Boussinesq case and consider the influence of variable fluid properties. A linear stability analysis based on a Chebyshev-tau method completes the study. The second application treats a case of isobaric combustion in an open domain. We communicate results for the hydrodynamic Darrieus-Landau instability of a plane laminar flame front. [fr]
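
    The Chebyshev part of such a pseudo-spectral discretization rests on the Chebyshev differentiation matrix on Gauss-Lobatto points. A minimal sketch in the style of Trefethen's classic construction (not the author's code):

```python
import numpy as np

def cheb(n):
    """Chebyshev-Gauss-Lobatto points and differentiation matrix
    (Trefethen-style) for spectral differentiation on [-1, 1]."""
    if n == 0:
        return np.array([[0.0]]), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))       # negative sum trick for the diagonal
    return D, x

D, x = cheb(16)
# spectral accuracy: the derivative of exp(x) is exp(x) itself
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
print(err < 1e-9)
```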

  20. A Method of Road Extraction from High-resolution Remote Sensing Images Based on Shape Features

    Directory of Open Access Journals (Sweden)

    LEI Xiaoqi

    2016-02-01

    Full Text Available Road extraction from high-resolution remote sensing images is an important and difficult task. Since remote sensing images contain complicated information, methods that extract roads by spectral, texture, and linear features have certain limitations. Moreover, many methods need human intervention to obtain road seeds (semi-automatic extraction), which entails great human dependence and low efficiency. This paper proposes a road-extraction method that uses image segmentation based on the principle of local gray consistency together with object shape features. First, the image is segmented, and both linear and curved roads are identified using several object shape features, thereby correcting methods that extract only linear roads. Second, road extraction is carried out based on region growing: road seeds are selected automatically and the road network is extracted. Finally, the extracted roads are regularized by combining edge information. In the experiments, images with relatively uniform road gray levels as well as poorly illuminated road surfaces were chosen, and the results show that the proposed method is promising.
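
    The region-growing step described above can be sketched as a flood fill constrained by local gray consistency. An illustrative toy implementation (the paper's actual segmentation criteria and seed selection are more elaborate):

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol):
    """Grow a region from `seed` over 4-connected pixels whose gray
    value stays within `tol` of the seed value (gray consistency)."""
    h, w = img.shape
    seed_val = float(img[seed])
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(float(img[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# synthetic image: a bright "road" stripe on a darker background
img = np.full((20, 20), 50.0)
img[:, 8:12] = 200.0
mask = region_grow(img, (0, 9), tol=10.0)
print(int(mask.sum()))   # 80 pixels: the full 20x4 stripe
```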

  1. High-resolution x-ray imaging using a structured scintillator

    Energy Technology Data Exchange (ETDEWEB)

    Hormozan, Yashar, E-mail: hormozan@kth.se; Sychugov, Ilya; Linnros, Jan [Materials and Nano Physics, School of Information and Communication Technology, KTH Royal Institute of Technology, Electrum 229, Kista, Stockholm SE-16440 (Sweden)

    2016-02-15

    Purpose: In this study, the authors introduce a new generation of finely structured scintillators with very high spatial resolution (a few micrometers) compared to conventional scintillators, while maintaining a thick absorbing layer for improved detectivity. Methods: The concept is based on a 2D array of high-aspect-ratio pores fabricated on silicon by ICP etching, with spacings (pitches) of a few micrometers, followed by oxidation of the pore walls. The pores were subsequently filled by melting powdered CsI(Tl) as the scintillating agent. In order to couple the secondary photons emitted at the back of the scintillator array to a CCD device having a larger pixel size than the pore pitch, an open optical microscope with adjustable magnification was designed and implemented. By imaging a sharp edge, the authors were able to calculate the modulation transfer function (MTF) of this finely structured scintillator. Results: The x-ray images of individually resolved pores suggest that they have been almost uniformly filled, and the MTF measurements show the feasibility of imaging with a spatial resolution of a few microns, as set by the scintillator pore size. Compared to existing techniques utilizing CsI needles as a structured scintillator, these results imply an almost sevenfold improvement in resolution. Finally, high-resolution images taken by the detector are presented. Conclusions: The presented work successfully demonstrates the functionality of the detector concept for high-resolution imaging, and further fabrication developments are likely to result in higher quantum efficiencies.
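
    The edge-based MTF measurement mentioned in the Methods follows a standard chain: edge spread function, derivative to obtain the line spread function, then Fourier magnitude normalized at DC. A sketch on a synthetic Gaussian-blurred edge, for which the analytic MTF is known:

```python
import numpy as np

def mtf_from_edge(esf):
    """MTF from a sampled edge spread function: differentiate to the
    line spread function, then take the normalised Fourier magnitude."""
    lsf = np.diff(esf)
    spec = np.abs(np.fft.rfft(lsf))
    return spec / spec[0]

# ideal step edge blurred by a Gaussian of sigma = 2 px
x = np.arange(-64, 64)
sigma = 2.0
gauss = np.exp(-x**2 / (2 * sigma**2))
esf = np.cumsum(gauss)                     # edge = integral of the blur
mtf = mtf_from_edge(esf)
f = np.fft.rfftfreq(len(esf) - 1)          # cycles per pixel
theory = np.exp(-2.0 * np.pi**2 * sigma**2 * f**2)
print(np.max(np.abs(mtf - theory)) < 1e-3)
```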

  2. A New Method for Single-Epoch Ambiguity Resolution with Indoor Pseudolite Positioning.

    Science.gov (United States)

    Li, Xin; Zhang, Peng; Guo, Jiming; Wang, Jinling; Qiu, Weining

    2017-04-21

    Ambiguity resolution (AR) is crucial for high-precision indoor pseudolite positioning. Given the characteristics of the pseudolite positioning system, namely that the geometry of the stationary pseudolites is invariant, that the indoor signal is easily interrupted, and that the first-order linear truncation error cannot be ignored, a new AR method based on the idea of the ambiguity function method (AFM) is proposed in this paper. The proposed method is a single-epoch, nonlinear method that is especially well suited for indoor pseudolite positioning. Considering the very low computational efficiency of the conventional AFM, we adopt an improved particle swarm optimization (IPSO) algorithm to search for the best solution in the coordinate domain, and a variance test of the least squares adjustment is conducted to ensure the reliability of the resolved ambiguity. Several experiments, including static and kinematic tests, are conducted to verify the validity of the proposed AR method. Numerical results show that the IPSO significantly improves the computational efficiency of the AFM and has a more refined search ability than the conventional grid search method. For the indoor pseudolite system, which had an initial approximate coordinate precision better than 0.2 m, the AFM exhibited good performance in both static and kinematic tests. With the ambiguities resolved by the proposed method, indoor pseudolite positioning can achieve centimeter-level precision using a low-cost single-frequency software receiver.
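
    The particle swarm search at the heart of the proposed AR method can be illustrated with a minimal PSO on a toy 2D cost surface; the paper's IPSO adds refinements not shown here, and all parameters below are illustrative:

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimiser: particles move under inertia
    plus random attraction to their personal best and the global best."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([cost(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([cost(p) for p in pos])
        better = val < pbest_val
        pbest[better] = pos[better]
        pbest_val[better] = val[better]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# toy cost surface with its minimum at (1.0, -0.5); an AFM cost would
# instead score candidate coordinates by phase-residual consistency
def cost(p):
    x, y = p
    return (x - 1.0) ** 2 + (y + 0.5) ** 2

best, val = pso(cost, (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
print(np.allclose(best, [1.0, -0.5], atol=0.1))
```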

  3. High frequency, high time resolution time-to-digital converter employing passive resonating circuits.

    Science.gov (United States)

    Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo

    2010-05-01

    A method for measuring time intervals with picosecond accuracy is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit whose capacitance is precharged. By using high-Q resonators and a final active quenching of the oscillation, it is possible to combine high time resolution with a short measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.
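
    The underlying idea, recovering sub-sample timing from the phase of an oscillation, can be sketched by correlating a digitized waveform with a complex reference at the resonator frequency. The sampling rate and frequency below are illustrative, not the authors' hardware values:

```python
import numpy as np

def phase_time(signal, f0, fs):
    """Sub-sample timing from oscillation phase: correlate with a
    complex reference at f0 and convert the phase to a time offset."""
    t = np.arange(len(signal)) / fs
    iq = np.sum(signal * np.exp(-2j * np.pi * f0 * t))
    return -np.angle(iq) / (2.0 * np.pi * f0)

fs = 1e9            # 1 GS/s sampling (illustrative)
f0 = 50e6           # 50 MHz resonator (illustrative)
t = np.arange(2000) / fs
delay = 123.4e-12   # true 123.4 ps offset
sig = np.cos(2 * np.pi * f0 * (t - delay))
est = phase_time(sig, f0, fs)
print(abs(est - delay) < 1e-12)   # recovered to better than 1 ps
```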

  4. High frequency, high time resolution time-to-digital converter employing passive resonating circuits

    International Nuclear Information System (INIS)

    Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo

    2010-01-01

    A method for measuring time intervals with picosecond accuracy is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit whose capacitance is precharged. By using high-Q resonators and a final active quenching of the oscillation, it is possible to combine high time resolution with a short measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.

  5. High-resolution axial MR imaging of tibial stress injuries

    Directory of Open Access Journals (Sweden)

    Mammoto Takeo

    2012-05-01

    Full Text Available Abstract Purpose To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiograph and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with 60 × 60 mm field of view on a 1.5T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at initial radiograph, but was detected at follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions Bone marrow abnormalities in high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in early discrimination of tibial stress injuries.

  6. High-resolution axial MR imaging of tibial stress injuries

    Science.gov (United States)

    2012-01-01

    Purpose To evaluate the relative involvement of tibial stress injuries using high-resolution axial MR imaging and the correlation with MR and radiographic images. Methods A total of 33 patients with exercise-induced tibial pain were evaluated. All patients underwent radiograph and high-resolution axial MR imaging. Radiographs were taken at initial presentation and 4 weeks later. High-resolution MR axial images were obtained using a microscopy surface coil with 60 × 60 mm field of view on a 1.5T MR unit. All images were evaluated for abnormal signals of the periosteum, cortex and bone marrow. Results Nineteen patients showed no periosteal reaction at initial and follow-up radiographs. MR imaging showed abnormal signals in the periosteal tissue and partially abnormal signals in the bone marrow. In 7 patients, periosteal reaction was not seen at initial radiograph, but was detected at follow-up radiograph. MR imaging showed abnormal signals in the periosteal tissue and entire bone marrow. Abnormal signals in the cortex were found in 6 patients. The remaining 7 showed periosteal reactions at initial radiograph. MR imaging showed abnormal signals in the periosteal tissue in 6 patients. Abnormal signals were seen in the partial and entire bone marrow in 4 and 3 patients, respectively. Conclusions Bone marrow abnormalities in high-resolution axial MR imaging were related to periosteal reactions at follow-up radiograph. Bone marrow abnormalities might predict later periosteal reactions, suggesting shin splints or stress fractures. High-resolution axial MR imaging is useful in early discrimination of tibial stress injuries. PMID:22574840

  7. Gold finger formation studied by high-resolution mass spectrometry and in silico methods

    NARCIS (Netherlands)

    Laskay, Ü.A.; Garino, C.; Tsybin, Y.O.; Salassa, L.; Casini, A.

    2015-01-01

    High-resolution mass spectrometry and quantum mechanics/molecular mechanics studies were employed for characterizing the formation of two gold finger (GF) domains from the reaction of zinc fingers (ZF) with gold complexes. The influence of both the gold oxidation state and the ZF coordination sphere

  8. High-resolution urban flood modelling - a joint probability approach

    Science.gov (United States)

    Hartnett, Michael; Olbert, Agnieszka; Nash, Stephen

    2017-04-01

    (Divoky et al., 2005). Nevertheless, such events occur, and in Ireland alone there are several cases of serious damage due to flooding resulting from a combination of high sea water levels and river flows driven by the same meteorological conditions (e.g. Olbert et al. 2015). The November 2009 fluvial-coastal flooding of Cork City, bringing a €100m loss, was one such incident. This event was used by Olbert et al. (2015) to determine the processes controlling urban flooding and is further explored in this study to elaborate on coastal and fluvial flood mechanisms and their roles in controlling water levels. The objective of this research is to develop a methodology to assess the combined effect of multiple-source flooding on flood probability and severity in urban areas, and to establish a set of conditions that dictate urban flooding due to extreme climatic events. These conditions broadly combine physical flood drivers (such as coastal and fluvial processes), their mechanisms, and thresholds defining flood severity. The two main physical processes controlling urban flooding, high sea water levels (coastal flooding) and high river flows (fluvial flooding), and their threshold values above which flooding is likely to occur, are considered in this study. The contributions of coastal and fluvial drivers to flooding and their impacts are assessed in a two-step process. The first step involves frequency analysis and extreme value statistical modelling of storm surges, tides and river flows, and ultimately the application of the joint probability method to estimate joint exceedance return periods for combinations of surge, tide and river flow. In the second step, a numerical model of Cork Harbour, MSN_Flood, comprising a cascade of four nested high-resolution models, is used to simulate flood inundation under numerous hypothetical coastal and fluvial flood scenarios. 
The risk of flooding is quantified based on a range of physical aspects such as the extent and depth of inundation (Apel et al
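
    The joint probability step can be sketched in its simplest form: fit marginal extreme-value distributions to annual maxima and, assuming independence between drivers (the study itself accounts for their dependence), multiply annual exceedance probabilities. All numbers below are synthetic:

```python
import numpy as np

def gumbel_fit(annual_maxima):
    """Method-of-moments Gumbel fit: scale and location from the
    sample mean and standard deviation."""
    m, s = np.mean(annual_maxima), np.std(annual_maxima, ddof=1)
    beta = s * np.sqrt(6.0) / np.pi
    mu = m - 0.5772 * beta   # Euler-Mascheroni constant
    return mu, beta

def exceedance_prob(x, mu, beta):
    return 1.0 - np.exp(-np.exp(-(x - mu) / beta))

rng = np.random.default_rng(1)
surge = rng.gumbel(1.0, 0.2, 50)     # 50 years of annual max surge (m)
flow = rng.gumbel(300.0, 60.0, 50)   # annual max river flow (m3/s)
p_s = exceedance_prob(1.6, *gumbel_fit(surge))
p_f = exceedance_prob(480.0, *gumbel_fit(flow))
# under independence, annual exceedance probabilities multiply
t_joint = 1.0 / (p_s * p_f)
print(t_joint > 1.0 / p_s and t_joint > 1.0 / p_f)
```

The joint return period is always longer than either marginal one; dependence between surge and flow (as in storm-driven events) shortens it, which is why the joint probability analysis matters.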

  9. Numerical evaluation of high energy particle effects in magnetohydrodynamics

    International Nuclear Information System (INIS)

    White, R.B.; Wu, Y.

    1994-03-01

    The interaction of high energy ions with magnetohydrodynamic modes is analyzed. A numerical code is developed which evaluates the contribution of the high energy particles to mode stability using orbit averaging of motion in either analytic or numerically generated equilibria through Hamiltonian guiding center equations. A dispersion relation is then used to evaluate the effect of the particles on the linear mode. Generic behavior of the solutions of the dispersion relation is discussed and dominant contributions of different components of the particle distribution function are identified. Numerical convergence of Monte-Carlo simulations is analyzed. The resulting code ORBIT provides an accurate means of comparing experimental results with the predictions of kinetic magnetohydrodynamics. The method can be extended to include self-consistent modification of the particle orbits by the mode, and hence the full nonlinear dynamics of the coupled system.

  10. Assessing resolution in live cell structured illumination microscopy

    Science.gov (United States)

    Pospíšil, Jakub; Fliegel, Karel; Klíma, Miloš

    2017-12-01

    Structured Illumination Microscopy (SIM) is a powerful super-resolution technique, able to enhance the resolution of an optical microscope beyond the Abbe diffraction limit. In the last decade, numerous SIM methods that achieve a resolution of 100 nm in the lateral dimension have been developed. SIM setups with new high-speed cameras and illumination pattern generators allow rapid acquisition of live specimens. Therefore, SIM is widely used for investigation of live structures in molecular and cell biology. Quantitative evaluation of resolution enhancement in a real sample is essential to describe the efficiency of a super-resolution microscopy technique. However, measuring the resolution of a live cell sample is a challenging task. Based on our experimental findings, the widely used Fourier ring correlation (FRC) method does not seem to be well suited for measuring the resolution of SIM live cell video sequences. Therefore, resolution-assessing methods based on Fourier spectrum analysis are often used. We introduce a measure based on the circular average power spectral density (PSDca) estimated from a single SIM image (one video frame). PSDca describes the distribution of the power of a signal with respect to its spatial frequency. Spatial resolution corresponds to the cut-off frequency in Fourier space. In order to estimate the cut-off frequency from a noisy signal, we use a spectral subtraction method for noise suppression. In the future, this resolution assessment approach might prove useful also for single-molecule localization microscopy (SMLM) live cell imaging.
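
    A circularly averaged power spectral density of the kind PSDca denotes can be computed by averaging the 2D Fourier power over annuli of constant radius; the spatial frequency where the profile falls to the noise floor then indicates the resolution. An illustrative sketch (not the authors' implementation):

```python
import numpy as np

def radial_psd(img):
    """Circularly averaged power spectral density of an image:
    average |FFT|^2 over annuli of constant spatial frequency."""
    psd2 = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = img.shape
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=psd2.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)   # guard empty annuli

# test image: a pure grating with 10 cycles across the frame, whose
# power concentrates at radial frequency bin 10
n = 128
yy, xx = np.indices((n, n))
img = np.cos(2 * np.pi * 10 * xx / n)
prof = radial_psd(img)
print(int(np.argmax(prof[1:])) + 1)   # 10
```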

  11. High-resolution SPECT for small-animal imaging

    International Nuclear Information System (INIS)

    Qi Yujin

    2006-01-01

    This article presents a brief overview of the development of high-resolution SPECT for small-animal imaging. A pinhole collimator has been used for high-resolution animal SPECT to provide better spatial resolution and detection efficiency in comparison with a parallel-hole collimator. The theory of imaging characteristics of the pinhole collimator is presented and the designs of the pinhole aperture are discussed. The detector technologies used for the development of small-animal SPECT and the recent advances are presented. The evolving trend of small-animal SPECT is toward a multi-pinhole and a multi-detector system to obtain a high resolution and also a high detection efficiency. (authors)

  12. Microfabricated ommatidia using a laser induced self-writing process for high resolution artificial compound eye optical systems.

    Science.gov (United States)

    Jung, Hyukjin; Jeong, Ki-Hun

    2009-08-17

    A microfabricated compound eye, comparable to a natural compound eye, shows a spherical arrangement of integrated optical units called artificial ommatidia, each consisting of a self-aligned microlens and waveguide. Increasing the waveguide length is imperative to obtain high-resolution images through an artificial compound eye for wide field-of-view imaging as well as fast motion detection. This work presents an effective method for increasing the waveguide length of an artificial ommatidium using a laser-induced self-writing process in a photosensitive polymer resin. The numerical and experimental results show the uniform formation of waveguides and an increase of waveguide length to over 850 μm. (c) 2009 Optical Society of America

  13. Variational data assimilation system with nesting model for high resolution ocean circulation

    Energy Technology Data Exchange (ETDEWEB)

    Ishikawa, Yoichi; Igarashi, Hiromichi; Hiyoshi, Yoshimasa; Sasaki, Yuji; Wakamatsu, Tsuyoshi; Awaji, Toshiyuki [Center for Earth Information Science and Technology, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-Ku, Yokohama 236-0001 (Japan); In, Teiji [Japan Marine Science Foundation, 4-24, Minato-cho, Mutsu, Aomori, 035-0064 (Japan); Nakada, Satoshi [Graduate School of Maritime Science, Kobe University, 5-1-1, Fukae-minamimachi, Higashinada-Ku, Kobe, 658-0022 (Japan); Nishina, Kei, E-mail: ishikaway@jamstec.go.jp [Graduate School of Science, Kyoto University, Kitashirakawaoiwake-cho, Sakyo-Ku, Kyoto, 606-8502 (Japan)

    2015-10-15

    To obtain the high-resolution analysis fields for ocean circulation, a new incremental approach is developed using a four-dimensional variational data assimilation system with nesting models. The results show that there are substantial biases when using a classical method combined with data assimilation and downscaling, caused by different dynamics resulting from the different resolutions of the models used within the nesting models. However, a remarkable reduction in biases of the low-resolution model relative to the high-resolution model was observed using our new approach in narrow strait regions, such as the Tsushima and Tsugaru straits, where the difference in the dynamics represented by the high- and low-resolution models is substantial. In addition, error reductions are demonstrated in the downstream region of these narrow channels associated with the propagation of information through the model dynamics. (paper)

  14. APPLICATION OF CONVOLUTIONAL NEURAL NETWORK IN CLASSIFICATION OF HIGH RESOLUTION AGRICULTURAL REMOTE SENSING IMAGES

    Directory of Open Access Journals (Sweden)

    C. Yao

    2017-09-01

    Full Text Available With the rapid development of Precision Agriculture (PA) promoted by high-resolution remote sensing, crop classification of high-resolution remote sensing images is of great significance for agricultural management and estimation. Because features and their surroundings are complex and fragmented at high resolution, the accuracy of traditional classification methods has not been able to meet the requirements of agricultural applications. This paper therefore proposes a classification method for high-resolution agricultural remote sensing images based on convolutional neural networks (CNN). For training, a large number of training samples were produced from panchromatic images of China's GF-1 high-resolution satellite. In the experiment, through training and testing the CNN with a MATLAB deep learning toolbox, the crop classification finally reached a correct rate of 99.66% after gradual optimization of the parameters during training. By improving the accuracy of image classification and recognition, the application of CNN provides a reference for the use of remote sensing in PA.
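
    The building blocks of such a CNN, convolution, ReLU, and max pooling, can be sketched in plain NumPy. A real classifier like the one described would stack many such layers and learn the kernels from the GF-1 training samples; here the kernel is fixed by hand:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid-mode 2D cross-correlation, the core op of a CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, k=2):
    h, w = x.shape[0] // k * k, x.shape[1] // k * k
    return x[:h, :w].reshape(h // k, k, w // k, k).max(axis=(1, 3))

# vertical-edge detector on a toy "field boundary" patch
img = np.zeros((8, 8))
img[:, 4:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])   # responds to step-up edges
feat = max_pool(relu(conv2d(img, kernel)))
print(float(feat.max()))   # 2.0 at the field boundary
```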

  15. High-resolution time series of Pseudomonas aeruginosa gene expression and rhamnolipid secretion through growth curve synchronization

    Directory of Open Access Journals (Sweden)

    Xavier João B

    2011-06-01

    Full Text Available Abstract Background Online spectrophotometric measurements allow monitoring of dynamic biological processes with high time resolution. In contrast, numerous other methods require laborious treatment of samples and can only be carried out offline. Integrating both types of measurement would allow analyzing biological processes more comprehensively. A typical example of this problem is acquiring quantitative data on rhamnolipid secretion by the opportunistic pathogen Pseudomonas aeruginosa. P. aeruginosa cell growth can be measured by optical density (OD600), and gene expression can be measured using reporter fusions with a fluorescent protein, allowing monitoring with high time resolution. However, measuring the secreted rhamnolipid biosurfactants requires laborious sample processing, which makes this an offline measurement. Results Here, we propose a method to integrate growth curve data with endpoint measurements of secreted metabolites that is inspired by a model of exponential cell growth. If serially diluting an inoculum gives reproducible time series shifted in time, then time series of endpoint measurements can be reconstructed using the calculated time shifts between dilutions. We illustrate the method using measured rhamnolipid secretion by P. aeruginosa as endpoint measurements, and we integrate these measurements with high-resolution growth curves measured by OD600 and expression of rhamnolipid synthesis genes monitored using a reporter fusion. Two-fold serial dilution allowed integrating rhamnolipid measurements at a ~0.4 h-1 frequency with highly time-resolved data measured at a 6 h-1 frequency. We show how this simple method can be used in combination with mutants lacking specific genes in the rhamnolipid synthesis or quorum sensing regulation to acquire rich dynamic data on P. aeruginosa virulence regulation. Additionally, the linear relation between the ratio of inocula and the time-shift between curves produces high-precision measurements of
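
    The core of the synchronization idea is that, under exponential growth at rate μ, a d-fold dilution shifts the growth curve in time by ln(d)/μ, so endpoint assays read at a single clock time map onto a series of effective culture times. A minimal sketch with an assumed growth rate:

```python
import numpy as np

# Under exponential growth x(t) = x0 * exp(mu * t), a d-fold diluted
# inoculum reproduces the same curve delayed by ln(d)/mu.
mu = 0.8                           # assumed growth rate, h^-1
dilutions = 2.0 ** np.arange(6)    # 1x, 2x, ..., 32x inoculum dilutions
shifts = np.log(dilutions) / mu    # hours each curve lags the undiluted one

# an endpoint assay read at the same clock time T in every well then
# samples effective culture times T - shift, i.e. a time series
T = 10.0
effective_times = T - shifts
# two-fold steps give evenly spaced samples ln(2)/mu hours apart
print(np.allclose(np.diff(effective_times), -np.log(2.0) / mu))
```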

  16. Riemann solvers and numerical methods for fluid dynamics a practical introduction

    CERN Document Server

    Toro, Eleuterio F

    2009-01-01

    High resolution upwind and centred methods are a mature generation of computational techniques applicable to a range of disciplines, Computational Fluid Dynamics being the most prominent. This book gives a practical presentation of this class of techniques.

  17. Method of Obtaining High Resolution Intrinsic Wire Boom Damping Parameters for Multi-Body Dynamics Simulations

    Science.gov (United States)

    Yew, Alvin G.; Chai, Dean J.; Olney, David J.

    2010-01-01

    The goal of NASA's Magnetospheric MultiScale (MMS) mission is to understand magnetic reconnection with sensor measurements from four spinning satellites flown in a tight tetrahedron formation. Four of the six electric field sensors on each satellite are located at the end of 60-meter wire booms to increase measurement sensitivity in the spin plane and to minimize motion coupling from perturbations on the main body. A propulsion burn, however, might induce boom oscillations that could impact science measurements if the oscillations do not damp to values on the order of 0.1 degree in a timely fashion. Large damping time constants could also adversely affect flight dynamics and attitude control performance. In this paper, we discuss the implementation of a high-resolution method for calculating the boom's intrinsic damping, which was used in multi-body dynamics simulations. In summary, experimental data were obtained with a scaled-down boom, which was suspended as a pendulum in vacuum. Optical techniques were designed to accurately measure the natural decay of angular position, and subsequent data processing algorithms yielded excellent spatial and temporal resolution. This method was repeated in a parametric study for various lengths, root tensions and vacuum levels. For all data sets, regression models for damping were applied, including nonlinear viscous, frequency-independent hysteretic, coulomb, and combinations of them. Our data analysis and dynamics models have shown that the intrinsic damping for the baseline boom is insufficient, thereby forcing project management to explore mitigation strategies.
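
    One of the regression ideas mentioned, extracting damping from the decay of successive oscillation peaks, can be sketched with the logarithmic decrement. This is a generic textbook estimator, not the authors' full model set:

```python
import numpy as np

def log_decrement_zeta(peaks):
    """Damping ratio from successive free-decay peak amplitudes via
    the logarithmic decrement delta = ln(p_i / p_{i+1})."""
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))
    return delta / np.sqrt(4.0 * np.pi ** 2 + delta ** 2)

# synthetic test: peak envelope of a decay with known zeta = 0.02
zeta, wn = 0.02, 2.0 * np.pi          # damping ratio, natural frequency
wd = wn * np.sqrt(1.0 - zeta ** 2)    # damped frequency
period = 2.0 * np.pi / wd
peaks = np.exp(-zeta * wn * np.arange(8) * period)
print(abs(log_decrement_zeta(peaks) - zeta) < 1e-9)
```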

  18. Testing methods for using high-resolution satellite imagery to monitor polar bear abundance and distribution

    Science.gov (United States)

    LaRue, Michelle A.; Stapleton, Seth P.; Porter, Claire; Atkinson, Stephen N.; Atwood, Todd C.; Dyck, Markus; Lecomte, Nicolas

    2015-01-01

    High-resolution satellite imagery is a promising tool for providing coarse information about polar species abundance and distribution, but current applications are limited. With polar bears (Ursus maritimus), the technique has only proven effective on landscapes with little topographic relief that are devoid of snow and ice, and time-consuming manual review of imagery is required to identify bears. Here, we evaluated mechanisms to further develop satellite imagery methods by examining data from Rowley Island, Canada. We attempted to automate and expedite detection via supervised spectral classification and image differencing. We also assessed what proportion of a region should be sampled to obtain reliable estimates of density and abundance. Although the spectral signature of polar bears differed from nontarget objects, these differences were insufficient to yield useful results via a supervised classification process. Conversely, automated image differencing, subtracting one image from another, correctly identified nearly 90% of polar bear locations. This technique, however, also yielded false positives, suggesting that manual review will still be required to confirm polar bear locations. On Rowley Island, bear distribution approximated a Poisson distribution across a range of plot sizes, and resampling suggests that sampling >50% of the site facilitates reliable estimation of density. The method may be useful in certain areas, but large-scale applications remain limited because of the challenges in automation and the limited environments in which the method can be effectively applied. Improvements in resolution may expand opportunities for its future uses.
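    The image-differencing step can be sketched in a few lines: subtract a reference scene so that everything static cancels, then threshold the residual to flag candidates for manual review. The synthetic arrays, noise levels, and threshold below are arbitrary stand-ins for real satellite scenes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic "scenes" of the same terrain: a shared static background
# plus independent sensor noise, with a bright target only in scene 2.
background = rng.uniform(0.2, 0.4, size=(64, 64))
scene1 = background + rng.normal(0.0, 0.01, (64, 64))
scene2 = background + rng.normal(0.0, 0.01, (64, 64))
scene2[30:33, 40:43] += 0.5                    # bear-sized bright object

# Image differencing: the static background cancels, leaving the
# target plus low-level noise; threshold to get candidate detections.
diff = scene2 - scene1
candidates = np.argwhere(diff > 0.25)
print(len(candidates), candidates.min(axis=0), candidates.max(axis=0))
```

As the abstract notes, real scenes also produce false positives (clouds, ice movement, registration error), so the thresholded candidates still need human confirmation.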

  20. High temporal resolution magnetic resonance imaging: development of a parallel three dimensional acquisition method for functional neuroimaging

    International Nuclear Information System (INIS)

    Rabrait, C.

    2007-11-01

    Echo Planar Imaging is widely used to perform data acquisition in functional neuroimaging. This sequence allows the acquisition of a set of about 30 slices, covering the whole brain, at a spatial resolution ranging from 2 to 4 mm and a temporal resolution ranging from 1 to 2 s. It is thus well adapted to the mapping of activated brain areas but does not allow precise study of brain dynamics. Moreover, temporal interpolation is needed in order to correct for inter-slice delays, and 2-dimensional acquisition is subject to vascular inflow artifacts. To improve the estimation of the hemodynamic response functions associated with activation, this thesis aimed at developing a 3-dimensional high temporal resolution acquisition method. To do so, Echo Volume Imaging was combined with reduced field-of-view acquisition and parallel imaging. Indeed, E.V.I. allows the acquisition of a whole volume in Fourier space following a single excitation, but it requires very long echo trains. Parallel imaging and field-of-view reduction are used to reduce the echo train durations by a factor of 4, which allows the acquisition of a 3-dimensional brain volume with limited susceptibility-induced distortions and signal losses, in 200 ms. All imaging parameters have been optimized in order to reduce echo train durations and to maximize S.N.R., so that cerebral activation can be detected with a high level of confidence. Robust detection of brain activation was demonstrated with both visual and auditory paradigms. High temporal resolution hemodynamic response functions could be estimated through selective averaging of the response to the different trials of the stimulation. To further improve S.N.R., the matrix inversions required in parallel reconstruction were regularized, and the impact of the level of regularization on activation detection was investigated. Eventually, potential applications of parallel E.V.I., such as the study of non-stationary effects in the B.O.L.D. response, are discussed.

  1. Numerical and adaptive grid methods for ideal magnetohydrodynamics

    Science.gov (United States)

    Loring, Burlen

    2008-02-01

    In this thesis numerical finite difference methods for ideal magnetohydrodynamics (MHD) are investigated. A review of the relevant physics, essential for interpreting the results of numerical solutions and constructing validation cases, is presented. This review includes a discussion of the propagation of small amplitude waves in the MHD system as well as a thorough discussion of MHD shocks, contacts and rarefactions and how they can be pieced together to obtain solutions to the MHD Riemann problem. Numerical issues relevant to the MHD system, such as the loss of nonlinear numerical stability in the presence of discontinuous solutions, the introduction of spurious forces due to the growth of the divergence of the magnetic flux density, the loss of pressure positivity, and the effects of non-conservative numerical methods, are discussed, along with the practical approaches which can be used to remedy or minimize the negative consequences of each. The use of block structured adaptive mesh refinement is investigated in the context of a divergence-free MHD code. A new method for conserving magnetic flux across AMR grid interfaces is developed and a detailed discussion of our implementation of this method using the CHOMBO AMR framework is given. A preliminary validation of the new method for conserving magnetic flux density across AMR grid interfaces illustrates that the method works. Finally a number of code validation cases are examined, spurring a discussion of the strengths and weaknesses of the numerics employed.

  2. A New Method Based on Two-Stage Detection Mechanism for Detecting Ships in High-Resolution SAR Images

    Directory of Open Access Journals (Sweden)

    Xu Yongli

    2017-01-01

    Ship detection in synthetic aperture radar (SAR) remote sensing images is a fundamental but challenging problem in the field of satellite image analysis; it plays an important role in a wide range of applications and has received significant attention in recent years. Motivated by the requirements of ship detection in high-resolution SAR images (accuracy, automation, real-time operation, and processing efficiency), we analyzed the characteristics of the ocean background and ship targets in high-resolution SAR images and put forward a ship detection algorithm for such images. The algorithm consists of two detection stages. The first stage uses a pre-trained classifier based on an improved spectral residual visual model to quickly extract the visually salient regions containing ship targets, yielding a coarse detection of ships. In the second stage, following the Bayesian theory of binary hypothesis testing, a local maximum a posteriori (MAP) classifier is designed for the classification of pixels. After parameter estimation and application of the decision criterion, pixels within the salient regions are classified into the two hypothesis classes. Several types of satellite image data, such as TerraSAR-X (TS-X) and Radarsat-2, are used to evaluate the performance of the detection methods. Compared with classical CFAR detection algorithms, experimental results show that the algorithm better suppresses false alarms caused by speckle noise and inhomogeneous ocean clutter background, while increasing detection speed by 25% to 45%.
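    The first-stage saliency model builds on the spectral residual idea (Hou and Zhang's visual-attention model): whiten the log-amplitude spectrum, keep the phase, and transform back. The sketch below applies that model to a synthetic clutter scene; the scene, target size, and smoothing kernel are assumptions, not the paper's implementation.

```python
import numpy as np

def box_blur(a, k=3):
    # simple k x k mean filter with periodic wrap (adequate for a sketch)
    out = np.zeros_like(a)
    r = k // 2
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out += np.roll(np.roll(a, dy, 0), dx, 1)
    return out / (k * k)

def spectral_residual_saliency(img):
    # Spectral residual: log-amplitude minus its local average in the
    # frequency domain; reconstruct with the original phase.
    F = np.fft.fft2(img)
    log_amp = np.log(np.abs(F) + 1e-9)
    residual = log_amp - box_blur(log_amp)
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * np.angle(F)))) ** 2
    return box_blur(saliency)                  # light spatial smoothing

rng = np.random.default_rng(1)
sea = rng.normal(0.0, 0.05, (128, 128))        # speckle-like clutter
sea[60:64, 60:68] += 1.0                       # bright "ship" target
sal = spectral_residual_saliency(sea)
peak = np.unravel_index(np.argmax(sal), sal.shape)
print(peak)
```

The redundant (smooth) part of the spectrum is suppressed, so the compact bright target stands out in the saliency map and can seed the second-stage pixel classifier.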

  3. Dynamic high resolution imaging of rats

    International Nuclear Information System (INIS)

    Miyaoka, R.S.; Lewellen, T.K.; Bice, A.N.

    1990-01-01

    A positron emission tomograph with the sensitivity and resolution to do dynamic imaging of rats would be an invaluable tool for biological researchers. In this paper, the authors determine the biological criteria for dynamic positron emission imaging of rats. To be useful, 3 mm isotropic resolution and 2-3 second time binning were necessary characteristics for such a dedicated tomograph. A single plane in which two objects of interest could be imaged simultaneously was considered acceptable. Multi-layered detector designs were evaluated as a possible solution to the dynamic imaging and high resolution imaging requirements. The University of Washington photon history generator was used to generate data to investigate a tomograph's sensitivity to true, scattered and random coincidences for varying detector ring diameters. Intrinsic spatial uniformity advantages of multi-layered detector designs over conventional detector designs were investigated using a Monte Carlo program. As a result, a modular three-layered detector prototype is being developed. A module will consist of a layer of five 3.5 mm wide crystals and two layers of six 2.5 mm wide crystals. The authors believe adequate sampling can be achieved with a stationary detector system using these modules. Economical crystal decoding strategies have been investigated and simulations have been run to investigate optimum light channeling methods for block decoding strategies. An analog block decoding method has been proposed and will be experimentally evaluated to determine whether it can provide the desired performance.

  4. High-Resolution Wind Measurements for Offshore Wind Energy Development

    Science.gov (United States)

    Nghiem, Son V.; Neumann, Gregory

    2011-01-01

    A mathematical transform, called the Rosette Transform, together with a new method, called the Dense Sampling Method, have been developed. The Rosette Transform is applied to both the mean part and the fluctuating part of a targeted radar signature, using the Dense Sampling Method to construct the data on a high-resolution grid at 1-km posting for wind measurements over water surfaces such as oceans or lakes.

  5. High-resolution typing of Chlamydia trachomatis: epidemiological and clinical uses.

    Science.gov (United States)

    de Vries, Henry J C; Schim van der Loeff, Maarten F; Bruisten, Sylvia M

    2015-02-01

    This review provides a state-of-the-art overview of molecular Chlamydia trachomatis typing methods used for routine diagnostics and scientific studies. Molecular epidemiology uses high-resolution typing techniques such as multilocus sequence typing, multilocus variable number of tandem repeats analysis, and whole-genome sequencing to identify strains based on their DNA sequence. These data can be used for cluster, network and phylogenetic analyses, and are used to unveil transmission networks, risk groups, and evolutionary pathways. High-resolution typing of C. trachomatis strains is applied to monitor treatment efficacy and re-infections, and to study the recent emergence of lymphogranuloma venereum (LGV) amongst men who have sex with men in high-income countries. Chlamydia strain typing has clinical relevance in disease management, as LGV needs longer treatment than non-LGV C. trachomatis. It has also led to the discovery of a new variant Chlamydia strain in Sweden, which was not detected by some commercial C. trachomatis diagnostic platforms. After a brief history and comparison of the various Chlamydia typing methods, the applications of the current techniques are described and future endeavors to extend scientific understanding are formulated. High-resolution typing will likely help to further unravel the pathophysiological mechanisms behind the wide clinical spectrum of chlamydial disease.

  6. High tracking resolution detectors. Final Technical Report

    International Nuclear Information System (INIS)

    Vasile, Stefan; Li, Zheng

    2010-01-01

    High-resolution tracking detectors based on Active Pixel Sensors (APS) have been valuable tools in nuclear physics and high-energy physics research, and have contributed to major discoveries. Their integration time, radiation length and readout rate are limiting factors for the planned luminosity upgrades in nuclear and high-energy physics collider-based experiments. The goal of this program was to demonstrate and develop high-gain, high-resolution tracking detector arrays with faster readout and shorter radiation length than APS arrays. These arrays may operate as direct charged-particle detectors or as readouts of high resolution scintillating fiber arrays. During this program, we developed in CMOS large, high-resolution pixel sensor arrays with integrated readout and reset at the pixel level. Their intrinsic gain and high immunity to surface and moisture damage will allow operating these detectors with minimal packaging/passivation requirements and will result in a radiation length superior to that of APS. In Phase I, we designed and fabricated arrays with calorimetric output capable of sub-pixel resolution and sub-microsecond readout rates. The technical effort was dedicated to detector and readout structure development and performance verification, as well as to radiation damage and damage annealing.

  7. EGS4CYL: a Monte Carlo simulation method for PET or SPECT equipment at high spatial resolution

    International Nuclear Information System (INIS)

    Ferriani, S.; Galli, M.

    1995-11-01

    This report describes a Monte Carlo method for the simulation of PET or SPECT equipment. The method is based on the EGS4CYL code. This work was done in the framework of the Hirespet collaboration for the development of a high-spatial-resolution tomograph, and the method will be used in the design of the tomograph. The geometry treated consists of a set of coaxial cylinders surrounded by a ring of detectors. The detectors are box-shaped, and a collimator in front of each of them can be included by means of geometrical constraints on the incident particles. An isotropic source is placed at the centre of the system. The EGS4 code is used for particle transport, and the CERN packages HIGZ and HBOOK are used for storing and plotting results.

  8. Ion diode simulation with a finite-volume PIC approach for the numerical solution of the Maxwell-Lorentz system

    Energy Technology Data Exchange (ETDEWEB)

    Munz, C D; Schneider, R; Stein, E; Voss, U [Forschungszentrum Karlsruhe (Germany). Institut fuer Neutronenphysik und Reaktortechnik; Westermann, T [FH Karlsruhe (Germany). Fachbereich Naturwissenschaften; Krauss, M [Forschungszentrum Karlsruhe (Germany). Hauptabteilung Informations- und Kommunikationstechik

    1997-12-31

    The numerical concept realized in the Karlsruhe Diode Code KADI2D is briefly reviewed. Several new aspects concerning the Maxwell field solver based on high resolution finite-volume methods are presented. A new approach that maintains charge conservation numerically for the Maxwell-Lorentz equations is briefly summarized. (author). 2 figs., 12 refs.

  9. Ion diode simulation with a finite-volume PIC approach for the numerical solution of the Maxwell-Lorentz system

    International Nuclear Information System (INIS)

    Munz, C.D.; Schneider, R.; Stein, E.; Voss, U.; Westermann, T.; Krauss, M.

    1996-01-01

    The numerical concept realized in the Karlsruhe Diode Code KADI2D is briefly reviewed. Several new aspects concerning the Maxwell field solver based on high resolution finite-volume methods are presented. A new approach that maintains charge conservation numerically for the Maxwell-Lorentz equations is briefly summarized. (author). 2 figs., 12 refs

  10. Hybrid flux splitting schemes for numerical resolution of two-phase flows

    Energy Technology Data Exchange (ETDEWEB)

    Flaatten, Tore

    2003-07-01

    This thesis deals with the construction of numerical schemes for approximating solutions to a hyperbolic two-phase flow model. Numerical schemes for hyperbolic models are commonly divided into two main classes: Flux Vector Splitting (FVS) schemes, which are based on scalar computations, and Flux Difference Splitting (FDS) schemes, which are based on matrix computations. FVS schemes are more efficient than FDS schemes, but FDS schemes are more accurate. The canonical FDS schemes are the approximate Riemann solvers, which are based on a local decomposition of the system into its full wave structure. In this thesis the mathematical structure of the model is exploited to construct a class of hybrid FVS/FDS schemes, denoted Mixture Flux (MF) schemes. This approach is based on a splitting of the system into two components associated with the pressure and volume fraction variables, respectively, and builds upon hybrid FVS/FDS schemes previously developed for one-phase flow models. Through analysis and numerical experiments it is demonstrated that the MF approach provides several desirable features, including (1) improved efficiency compared to standard approximate Riemann solvers, (2) robustness under stiff conditions, and (3) accuracy on linear and nonlinear phenomena. In particular it is demonstrated that the framework allows for an efficient weakly implicit implementation, focusing on an accurate resolution of slow transients relevant for the petroleum industry. (author)
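    The FVS idea, scalar wave-speed estimates in place of a full Riemann decomposition, can be sketched on a single scalar conservation law. This is a generic illustration (the Rusanov, or local Lax-Friedrichs, flux applied to Burgers' equation), not the thesis's two-phase Mixture Flux scheme; grid and time step are arbitrary.

```python
import numpy as np

def rusanov_step(u, dx, dt):
    # One explicit step for Burgers' equation u_t + (u^2/2)_x = 0 with
    # the Rusanov flux: only scalar wave-speed bounds, no flux Jacobians.
    f = 0.5 * u**2
    ur, fr = np.roll(u, -1), np.roll(f, -1)        # right-neighbor states
    a = np.maximum(np.abs(u), np.abs(ur))          # local max wave speed
    flux = 0.5 * (f + fr) - 0.5 * a * (ur - u)     # numerical flux at i+1/2
    return u - dt / dx * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where(x < 0.5, 1.0, 0.0)                    # shock initial data
for _ in range(60):                                # CFL = dt/dx * max|u| = 0.4
    u = rusanov_step(u, 1.0 / 200, 0.002)
print(round(float(u.sum()) / 200, 6))              # mean value is conserved
```

The scheme is diffusive (the FVS trait the thesis's hybrid approach trades against FDS accuracy) but conservative and robust at the shock.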

  11. Localization-based super-resolution imaging meets high-content screening.

    Science.gov (United States)

    Beghin, Anne; Kechkar, Adel; Butler, Corey; Levet, Florian; Cabillic, Marine; Rossier, Olivier; Giannone, Gregory; Galland, Rémi; Choquet, Daniel; Sibarita, Jean-Baptiste

    2017-12-01

    Single-molecule localization microscopy techniques have proven to be essential tools for quantitatively monitoring biological processes at unprecedented spatial resolution. However, these techniques are very low throughput and are not yet compatible with fully automated, multiparametric cellular assays. This shortcoming is primarily due to the huge amount of data generated during imaging and the lack of software for automation and dedicated data mining. We describe an automated quantitative single-molecule-based super-resolution methodology that operates in standard multiwell plates and uses analysis based on high-content screening and data-mining software. The workflow is compatible with fixed- and live-cell imaging and allows extraction of quantitative data like fluorophore photophysics, protein clustering or dynamic behavior of biomolecules. We demonstrate that the method is compatible with high-content screening using 3D dSTORM and DNA-PAINT based super-resolution microscopy as well as single-particle tracking.

  12. Robust Hydrological Forecasting for High-resolution Distributed Models Using a Unified Data Assimilation Approach

    Science.gov (United States)

    Hernandez, F.; Liang, X.

    2017-12-01

    Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting for high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it is able to outperform those obtained by assimilating the observations with a particle filter or an evolutionary 4D variational method.
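    The particle-filter ingredient of such a data assimilation scheme can be sketched on a toy one-state linear reservoir, where discharge observations update an uncertain storage state. The model, noise levels, and parameters below are invented for illustration and are unrelated to OPTIMISTS itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "hydrological" truth: storage S follows a linear reservoir,
# S <- S - k*S + forcing; we observe discharge q = k*S with noise.
k, n_steps, n_part = 0.3, 40, 500
S_true, obs = 10.0, []
for _ in range(n_steps):
    S_true = S_true - k * S_true + 1.0
    obs.append(k * S_true + rng.normal(0.0, 0.05))   # noisy discharge obs

# Bootstrap particle filter: propagate, weight by likelihood, resample.
S = rng.uniform(0.0, 20.0, n_part)                   # uncertain initial state
for z in obs:
    S = S - k * S + 1.0 + rng.normal(0.0, 0.1, n_part)   # process noise
    w = np.exp(-0.5 * ((k * S - z) / 0.05) ** 2)         # obs likelihood
    w /= w.sum()
    S = S[rng.choice(n_part, n_part, p=w)]               # resample

print(round(float(S.mean()), 2), round(S_true, 2))
```

The ensemble mean tracks the true storage; the paper's contribution lies in replacing this plain resampling step with a multi-objective, variationally informed analysis for high-dimensional distributed states.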

  13. Numerical methods used in simulation

    International Nuclear Information System (INIS)

    Caseau, Paul; Perrin, Michel; Planchard, Jacques

    1978-01-01

    The fundamental numerical problem posed by simulation problems is the stability of the resolution diagram. The system of the most used equations is defined, since there is a family of models of increasing complexity with 3, 4 or 5 equations, although only models with 3 and 4 equations have been used extensively. After defining what is meant by explicit or implicit, the best-established stability results are given, first for one-dimensional problems and then for two-dimensional problems. It is shown that two types of discretisation may be defined: four- and eight-point diagrams (in one or two dimensions) and six- and ten-point diagrams (in one or two dimensions). Finally, some results are given on problems that are not usually treated very much, i.e. non-asymptotic stability and the stability of diagrams based on finite elements [fr

  14. A Numerical Model for Trickle Bed Reactors

    Science.gov (United States)

    Propp, Richard M.; Colella, Phillip; Crutchfield, William Y.; Day, Marcus S.

    2000-12-01

    Trickle bed reactors are governed by equations of flow in porous media such as Darcy's law and the conservation of mass. Our numerical method for solving these equations is based on a total-velocity splitting, sequential formulation which leads to an implicit pressure equation and a semi-implicit mass conservation equation. We use high-resolution finite-difference methods to discretize these equations. Our solution scheme extends previous work in modeling porous media flows in two ways. First, we incorporate physical effects due to capillary pressure, a nonlinear inlet boundary condition, spatial porosity variations, and inertial effects on phase mobilities. In particular, capillary forces introduce a parabolic component into the recast evolution equation, and the inertial effects give rise to hyperbolic nonconvexity. Second, we introduce a modification of the slope-limiting algorithm to prevent our numerical method from producing spurious shocks. We present a numerical algorithm for accommodating these difficulties, show the algorithm is second-order accurate, and demonstrate its performance on a number of simplified problems relevant to trickle bed reactor modeling.

  15. Bayesian Peptide Peak Detection for High Resolution TOF Mass Spectrometry.

    Science.gov (United States)

    Zhang, Jianqiu; Zhou, Xiaobo; Wang, Honghui; Suffredini, Anthony; Zhang, Lin; Huang, Yufei; Wong, Stephen

    2010-11-01

    In this paper, we address the issue of peptide ion peak detection for high resolution time-of-flight (TOF) mass spectrometry (MS) data. A novel Bayesian peptide ion peak detection method is proposed for TOF data with a resolution of 10 000-15 000 full width at half-maximum (FWHM). MS spectra exhibit distinct characteristics at this resolution, which are captured in a novel parametric model. Based on the proposed parametric model, a Bayesian peak detection algorithm based on Markov chain Monte Carlo (MCMC) sampling is developed. The proposed algorithm is tested on both simulated and real datasets. The results show a significant improvement in detection performance over a commonly employed method. The results also agree with experts' visual inspection. Moreover, better detection consistency is achieved across MS datasets from patients with identical pathological conditions.
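    A minimal sketch of MCMC-based peak estimation in the same spirit: a random-walk Metropolis sampler for the center of a single synthetic Gaussian peak. The peak shape, noise model, and flat prior are simplifications, not the paper's isotope-pattern parametric model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic high-resolution peak on an m/z axis (all values assumed).
mz = np.linspace(999.0, 1001.0, 400)
mu_true, sigma, amp = 1000.1, 0.05, 50.0
y = amp * np.exp(-0.5 * ((mz - mu_true) / sigma) ** 2) + rng.normal(0, 1.0, 400)

def log_post(mu):
    # Gaussian-noise log-likelihood with a flat prior on the peak center
    model = amp * np.exp(-0.5 * ((mz - mu) / sigma) ** 2)
    return -0.5 * np.sum((y - model) ** 2)

# Random-walk Metropolis sampling of the peak-center posterior
mu, lp, samples = 1000.0, log_post(1000.0), []
for _ in range(4000):
    prop = mu + rng.normal(0.0, 0.01)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        mu, lp = prop, lp_prop
    samples.append(mu)

est = float(np.mean(samples[1000:]))               # posterior mean after burn-in
print(round(est, 3))
```

The posterior samples concentrate tightly around the true center; a full detector would sample peak count, positions, and amplitudes jointly.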

  16. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations, and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by performing computations of resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)

  17. Design of heat exchangers by numerical methods

    International Nuclear Information System (INIS)

    Konuk, A.A.

    1981-01-01

    Differential equations describing the heat transfer in shell-and-tube heat exchangers are derived and solved numerically. The ΔT_lm (log-mean temperature difference) method is compared with the proposed method in cases where the specific heat at constant pressure, Cp, and the overall heat transfer coefficient, U, vary with temperature. The error of the ΔT_lm method for the computation of the exchanger length is less than +10%. However, the numerical method, being more accurate and at the same time easy to use and economical, is recommended for the design of shell-and-tube heat exchangers. (Author) [pt
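    The ΔT_lm baseline against which the numerical method is compared can be made concrete with a small worked example, ΔT_lm = (ΔT1 - ΔT2) / ln(ΔT1 / ΔT2) and Q = U · A · ΔT_lm. The temperatures, duty, and heat transfer coefficient below are assumed for illustration only.

```python
import math

def lmtd(dt1, dt2):
    # Log-mean temperature difference between the two exchanger ends
    if abs(dt1 - dt2) < 1e-12:
        return dt1                      # limit as dt1 -> dt2
    return (dt1 - dt2) / math.log(dt1 / dt2)

# Counter-flow example with assumed temperatures:
# hot stream 150 -> 90 C, cold stream 30 -> 70 C.
dt_hot_end, dt_cold_end = 150 - 70, 90 - 30        # 80 C and 60 C
dT_lm = lmtd(dt_hot_end, dt_cold_end)

# Required area for an assumed duty Q and coefficient U: Q = U * A * dT_lm
U, Q = 500.0, 200e3                                # W/(m^2 K), W
A = Q / (U * dT_lm)
print(round(dT_lm, 2), round(A, 2))
```

When Cp and U vary with temperature, this constant-property formula is exactly where the paper's numerical integration gains its accuracy.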

  18. Numerical methods for semiconductor heterostructures with band nonparabolicity

    International Nuclear Information System (INIS)

    Wang Weichung; Hwang Tsungmin; Lin Wenwei; Liu Jinnliang

    2003-01-01

    This article presents numerical methods for computing bound state energies and associated wave functions of three-dimensional semiconductor heterostructures, with special interest in the numerical treatment of the effect of band nonparabolicity. A nonuniform finite difference method is presented to approximate a model of a cylindrical-shaped semiconductor quantum dot embedded in another semiconductor matrix. A matrix reduction method is then proposed to dramatically reduce huge eigenvalue systems to relatively very small subsystems. Moreover, the nonparabolic band structure results in a cubic type of nonlinear eigenvalue problem, for which a cubic Jacobi-Davidson method with an explicit nonequivalence deflation method is proposed to compute all the desired eigenpairs. Numerical results are given to illustrate the spectrum of energy levels and the corresponding wave functions in considerable detail.

  19. Ultra high resolution imaging of the human head at 8 tesla: 2K x 2K for Y2K.

    Science.gov (United States)

    Robitaille, P M; Abduljalil, A M; Kangarlu, A

    2000-01-01

    To acquire ultra high resolution MR images of the human brain at 8 Tesla within a clinically acceptable time frame. Gradient echo images were acquired from the human head of normal subjects using a transverse electromagnetic resonator operating in quadrature and tuned to 340 MHz. In each study, a group of six images was obtained containing a total of 208 MB of unprocessed information. Typical acquisition parameters were as follows: matrix = 2,000 x 2,000, field of view = 20 cm, slice thickness = 2 mm, number of excitations (NEX) = 1, flip angle = 45 degrees, TR = 750 ms, TE = 17 ms, receiver bandwidth = 69.4 kHz. This resulted in a total scan time of 23 minutes, an in-plane resolution of 100 µm, and a pixel volume of 0.02 mm³. The ultra high resolution images acquired in this study represent more than a 50-fold increase in in-plane resolution relative to conventional 256 x 256 images obtained with a 20 cm field of view and a 5 mm slice thickness. Nonetheless, the ultra high resolution images could be acquired with both adequate image quality and signal-to-noise ratio. They revealed numerous small venous structures throughout the image plane and provided reasonable delineation between gray and white matter. The elevated signal-to-noise ratio observed in ultra high field magnetic resonance imaging can be utilized to acquire images with a level of resolution approaching the histological level under in vivo conditions. However, brain motion is likely to degrade the useful resolution. This situation may be remedied in part with cardiac gating. Nonetheless, these images represent a significant advance in our ability to examine small anatomical features with noninvasive imaging methods.
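    The quoted resolution figures follow directly from the acquisition parameters in the abstract; a quick check of the arithmetic:

```python
# Resolution arithmetic from the stated parameters: 2000 x 2000 matrix,
# 20 cm field of view, 2 mm slice thickness, compared against 256 x 256.
fov_mm, matrix, slice_mm = 200.0, 2000, 2.0

in_plane_um = fov_mm / matrix * 1000            # pixel edge in micrometres
voxel_mm3 = (fov_mm / matrix) ** 2 * slice_mm   # in-plane pixel volume
area_gain = (matrix / 256) ** 2                 # in-plane area ratio vs 256^2

print(in_plane_um, round(voxel_mm3, 3), round(area_gain, 1))
```

This reproduces the 100 µm in-plane resolution and 0.02 mm³ pixel volume, and shows the "more than 50-fold" claim corresponds to roughly a 61-fold in-plane area gain over a 256 x 256 matrix.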

  20. High-resolution electron microscopy

    CERN Document Server

    Spence, John C H

    2013-01-01

    This new fourth edition of the standard text on atomic-resolution transmission electron microscopy (TEM) retains previous material on the fundamentals of electron optics and aberration correction, linear imaging theory (including wave aberrations to fifth order) with partial coherence, and multiple-scattering theory. Also preserved are updated earlier sections on practical methods, with detailed step-by-step accounts of the procedures needed to obtain the highest quality images of atoms and molecules using a modern TEM or STEM electron microscope. Applications sections have been updated; these include the semiconductor industry, superconductor research, solid state chemistry and nanoscience, metallurgy, mineralogy, condensed matter physics, materials science, and cryo-electron microscopy for structural biology. New or expanded sections have been added on electron holography, aberration correction, field-emission guns, imaging filters, super-resolution methods, ptychography, Ronchigrams, and tomography.

  1. Outcomes and challenges of global high-resolution non-hydrostatic atmospheric simulations using the K computer

    Science.gov (United States)

    Satoh, Masaki; Tomita, Hirofumi; Yashiro, Hisashi; Kajikawa, Yoshiyuki; Miyamoto, Yoshiaki; Yamaura, Tsuyoshi; Miyakawa, Tomoki; Nakano, Masuo; Kodama, Chihiro; Noda, Akira T.; Nasuno, Tomoe; Yamada, Yohei; Fukutomi, Yoshiki

    2017-12-01

    This article reviews the major outcomes of a 5-year (2011-2016) project using the K computer to perform global numerical atmospheric simulations based on the non-hydrostatic icosahedral atmospheric model (NICAM). The K computer was made available to the public in September 2012 and was used as a primary resource for Japan's Strategic Programs for Innovative Research (SPIRE), an initiative to investigate five strategic research areas; the NICAM project fell under the research area of climate and weather simulation sciences. Combining NICAM with high-performance computing has created new opportunities in three areas of research: (1) higher resolution global simulations that produce more realistic representations of convective systems, (2) multi-member ensemble simulations that are able to perform extended-range forecasts 10-30 days in advance, and (3) multi-decadal simulations for climatology and variability. Before the K computer era, NICAM was used to demonstrate realistic simulations of intra-seasonal oscillations including the Madden-Julian oscillation (MJO), but only as case studies. Thanks to the big leap in the computational performance of the K computer, we could greatly increase the number of MJO events simulated, in addition to extending integration times and refining horizontal resolution. We conclude that the high-resolution global non-hydrostatic model, as used in this five-year project, improves the ability to forecast intra-seasonal oscillations and associated tropical cyclogenesis compared with the relatively coarse operational models currently in use. The impacts of the sub-kilometer resolution simulations and the multi-decadal simulations using NICAM are also reviewed.

  2. Numerical implementation of the loop-tree duality method

    Energy Technology Data Exchange (ETDEWEB)

    Buchta, Sebastian; Rodrigo, German [Universitat de Valencia-Consejo Superior de Investigaciones Cientificas, Parc Cientific, Instituto de Fisica Corpuscular, Valencia (Spain); Chachamis, Grigorios [Universidad Autonoma de Madrid, Instituto de Fisica Teorica UAM/CSIC, Madrid (Spain); Draggiotis, Petros [Institute of Nuclear and Particle Physics, NCSR ' ' Demokritos' ' , Agia Paraskevi (Greece)

    2017-05-15

    We present a first numerical implementation of the loop-tree duality (LTD) method for the direct numerical computation of multi-leg one-loop Feynman integrals. We discuss in detail the singular structure of the dual integrands and define a suitable contour deformation in the loop three-momentum space to carry out the numerical integration. Then we apply the LTD method to the computation of ultraviolet and infrared finite integrals, and we present explicit results for scalar and tensor integrals with up to eight external legs (octagons). The LTD method features an excellent performance independently of the number of external legs. (orig.)

  3. Mesoscale spiral vortex embedded within a Lake Michigan snow squall band - High resolution satellite observations and numerical model simulations

    Science.gov (United States)

    Lyons, Walter A.; Keen, Cecil S.; Hjelmfelt, Mark; Pease, Steven R.

    1988-01-01

    It is known that Great Lakes snow squall convection occurs in a variety of different modes depending on various factors such as air-water temperature contrast, boundary-layer wind shear, and geostrophic wind direction. An exceptional and often neglected source of data for mesoscale cloud studies is the ultrahigh resolution multispectral data produced by Landsat satellites. On October 19, 1972, a clearly defined spiral vortex was noted in a Landsat-1 image near the southern end of Lake Michigan during an exceptionally early cold air outbreak over a still very warm lake. In a numerical simulation using a three-dimensional Eulerian hydrostatic primitive equation mesoscale model with an initially uniform wind field, a definite analog to the observed vortex was generated. This suggests that intense surface heating can be a principal cause in the development of a low-level mesoscale vortex.

  4. Performance Evaluations for Super-Resolution Mosaicing on UAS Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Aldo Camargo

    2013-05-01

    Unmanned Aircraft Systems (UAS) have been widely applied for reconnaissance and surveillance by exploiting information collected from the digital imaging payload. The super-resolution (SR) mosaicing of low-resolution (LR) UAS surveillance video frames has become a critical requirement for UAS video processing and is important for further effective image understanding. In this paper we develop a novel super-resolution framework which does not require the construction of sparse matrices. The proposed method implements image operations in the spatial domain and applies an iterated back-projection to construct super-resolution mosaics from the overlapping UAS surveillance video frames. The Steepest Descent method, the Conjugate Gradient method and the Levenberg-Marquardt algorithm are used to numerically solve the nonlinear optimization problem for estimating a super-resolution mosaic. A quantitative performance comparison in terms of computation time and visual quality of the super-resolution mosaics through the three numerical techniques is presented.
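
    The iterated back-projection idea can be sketched in 1D: project the current high-resolution estimate through an assumed imaging model, then back-project the residual. The block-average imaging model and unit step size below are illustrative assumptions, not the paper's mosaicing pipeline:

```python
import numpy as np

def downsample(hr, factor):
    """Assumed LR imaging model: block-average then decimate (1D)."""
    return hr.reshape(-1, factor).mean(axis=1)

def iterated_back_projection(lr, factor, n_iter=25, step=1.0):
    """Toy 1D iterated back-projection: repeatedly project the current
    HR estimate to LR space and back-project the residual error."""
    hr = np.repeat(lr, factor)                  # initial HR guess
    for _ in range(n_iter):
        residual = lr - downsample(hr, factor)  # error in LR space
        hr = hr + step * np.repeat(residual, factor)
    return hr

lr = np.array([1.0, 4.0, 2.0])
hr = iterated_back_projection(lr, factor=2)
# the recovered HR signal is consistent with the LR observation
print(np.allclose(downsample(hr, 2), lr))  # -> True
```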

  5. The development of high resolution silicon x-ray microcalorimeters

    Science.gov (United States)

    Porter, F. S.; Kelley, R. L.; Kilbourne, C. A.

    2005-12-01

    Recently we have produced x-ray microcalorimeters with resolving powers approaching 2000 at 5.9 keV using a spare XRS microcalorimeter array. We attached 400 μm square, 8 μm thick HgTe absorbers using a variety of attachment methods to an XRS array and ran the detector array at temperatures between 40 and 60 mK. The best results were for absorbers attached using the standard XRS absorber-pixel thermal isolation scheme utilizing SU8 polymer tubes. In this scenario we achieved a resolution of 3.2 eV FWHM at 5.9 keV. Substituting a silicon spacer for the SU8 tubes also yielded sub-4 eV results. In contrast, absorbers attached directly to the thermistor produced significant position dependence and thus degraded resolution. Finally, we tested standard 640 μm square XRS detectors at reduced bias power at 50 mK and achieved a resolution of 3.7 eV, a 50% improvement over the XRS flight instrument. Implanted silicon microcalorimeters are a mature flight-qualified technology that still has a substantial phase space for future development. We will discuss these new high resolution results, the various absorber attachment schemes, planned future improvements, and, finally, their relevance to future high resolution x-ray spectrometers including Constellation-X.

  6. High resolution SAW elastography for ex-vivo porcine skin specimen

    Science.gov (United States)

    Zhou, Kanheng; Feng, Kairui; Wang, Mingkai; Jamera, Tanatswa; Li, Chunhui; Huang, Zhihong

    2018-02-01

    Surface acoustic wave (SAW) elastography has been proven to be a non-invasive, non-destructive method for accurately characterizing tissue elastic properties. The current SAW elastography technique tracks the generated surface acoustic wave impulse point by point, at measurement points spaced a few millimeters apart; the reconstructed elastogram therefore has low lateral resolution. To improve the lateral resolution of current SAW elastography, a new method was proposed in this research. An M-B scan mode, high spatial resolution phase-sensitive optical coherence tomography (PhS-OCT) system was employed to track the ultrasonically induced SAW impulse. An ex-vivo porcine skin specimen was tested using this proposed method. A 2D fast Fourier transform based algorithm was applied to process the acquired data to estimate the surface acoustic wave dispersion curve and its corresponding penetration depth. The ex-vivo porcine skin elastogram was then established by relating the surface acoustic wave dispersion curve to its corresponding penetration depth. The result from the proposed method shows higher lateral resolution than that from the current SAW elastography technique, and the approximated skin elastogram could also distinguish the different layers in the skin specimen, i.e. epidermis, dermis and fat layer. This proposed SAW elastography technique may have a large potential to be widely applied in clinical use for skin disease diagnosis and treatment monitoring.
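
    The 2D-FFT step can be illustrated on synthetic data: transforming a space-time wavefield to the frequency-wavenumber plane, where spectral ridges trace the phase-velocity dispersion curve c(f) = f/k. This is a generic f-k sketch with made-up sampling parameters, not the authors' processing chain:

```python
import numpy as np

def fk_spectrum(u, dx, dt):
    """2D FFT of a space-time wavefield u[x, t]; ridges in the
    amplitude spectrum trace the dispersion curve c(f) = f / k."""
    U = np.abs(np.fft.fft2(u))
    k = np.fft.fftfreq(u.shape[0], d=dx)   # wavenumber, cycles/m
    f = np.fft.fftfreq(u.shape[1], d=dt)   # frequency, Hz
    return U, k, f

# Synthetic non-dispersive wave at c = 200 m/s as a sanity check.
dx, dt, c, f0 = 0.5, 1e-3, 200.0, 50.0
x = np.arange(64)[:, None] * dx
t = np.arange(200)[None, :] * dt
u = np.sin(2 * np.pi * f0 * (t - x / c))
U, k, f = fk_spectrum(u, dx, dt)
ix, it = np.unravel_index(np.argmax(U), U.shape)
print(abs(f[it]), abs(f[it] / k[ix]))  # ~50.0 Hz, ~200.0 m/s
```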

  7. High-frequency Rayleigh-wave method

    Science.gov (United States)

    Xia, J.; Miller, R.D.; Xu, Y.; Luo, Y.; Chen, C.; Liu, J.; Ivanov, J.; Zeng, C.

    2009-01-01

    High-frequency (≥2 Hz) Rayleigh-wave data acquired with a multichannel recording system have been utilized to determine shear (S)-wave velocities in near-surface geophysics since the early 1980s. This overview article discusses the main research results of high-frequency surface-wave techniques achieved by research groups at the Kansas Geological Survey and China University of Geosciences in the last 15 years. The multichannel analysis of surface waves (MASW) method is a non-invasive acoustic approach to estimating near-surface S-wave velocity. The differences between MASW results and direct borehole measurements are approximately 15% or less and random. Studies show that simultaneous inversion of higher modes and the fundamental mode can increase model resolution and investigation depth. The other important seismic property, quality factor (Q), can also be estimated with the MASW method by inverting attenuation coefficients of Rayleigh waves. An inverted model (S-wave velocity or Q) obtained using a damped least-squares method can be assessed by an optimal damping vector in a vicinity of the inverted model determined by an objective function, which is the trace of a weighted sum of model-resolution and model-covariance matrices. Current developments include modeling high-frequency Rayleigh waves in near-surface media, which builds a foundation for shallow seismic or Rayleigh-wave inversion in the time-offset domain; imaging dispersive energy with high resolution in the frequency-velocity domain, possibly with data in an arbitrary acquisition geometry, which opens a door for 3D surface-wave techniques; and successfully separating surface-wave modes, which provides a valuable tool for S-wave velocity profiling with high horizontal resolution. © China University of Geosciences (Wuhan) and Springer-Verlag GmbH 2009.

  8. Performance of a high resolution cavity beam position monitor system

    Science.gov (United States)

    Walston, Sean; Boogert, Stewart; Chung, Carl; Fitsos, Pete; Frisch, Joe; Gronberg, Jeff; Hayano, Hitoshi; Honda, Yosuke; Kolomensky, Yury; Lyapin, Alexey; Malton, Stephen; May, Justin; McCormick, Douglas; Meller, Robert; Miller, David; Orimoto, Toyoko; Ross, Marc; Slater, Mark; Smith, Steve; Smith, Tonee; Terunuma, Nobuhiro; Thomson, Mark; Urakawa, Junji; Vogel, Vladimir; Ward, David; White, Glen

    2007-07-01

    It has been estimated that an RF cavity Beam Position Monitor (BPM) could provide a position measurement resolution of less than 1 nm. We have developed a high resolution cavity BPM and associated electronics. A triplet comprised of these BPMs was installed in the extraction line of the Accelerator Test Facility (ATF) at the High Energy Accelerator Research Organization (KEK) for testing with its ultra-low emittance beam. The three BPMs were each rigidly mounted inside an alignment frame on six variable-length struts which could be used to move the BPMs in position and angle. We have developed novel methods for extracting the position and tilt information from the BPM signals including a robust calibration algorithm which is immune to beam jitter. To date, we have demonstrated a position resolution of 15.6 nm and a tilt resolution of 2.1 μrad over a dynamic range of approximately ±20 μm.

  9. A high resolution portable spectroscopy system

    International Nuclear Information System (INIS)

    Kulkarni, C.P.; Vaidya, P.P.; Paulson, M.; Bhatnagar, P.V.; Pande, S.S.; Padmini, S.

    2003-01-01

    This paper describes the system details of a High Resolution Portable Spectroscopy System (HRPSS) developed at the Electronics Division, BARC. The system can be used for laboratory-class, high-resolution nuclear spectroscopy applications. The HRPSS consists of a specially designed compact NIM bin, with built-in power supplies, accommodating a low-power, high-resolution MCA and an on-board embedded computer for spectrum building and communication. A NIM-based spectroscopy amplifier and an HV module for detector bias are integrated (plug-in) in the bin. The system communicates with a host PC via a serial link. Along with a laptop PC and a portable HP-Ge detector, the HRPSS offers laboratory-class performance for portable applications

  10. Automated method for relating regional pulmonary structure and function: integration of dynamic multislice CT and thin-slice high-resolution CT

    Science.gov (United States)

    Tajik, Jehangir K.; Kugelmass, Steven D.; Hoffman, Eric A.

    1993-07-01

    We have developed a method utilizing x-ray CT for relating pulmonary perfusion to global and regional anatomy, allowing for detailed study of structure-function relationships. A thick-slice, high temporal resolution mode is used to follow a bolus of contrast agent for blood flow evaluation and is fused with a high spatial resolution, thin-slice mode to obtain structure-function detail. To aid analysis of blood flow, we have developed a software module for our image analysis package (VIDA) to produce the combined structure-function image. Color-coded images representing blood flow, mean transit time, regional tissue content, regional blood volume, regional air content, etc. are generated and embedded in the high resolution volume image. A text file containing these values along with each voxel's 3-D coordinates is also generated. User input can be minimized to identifying the location of the pulmonary artery from which the input function to a blood flow model is derived. Any flow model utilizing one input and one output function can be easily added to a user-selectable list. We present examples from our physiologically based research findings to demonstrate the strengths of combining dynamic CT and HRCT, relative to other scanning modalities, in uniquely characterizing normal pulmonary physiology and pathophysiology.

  11. Microbeam high-resolution diffraction and x-ray standing wave methods applied to semiconductor structures

    International Nuclear Information System (INIS)

    Kazimirov, A; Bilderback, D H; Huang, R; Sirenko, A; Ougazzaden, A

    2004-01-01

    A new approach to conditioning x-ray microbeams for high angular resolution x-ray diffraction and scattering techniques is introduced. We combined focusing optics (a one-bounce imaging capillary) and post-focusing collimating optics (a miniature Si(004) channel-cut crystal) to generate an x-ray microbeam with a size of 10 μm and an ultimate angular resolution of 14 μrad. The microbeam was used to analyse the strain in sub-micron thick InGaAsP epitaxial layers grown on an InP(100) substrate by the selective area growth technique in narrow openings between the oxide stripes. For the structures for which the diffraction peaks from the substrate and the film overlap, the x-ray standing wave technique was applied for precise measurements of the strain with a Δd/d resolution of better than 10⁻⁴. (rapid communication)

  12. High Resolution Atmospheric Modeling for Wind Energy Applications

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, M; Bulaevskaya, V; Glascoe, L; Singer, M

    2010-03-18

    The ability of the WRF atmospheric model to forecast wind speed over the Nysted wind park was investigated as a function of time. It was found that in the time period we considered (August 1-19, 2008), the model is able to predict wind speeds reasonably accurately up to 48 hours ahead, but that its forecast skill deteriorates rapidly after 48 hours. In addition, a preliminary analysis was carried out to investigate the impact of vertical grid resolution on the forecast skill. Our preliminary finding is that increasing vertical grid resolution does not have a significant impact on the forecast skill of the WRF model over the Nysted wind park during the period we considered. Additional simulations during this period, as well as during other time periods, will be run in order to validate the results presented here. Wind speed is a difficult parameter to forecast due to the interaction of large and small length-scale forcing. To accurately forecast the wind speed at a given location, the model must correctly forecast the movement and strength of synoptic systems, as well as the local influence of topography and land use on the wind speed. For example, small deviations in the forecast track or strength of a large-scale low pressure system can result in significant forecast errors for local wind speeds. The purpose of this study is to provide a preliminary baseline of high-resolution limited-area model forecast performance against observations from the Nysted wind park. Validating the numerical weather prediction model's performance on past forecasts will give a reasonable measure of expected forecast skill over the Nysted wind park. Also, since the Nysted wind park is over water and some distance from the influence of terrain, the impact of high vertical grid spacing on wind speed forecast skill will also be investigated.
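
    The kind of lead-time-dependent verification described above can be sketched as a simple binned error score (an illustrative metric with made-up numbers, not the study's actual verification code):

```python
import numpy as np

def rmse_by_lead_time(forecast, observed, lead_hours):
    """RMSE of forecast wind speed grouped by lead time; deteriorating
    skill shows up as RMSE growing with the lead."""
    scores = {}
    for lead in np.unique(lead_hours):
        m = lead_hours == lead
        scores[int(lead)] = float(np.sqrt(np.mean((forecast[m] - observed[m]) ** 2)))
    return scores

# Toy example: perfect at 24 h, a large miss at 48 h.
forecast = np.array([8.0, 7.5, 9.0, 12.0])
observed = np.array([8.0, 7.5, 9.0, 8.0])
lead = np.array([24, 24, 48, 48])
scores = rmse_by_lead_time(forecast, observed, lead)
print(scores)
```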

  13. Numerical methods for metamaterial design

    CERN Document Server

    2013-01-01

    This book describes a relatively new approach for the design of electromagnetic metamaterials.  Numerical optimization routines are combined with electromagnetic simulations to tailor the broadband optical properties of a metamaterial to have predetermined responses at predetermined wavelengths. After a review of both the major efforts within the field of metamaterials and the field of mathematical optimization, chapters covering both gradient-based and derivative-free design methods are considered.  Selected topics including surrogate-base optimization, adaptive mesh search, and genetic algorithms are shown to be effective, gradient-free optimization strategies.  Additionally, new techniques for representing dielectric distributions in two dimensions, including level sets, are demonstrated as effective methods for gradient-based optimization.  Each chapter begins with a rigorous review of the optimization strategy used, and is followed by numerous examples that combine the strategy with either electromag...

  14. Quadruplex MAPH: improvement of throughput in high-resolution copy number screening.

    Science.gov (United States)

    Tyson, Jess; Majerus, Tamsin Mo; Walker, Susan; Armour, John Al

    2009-09-28

    Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms.
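
    The screening logic can be illustrated with a toy copy-number caller operating on normalised probe ratios; the ratio thresholds and function names here are our own illustrative assumptions, not the published QuadMAPH analysis:

```python
import numpy as np

def call_copy_number(probe_signal, reference_signal, diploid=2,
                     lower=0.75, upper=1.25):
    """Illustrative MAPH-style calling: normalise each probe against a
    reference, then threshold the ratio. Thresholds are assumptions."""
    ratio = probe_signal / reference_signal
    calls = np.full(ratio.shape, diploid)
    calls[ratio < lower] = diploid - 1      # heterozygous deletion
    calls[ratio > upper] = diploid + 1      # duplication
    return ratio, calls

# Three probes: normal, deleted, duplicated.
ratio, calls = call_copy_number(np.array([1.0, 0.5, 1.5]), np.ones(3))
print(calls)  # -> [2 1 3]
```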

  15. High-Resolution Remotely Sensed Small Target Detection by Imitating Fly Visual Perception Mechanism

    Directory of Open Access Journals (Sweden)

    Fengchen Huang

    2012-01-01

    The difficulty and limitations of small target detection methods for high-resolution remote sensing data have recently become a research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages for fast and accurate small target detection in complex, varying natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing models of the mechanisms behind fly visual systems, we propose a fly-imitated visual system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.

  16. High-resolution remotely sensed small target detection by imitating fly visual perception mechanism.

    Science.gov (United States)

    Huang, Fengchen; Xu, Lizhong; Li, Min; Tang, Min

    2012-01-01

    The difficulty and limitations of small target detection methods for high-resolution remote sensing data have recently become a research hot spot. Inspired by the information capture and processing theory of the fly visual system, this paper endeavors to construct a characterized model of information perception and to exploit its advantages for fast and accurate small target detection in complex, varying natural environments. The proposed model forms a theoretical basis of small target detection for high-resolution remote sensing data. After comparing prevailing models of the mechanisms behind fly visual systems, we propose a fly-imitated visual system method of information processing for high-resolution remote sensing data. A small target detector and a corresponding detection algorithm are designed by simulating the mechanisms of information acquisition, compression, and fusion in the fly visual system, the function of the pool cell, and its nonlinear self-adaptive character. Experiments verify the feasibility and rationality of the proposed small target detection model and the fly-imitated visual perception method.

  17. Numerical method for the eigenvalue problem and the singular equation by using the multi-grid method and application to ordinary differential equation

    International Nuclear Information System (INIS)

    Kanki, Takashi; Uyama, Tadao; Tokuda, Shinji.

    1995-07-01

    In the numerical method used to compute the matching data which are necessary for resistive MHD stability analyses, it is required to solve an eigenvalue problem and the associated singular equation. An iterative method is developed to solve the eigenvalue problem and the singular equation. In this method, the eigenvalue problem is replaced with an equivalent nonlinear equation, and a singular equation is derived from Newton's method for the nonlinear equation. The multi-grid method (MGM), a high-speed iterative method, can be applied to this method. The convergence of the eigenvalue and the eigenvector, and the CPU time of this method, are investigated for a model equation. It is confirmed from the numerical results that this method is effective for solving the eigenvalue problem and the singular equation with numerical stability and high accuracy. By improving the MGM, the CPU time of this method is shown to be 50 times shorter than that of the direct method. (author)
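
    The replacement of an eigenvalue problem by an equivalent nonlinear equation can be illustrated generically: seek (x, λ) with A x = λ x plus a normalisation c·x = 1, and apply Newton's method to the combined (bordered) system. This is a textbook sketch with a dense solver, not the authors' MHD formulation or their multi-grid solver:

```python
import numpy as np

def newton_eigen(A, x0, lam0, n_iter=20, tol=1e-12):
    """Newton's method on F(x, lam) = [A x - lam x; c.x - 1] = 0.
    The bordered Jacobian [[A - lam I, -x], [c, 0]] stays solvable
    even though A - lam I becomes singular at the eigenvalue."""
    n = len(x0)
    c = x0 / np.dot(x0, x0)          # normalisation functional, c.x0 = 1
    x, lam = x0.astype(float).copy(), float(lam0)
    for _ in range(n_iter):
        F = np.concatenate([A @ x - lam * x, [c @ x - 1.0]])
        if np.linalg.norm(F) < tol:
            break
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * np.eye(n)
        J[:n, n] = -x
        J[n, :n] = c
        delta = np.linalg.solve(J, -F)
        x += delta[:n]
        lam += delta[n]
    return x, lam

A = np.diag([1.0, 2.0, 3.0])
x, lam = newton_eigen(A, np.array([0.1, 1.0, 0.2]), 1.8)
print(round(lam, 6))  # ~2.0
```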

  18. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy

    International Nuclear Information System (INIS)

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R.; Murshudov, Garib N.; Short, Judith M.; Scheres, Sjors H.W.; Henderson, Richard

    2013-01-01

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an
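
    The phase-randomisation variant of the substitution can be sketched in 2D (the paper works with particle image stacks and FSC curves; this toy version only shows the core operation on a single square image, with an assumed radial cutoff):

```python
import numpy as np

def substitute_hr_noise(img, cutoff_frac, seed=0):
    """2D analogue of the HR-noise test: keep Fourier amplitudes but
    randomise phases beyond a chosen resolution shell, so the
    substituted noise has the same spectral power distribution as the
    data. Square image assumed; cutoff_frac in (0, 1]."""
    rng = np.random.default_rng(seed)
    n = img.shape[0]
    F = np.fft.fftshift(np.fft.fft2(img))
    y, x = np.indices(F.shape)
    r = np.hypot(y - n // 2, x - n // 2)
    beyond = r > cutoff_frac * (n // 2)        # outside the cutoff shell
    phases = rng.uniform(0.0, 2.0 * np.pi, F.shape)
    F[beyond] = np.abs(F[beyond]) * np.exp(1j * phases[beyond])
    return np.fft.ifft2(np.fft.ifftshift(F)).real

# Low-resolution content is untouched; only high-resolution phases change.
img = np.random.default_rng(1).normal(size=(33, 33))
noise_substituted = substitute_hr_noise(img, cutoff_frac=0.5)
```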

  19. High-resolution noise substitution to measure overfitting and validate resolution in 3D structure determination by single particle electron cryomicroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Shaoxia; McMullan, Greg; Faruqi, Abdul R.; Murshudov, Garib N.; Short, Judith M.; Scheres, Sjors H.W.; Henderson, Richard, E-mail: rh15@mrc-lmb.cam.ac.uk

    2013-12-15

    Three-dimensional (3D) structure determination by single particle electron cryomicroscopy (cryoEM) involves the calculation of an initial 3D model, followed by extensive iterative improvement of the orientation determination of the individual particle images and the resulting 3D map. Because there is much more noise than signal at high resolution in the images, this creates the possibility of noise reinforcement in the 3D map, which can give a false impression of the resolution attained. The balance between signal and noise in the final map at its limiting resolution depends on the image processing procedure and is not easily predicted. There is a growing awareness in the cryoEM community of how to avoid such over-fitting and over-estimation of resolution. Equally, there has been a reluctance to use the two principal methods of avoidance because they give lower resolution estimates, which some people believe are too pessimistic. Here we describe a simple test that is compatible with any image processing protocol. The test allows measurement of the amount of signal and the amount of noise from overfitting that is present in the final 3D map. We have applied the method to two different sets of cryoEM images of the enzyme beta-galactosidase using several image processing packages. Our procedure involves substituting the Fourier components of the initial particle image stack beyond a chosen resolution by either the Fourier components from an adjacent area of background, or by simple randomisation of the phases of the particle structure factors. This substituted noise thus has the same spectral power distribution as the original data. Comparison of the Fourier Shell Correlation (FSC) plots from the 3D map obtained using the experimental data with that from the same data with high-resolution noise (HR-noise) substituted allows an unambiguous measurement of the amount of overfitting and an accompanying resolution assessment. A simple formula can be used to calculate an

  20. Fast and accurate denoising method applied to very high resolution optical remote sensing images

    Science.gov (United States)

    Masse, Antoine; Lefèvre, Sébastien; Binet, Renaud; Artigues, Stéphanie; Lassalle, Pierre; Blanchet, Gwendoline; Baillarin, Simon

    2017-10-01

    Restoration of Very High Resolution (VHR) optical Remote Sensing Images (RSI) is critical and leads to the problem of removing instrumental noise while keeping the integrity of relevant information. Improving denoising in an image processing chain implies increasing image quality and improving the performance of all following tasks operated by experts (photo-interpretation, cartography, etc.) or by algorithms (land cover mapping, change detection, 3D reconstruction, etc.). In a context of large industrial VHR image production, the selected denoising method should optimize accuracy and robustness, conserving relevant information and saliency, as well as rapidity, due to the huge amount of data acquired and/or archived. Very recent research in image processing has led to a fast and accurate algorithm called Non-Local Bayes (NLB) that we propose to adapt and optimize for VHR RSIs. This method is well suited for mass production thanks to its best trade-off between accuracy and computational complexity compared to other state-of-the-art methods. NLB is based on a simple principle: similar structures in an image have similar noise distributions and thus can be denoised with the same noise estimation. In this paper, we describe the algorithm's operations and performance in detail, and analyze parameter sensitivities on various typical real areas observed in VHR RSIs.
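
    The stated principle (similar structures share a noise estimate) can be illustrated with a deliberately simplified non-local averager. This is a toy non-local-means-style stand-in, not the NL-Bayes algorithm, which instead fits a Gaussian model to each group of similar patches; the patch size, search window, and filtering parameter below are arbitrary:

```python
import numpy as np

def nonlocal_patch_denoise(img, patch=5, search=11, h=0.1):
    """Toy non-local averaging: each pixel is replaced by a weighted
    average over a search window, with weights set by patch similarity.
    Illustrates the 'similar structures, shared noise estimate' idea."""
    pad = patch // 2
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img)
    rows, cols = img.shape
    half = search // 2
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]   # reference patch
            num = den = 0.0
            for di in range(max(0, i - half), min(rows, i + half + 1)):
                for dj in range(max(0, j - half), min(cols, j + half + 1)):
                    cand = padded[di:di + patch, dj:dj + patch]
                    w = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    num += w * padded[di + pad, dj + pad]  # centre pixel
                    den += w
            out[i, j] = num / den
    return out
```

A flat image passes through unchanged, while additive noise on a flat background is strongly averaged out; real NL-Bayes achieves this with far better structure preservation and lower complexity.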

  1. High Resolution Elevation Contours

    Data.gov (United States)

    Minnesota Department of Natural Resources — This dataset contains contours generated from high resolution data sources such as LiDAR. Generally speaking this data is 2 foot or less contour interval.

  2. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  3. High Spatial Resolution Visual Band Imagery Outperforms Medium Resolution Spectral Imagery for Ecosystem Assessment in the Semi-Arid Brazilian Sertão

    Directory of Open Access Journals (Sweden)

    Ran Goldblatt

    2017-12-01

    Full Text Available Semi-arid ecosystems play a key role in global agricultural production, seasonal carbon cycle dynamics, and longer-run climate change. Because semi-arid landscapes are heterogeneous and often sparsely vegetated, repeated and large-scale ecosystem assessments of these regions have to date been impossible. Here, we assess the potential of high-spatial resolution visible band imagery for semi-arid ecosystem mapping. We use WorldView satellite imagery at 0.3–0.5 m resolution to develop a reference data set of nearly 10,000 labeled examples of three classes—trees, shrubs/grasses, and bare land—across 1,000 km² of the semi-arid Sertão region of northeast Brazil. Using Google Earth Engine, we show that classification with low-spectral but high-spatial resolution input (WorldView) outperforms classification with the full spectral information available from Landsat 30 m resolution imagery as input. Classification with high spatial resolution input improves detection of sparse vegetation and distinction between trees and seasonal shrubs and grasses, two features which are lost at coarser spatial (but higher spectral) resolution. Our total tree cover estimates for the study area disagree with recent estimates using other methods that may underestimate tree cover because they confuse trees with seasonal vegetation (shrubs and grasses). This distinction is important for monitoring the seasonal and long-run carbon cycle and ecosystem health. Our results suggest that newer remote sensing products that promise high frequency global coverage at high spatial but lower spectral resolution may offer new possibilities for direct monitoring of the world's semi-arid ecosystems, and we provide methods that could be scaled to do so.

  4. A design of high resolution one-clock-cycle TDC based on FPGA

    International Nuclear Information System (INIS)

    Qi Ji; Deng Zhi; Liu Yinong

    2011-01-01

    This paper describes an FPGA-based high resolution TDC. Using delay-chain and Wave Union methods, the TDC achieves a resolution of 9 ps, comparable to ASIC TDCs. The design uses XORs and MUXs to implement a fast one-clock-cycle encoder, which reduces dead time, and a self-calibration method makes the design easy to migrate to other FPGAs. This TDC can be used in time-of-flight (TOF) experiments, medical imaging systems, etc. (authors)

  5. Parameter estimation method that directly compares gravitational wave observations to numerical relativity

    Science.gov (United States)

    Lange, J.; O'Shaughnessy, R.; Boyle, M.; Calderón Bustillo, J.; Campanelli, M.; Chu, T.; Clark, J. A.; Demos, N.; Fong, H.; Healy, J.; Hemberger, D. A.; Hinder, I.; Jani, K.; Khamesra, B.; Kidder, L. E.; Kumar, P.; Laguna, P.; Lousto, C. O.; Lovelace, G.; Ossokine, S.; Pfeiffer, H.; Scheel, M. A.; Shoemaker, D. M.; Szilagyi, B.; Teukolsky, S.; Zlochower, Y.

    2017-11-01

    We present and assess a Bayesian method to interpret gravitational wave signals from binary black holes. Our method directly compares gravitational wave data to numerical relativity (NR) simulations. In this study, we present a detailed investigation of the systematic and statistical parameter estimation errors of this method, which bypasses approximations used in semianalytical models for compact binary coalescence. In this work, we obtain the full posterior parameter distribution only for generic nonprecessing binaries, drawing inferences away from the set of NR simulations used via interpolation of a single scalar quantity (the marginalized log likelihood, ln L) evaluated by comparing data to nonprecessing binary black hole simulations. We also compare the data to generic simulations and discuss the effectiveness of this procedure for generic sources. We specifically assess the impact of higher order modes, repeating our interpretation with both l ≤ 2 and l ≤ 3 harmonic modes. Using the l ≤ 3 modes, we gain more information from the signal and can better constrain the parameters of the gravitational wave signal. We assess and quantify several sources of systematic error that our procedure could introduce, including simulation resolution and duration; most are negligible. We show through examples that our method can recover the parameters of equal-mass, zero-spin, GW150914-like sources and of unequal-mass, precessing-spin sources. Our study of this new parameter estimation method demonstrates that we can quantify and understand its systematic and statistical errors. The method allows us to use higher order modes from numerical relativity simulations to better constrain black hole binary parameters.
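    The central trick of the record above, interpolating a single scalar (the marginalized log likelihood ln L) between a sparse set of simulations, can be sketched in one dimension. Everything below is synthetic: the parameter values, the ln L numbers, and the quadratic fit standing in for whatever interpolant the authors actually use.

```python
import numpy as np

# Hypothetical 1-D parameter (say, a mass ratio) at which NR-style
# simulations are available, with made-up marginalized log likelihoods.
q_sim = np.array([0.4, 0.6, 0.8, 1.0])
lnL_sim = np.array([-8.0, -2.5, -0.5, -1.2])

# Interpolate ln L onto a dense grid (quadratic least-squares fit here).
q_grid = np.linspace(0.4, 1.0, 601)
coeffs = np.polyfit(q_sim, lnL_sim, 2)
lnL = np.polyval(coeffs, q_grid)

# Exponentiate and normalize to obtain a posterior over the parameter.
post = np.exp(lnL - lnL.max())
dq = q_grid[1] - q_grid[0]
post /= post.sum() * dq
q_peak = q_grid[np.argmax(post)]
print(q_peak)
```

    The posterior peak lies between simulation points, which is exactly what direct comparison at the simulated parameter values alone could not deliver.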

  6. The geometric phase analysis method based on the local high resolution discrete Fourier transform for deformation measurement

    International Nuclear Information System (INIS)

    Dai, Xianglu; Xie, Huimin; Wang, Huaixi; Li, Chuanwei; Wu, Lifu; Liu, Zhanwei

    2014-01-01

    The geometric phase analysis (GPA) method based on the local high resolution discrete Fourier transform (LHR-DFT) for deformation measurement, termed LHR-DFT GPA, is proposed to improve measurement accuracy. In the general GPA method, the fundamental frequency of the image plays a crucial role. However, the fast Fourier transform, which is generally employed in the GPA method, makes it difficult to locate the fundamental frequency accurately when it does not lie at an integer pixel position in the Fourier spectrum. This study focuses on this issue and presents an LHR-DFT algorithm that can locate the fundamental frequency with sub-pixel precision in a specific frequency region for the GPA method. An error analysis is offered and a simulation is conducted to verify the effectiveness of the proposed method; both show that the LHR-DFT algorithm can accurately locate the fundamental frequency and improve the measurement accuracy of the GPA method. Furthermore, typical tensile and bending tests are carried out, and the experimental results verify the effectiveness of the proposed method. (paper)
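    The underlying idea, evaluating the DFT on a fine frequency grid around a coarse FFT peak so that a fundamental frequency between integer bins can be located, can be sketched as follows. This is a generic zoom-DFT illustration with assumed names and parameters, not the authors' exact LHR-DFT algorithm.

```python
import numpy as np

def refine_peak(signal, f_coarse, span=1.0, n_fine=2001):
    """Evaluate the DFT of `signal` on a fine grid of (non-integer)
    frequencies around a coarse FFT peak and return the refined peak."""
    n = len(signal)
    t = np.arange(n)
    freqs = np.linspace(f_coarse - span, f_coarse + span, n_fine)
    # Direct DFT at arbitrary frequencies (in units of FFT bins).
    spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t) / n) @ signal)
    return freqs[np.argmax(spectrum)]

n = 256
true_freq = 10.37                 # cycles per record: not an integer bin
x = np.cos(2 * np.pi * true_freq * np.arange(n) / n)
coarse = int(np.argmax(np.abs(np.fft.rfft(x))))   # integer-bin estimate
fine = refine_peak(x, coarse)
print(coarse, fine)
```

    The coarse FFT can only report bin 10; the local fine-grid search recovers the fractional-bin frequency to well within a tenth of a bin.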

  7. High resolution reservoir geological modelling using outcrop information

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Changmin; Lin Kexiang; Liu Huaibo [Jianghan Petroleum Institute, Hubei (China)] [and others]

    1997-08-01

    This is China's first case study of high resolution reservoir geological modelling using outcrop information. The key to the modelling process is to build a prototype model and to use that model as a geological knowledge bank. The outcrop information used in geological modelling includes seven aspects: (1) determining the reservoir framework pattern by sedimentary depositional system and facies analysis; (2) horizontal correlation based on the lower and higher stand duration of the paleo-lake level; (3) determining the model's direction based on paleocurrent statistics; (4) estimating sandbody communication from photomosaics and profiles; (6) estimating the distribution of reservoir properties within sandbodies by lithofacies analysis; and (7) building the reservoir model at sandbody scale by architectural element analysis and 3-D sampling. A high resolution reservoir geological model of the Youshashan oil field has been built using this method.

  8. Turbine component casting core with high resolution region

    Science.gov (United States)

    Kamel, Ahmed; Merrill, Gary B.

    2014-08-26

    A hollow turbine engine component with complex internal features can include a first region and a second, high resolution region. The first region can be defined by a first ceramic core piece formed by any conventional process, such as injection molding or transfer molding. The second region can be defined by a second ceramic core piece formed separately by a method effective to produce high resolution features, such as tomo-lithographic molding. The first core piece and the second core piece can be joined by interlocking engagement that, once subjected to an intermediate thermal heat treatment, thermally deforms to form a three-dimensional interlocking joint: thermal creep irreversibly interlocks the first and second core pieces so that the joint becomes physically locked together, providing joint stability through thermal processing.

  9. High-resolution numerical model of the middle and inner ear for a detailed analysis of radio frequency absorption

    International Nuclear Information System (INIS)

    Schmid, Gernot; Ueberbacher, Richard; Samaras, Theodoros; Jappel, Alexandra; Baumgartner, Wolf-Dieter; Tschabitscher, Manfred; Mazal, Peter R

    2007-01-01

    In order to enable a detailed analysis of radio frequency (RF) absorption in the human middle and inner ear organs, a numerical model of these organs was developed at a spatial resolution of 0.1 mm, based on a real human tissue sample. The dielectric properties of the liquids (perilymph and endolymph) inside the bony labyrinth were measured on samples from ten freshly deceased humans. After inserting this model into a commercially available numerical head model, FDTD-based computations were carried out for exposure scenarios with generic models of handheld devices operated close to the head in the frequency range 400-3700 MHz. For typical output power values of real handheld mobile communication devices, the obtained results showed only very small amounts of absorbed RF power in the middle and inner ear organs. The highest absorption in the middle and inner ear was found for the 400 MHz irradiation. In this case, the RF power absorbed inside the labyrinth and the vestibulocochlear nerve was as low as 166 μW and 12 μW, respectively, for a device of 500 mW output power operated close to the ear. For typical mobile phone frequencies (900 MHz and 1850 MHz) and output power values (250 mW and 125 mW), the corresponding values of absorbed RF power were found to be more than one order of magnitude lower. These results indicate that temperature-related, biologically relevant effects on the middle and inner ear, induced by the RF emissions of typical handheld mobile communication devices, are unlikely.

  10. Land cover mapping and change detection in urban watersheds using QuickBird high spatial resolution satellite imagery

    Science.gov (United States)

    Hester, David Barry

    The objective of this research was to develop methods for urban land cover analysis using QuickBird high spatial resolution satellite imagery. Such imagery has emerged as a rich commercially available remote sensing data source and has enjoyed high-profile broadcast news media and Internet applications, but methods of quantitative analysis have not been thoroughly explored. The research described here consists of three studies focused on the use of pan-sharpened 61-cm spatial resolution QuickBird imagery, the highest spatial resolution of any commercial satellite. In the first study, a per-pixel land cover classification method is developed for use with this imagery and applied to generate an accurate six-category high spatial resolution land cover map of a developing suburban area. The primary objective of the second study was to develop an accurate land cover change detection method for use with QuickBird land cover products; this work presents an efficient fuzzy framework for transforming map uncertainty into accurate and meaningful high spatial resolution land cover change analysis. The third study is an urban planning application of the high spatial resolution QuickBird-based land cover product developed in the first study. This work both meaningfully connects this data source to urban watershed management and makes an important empirical contribution to the study of suburban watersheds: its analysis of residential roads and driveways as well as retail parking lots sheds valuable light on the impact of transportation-related land use on the suburban landscape. Broadly, these studies provide new methods for using state-of-the-art remote sensing data to inform land cover analysis and urban planning. These methods are widely adaptable and produce land cover products that are both meaningful and accurate.
As additional high spatial resolution satellites are launched and the

  11. Molecular dynamics with deterministic and stochastic numerical methods

    CERN Document Server

    Leimkuhler, Ben

    2015-01-01

    This book describes the mathematical underpinnings of algorithms used for molecular dynamics simulation, including both deterministic and stochastic numerical methods. Molecular dynamics is one of the most versatile and powerful methods of modern computational science and engineering and is used widely in chemistry, physics, materials science and biology. Understanding the foundations of numerical methods means knowing how to select the best one for a given problem (from the wide range of techniques on offer) and how to create new, efficient methods to address particular challenges as they arise in complex applications.  Aimed at a broad audience, this book presents the basic theory of Hamiltonian mechanics and stochastic differential equations, as well as topics including symplectic numerical methods, the handling of constraints and rigid bodies, the efficient treatment of Langevin dynamics, thermostats to control the molecular ensemble, multiple time-stepping, and the dissipative particle dynamics method...

  12. Super-resolution biomolecular crystallography with low-resolution data.

    Science.gov (United States)

    Schröder, Gunnar F; Levitt, Michael; Brunger, Axel T

    2010-04-22

    X-ray diffraction plays a pivotal role in the understanding of biological systems by revealing atomic structures of proteins, nucleic acids and their complexes, with much recent interest in very large assemblies like the ribosome. As crystals of such large assemblies often diffract weakly (resolution worse than 4 Å), we need methods that work at such low resolution. In macromolecular assemblies, some of the components may be known at high resolution, whereas others are unknown: current refinement methods fail as they require a high-resolution starting structure for the entire complex. Determining the structure of such complexes, which are often of key biological importance, should be possible in principle as the number of independent diffraction intensities at a resolution better than 5 Å generally exceeds the number of degrees of freedom. Here we introduce a method that adds specific information from known homologous structures but allows global and local deformations of these homology models. Our approach uses the observation that local protein structure tends to be conserved as sequence and function evolve. Cross-validation with R(free) (the free R-factor) determines the optimum deformation and influence of the homology model. For test cases at 3.5-5 Å resolution with known structures at high resolution, our method gives significant improvements over conventional refinement in the model as monitored by coordinate accuracy, the definition of secondary structure and the quality of electron density maps. For re-refinements of a representative set of 19 low-resolution crystal structures from the Protein Data Bank, we find similar improvements. Thus, a structure derived from low-resolution diffraction data can have quality similar to a high-resolution structure. Our method is applicable to the study of weakly diffracting crystals using X-ray micro-diffraction as well as data from new X-ray light sources. Use of homology information is not restricted to X

  13. Multilevel markov chain monte carlo method for high-contrast single-phase flow problems

    KAUST Repository

    Efendiev, Yalchin R.

    2014-12-19

    In this paper we propose a general framework for the uncertainty quantification of quantities of interest for high-contrast single-phase flow problems. It is based on the generalized multiscale finite element method (GMsFEM) and multilevel Monte Carlo (MLMC) methods. The former provides a hierarchy of approximations of different resolution, whereas the latter gives an efficient way to estimate quantities of interest using samples on different levels. The number of basis functions in the online GMsFEM stage can be varied to determine the solution resolution and the computational cost, and to efficiently generate samples at different levels. In particular, it is cheap to generate samples on coarse grids but with low resolution, and it is expensive to generate samples on fine grids with high accuracy. By suitably choosing the number of samples at different levels, one can leverage the expensive computation in larger fine-grid spaces toward smaller coarse-grid spaces, while retaining the accuracy of the final Monte Carlo estimate. Further, we describe a multilevel Markov chain Monte Carlo method, which sequentially screens the proposal with different levels of approximations and reduces the number of evaluations required on fine grids, while combining the samples at different levels to arrive at an accurate estimate. The framework seamlessly integrates the multiscale features of the GMsFEM with the multilevel feature of the MLMC methods following the work in [26], and our numerical experiments illustrate its efficiency and accuracy in comparison with standard Monte Carlo estimates. © Global Science Press Limited 2015.
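    The multilevel idea, many cheap coarse-level samples plus progressively fewer samples of the fine-minus-coarse corrections combined telescopically, can be sketched on a toy problem. The example below uses a random 1-D integral instead of a flow problem, and the per-level sample counts are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def P(s, level):
    """Level-l approximation of integral_0^1 cos(s*x) dx:
    midpoint rule with 2**level cells (finer level = more accurate)."""
    m = 2 ** level
    x = (np.arange(m) + 0.5) / m
    return np.mean(np.cos(np.outer(s, x)), axis=1)

def mlmc(levels, n_samples):
    """Telescoping estimator E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)],
    with many samples on coarse levels and few on fine ones."""
    est = 0.0
    for l, n in zip(levels, n_samples):
        s = rng.uniform(1.0, 2.0, n)          # random parameter samples
        corr = P(s, l) - (P(s, l - 1) if l > 0 else 0.0)
        est += float(np.mean(corr))
    return est

estimate = mlmc(levels=[0, 1, 2, 3], n_samples=[4000, 1000, 250, 60])
# Reference: E[sin(s)/s] for s ~ U(1, 2), by brute-force Monte Carlo.
s_ref = rng.uniform(1.0, 2.0, 200000)
truth = float(np.mean(np.sin(s_ref) / s_ref))
print(estimate, truth)
```

    The correction terms have small variance, so few expensive fine-level evaluations are needed; in the paper the role of the levels is played by GMsFEM spaces of increasing resolution rather than quadrature grids.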

  14. A characteristics of East Asian climate using high-resolution regional climate model

    Science.gov (United States)

    Yhang, Y.

    2013-12-01

    Climate research, and particularly application studies for water, agriculture, forestry, fishery and energy management, requires fine scale multi-decadal information on meteorological, oceanographic and land states. Unfortunately, spatially and temporally homogeneous multi-decadal observations of these variables at high horizontal resolution are non-existent. Some long term surface records of temperature and precipitation exist, but the number of observations is very limited and the measurements are often contaminated by changes in instrumentation over time. Some climatologically important variables, such as soil moisture, surface evaporation, and radiation, are not even measured over most of East Asia. Reanalysis is one approach to obtaining long term homogeneous analyses of the needed variables. However, the horizontal resolution of global reanalyses is of the order of 100 to 200 km, too coarse for many application studies. Regional climate models (RCMs) are able to provide valuable fine-scale regional information, especially in regions where the climate variables are strongly regulated by the underlying topography and surface heterogeneity. In this study, we provide accurately downscaled regional climate over East Asia using the Global/Regional Integrated Model system [GRIMs; Hong et al. 2013]. A mixed layer model is embedded within GRIMs in order to improve air-sea interaction. A detailed description of the characteristics of the East Asian summer and winter climate will be presented through the high-resolution numerical simulations. The increase in horizontal resolution is expected to provide the high-quality data needed in various application areas such as hydrology or environmental model forcing.

  15. Mathematical analysis and numerical methods for science and technology

    CERN Document Server

    Dautray, Robert

    These 6 volumes - the result of a 10 year collaboration between the authors, two of France's leading scientists and both distinguished international figures - compile the mathematical knowledge required by researchers in mechanics, physics, engineering, chemistry and other branches of application of mathematics for the theoretical and numerical resolution of physical models on computers. Since the publication in 1924 of the "Methoden der mathematischen Physik" by Courant and Hilbert, there has been no other comprehensive and up-to-date publication presenting the mathematical tools needed in applications of mathematics in directly implementable form. The advent of large computers has in the meantime revolutionised methods of computation and made this gap in the literature intolerable: the objective of the present work is to fill just this gap. Many phenomena in physical mathematics may be modeled by a system of partial differential equations in distributed systems: a model here means a set of equations, which ...

  16. Numerical investigations on contactless methods for measuring critical current density in HTS: application of modified constitutive-relation method

    International Nuclear Information System (INIS)

    Kamitani, A.; Takayama, T.; Itoh, T.; Ikuno, S.

    2011-01-01

    A fast method is proposed for calculating the shielding current density in an HTS. The J-E constitutive relation is modified so as not to change the solution. A numerical code is developed on the basis of the proposed method. The permanent magnet method is successfully simulated by means of the code. A fast method has been proposed for calculating the shielding current density in a high-temperature superconducting thin film. An initial-boundary-value problem for the shielding current density cannot always be solved by means of the Runge-Kutta method, even when an adaptive step-size control algorithm is incorporated into the method. In order to suppress overflow in the algorithm, the J-E constitutive relation is modified so that its solution still satisfies the original constitutive relation. A numerical code for analyzing the shielding current density has been developed on the basis of this method and, as an application of the code, the permanent magnet method for measuring the critical current density has been investigated numerically.

  17. Tensor viscosity method for convection in numerical fluid dynamics

    International Nuclear Information System (INIS)

    Dukowicz, J.K.; Ramshaw, J.D.

    1979-01-01

    A new method, called the tensor viscosity method, is described for differencing the convective terms in multidimensional numerical fluid dynamics. The method is the proper generalization to two or three dimensions of interpolated donor cell differencing in one dimension, and is designed to achieve numerical stability with minimal numerical damping. It is a single-step method distinguished by its simplicity and ease of implementation, even on an arbitrary non-rectangular mesh. It should therefore be useful in finite-element as well as finite-difference formulations.
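    Donor-cell differencing, which the tensor viscosity method generalizes to several dimensions, is easy to demonstrate in one dimension: the scheme is stable (it keeps the solution bounded) and conservative, but numerically diffusive. A generic textbook sketch, not the tensor viscosity scheme itself:

```python
import numpy as np

def upwind_advect(u, c, steps):
    """First-order (donor-cell) upwind advection on a periodic grid.
    For Courant number 0 < c <= 1 each update is a convex combination
    of neighbors, so the scheme is stable but smears sharp features."""
    for _ in range(steps):
        u = u - c * (u - np.roll(u, 1))   # flow in the +x direction
    return u

n = 100
u0 = np.zeros(n)
u0[10:20] = 1.0                           # square pulse
u = upwind_advect(u0.copy(), c=0.5, steps=100)
print(u.min(), u.max())                   # bounded, but the peak erodes
```

    The pulse stays within its initial bounds and total mass is conserved exactly, yet the peak height drops: that loss is the numerical damping the tensor viscosity method is designed to minimize.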

  18. Efficient numerical method for district heating system hydraulics

    International Nuclear Information System (INIS)

    Stevanovic, Vladimir D.; Prica, Sanja; Maslovaric, Blazenka; Zivkovic, Branislav; Nikodijevic, Srdjan

    2007-01-01

    An efficient method for numerical simulation and analysis of the steady state hydraulics of complex pipeline networks is presented. It is based on the loop model of the network and the method of square roots for solving the system of linear equations. The procedure is presented in a comprehensive mathematical form that can be straightforwardly programmed into a computer code. An application of the method to energy efficiency analyses of a real, complex district heating system is demonstrated. The obtained results show a potential for electricity savings in pump operation. It is shown that the method is considerably more effective than the standard Hardy Cross method still widely used in engineering practice. Because of its ease of implementation and high efficiency, the method presented in this paper is recommended for hydraulic steady state calculations of complex networks.
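    For contrast, the standard Hardy Cross iteration mentioned above can be sketched for a minimal one-loop network. The resistances and flows are made-up numbers, and real networks have many interacting loops, which is where a simultaneous loop-matrix solve such as the paper's pays off.

```python
import numpy as np

def hardy_cross(r, q, loops, n_iter=50):
    """Hardy Cross iteration. q: signed pipe flows, r: resistance
    coefficients with head loss h = r*q*|q|, loops: lists of
    (pipe_index, sign) giving each pipe's orientation in the loop."""
    for _ in range(n_iter):
        for loop in loops:
            num = sum(s * r[i] * q[i] * abs(q[i]) for i, s in loop)
            den = sum(2.0 * r[i] * abs(q[i]) for i, s in loop)
            dq = -num / den                  # loop flow correction
            for i, s in loop:
                q[i] += s * dq
    return q

# A total inflow of 10 units splits between two parallel paths with
# resistances 4 and 1; at balance r0*q0^2 equals r1*q1^2.
r = np.array([4.0, 1.0])
q = np.array([5.0, 5.0])                     # initial guess of the split
q = hardy_cross(r, q, loops=[[(0, 1), (1, -1)]])
print(q)
```

    The correction dq is a Newton step on the loop head-loss balance; node mass balance is preserved automatically because the same dq is added around the whole loop.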

  19. Numerical simulation methods for phase-transitional flow

    NARCIS (Netherlands)

    Pecenko, A.

    2010-01-01

    The object of the present dissertation is a numerical study of multiphase flow of one fluid component. In particular, the research described in this thesis focuses on the development of numerical methods that are based on a diffuse-interface model (DIM). With this approach, the modeling problem

  20. A Framework to Combine Low- and High-resolution Spectroscopy for the Atmospheres of Transiting Exoplanets

    NARCIS (Netherlands)

    Brogi, M.; Line, M.; Bean, J.; Désert, J.-M.; Schwarz, H.

    2017-01-01

    Current observations of the atmospheres of close-in exoplanets are predominantly obtained with two techniques: low-resolution spectroscopy with space telescopes and high-resolution spectroscopy from the ground. Although the observables delivered by the two methods are in principle highly

  1. Assessing numerical methods used in nuclear aerosol transport models

    International Nuclear Information System (INIS)

    McDonald, B.H.

    1987-01-01

    Several computer codes are in use for predicting the behaviour of nuclear aerosols released into containment during postulated accidents in water-cooled reactors. Each of these codes uses numerical methods to discretize and integrate the equations that govern the aerosol transport process. Computers perform only algebraic operations and generate only numbers. It is in the numerical methods that sense can be made of these numbers and where they can be related to the actual solution of the equations. In this report, the numerical methods most commonly used in the aerosol transport codes are examined as special cases of a general solution procedure, the Method of Weighted Residuals. It would appear that the numerical methods used in the codes are all capable of producing reasonable answers to the mathematical problem when used with skill and care. 27 refs
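    A minimal concrete instance of the Method of Weighted Residuals mentioned above is a Galerkin solve of a two-point boundary value problem: expand the solution in basis functions and force the residual to be orthogonal to each of them. This is a generic textbook sketch (a Poisson problem with a sine basis), unrelated to any specific aerosol code.

```python
import numpy as np

def galerkin_poisson(f, n_modes=7, n_quad=400):
    """Galerkin weighted residuals for u'' = f(x) on (0,1) with
    u(0) = u(1) = 0, using the basis phi_k = sin(k*pi*x): require
    integral (u'' - f) * phi_k dx = 0 for every k."""
    x = (np.arange(n_quad) + 0.5) / n_quad      # midpoint quadrature
    k = np.arange(1, n_modes + 1)
    phi = np.sin(np.pi * np.outer(k, x))        # basis on the grid
    stiff = -(k * np.pi) ** 2 / 2.0             # integral phi_k'' phi_k
    rhs = phi @ f(x) / n_quad                   # integral f * phi_k
    a = rhs / stiff                             # modal coefficients
    return lambda xs: np.sin(np.pi * np.outer(k, xs)).T @ a

u = galerkin_poisson(lambda x: -np.ones_like(x))
xs = np.linspace(0.0, 1.0, 11)
exact = xs * (1.0 - xs) / 2.0                   # exact solution of u'' = -1
err = float(np.max(np.abs(u(xs) - exact)))
print(err)
```

    With only seven modes the solution already matches the exact parabola to a few parts in ten thousand; collocation and least-squares variants differ only in the choice of weighting functions.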

  2. NCAR High-resolution Land Data Assimilation System and Its Recent Applications

    Science.gov (United States)

    Chen, F.; Manning, K.; Barlage, M.; Gochis, D.; Tewari, M.

    2008-05-01

    A High-Resolution Land Data Assimilation System (HRLDAS) has been developed at NCAR to meet the need for high-resolution initial conditions of land state (soil moisture and temperature) by today's numerical weather prediction models coupled to a land surface model such as the WRF/Noah coupled modeling system. Intended for conterminous US application, HRLDAS uses observed hourly 4-km national precipitation analysis and satellite-derived surface-solar-downward radiation to drive, in uncoupled mode, the Noah land surface model to simulate long-term evolution of soil state. The advantage of HRLDAS is its use of 1-km resolution land-use and soil texture maps and 4-km rainfall data. As a result, it is able to capture fine-scale heterogeneity at the surface and in the soil. The ultimate goal of HRLDAS development is to characterize soil moisture/temperature and vegetation variability at small scales (~4km) over large areas to provide improved initial land and vegetation conditions for the WRF/Noah coupled model. Hence, HRLDAS is configured after the WRF/Noah coupled model configuration to ensure the consistency in model resolution, physical configuration (e.g., terrain height), soil model, and parameters between the uncoupled soil initialization system and its coupled forecast counterpart. We will discuss various characteristics of HRLDAS, including its spin-up and sensitivity to errors in forcing data. We will describe recent enhancement in terms of hydrological modeling and the use of remote sensing data. We will discuss recent applications of HRLDAS for flood forecast, agriculture, and arctic land system.

  3. Numerical analysis of creep brittle rupture by the finite element method

    International Nuclear Information System (INIS)

    Goncalves, O.J.A.; Owen, D.R.J.

    1983-01-01

    In this work an implicit algorithm is proposed for the numerical analysis of creep brittle rupture problems by the finite element method. This kind of structural failure, typical in components operating at high temperatures for long periods of time, is modelled using either a three dimensional generalization of the Kachanov-Rabotnov equations due to Leckie and Hayhurst or the Monkman-Grant fracture criterion together with the Linear Life Fraction Rule. The finite element equations are derived by the displacement method and isoparametric elements are used for the spatial discretization. Geometric nonlinear effects (large displacements) are accounted for by an updated Lagrangian formulation. Attention is also focussed on the solution of the highly stiff differential equations that govern damage growth. Finally the numerical results of a three-dimensional analysis of a pressurized thin cylinder containing oxidised pits in its external wall are discussed. (orig.)

  4. Evaluation of a high resolution genotyping method for Chlamydia trachomatis using routine clinical samples.

    Directory of Open Access Journals (Sweden)

    Yibing Wang

    2011-02-01

    Full Text Available Genital chlamydia infection is the most commonly diagnosed sexually transmitted infection in the UK. C. trachomatis genital infections are usually caused by strains which fall into two pathovars: lymphogranuloma venereum (LGV) and the genitourinary genotypes D-K. Although these genotypes can be discriminated by outer membrane protein gene (ompA) sequencing or multi-locus sequence typing (MLST), neither protocol affords the high-resolution genotyping required for local epidemiology and accurate contact-tracing. We evaluated variable number tandem repeat (VNTR) analysis combined with ompA sequencing (together called multi-locus VNTR analysis and ompA, or "MLVA-ompA") to study local epidemiology in Southampton over a period of six months. One hundred and fifty-seven endocervical swabs that tested positive for C. trachomatis, from both the Southampton genitourinary medicine (GUM) clinic and local GP surgeries, were tested by COBAS Taqman 48 (Roche) PCR for the presence of C. trachomatis. Samples testing positive by the commercial NAAT were genotyped, where possible, by the MLVA-ompA sequencing technique. Attempts were made to isolate C. trachomatis from all 157 samples in cell culture, and 68 (43%) were successfully recovered by repeatable passage in culture. Of the 157 samples, 93 (i.e. 59%) were fully genotyped by MLVA-ompA. Only one mixed infection (E & D) in a single sample was confirmed. There were two distinct D genotypes for the ompA gene. The most frequent ompA genotypes were D, E and F, comprising 20%, 41% and 16% of the typeable samples respectively. Within all genotypes we detected numerous MLVA sub-types. Amongst the common genotypes, there are a significant number of defined MLVA sub-types, which may reflect particular background demographics including age group, geography, high-risk sexual behavior, and sexual networks.

  5. High resolution data acquisition

    Science.gov (United States)

    Thornton, Glenn W.; Fuller, Kenneth R.

    1993-01-01

    A high resolution event interval timing system measures short time intervals such as occur in high energy physics or laser ranging. Timing is provided from a clock (38) pulse train (37) and analog circuitry (44) for generating a triangular wave (46) synchronously with the pulse train (37). The triangular wave (46) has an amplitude and slope functionally related to the time elapsed during each clock pulse in the train. A converter (18, 32) forms a first digital value of the amplitude and slope of the triangle wave at the start of the event interval and a second digital value of the amplitude and slope of the triangle wave at the end of the event interval. A counter (26) counts the clock pulse train (37) during the interval to form a gross event interval time. A computer (52) then combines the gross event interval time and the first and second digital values to output a high resolution value for the event interval.
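    The arithmetic of the scheme above, a coarse clock-pulse count refined by triangle-wave samples at the start and end of the interval, can be modeled numerically. This is an idealized, noise-free sketch with an arbitrary clock period, not the actual circuit.

```python
T = 10.0  # clock period in ns (arbitrary for this sketch)

def triangle(phase):
    """Triangular wave vs. phase in [0, 1): rises, then falls."""
    return 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)

def fractional_phase(amplitude, rising):
    """Invert the triangle: the slope sign says which half we are on."""
    return amplitude / 2.0 if rising else 1.0 - amplitude / 2.0

def interval(t_start, t_end):
    """Reconstruct t_end - t_start from what the hardware records:
    a whole-period count plus two analog triangle-wave samples."""
    f_start = (t_start % T) / T
    f_end = (t_end % T) / T
    count = int(t_end // T) - int(t_start // T)          # coarse counter
    a0, r0 = triangle(f_start), f_start < 0.5            # start sample
    a1, r1 = triangle(f_end), f_end < 0.5                # end sample
    return (count - fractional_phase(a0, r0)
            + fractional_phase(a1, r1)) * T

print(interval(3.7, 87.2))
```

    The amplitude alone is ambiguous (two phases share each amplitude), which is why the slope must be recorded along with it, as the patent's converter does.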

  6. Numerical methods in multibody dynamics

    CERN Document Server

    Eich-Soellner, Edda

    1998-01-01

    Today computers play an important role in the development of complex mechanical systems, such as cars, railway vehicles or machines. Efficient simulation of these systems is only possible when based on methods that explore the strong link between numerics and computational mechanics. This book gives insight into modern techniques of numerical mathematics in the light of an interesting field of applications: multibody dynamics. The important interaction between modeling and solution techniques is demonstrated by using a simplified multibody model of a truck. Different versions of this mechanical model illustrate all key concepts in static and dynamic analysis as well as in parameter identification. The book focuses in particular on constrained mechanical systems. Their formulation in terms of differential-algebraic equations is the backbone of nearly all chapters. The book is written for students and teachers in numerical analysis and mechanical engineering as well as for engineers in industrial research labor...

  7. To the development of numerical methods in problems of radiation transport

    International Nuclear Information System (INIS)

    Germogenova, T.A.

    1990-01-01

    A review is given of studies on the development of numerical methods, and of the discrete ordinates method in particular, used for the solution of radiation protection physics problems. Consideration is given to the problems that arise when calculating fields of penetrating radiation and when studying processes of charged-particle transport and cascade processes generated by high-energy primary radiation.

  8. High resolution resistivity measurements at the Down Ampney research site

    International Nuclear Information System (INIS)

    Hallam, J.R.; Jackson, P.D.; Rainsbury, M.; Raines, M.

    1991-01-01

    A new high resolution resistivity surveying method is described for fault detection and characterisation. The resolution is shown to be significantly higher than that of conventional apparent resistivity profiling when applied to geological discontinuities such as faults. Nominal fault locations have been determined to an accuracy of 0.5 m, as proven by drilling. Two-dimensional profiling and image enhancement of the resulting 2-D data set indicated the possibility of subsidiary fractures and/or lateral changes within the 'clay to clay' fault zone. The increased resolution allows greater confidence to be placed on both the fault detection and the lateral perturbations derived from processed resistance and resistivity images. (Author)

  9. Structure of high-resolution NMR spectra

    CERN Document Server

    Corio, PL

    2012-01-01

    Structure of High-Resolution NMR Spectra provides the principles, theories, and mathematical and physical concepts of high-resolution nuclear magnetic resonance spectra. The book presents the elementary theory of magnetic resonance; the quantum mechanical theory of angular momentum; the general theory of steady state spectra; and multiple quantum transitions, double resonance and spin echo experiments. Physicists, chemists, and researchers will find the book a valuable reference text.

  10. Convection in multiphase fluid flows using lattice Boltzmann methods

    NARCIS (Netherlands)

    Biferale, L.; Perlekar, P.; Sbragaglia, M.; Toschi, F.

    2012-01-01

    We present high-resolution numerical simulations of convection in multiphase flows (boiling) using a novel algorithm based on a lattice Boltzmann method. We first study the thermodynamical and kinematic properties of the algorithm. Then, we perform a series of 3D numerical simulations changing the

  11. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Full Text Available Most multi-scale segmentation algorithms are not designed for high resolution remote sensing images and have difficulty exchanging and using information between layers. In view of this, we propose a method for the multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and a band-weighted distance function is built to obtain the edge weights. According to the merging criterion, the initial segmentation objects of color images can be obtained by the Kruskal minimum spanning tree algorithm. Finally, segmentation images are obtained by the adaptive Mumford–Shah region merging rule combined with spectral and texture information. The proposed method is evaluated using analog images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy and is slightly inferior to FNEA in efficiency.
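
The Kruskal minimum-spanning-tree step can be sketched as follows (a minimal illustration on a raw intensity grid; the Canny-based edge weights and the Mumford–Shah merging rule from the paper are omitted, and all names are ours):

```python
# Kruskal-style region merging on a 4-neighbour pixel graph (sketch only).
import itertools

def segment(image, threshold):
    h, w = len(image), len(image[0])
    parent = list(range(h * w))          # union-find forest over pixels

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # build 4-neighbour edges weighted by intensity difference
    edges = []
    for y, x in itertools.product(range(h), range(w)):
        if x + 1 < w:
            edges.append((abs(image[y][x] - image[y][x + 1]),
                          y * w + x, y * w + x + 1))
        if y + 1 < h:
            edges.append((abs(image[y][x] - image[y + 1][x]),
                          y * w + x, (y + 1) * w + x))
    # Kruskal: visit edges cheapest first, merge regions below the threshold
    for wgt, a, b in sorted(edges):
        ra, rb = find(a), find(b)
        if ra != rb and wgt <= threshold:
            parent[rb] = ra
    return [find(i) for i in range(h * w)]   # one label per pixel
```

Edges are visited cheapest first, exactly as in Kruskal's algorithm, so low-contrast neighbours merge before any high-contrast boundary is considered.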

  12. 3D high-resolution two-photon crosslinked hydrogel structures for biological studies.

    Science.gov (United States)

    Brigo, Laura; Urciuolo, Anna; Giulitti, Stefano; Della Giustina, Gioia; Tromayer, Maximilian; Liska, Robert; Elvassore, Nicola; Brusatin, Giovanna

    2017-06-01

    Hydrogels are widely used as matrices for cell growth due to their tuneable chemical and physical properties, which mimic the extracellular matrix of natural tissue. The microfabrication of hydrogels into arbitrarily complex 3D structures is becoming essential for numerous biological applications, and in particular for investigating the correlation between cell shape and cell function in a 3D environment. Micrometric and sub-micrometric resolution hydrogel scaffolds are required to deeply investigate molecular mechanisms behind cell-matrix interaction and downstream cellular processes. We report the design and development of high resolution 3D gelatin hydrogel woodpile structures by two-photon crosslinking. Hydrated structures with a lateral linewidth down to 0.5 µm and lateral and axial resolution down to a few µm are demonstrated. According to the processing parameters, different degrees of polymerization are obtained, resulting in hydrated scaffolds of variable swelling and deformation. The 3D hydrogels are biocompatible and promote cell adhesion and migration. Interestingly, according to the polymerization degree, 3D hydrogel woodpile structures show a variable extent of cell adhesion and invasion. Human BJ cell lines show the capability of deforming 3D micrometric resolved hydrogel structures. The design and development of high resolution 3D gelatin hydrogel woodpile structures by two-photon crosslinking is reported. Significantly, topological and mechanical conditions of polymerized gelatin structures were suitable for cell accommodation in the volume of the woodpiles, leading to a cell density per unit area comparable to the bare substrate. The fabricated structures, presenting micrometric features of high resolution, are actively deformed by cells, both in terms of cell invasion within rods and of cell attachment in-between contiguous woodpiles. Possible biological targets for this 3D approach are customized 3D tissue models, or studies of cell adhesion

  13. Direct numerical simulation of the Rayleigh-Taylor instability with the spectral element method

    International Nuclear Information System (INIS)

    Zhang Xu; Tan Duowang

    2009-01-01

    A novel method is proposed to simulate Rayleigh-Taylor instabilities using a specially-developed unsteady three-dimensional high-order spectral element method code. The numerical model used consists of Navier-Stokes equations and a transport-diffusive equation. The code is first validated with the results of linear stability perturbation theory. Then several characteristics of the Rayleigh-Taylor instabilities are studied using this three-dimensional unsteady code, including instantaneous turbulent structures and statistical turbulent mixing heights under different initial wave numbers. These results indicate that turbulent structures of Rayleigh-Taylor instabilities are strongly dependent on the initial conditions. The results also suggest that a high-order numerical method should provide the capability of simulating small scale fluctuations of Rayleigh-Taylor instabilities of turbulent flows. (authors)

  14. High accuracy mantle convection simulation through modern numerical methods

    KAUST Repository

    Kronbichler, Martin; Heister, Timo; Bangerth, Wolfgang

    2012-01-01

    Numerical simulation of the processes in the Earth's mantle is a key piece in understanding its dynamics, composition, history and interaction with the lithosphere and the Earth's core. However, doing so presents many practical difficulties related

  15. Numerical models for high beta magnetohydrodynamic flow

    International Nuclear Information System (INIS)

    Brackbill, J.U.

    1987-01-01

    The fundamentals of numerical magnetohydrodynamics for highly conducting, high-beta plasmas are outlined. The discussions emphasize the physical properties of the flow, and how elementary concepts in numerical analysis can be applied to the construction of finite difference approximations that capture these features. The linear and nonlinear stability of explicit and implicit differencing in time is examined, the origin and effect of numerical diffusion in the calculation of convective transport is described, and a technique for maintaining solenoidality in the magnetic field is developed. Many of the points are illustrated by numerical examples. The techniques described are applicable to the time-dependent, high-beta flows normally encountered in magnetically confined plasmas, plasma switches, and space and astrophysical plasmas. 40 refs
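
The numerical diffusion of first-order upwinding mentioned above is easy to demonstrate in one space dimension (a self-contained sketch, not the MHD discretization of the text): a square pulse advected with upwind differencing stays monotone and conservative, but its edges smear a little more every step.

```python
def upwind_advect(u, courant, steps):
    """First-order upwind update with periodic boundaries (speed > 0)."""
    n = len(u)
    for _ in range(steps):
        # each new value is a convex combination of old neighbours for
        # 0 < C <= 1, which guarantees monotonicity -- and causes smearing
        u = [u[i] - courant * (u[i] - u[i - 1]) for i in range(n)]
    return u

u0 = [1.0 if 10 <= i < 20 else 0.0 for i in range(50)]   # square pulse
u = upwind_advect(u0, 0.5, 40)                           # pulse moves 20 cells
```

The scheme conserves the total of u exactly and introduces no new extrema, but the peak drops below 1: that lost sharpness is the numerical diffusion that higher-order and flux-limited schemes are designed to reduce.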

  16. Optimized cleanup method for the determination of short chain polychlorinated n-alkanes in sediments by high resolution gas chromatography/electron capture negative ion-low resolution mass spectrometry

    International Nuclear Information System (INIS)

    Gao Yuan; Zhang Haijun; Chen Jiping; Zhang Qing; Tian Yuzeng; Qi Peipei; Yu Zhengkun

    2011-01-01

    Graphical abstract: The sediment sample could be purified by the optimized cleanup method, and a satisfactory cleanup efficiency was obtained. Highlights: → The elution characteristics of sPCAs and interfering substances were evaluated on three adsorbents. → An optimized cleanup method was developed for sPCAs with satisfactory cleanup efficiency. → The cleanup method combined with HRGC/ECNI-LRMS was applied for sPCAs analysis. → The sPCAs levels range from 53.6 ng g⁻¹ to 289.3 ng g⁻¹ in the tested sediment samples. - Abstract: The performances of three adsorbents, i.e. silica gel, neutral alumina and basic alumina, in the separation of short chain polychlorinated n-alkanes (sPCAs) from potential interfering substances such as polychlorinated biphenyls (PCBs) and organochlorine pesticides were evaluated. To increase the cleanup efficiency, a two-step cleanup method using a silica gel column and a subsequent basic alumina column was developed. All the PCBs and organochlorine pesticides could be removed by this cleanup method. A very satisfactory cleanup efficiency for sPCAs was achieved, and the recovery of the cleanup method reached 92.7%. The method detection limit (MDL) for sPCAs in sediments was determined to be 14 ng g⁻¹. A relative standard deviation (R.S.D.) of 5.3% was obtained for the mass fraction of sPCAs by analyzing four replicates of a spiked sediment sample. High resolution gas chromatography/electron capture negative ion-low resolution mass spectrometry (HRGC/ECNI-LRMS) was used for sPCAs quantification by monitoring [M-HCl]·⁻ ions. When applied to sediment samples from the mouth of the Daliao River, the optimized cleanup method in conjunction with HRGC/ECNI-LRMS allowed highly selective identification of sPCAs. The sPCAs levels in the sediment samples range from 53.6 ng g⁻¹ to 289.3 ng g⁻¹. C10- and C11-PCAs are the dominant residues in most of the investigated sediment samples.

  17. Line broadening interference for high-resolution nuclear magnetic resonance spectra under inhomogeneous magnetic fields

    International Nuclear Information System (INIS)

    Wei, Zhiliang; Yang, Jian; Lin, Yanqin; Chen, Zhong; Chen, Youhe

    2015-01-01

    Nuclear magnetic resonance spectroscopy serves as an important tool for analyzing chemicals and biological metabolites. However, its performance is subject to the homogeneity of the magnetic field. Under inhomogeneous fields, peaks broaden and overlap one another, introducing difficulties for assignment. Here, we propose a method termed line broadening interference (LBI) to provide high-resolution information under inhomogeneous magnetic fields by employing certain gradients in the indirect dimension to interfere with the magnetic-field inhomogeneity. The conventional spectral-line broadening is thereby made non-diagonal, avoiding overlap among adjacent resonances. Furthermore, an inhomogeneity correction algorithm based on pattern recognition is developed to recover the high-resolution information from LBI spectra. Theoretical deductions are performed to offer systematic and detailed analyses of the proposed method. Moreover, experiments are conducted to prove the feasibility of the proposed method for yielding high-resolution spectra in inhomogeneous magnetic fields.

  18. Improving the singles rate method for modeling accidental coincidences in high-resolution PET

    International Nuclear Information System (INIS)

    Oliver, Josep F; Rafecas, Magdalena

    2010-01-01

    Random coincidences ('randoms') are one of the main sources of image degradation in PET imaging. In order to correct for this effect, an accurate method to estimate the contribution of random events is necessary. This aspect becomes especially relevant for high-resolution PET scanners, where the highest image quality is sought and accurate quantitative analysis is undertaken. One common approach to estimate randoms is the so-called singles rate method (SR), widely used because of its good statistical properties. SR is based on the measurement of the singles rate in each detector element. However, recent studies suggest that SR systematically overestimates the correct random rate. This overestimation can be particularly marked for low energy thresholds, below the 250 keV used in some applications, and could entail a significant image degradation. In this work, we investigate the performance of SR as a function of the activity, the geometry of the source and the energy acceptance window used. We also investigate the performance of an alternative method, which we call 'singles trues' (ST), that improves SR by properly modeling the presence of true coincidences in the sample. Nevertheless, in any real data acquisition the knowledge of which singles are members of a true coincidence is lost. Therefore, we propose an iterative method, STi, that provides an estimation based on ST but only requires the knowledge of measurable quantities: prompts and singles. Due to inter-crystal scatter, for wide energy windows ST only partially corrects SR overestimations. While SR deviations are in the range 86-300% (depending on the source geometry), the ST deviations are systematically smaller and contained in the range 4-60%. STi fails to reproduce the ST results, although for not too high activities the deviation with respect to ST is only a few percent. For conventional energy windows, i.e. those without inter-crystal scatter, the ST method corrects the SR overestimations, and deviations from
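
The singles-rate estimate under discussion reduces to one line per detector pair, and the 'singles trues' correction simply removes the singles that belong to true coincidences before applying it (a sketch of the idea with our own naming; the rates and coincidence window tau below are illustrative):

```python
def randoms_rate(singles_i, singles_j, tau):
    """SR estimate: accidental rate for one detector pair, window width 2*tau."""
    return 2.0 * tau * singles_i * singles_j

def singles_trues_rate(singles_i, singles_j, trues_i, trues_j, tau):
    """ST variant: drop the singles that are members of true coincidences
    before applying the SR formula (sketch of the idea only)."""
    return 2.0 * tau * (singles_i - trues_i) * (singles_j - trues_j)
```

Because the ST rates are reduced by the true-coincidence counts, the ST estimate always falls below the SR estimate, consistent with the overestimation of SR reported above.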

  19. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Radice, David; Ott, Christian D. [TAPIR, Walter Burke Institute for Theoretical Physics, Mailcode 350-17, California Institute of Technology, Pasadena, CA 91125 (United States); Abdikamalov, Ernazar [Department of Physics, School of Science and Technology, Nazarbayev University, Astana 010000 (Kazakhstan); Couch, Sean M. [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Haas, Roland [Max-Planck-Institut für Gravitationsphysik, Albert-Einstein-Institut, D-14476 Golm (Germany); Schnetter, Erik, E-mail: dradice@caltech.edu [Perimeter Institute for Theoretical Physics, Waterloo, ON (Canada)

    2016-03-20

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased.

  20. NEUTRINO-DRIVEN CONVECTION IN CORE-COLLAPSE SUPERNOVAE: HIGH-RESOLUTION SIMULATIONS

    International Nuclear Information System (INIS)

    Radice, David; Ott, Christian D.; Abdikamalov, Ernazar; Couch, Sean M.; Haas, Roland; Schnetter, Erik

    2016-01-01

    We present results from high-resolution semiglobal simulations of neutrino-driven convection in core-collapse supernovae. We employ an idealized setup with parameterized neutrino heating/cooling and nuclear dissociation at the shock front. We study the internal dynamics of neutrino-driven convection and its role in redistributing energy and momentum through the gain region. We find that even if buoyant plumes are able to locally transfer heat up to the shock, convection is not able to create a net positive energy flux and overcome the downward transport of energy from the accretion flow. Turbulent convection does, however, provide a significant effective pressure support to the accretion flow as it favors the accumulation of energy, mass, and momentum in the gain region. We derive an approximate equation that is able to explain and predict the shock evolution in terms of integrals of quantities such as the turbulent pressure in the gain region or the effects of nonradial motion of the fluid. We use this relation as a way to quantify the role of turbulence in the dynamics of the accretion shock. Finally, we investigate the effects of grid resolution, which we change by a factor of 20 between the lowest and highest resolution. Our results show that the shallow slopes of the turbulent kinetic energy spectra reported in previous studies are a numerical artifact. Kolmogorov scaling is progressively recovered as the resolution is increased

  1. High-resolution multi-slice PET

    International Nuclear Information System (INIS)

    Yasillo, N.J.; Chintu Chen; Ordonez, C.E.; Kapp, O.H.; Sosnowski, J.; Beck, R.N.

    1992-01-01

    This report evaluates progress in testing the feasibility and initiating the design of a high-resolution multi-slice PET system. The following specific areas were evaluated: detector development and testing; electronics configuration and design; mechanical design; and system simulation. The design and construction of a multiple-slice, high-resolution positron tomograph will provide substantial improvements in the accuracy and reproducibility of measurements of the distribution of activity concentrations in the brain. The range of functional brain research and our understanding of local brain function will be greatly extended when the development of this instrumentation is completed.

  2. SEM-microphotogrammetry, a new take on an old method for generating high-resolution 3D models from SEM images.

    Science.gov (United States)

    Ball, A D; Job, P A; Walker, A E L

    2017-08-01

    The method we present here uses a scanning electron microscope programmed via macros to automatically capture dozens of images at suitable angles to generate accurate, detailed three-dimensional (3D) surface models with micron-scale resolution. We demonstrate that it is possible to use these Scanning Electron Microscope (SEM) images in conjunction with commercially available software originally developed for photogrammetry reconstructions from Digital Single Lens Reflex (DSLR) cameras and to reconstruct 3D models of the specimen. These 3D models can then be exported as polygon meshes and eventually 3D printed. This technique offers the potential to obtain data suitable to reconstruct very tiny features (e.g. diatoms, butterfly scales and mineral fabrics) at nanometre resolution. Ultimately, we foresee this as being a useful tool for better understanding spatial relationships at very high resolution. However, our motivation is also to use it to produce 3D models to be used in public outreach events and exhibitions, especially for the blind or partially sighted. © 2017 The Authors Journal of Microscopy © 2017 Royal Microscopical Society.

  3. High resolution NMR spectroscopy of synthetic polymers in bulk

    International Nuclear Information System (INIS)

    Komorski, R.A.

    1986-01-01

    The contents of this book are: Overview of high-resolution NMR of solid polymers; High-resolution NMR of glassy amorphous polymers; Carbon-13 solid-state NMR of semicrystalline polymers; Conformational analysis of polymers of solid-state NMR; High-resolution NMR studies of oriented polymers; High-resolution solid-state NMR of protons in polymers; and Deuterium NMR of solid polymers. This work brings together the various approaches for high-resolution NMR studies of bulk polymers into one volume. Heavy emphasis is, of course, given to 13C NMR studies both above and below Tg. Standard high-power pulse and wide-line techniques are not covered

  4. Application of numerical inverse method in calculation of composition-dependent interdiffusion coefficients in finite diffusion couples

    DEFF Research Database (Denmark)

    Liu, Yuanrong; Chen, Weimin; Zhong, Jing

    2017-01-01

    The previously developed numerical inverse method was applied to determine the composition-dependent interdiffusion coefficients in single-phase finite diffusion couples. The numerical inverse method was first validated in a fictitious binary finite diffusion couple by pre-assuming four standard sets of interdiffusion coefficients. After that, the numerical inverse method was adopted in a ternary Al-Cu-Ni finite diffusion couple. Based on the measured composition profiles, the ternary interdiffusion coefficients along the entire diffusion path of the target ternary diffusion couple were obtained by using the numerical inverse approach. The comprehensive comparisons between the computations and the experiments indicate that the numerical inverse method is also applicable to high-throughput determination of the composition-dependent interdiffusion coefficients in finite diffusion couples.
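
The inverse idea can be illustrated with a deliberately simplified one-dimensional analogue (not the authors' algorithm): generate a concentration profile from the semi-infinite couple solution and recover the diffusion coefficient by least squares over a small candidate set. All names and numbers here are ours.

```python
import math

def profile(x_points, D, t):
    """Semi-infinite couple solution c(x, t) = 0.5 * erfc(x / (2*sqrt(D*t)))."""
    return [0.5 * math.erfc(x / (2.0 * math.sqrt(D * t))) for x in x_points]

def fit_D(x_points, c_measured, t, candidates):
    """Pick the candidate D whose predicted profile best matches the data."""
    def sse(D):
        return sum((cm - cp) ** 2
                   for cm, cp in zip(c_measured, profile(x_points, D, t)))
    return min(candidates, key=sse)

xs = [i * 1e-6 for i in range(-50, 51)]        # positions in metres
target = profile(xs, 2e-14, 3600.0)            # synthetic "measurement"
best = fit_D(xs, target, 3600.0, [1e-14, 2e-14, 4e-14])
```

A real composition-dependent inverse calculation replaces the analytic profile with a forward diffusion simulation and the grid search with a proper optimizer, but the fit-forward-model-to-measured-profile structure is the same.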

  5. A new omni-directional multi-camera system for high resolution surveillance

    Science.gov (United States)

    Cogal, Omer; Akin, Abdulkadir; Seyid, Kerem; Popovic, Vladan; Schmid, Alexandre; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2014-05-01

    Omni-directional high resolution surveillance has a wide application range in defense and security fields. Early systems used for this purpose were based on a parabolic mirror or a fisheye lens, where distortion due to the nature of the optical elements cannot be avoided. Moreover, in such systems, the image resolution is limited to a single image sensor's resolution. Recently, the Panoptic camera approach, which mimics the eyes of flying insects using multiple imagers, has been presented. This approach features a novel solution for constructing a spherically arranged wide-FOV plenoptic imaging system where the omni-directional image quality is limited by low-end sensors. In this paper, an overview of current Panoptic camera designs is provided. New results for a very-high resolution visible spectrum imaging and recording system inspired by the Panoptic approach are presented. The GigaEye-1 system, with 44 single cameras and 22 FPGAs, is capable of recording omni-directional video in a 360°×100° FOV at 9.5 fps with a resolution over 17,700×4,650 pixels (82.3 MP). Real-time video capturing capability is also verified at 30 fps for a resolution over 9,000×2,400 pixels (21.6 MP). The next generation system with significantly higher resolution and real-time processing capacity, called GigaEye-2, is currently under development. The important capacity of GigaEye-1 opens the door to various post-processing techniques in the surveillance domain, such as large perimeter object tracking, very-high resolution depth map estimation and high dynamic-range imaging, which are beyond standard stitching and panorama generation methods.

  6. Physical qualification and improvements of the numerical model of a method of characteristics for the resolution of the neutron transport equation in non-structured grids

    International Nuclear Information System (INIS)

    Santandrea, Simone

    2001-01-01

    This research thesis addresses the resolution of the neutron transport equation inside reactor cells on non-structured grids and in general geometry by using the method of characteristics (MoC) and two acceleration methods developed during this research. The author introduces the MoC with a flat approximation of the neutron collision source within each computation area, and then extends this formulation to a linear approximation. The next part presents the mathematical framework for the use of the Lanczos iterative scheme. A new acceleration method is then introduced. The last part reports realistic cases with high spatial and angular heterogeneity. Results obtained by using the Apollo2-TDT code are compared with those obtained with the Tripoli4 Monte-Carlo code.

  7. Numerical Analysis on the High-Strength Concrete Beams Ultimate Behaviour

    Science.gov (United States)

    Smarzewski, Piotr; Stolarski, Adam

    2017-10-01

    Development of production technologies for high-strength concrete (HSC) beams, with the aim of creating a secure and durable material, is closely linked with numerical models of the real objects. Three-dimensional nonlinear finite element models of reinforced high-strength concrete beams with a complex geometry have been investigated in this study. The numerical analysis is performed using the ANSYS finite element package. The arc-length (A-L) parameters and the adaptive descent (AD) parameters are used with the Newton-Raphson method to trace the complete load-deflection curves. Experimental and finite element modelling results are compared graphically and numerically. Comparison of these results indicates the correctness of the failure criteria assumed for the high-strength concrete and the steel reinforcement. The results of the numerical simulation are sensitive to the modulus of elasticity and the shear transfer coefficient for an open crack assigned to high-strength concrete. The full nonlinear load-deflection curves at mid-span of the beams, the development of strain in the compressive concrete and the development of strain in the tensile bars are in good agreement with the experimental results. Numerical results for smeared crack patterns agree qualitatively with the test data as to location, direction, and distribution. The model was capable of predicting the initiation and propagation of flexural and diagonal cracks. It was concluded that the finite element model successfully captured the inelastic flexural behaviour of the beams up to failure.
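
The Newton-Raphson iteration at the core of such a solution procedure can be sketched for a single degree of freedom (the arc-length and adaptive-descent refinements used in the paper are omitted; the cubic spring law and all names below are illustrative):

```python
# Bare-bones incremental Newton-Raphson under load control for a
# single-DOF nonlinear spring with internal force f(u) = k*u + b*u**3.
def newton_raphson_path(load_steps, k=10.0, b=5.0, tol=1e-10):
    path, u = [], 0.0
    for p in load_steps:                       # apply the load incrementally
        for _ in range(50):                    # Newton iterations per step
            residual = p - (k * u + b * u ** 3)
            if abs(residual) < tol:
                break
            tangent = k + 3.0 * b * u ** 2     # consistent tangent stiffness
            u += residual / tangent
        path.append(u)                         # converged displacement
    return path
```

Each converged point satisfies equilibrium f(u) = p; tracing many small load steps yields the load-deflection curve. Pure load control cannot pass limit points, which is why arc-length control is added in practice.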

  8. High-emulation mask recognition with high-resolution hyperspectral video capture system

    Science.gov (United States)

    Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin

    2014-11-01

    We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. A RGB camera is used for traditional facial recognition. A prism and a gray scale camera are used to capture spectral information of the observed face. Experiments show that a mask made of silica gel has a different spectral reflectance from human skin. As multispectral imaging offers additional information about physical characteristics, a high-emulation mask can be easily recognized.

  9. High resolution integral holography using Fourier ptychographic approach.

    Science.gov (United States)

    Li, Zhaohui; Zhang, Jianqi; Wang, Xiaorui; Liu, Delian

    2014-12-29

    An innovative approach is proposed for calculating high resolution computer generated integral holograms by using the Fourier Ptychographic (FP) algorithm. The approach initializes a high resolution complex hologram with a random guess, and then stitches together low resolution multi-view images, synthesized from the elemental images captured by integral imaging (II), to recover the high resolution hologram through an iterative retrieval with FP constraints. This paper begins with an analysis of the principle of hologram synthesis from multi-projections, followed by an accurate determination of the constraints required in the Fourier ptychographic integral-holography (FPIH). Next, the procedure of the approach is described in detail. Finally, optical reconstructions are performed and the results are demonstrated. Theoretical analysis and experiments show that our proposed approach can reconstruct 3D scenes with high resolution.
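
The random-guess-plus-iterative-constraints structure can be illustrated with a classic Gerchberg-Saxton-style sketch (alternating magnitude constraints in the object and Fourier planes; this is a generic stand-in, not the authors' FPIH algorithm, and all names are ours):

```python
import numpy as np

def retrieve(obj_mag, fourier_mag, iters=200, seed=0):
    """Alternating magnitude projections starting from a random-phase guess."""
    rng = np.random.default_rng(seed)
    field = obj_mag * np.exp(1j * rng.uniform(0, 2 * np.pi, obj_mag.shape))
    for _ in range(iters):
        F = np.fft.fft2(field)
        F = fourier_mag * np.exp(1j * np.angle(F))      # Fourier-plane constraint
        field = np.fft.ifft2(F)
        field = obj_mag * np.exp(1j * np.angle(field))  # object-plane constraint
    return field
```

Each pass keeps the current phase estimate while re-imposing the measured magnitudes, which mirrors the general structure of constraint-based iterative retrieval.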

  10. Large-grazing-angle, multi-image Kirkpatrick-Baez microscope as the front end to a high-resolution streak camera for OMEGA

    International Nuclear Information System (INIS)

    Gotchev, O.V.; Hayes, L.J.; Jaanimagi, P.A.; Knauer, J.P.; Marshall, F.J.; Meyerhofer, D.D.

    2003-01-01

    A high-resolution x-ray microscope with a large grazing angle has been developed, characterized, and fielded at the Laboratory for Laser Energetics. It increases the sensitivity and spatial resolution in planar direct-drive hydrodynamic stability experiments, relevant to inertial confinement fusion research. It has been designed to work as the optical front end of the PJX - a high-current, high-dynamic-range x-ray streak camera. Optical design optimization, results from numerical ray tracing, mirror-coating choice, and characterization have been described previously [O. V. Gotchev, et al., Rev. Sci. Instrum. 74, 2178 (2003)]. This work highlights the optics' unique mechanical design and flexibility and considers certain applications that benefit from it. Characterization of the microscope's resolution in terms of its modulation transfer function over the field of view is shown. Recent results from hydrodynamic stability experiments, diagnosed with the optic and the PJX, are provided to confirm the microscope's advantages as a high-resolution, high-throughput x-ray optical front end for streaked imaging

  11. Numerical Methods for a Class of Differential Algebraic Equations

    Directory of Open Access Journals (Sweden)

    Lei Ren

    2017-01-01

    Full Text Available This paper is devoted to the study of some efficient numerical methods for differential algebraic equations (DAEs). First, we propose a finite algorithm to compute the Drazin inverse for time-varying DAEs. Numerical experiments comparing the Drazin inverse method with the Radau IIA method illustrate that the Drazin inverse method achieves higher precision. Then the Drazin inverse, Radau IIA, and Padé approximation methods are applied to constant-coefficient DAEs. Numerical results demonstrate that the Padé approximation is powerful for solving constant-coefficient DAEs.
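    For a matrix of known index, the Drazin inverse used by the first method can be computed from a standard pseudoinverse identity, A^D = A^k (A^(2k+1))^+ A^k for any k at least the index of A. The numpy sketch below is a generic illustration; the paper's finite algorithm additionally handles time-varying coefficients:

```python
import numpy as np

def drazin_inverse(A, k):
    """Drazin inverse via A^D = A^k @ pinv(A^(2k+1)) @ A^k,
    valid whenever k >= ind(A), the index of A."""
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Singular, index-1 example: the Drazin inverse inverts the nonsingular
# "core" of A and annihilates its nilpotent/null part.
A = np.array([[2.0, 0.0], [0.0, 0.0]])
AD = drazin_inverse(A, 1)
# Defining properties: A@AD == AD@A and AD@A@AD == AD
```

    For an invertible matrix the same formula simply reproduces the ordinary inverse, which makes it a convenient building block when discretizing DAEs whose leading matrix is singular.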

  12. Finite detector based projection model for super resolution CT

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Hengyong; Wang, Ge [Wake Forest Univ. Health Sciences, Winston-Salem, NC (United States). Dept. of Radiology; Virginia Tech, Blacksburg, VA (United States). Biomedical Imaging Div.

    2011-07-01

    For finite detector-element and focal-spot sizes, we propose a projection model for super-resolution CT. First, for a given X-ray source point, a projection datum is modeled as an area integral over a narrow fan-beam connecting the detector elemental borders and the X-ray source point. Then, the final projection value is expressed as the integral obtained in the first step over the whole focal-spot support. An ordered-subset simultaneous algebraic reconstruction technique (OS-SART) is developed using the proposed projection model. In numerical simulations, our method produces super spatial resolution and suppresses high-frequency artifacts. (orig.)
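    Independent of the projection model, the OS-SART update itself can be sketched on a toy dense system; the subset partition and relaxation factor below are illustrative choices, not the paper's configuration:

```python
import numpy as np

def os_sart(A, b, n_subsets=2, n_iter=100, lam=1.0):
    """Minimal OS-SART sketch for Ax = b with a dense nonnegative system
    matrix A (rows = rays, columns = pixels)."""
    m, n = A.shape
    x = np.zeros(n)
    # Ordered subsets: interleaved groups of rows (rays)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for idx in subsets:
            As, bs = A[idx], b[idx]
            row_sums = As.sum(axis=1)   # per-ray normalization
            col_sums = As.sum(axis=0)   # per-pixel normalization
            resid = (bs - As @ x) / np.where(row_sums != 0, row_sums, 1.0)
            x += lam * (As.T @ resid) / np.where(col_sums != 0, col_sums, 1.0)
    return x
```

    Cycling through subsets gives most of SART's stability while updating the image several times per full sweep of the data, which is why the ordered-subset variant converges in fewer passes.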

  13. Oversampling in the computed tomography measurements applied for bone structure studies as a method of spatial resolution improvement

    International Nuclear Information System (INIS)

    Tatoń, Grzegorz; Rokita, Eugeniusz; Rok, Tomasz; Beckmann, Felix

    2012-01-01

    Our purpose was to evaluate oversampling as a method for improving the axial resolution of computed tomography. A method is proposed for achieving isotropic, fine resolution when the scanning system has anisotropic resolution; in a typical clinical system the axial resolution is much lower than the planar resolution. The idea relies on scanning with wide, overlapping layers and subsequently recovering resolution at the level of the scanning step. Simulated three-dimensional images, as well as real microtomographic images of rat femoral bone, were used to test the proposed solution. Original high-resolution images were virtually scanned with a wide beam and a small step to simulate real measurements. The low-resolution image series were then processed to recover the original fine resolution. The resolutions of the original, virtually scanned, and recovered images were compared using the modulation transfer function (MTF). Oversampling proved effective for resolution recovery, as confirmed by comparing resolving powers before and after recovery; the MTF analysis showed resolution improvement. The improvement was achieved at the cost of considerably increased image noise, clearly visible in the image histograms. Despite this disadvantage, the proposed method can be used successfully in practice, especially in trabecular bone studies, because of the high contrast between trabeculae and intertrabecular spaces.
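    The MTF comparison used to quantify resolution can be sketched in one dimension: the MTF is the normalized magnitude spectrum of the line-spread function, and a wider (lower-resolution) response gives a lower MTF at any fixed spatial frequency below its first zero. The boxcar line-spread functions below are illustrative stand-ins for measured responses:

```python
import numpy as np

def mtf(lsf, dx=1.0):
    """Modulation transfer function: normalized |FFT| of the
    line-spread function, with its spatial-frequency axis."""
    m = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, m / m[0]

# Narrow vs. wide boxcar line-spread functions (better vs. worse resolution)
lsf_narrow = np.zeros(64); lsf_narrow[:2] = 1.0
lsf_wide = np.zeros(64);   lsf_wide[:8] = 1.0
f, m_narrow = mtf(lsf_narrow)
_, m_wide = mtf(lsf_wide)
```

    Comparing such curves before and after resolution recovery is exactly the kind of resolving-power comparison the abstract describes.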

  14. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave-field inverse transformation and its application in 3D interpretation of the transient electromagnetic method (TEM). The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. Continuous imaging of TEM data can therefore be accomplished with imaging methods from seismic interpretation once the diffusion equation has been transformed into a fictitious wave equation. An interpretation method based on imaging of the fictitious wave field could serve as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave-like features that allow wave-field interpretation methods to be applied in TEM to improve prospecting resolution. Wave-field transformation is a key issue in migration imaging of the fictitious wave field. The governing equation of the transformation is a Fredholm integral equation of the first kind, a typically ill-posed problem; in addition, the large dynamic time range of TEM data further aggravates this ill-posedness. The wave-field transformation is implemented with a preconditioned regularized conjugate gradient method, and continuous imaging of the fictitious wave field with Kirchhoff integration. A synthetic-aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the proposed method and obtained a satisfying interpretation result. (paper)
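    The regularized inversion step can be illustrated generically: a discretized first-kind system K m = d is stabilized with Tikhonov regularization and solved by conjugate gradients on the normal equations. This is a simplified, unpreconditioned stand-in for the preconditioned regularized CG used in the paper, with an illustrative system matrix:

```python
import numpy as np

def tikhonov_cg(K, d, alpha=1e-3, n_iter=100, tol=1e-12):
    """Solve the first-kind system K m = d via CG applied to the
    Tikhonov-regularized normal equations (K^T K + alpha I) m = K^T d."""
    def Aop(v):
        return K.T @ (K @ v) + alpha * v
    b = K.T @ d
    m = np.zeros(K.shape[1])
    r = b - Aop(m)      # initial residual
    p = r.copy()        # initial search direction
    rs = r @ r
    for _ in range(n_iter):
        Ap = Aop(p)
        a = rs / (p @ Ap)
        m += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
```

    The regularization parameter alpha trades data fit against stability; for an ill-posed kernel it damps the small singular values that would otherwise amplify noise.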

  15. Numerical simulation of "an American haboob"

    OpenAIRE

    Vukovic, A.; Vujadinovic, M.; Pejanovic, G.; Andric, J.; Kumjian, M. R.; Djurdjevic, V.; Dacic, M.; Prasad, A. K.; El-Askary, H. M.; Paris, B. C.; Petkovic, S.; Nickovic, S.; Sprigg, W. A.

    2014-01-01

    A dust storm of fearful proportions hit Phoenix in the early evening hours of 5 July 2011. This storm, an American haboob, was predicted hours in advance because numerical land–atmosphere modeling, computing power, and remote sensing of dust events have improved greatly over the past decade. High-resolution numerical models are required for accurate simulation of the small scales of the haboob process, with high-velocity surface winds produced by strong convection and severe...

  16. A modified compressible smoothed particle hydrodynamics method and its application on the numerical simulation of low and high velocity impacts

    International Nuclear Information System (INIS)

    Amanifard, N.; Haghighat Namini, V.

    2012-01-01

    In this study a Modified Compressible Smoothed Particle Hydrodynamics (MCSPH) method is introduced that is applicable to problems involving shock-wave structures and elastic-plastic deformation of solids. The algorithm discretizes the momentum equation into three parts, solves each part separately, and computes their effects on the velocity field and particle displacements. The most distinctive feature of the method is that it removes the artificial viscosity terms from the formulation entirely, while remaining in good agreement with other established numerical methods and exhibiting no serious numerical fracturing or tensile instability, without requiring any extra corrections. Two types of problems involving elastic-plastic deformation and shock waves are presented to demonstrate the method's capability for such simulations and its ability to capture shocks: low- and high-velocity impacts between aluminum projectiles and semi-infinite aluminum beams. An elastic-perfectly plastic constitutive model is chosen for the aluminum, and the simulation results are compared with other published studies of these cases.

  17. High-heat tank safety issue resolution program plan. Revision 2

    International Nuclear Information System (INIS)

    Wang, O.S.

    1994-12-01

    The purpose of this program plan is to provide a guide for selecting corrective actions that will mitigate and/or remediate the high-heat waste tank safety issue for single-shell tank 241-C-106. The heat source of approximately 110,000 Btu/hr is the radioactive decay of the stored waste material (primarily 90Sr) inadvertently transferred into the tank in the late 1960s. Currently, forced ventilation, with added water to promote thermal conductivity and evaporative cooling, is used for heat removal. The method is very effective and economical. At this time, the only viable solution identified to permanently resolve this safety issue is removal of the heat-generating waste in the tank. This solution is being aggressively pursued as the only remediation method for this safety issue, and tank 241-C-106 has been selected as the first single-shell tank for retrieval. The current cooling method and other alternatives are addressed in this program as means of mitigating the safety issue before retrieval. This program plan has three parts. The first part establishes program objectives and defines the safety issue, drivers, and resolution criteria and strategy. The second part evaluates the high-heat safety issue, its mitigation and remediation methods, and other alternatives according to resolution logic. The third part identifies major tasks and alternatives for mitigation and resolution of the safety issue. A table of best-estimate schedules for the key tasks is also included in this program plan.

  18. Spatial Ensemble Postprocessing of Precipitation Forecasts Using High Resolution Analyses

    Science.gov (United States)

    Lang, Moritz N.; Schicker, Irene; Kann, Alexander; Wang, Yong

    2017-04-01

    Ensemble prediction systems are designed to account for errors or uncertainties in the initial and boundary conditions, imperfect parameterizations, etc. However, due to sampling errors and underestimation of the model errors, these ensemble forecasts tend to be underdispersive and to lack both reliability and sharpness. To overcome such limitations, statistical postprocessing methods are commonly applied to these forecasts. In this study, a full-distributional spatial postprocessing method is applied to short-range precipitation forecasts over Austria using Standardized Anomaly Model Output Statistics (SAMOS). Following Stauffer et al. (2016), observation and forecast fields are transformed into standardized anomalies by subtracting a site-specific climatological mean and dividing by the climatological standard deviation. Because only a single regression model needs to be fitted for the whole domain, the SAMOS framework provides a computationally inexpensive way to create operationally calibrated probabilistic forecasts for any arbitrary location, or for all grid points in the domain simultaneously. Taking advantage of the INCA system (Integrated Nowcasting through Comprehensive Analysis), high-resolution analyses are used for the computation of the observed climatology and for model training. The INCA system operationally combines station measurements and remote sensing data into real-time objective analysis fields at 1 km horizontal and 1 h temporal resolution. The precipitation forecast used in this study is obtained from a limited-area-model ensemble prediction system also operated by ZAMG. The so-called ALADIN-LAEF provides, by applying a multi-physics approach, a 17-member forecast at a horizontal resolution of 10.9 km and a temporal resolution of 1 hour. The SAMOS approach statistically combines the in-house high-resolution analysis and ensemble prediction system.
The station-based validation of 6 hour precipitation sums
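    The standardized-anomaly transform at the core of SAMOS is straightforward to sketch. The synthetic fields and sample climatologies below are illustrative stand-ins for the INCA analyses and ALADIN-LAEF forecasts, and the single domain-wide regression is fitted with a plain least-squares line rather than the full distributional model:

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical data: 100 forecast times x 20 sites
obs = rng.gamma(2.0, 1.5, size=(100, 20))            # "observed" precipitation
fcst = obs + rng.normal(0.5, 0.8, size=obs.shape)    # biased, noisy "forecast"

# Site-specific climatologies (here the sample mean/sd stand in for the
# long-term INCA-based climatology used in the study)
obs_mean, obs_sd = obs.mean(axis=0), obs.std(axis=0)
fc_mean, fc_sd = fcst.mean(axis=0), fcst.std(axis=0)

# Standardized anomalies: subtract climatological mean, divide by sd
obs_anom = (obs - obs_mean) / obs_sd
fc_anom = (fcst - fc_mean) / fc_sd

# One regression model fits the whole domain at once
slope, intercept = np.polyfit(fc_anom.ravel(), obs_anom.ravel(), 1)
calibrated_anom = intercept + slope * fc_anom

# Back-transform to physical units at every site
calibrated = calibrated_anom * obs_sd + obs_mean
```

    Because the regression operates in anomaly space, one set of coefficients calibrates every site and grid point, which is what makes the approach cheap enough for operational use.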

  19. Quadruplex MAPH: improvement of throughput in high-resolution copy number screening

    Directory of Open Access Journals (Sweden)

    Walker Susan

    2009-09-01

    Full Text Available Abstract Background: Copy number variation (CNV) in the human genome is recognised as a widespread and important source of human genetic variation. Now the challenge is to screen for these CNVs at high resolution in a reliable, accurate and cost-effective way. Results: Multiplex Amplifiable Probe Hybridisation (MAPH) is a sensitive, high-resolution technology appropriate for screening for CNVs in a defined region, for a targeted population. We have developed MAPH to a highly multiplexed format ("QuadMAPH") that allows the user a four-fold increase in the number of loci tested simultaneously. We have used this method to analyse a genomic region of 210 kb, including the MSH2 gene and 120 kb of flanking DNA. We show that the QuadMAPH probes report copy number with equivalent accuracy to simplex MAPH, reliably demonstrating diploid copy number in control samples and accurately detecting deletions in Hereditary Non-Polyposis Colorectal Cancer (HNPCC) samples. Conclusion: QuadMAPH is an accurate, high-resolution method that allows targeted screening of large numbers of subjects without the expense of genome-wide approaches. Whilst we have applied this technique to a region of the human genome, it is equally applicable to the genomes of other organisms.

  20. High-spatial resolution and high-spectral resolution detector for use in the measurement of solar flare hard x rays

    International Nuclear Information System (INIS)

    Desai, U.D.; Orwig, L.E.

    1988-01-01

    In the area of high spatial resolution, evaluation of a hard X-ray detector with 65-micron spatial resolution for operation in the energy range from 30 to 400 keV is proposed. The basic detector is a thick, large-area scintillator faceplate, composed of a matrix of high-density scintillating glass fibers, attached to a proximity-type image intensifier tube with a resistive-anode digital readout system. Such a detector, combined with a coded-aperture mask, would be ideal for use as a modest-sized hard X-ray imaging instrument at X-ray energies as high as several hundred keV. As an integral part of this study it was also proposed that several X-ray image-coding techniques usable with this detector be critically evaluated. In the area of high spectral resolution, it is proposed to evaluate two different types of detectors for use as X-ray spectrometers for solar flares: planar silicon detectors and high-purity germanium (HPGe) detectors. Instruments utilizing these high-spatial-resolution detectors for hard X-ray imaging measurements from 30 to 400 keV and high-spectral-resolution detectors for measurements over a similar energy range would be ideally suited for making crucial solar flare observations during the upcoming maximum in the solar cycle.