Magnetic flux reconstruction methods for shaped tokamaks
International Nuclear Information System (INIS)
Tsui, Chi-Wa.
1993-12-01
The use of a variational method permits the Grad-Shafranov (GS) equation to be solved by reducing the problem of solving the 2D non-linear partial differential equation to the problem of minimizing a function of several variables. This high-speed algorithm approximately solves the GS equation given a parameterization of the plasma boundary and the current profile (p' and FF' functions). The author treats the current profile parameters as unknowns. The goal is to reconstruct the internal magnetic flux surfaces of a tokamak plasma and the toroidal current density profile from the external magnetic measurements. This is a classic problem of inverse equilibrium determination. The current profile parameters can be evaluated by several different matching procedures. Matching of magnetic flux and field at the probe locations using the Biot-Savart law and magnetic Green's functions provides a robust method of magnetic reconstruction. The matching of the poloidal magnetic field on the plasma surface provides a unique method of identifying the plasma current profile. However, the power of this method is greatly compromised by the experimental errors of the magnetic signals. The Casing Principle provides a very fast way to evaluate the plasma contribution to the magnetic signals. It has the potential of being a fast matching method. The performance of this method is hindered by the accuracy of the poloidal magnetic field computed from the equilibrium solver. A flux reconstruction package has been implemented which integrates a vacuum field solver using a filament model for the plasma, a multi-layer perceptron neural network as an interface, and the volume integration of plasma current density using Green's functions as a matching method for the current profile parameters. The flux reconstruction package is applied and compared with the ASEQ and EFIT data. The results are promising.
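The probe-matching procedure described above can be sketched as a linear least-squares problem; the Green's matrix, filament currents and noise level below are illustrative stand-ins, not values from the thesis:

```python
import numpy as np

# Hypothetical sketch: recover filament currents (a stand-in for the
# current-profile parameters) by matching synthetic probe signals.
# G[i, j] holds the flux at probe i per unit current in filament j;
# all names and dimensions are assumptions made for illustration.

rng = np.random.default_rng(0)
n_probes, n_filaments = 12, 4

G = rng.normal(size=(n_probes, n_filaments))      # precomputed Green's functions
I_true = np.array([1.0, -0.5, 0.8, 0.3])          # "true" filament currents
psi_meas = G @ I_true + 1e-8 * rng.normal(size=n_probes)  # noisy probe fluxes

# Least-squares matching of the measured fluxes to recover the currents
I_fit, *_ = np.linalg.lstsq(G, psi_meas, rcond=None)
print(I_fit)
```

With low measurement noise the fitted currents recover the assumed ones; the abstract notes that real magnetic-signal errors limit how far this can be pushed.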
International Nuclear Information System (INIS)
Dorning, J.J.
1991-01-01
A simultaneous pin lattice cell and fuel bundle homogenization theory has been developed for use with nodal diffusion calculations of practical reactors. The theoretical development of the homogenization theory, which is based on multiple-scales asymptotic expansion methods carried out through fourth order in a small parameter, starts from the transport equation and systematically yields: a cell-homogenized bundle diffusion equation with self-consistent expressions for the cell-homogenized cross sections and diffusion tensor elements; and a bundle-homogenized global reactor diffusion equation with self-consistent expressions for the bundle-homogenized cross sections and diffusion tensor elements. The continuity of the angular flux at cell and bundle interfaces also systematically yields jump conditions for the scalar flux, or so-called flux discontinuity factors, on the cell and bundle interfaces in terms of the two adjacent cell or bundle eigenfunctions. The expressions required for the reconstruction of the angular flux, or the 'de-homogenization' theory, were obtained as an integral part of the development; hence the leading-order transport theory angular flux is easily reconstructed throughout the reactor, including the interiors of the fuel bundles or computational nodes and the interiors of the pin lattice cells. The theoretical development shows that the transport theory angular flux, obtained to first order from the whole-reactor nodal diffusion calculations done using the homogenized nuclear data and discontinuity factors, is a product of three computed quantities: a 'cell shape function'; a 'bundle shape function'; and a 'global shape function'. 10 refs
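The product form of the reconstructed flux stated above can be illustrated with a one-dimensional toy model; the three shape functions below are invented for demonstration only:

```python
import numpy as np

# Illustrative sketch of the de-homogenization idea: the reconstructed
# flux is the product of a cell shape function, a bundle shape function,
# and a global shape function. The functional forms are made-up
# placeholders, not the paper's computed quantities.

x = np.linspace(0.0, 1.0, 201)          # normalized position across the reactor
cell   = 1.0 + 0.10 * np.cos(2 * np.pi * 50 * x)   # fast variation within pin cells
bundle = 1.0 + 0.05 * np.cos(2 * np.pi * 10 * x)   # intermediate variation within bundles
global_shape = np.sin(np.pi * x)                    # slow global mode

flux = cell * bundle * global_shape     # leading-order reconstructed flux
print(flux.shape)
```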
Methods for the reconstruction of large scale anisotropies of the cosmic ray flux
Energy Technology Data Exchange (ETDEWEB)
Over, Sven
2010-01-15
In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of great importance for models that try to explain cosmic radiation. In this thesis, methods for the reconstruction of parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly well as, or only marginally worse than, established methods under ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimation of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. This robustness against certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
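For context, the classical Rayleigh first-harmonic analysis in right ascension (one of the established methods the thesis compares against, not the horizontal method itself) can be sketched as follows; the synthetic dipole amplitude and phase are assumptions for illustration:

```python
import numpy as np

# Sketch of a Rayleigh first-harmonic analysis: estimate the amplitude
# and phase of a dipole-like modulation in right ascension from a set
# of synthetic arrival directions. Event sample and dipole parameters
# are invented; real analyses must also handle exposure effects.

rng = np.random.default_rng(1)
n = 200_000
amp_true, phase_true = 0.05, 1.0    # assumed modulation amplitude and phase (rad)

# Draw right ascensions from p(a) proportional to 1 + amp*cos(a - phase)
alpha = rng.uniform(0, 2 * np.pi, size=3 * n)
keep = rng.uniform(0, 1 + amp_true, size=alpha.size) \
    < 1 + amp_true * np.cos(alpha - phase_true)
alpha = alpha[keep][:n]

# Rayleigh estimators of the first-harmonic amplitude and phase
a = 2 * np.mean(np.cos(alpha))
b = 2 * np.mean(np.sin(alpha))
r, phi = np.hypot(a, b), np.arctan2(b, a)
print(round(r, 3), round(phi, 2))
```

The estimators recover the assumed amplitude and phase to within statistical fluctuations of order sqrt(2/n).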
Hu, Qiang
2017-09-01
We develop an approach of the Grad-Shafranov (GS) reconstruction for toroidal structures in space plasmas, based on in situ spacecraft measurements. The underlying theory is the GS equation that describes two-dimensional magnetohydrostatic equilibrium, as widely applied in fusion plasmas. The geometry is such that the arbitrary cross-section of the torus has rotational symmetry about the rotation axis, Z, with a major radius, r0. The magnetic field configuration is thus determined by a scalar flux function, Ψ, and a functional F that is a single-variable function of Ψ. The algorithm is implemented through a two-step approach: i) a trial-and-error process by minimizing the residue of the functional F(Ψ) to determine an optimal Z-axis orientation, and ii) for the chosen Z, a χ2 minimization process resulting in a range of r0. Benchmark studies of known analytic solutions to the toroidal GS equation with noise additions are presented to illustrate the two-step procedure and to demonstrate the performance of the numerical GS solver, separately. For the cases presented, the errors in Z and r0 are 9° and 22%, respectively, and the relative percent error in the numerical GS solutions is smaller than 10%. We also make public the computer codes for these implementations and benchmark studies.
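The residue minimization of step i) can be illustrated with a toy single-valuedness check: for the correct axis orientation, F should be the same function of Ψ on both branches of the crossing. The functional and branch data below are invented, not the paper's benchmark solutions:

```python
import numpy as np

# Toy residue for the trial-and-error axis search: compare F(Psi)
# sampled on the inbound and outbound branches. For a correct
# orientation the branches coincide and the residue vanishes; the
# residue definition and data here are illustrative assumptions.

psi_grid = np.linspace(0.0, 1.0, 50)

def branch_residue(f_in, f_out):
    """Normalized mismatch between the two branches of F(Psi)."""
    return np.linalg.norm(f_in - f_out) / np.linalg.norm(0.5 * (f_in + f_out))

def F(psi):
    return 1.0 + 0.5 * psi**2          # an assumed single-valued functional

good = branch_residue(F(psi_grid), F(psi_grid))        # correct orientation
bad = branch_residue(F(psi_grid), F(psi_grid) + 0.2)   # misaligned trial axis
print(good < bad)
```

In the actual algorithm this residue is evaluated over many trial Z-axis orientations and the minimizer is kept.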
Reconstruction of vacuum magnetic flux in QUEST
International Nuclear Information System (INIS)
Ishiguro, Masaki; Hanada, Kazuaki; Nakamura, Kazuo
2010-01-01
It is important to determine the best method for reconstructing the magnetic flux when eddy currents are significantly induced during magnetic measurement in spherical tokamaks (STs). Four methods for this reconstruction are investigated, and the calculated magnetic fluxes are compared to those measured in the cavity of a vacuum vessel. The results show that the best method is the one that uses currents from virtual coils for reconstruction. In this method, the placement of the virtual coils is optimized with numerical simulations using the Akaike information criterion (AIC), which indicates the goodness of fit of models used to fit measured data. The virtual coils are set on a line 15 cm outside the vacuum vessel. (author)
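The AIC-based model comparison mentioned above can be sketched generically; the Gaussian-error AIC formula is standard, while the data and the two candidate models below are illustrative stand-ins for alternative virtual-coil placements:

```python
import numpy as np

# Sketch of model selection with the Akaike information criterion.
# For a least-squares fit with Gaussian errors, AIC = n*ln(RSS/n) + 2k,
# where k counts fitted parameters (here, a stand-in for virtual-coil
# currents). The signal and basis functions are synthetic assumptions.

def aic(y, y_fit, k):
    n = len(y)
    rss = np.sum((y - y_fit) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(2)
n = 100
t = np.linspace(0, 3, n)
y = np.sin(t) + 0.01 * rng.normal(size=n)   # "measured" flux with noise

def fit(y, k):
    """Least-squares fit with k polynomial basis functions."""
    A = np.column_stack([t**j for j in range(k)])
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ c

aic3, aic10 = aic(y, fit(y, 3), 3), aic(y, fit(y, 10), 10)
print(aic3, aic10)
```

The model with the lower AIC is preferred; the 2k term penalizes adding coils that do not improve the fit beyond the noise level.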
International Nuclear Information System (INIS)
Jacqmin, R.P.
1991-01-01
The safety and optimal performance of large, commercial, light-water reactors require knowledge at all times of the neutron-flux distribution in the core. In principle, this information can be obtained by solving the time-dependent neutron diffusion equations. However, this approach is complicated and very expensive. Sufficiently accurate, real-time calculations (time scale of approximately one second) are not yet possible on desktop computers, even with fast-running, nodal kinetics codes. A semi-experimental, nodal synthesis method which avoids the solution of the time-dependent neutron diffusion equations is described. The essential idea of this method is to approximate instantaneous nodal group-fluxes by a linear combination of K precomputed, three-dimensional, static expansion-functions. The time-dependent coefficients of the combination are found from the requirement that the reconstructed flux-distribution agree in a least-squares sense with the readings of J (≥K) fixed, prompt-responding neutron-detectors. Possible numerical difficulties with the least-squares solution of the ill-conditioned J-by-K system of equations are brought under complete control by the use of a singular-value-decomposition technique. This procedure amounts to the rearrangement of the original linear combination of K expansion functions into an equivalent, more convenient, linear combination of R (≤K) orthogonalized 'modes' of decreasing magnitude. Exceedingly small modes are zeroed to eliminate any risk of roundoff-error amplification and to assure consistency with the limited accuracy of the data. Additional modes are zeroed when it is desirable to limit the sensitivity of the results to measurement noise.
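The truncated-SVD solution of the ill-conditioned J-by-K system can be sketched as follows; the matrix, detector readings and truncation threshold are synthetic assumptions:

```python
import numpy as np

# Sketch of the singular-value-decomposition step: solve the
# ill-conditioned J-by-K least-squares system for the expansion
# coefficients, zeroing exceedingly small modes to avoid round-off
# amplification. Matrix and "detector readings" are synthetic.

rng = np.random.default_rng(3)
J, K = 8, 4

A = rng.normal(size=(J, K))
A[:, 3] = A[:, 2] + 1e-12 * rng.normal(size=J)   # make the system ill-conditioned
d = rng.normal(size=J)                           # detector readings

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8 * s[0]                                # assumed truncation threshold
s_inv = np.where(s > tol, 1.0 / s, 0.0)          # zero the negligible modes
coeff = Vt.T @ (s_inv * (U.T @ d))               # truncated-SVD solution

print(np.isfinite(coeff).all())
```

Here one of the four modes falls below the threshold and is zeroed, exactly the mechanism the abstract describes for keeping round-off and noise under control.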
International Nuclear Information System (INIS)
Zhang, H.; Rizwan-uddin; Dorning, J.J.
1995-01-01
A diffusion equation-based systematic homogenization theory and a self-consistent dehomogenization theory for fuel assemblies have been developed for use with coarse-mesh nodal diffusion calculations of light water reactors. The theoretical development is based on a multiple-scales asymptotic expansion carried out through second order in a small parameter, the ratio of the average diffusion length to the reactor characteristic dimension. By starting from the neutron diffusion equation for a three-dimensional heterogeneous medium and introducing two spatial scales, the development systematically yields an assembly-homogenized global diffusion equation with self-consistent expressions for the assembly-homogenized diffusion tensor elements and cross sections and assembly-surface-flux discontinuity factors. The reactor eigenvalue 1/k_eff is shown to be obtained to second order in the small parameter, and the heterogeneous diffusion theory flux is shown to be obtained to leading order in that parameter. The latter of these two results provides a natural procedure for the reconstruction of the local fluxes and the determination of pin powers, even though homogenized assemblies are used in the global nodal diffusion calculation.
Group-decoupled multi-group pin power reconstruction utilizing nodal solution 1D flux profiles
International Nuclear Information System (INIS)
Yu, Lulin; Lu, Dong; Zhang, Shaohong; Wang, Dezhong
2014-01-01
Highlights:
• A direct fitting multi-group pin power reconstruction method is developed.
• The 1D nodal solution flux profiles are used as the condition.
• The least square fit problem is solved analytically.
• A slowing down source improvement method is applied.
• The method shows good accuracy even for challenging problems.

Abstract: A group-decoupled direct fitting method is developed for multi-group pin power reconstruction, which avoids both the complication of obtaining a 2D analytic multi-group flux solution and any group-coupled iteration. A unique feature of the method is that, in addition to nodal volume and surface average fluxes and corner fluxes, transversely-integrated 1D nodal solution flux profiles are also used as the condition to determine the 2D intra-nodal flux distribution. For each energy group, a two-dimensional expansion with a nine-term polynomial and eight hyperbolic functions is used to perform a constrained least square fit to the 1D intra-nodal flux solution profiles. The constraints are the conservation of nodal volume and surface average fluxes and corner fluxes. Instead of solving the constrained least square fit problem numerically, we solve it analytically by fully utilizing the symmetry property of the expansion functions. Each of the 17 unknown expansion coefficients is expressed in terms of nodal volume and surface average fluxes, corner fluxes and transversely-integrated flux values. To determine the unknown corner fluxes, a set of linear algebraic equations involving corner fluxes is established by using the current conservation condition on all corners. Moreover, an optional slowing down source improvement method is also developed to further enhance the accuracy of the reconstructed flux distribution if needed. Two test examples are shown with very good results. One is a four-group BWR mini-core problem with all control blades inserted and the other is the seven-group OECD NEA MOX benchmark, C5G7.
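The constrained least-squares fit at the heart of the method (solved analytically in the paper) can be sketched numerically with a toy basis and a single conservation constraint, solved via the KKT system; the basis and constraint below are stand-ins for the nine polynomials, eight hyperbolic functions, and flux-conservation conditions of the paper:

```python
import numpy as np

# Generic equality-constrained least squares: minimize ||A c - y||
# subject to B c = g, via the KKT system
#   [[A^T A, B^T], [B, 0]] [c; lam] = [A^T y; g].
# A toy 4-function basis and a volume-average constraint stand in for
# the paper's expansion and conservation conditions.

x = np.linspace(-1, 1, 21)
A = np.column_stack([np.ones_like(x), x, x**2, np.cosh(x)])   # toy basis
y = 1.0 + 0.5 * x**2                                          # toy 1D flux profile

B = A.mean(axis=0, keepdims=True)   # constraint: conserve the volume average
g = np.array([y.mean()])

K_mat = np.block([[A.T @ A, B.T], [B, np.zeros((1, 1))]])
rhs = np.concatenate([A.T @ y, g])
c = np.linalg.solve(K_mat, rhs)[:4]   # expansion coefficients

print(np.allclose(B @ c, g))
```

The paper avoids this numerical solve entirely by exploiting symmetry to write each coefficient in closed form; the sketch only shows what problem is being solved.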
[Reconstructive methods after Fournier gangrene].
Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A
2016-04-01
Fournier's gangrene is a variant of necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to reconstitute functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them to our current concepts for reconstructing defects caused by Fournier gangrene. Analysis of the current literature and of our reconstructive methods for Fournier gangrene. Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. The planning of the reconstruction of the defect depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper pre-operative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions such as the scrotum and penis, while cutaneous wounds can be managed through skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.
Perturbation methods for power and reactivity reconstruction
International Nuclear Information System (INIS)
Palmiotti, G.; Salvatores, M.; Estiot, J.C.; Broccoli, U.; Bruna, G.; Gomit, J.M.
1987-01-01
This paper deals with recent developments and applications of perturbation methods. Two types of methods are used. The first is an explicit method, which allows the explicit reconstruction of a perturbed flux using a linear combination of a library of functions. In our application, these functions are the harmonics (i.e. the high-order eigenfunctions of the system). The second type is based on the Generalized Perturbation Theory (GPT) and needs the calculation of an importance function for each integral parameter of interest. Recent developments of a particularly useful high-order formulation make it possible to obtain satisfactory results even for very large perturbations.
Lanza, Val F.; de Toro, María; Garcillán-Barcia, M. Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M.; de la Cruz, Fernando
2014-01-01
Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with the ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open for plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages. PMID:25522143
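The contig-network idea can be sketched in a few lines; the contig names, links and reference sequences below are invented, and the real PLACNET workflow additionally uses coverage information and expert pruning:

```python
# Conceptual sketch of the PLACNET idea: build a network whose nodes
# are contigs and reference sequences, link contigs by scaffold links
# or by similarity to references, and read plasmids off as connected
# components. All node names and links are hypothetical examples.

from collections import defaultdict

edges = [
    ("contig1", "contig2"),            # scaffold link
    ("contig2", "ref_plasmid_IncF"),   # similarity to a plasmid reference
    ("contig3", "ref_chromosome"),     # similarity to the chromosome
]

graph = defaultdict(set)
for u, v in edges:
    graph[u].add(v)
    graph[v].add(u)

def component(start):
    """Return the connected component containing `start`."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# Contigs landing in the plasmid reference's component are assigned to it
plasmid_contigs = component("ref_plasmid_IncF") - {"ref_plasmid_IncF"}
print(sorted(plasmid_contigs))
```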
ASME method for particle reconstruction
International Nuclear Information System (INIS)
Ierusalimov, A.P.
2009-01-01
The method of approximate solution of the motion equation (ASME) was used to reconstruct the parameters of charged particles. It provides good precision for the momentum, angular and space parameters of particles in coordinate detectors. The application of the method to the CBM, HADES and MPD/NICA setups is discussed.
Phylogenetic reconstruction methods: an overview.
De Bruyn, Alexandre; Martin, Darren P; Lefeuvre, Pierre
2014-01-01
Initially designed to infer evolutionary relationships based on morphological and physiological characters, phylogenetic reconstruction methods have greatly benefited from recent developments in molecular biology and sequencing technologies with a number of powerful methods having been developed specifically to infer phylogenies from macromolecular data. This chapter, while presenting an overview of basic concepts and methods used in phylogenetic reconstruction, is primarily intended as a simplified step-by-step guide to the construction of phylogenetic trees from nucleotide sequences using fairly up-to-date maximum likelihood methods implemented in freely available computer programs. While the analysis of chloroplast sequences from various Vanilla species is used as an illustrative example, the techniques covered here are relevant to the comparative analysis of homologous sequences datasets sampled from any group of organisms.
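A minimal distance-based illustration of the first step of tree building: compute pairwise p-distances between aligned sequences and cluster the closest pair first, as UPGMA-style methods do. The toy sequences below are not the Vanilla chloroplast data discussed in the chapter, and maximum likelihood methods (the chapter's focus) work quite differently:

```python
# Sketch of the seed step of distance-based phylogenetic clustering.
# Sequences are made-up aligned fragments for illustration only.

seqs = {
    "A": "ACGTACGTAC",
    "B": "ACGTACGTTC",
    "C": "ACGAACGTTC",
    "D": "TCGAAGGATC",
}

def p_distance(s, t):
    """Proportion of differing sites between two aligned sequences."""
    return sum(a != b for a, b in zip(s, t)) / len(s)

pairs = {(x, y): p_distance(seqs[x], seqs[y])
         for x in seqs for y in seqs if x < y}
closest = min(pairs, key=pairs.get)   # first pair to be joined into a clade
print(closest, pairs[closest])
```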
International Nuclear Information System (INIS)
Mansur, Ralph S.; Barros, Ricardo C.
2011-01-01
We describe a method to determine the neutron scalar flux in a slab using a monoenergetic diffusion model. To achieve this goal we used three ingredients in the computational code that we developed on the Scilab platform: a spectral nodal method that generates a numerical solution for the one-speed slab-geometry fixed source diffusion problem with no spatial truncation errors; a spatial reconstruction scheme to yield a detailed profile of the coarse-mesh solution; and an angular reconstruction scheme to yield an approximate profile of the neutron angular flux at a given location in the slab for neutrons migrating in a given direction. Numerical results are given to illustrate the efficiency of the code. (author)
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights:
• Two methods for reconstruction of the pin power distribution are presented.
• The ARM method uses an analytical solution of the 2D diffusion equation.
• The PRM method uses a polynomial solution without boundary conditions.
• The maximum errors in pin power reconstruction occur in the peripheral water region.
• The errors are significantly smaller in the inner area of the core.

Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node and the multiplying factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities in the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated by the product of the homogeneous distribution and a local heterogeneous form function. Moreover, form functions of power are used. The results show that the methods have good accuracy when compared with reference values.
Mengaldo, Gianmarco; De Grazia, Daniele; Moura, Rodrigo C.; Sherwin, Spencer J.
2018-04-01
This study focuses on the dispersion and diffusion characteristics of high-order energy-stable flux reconstruction (ESFR) schemes via the spatial eigensolution analysis framework proposed in [1]. The analysis is performed for five ESFR schemes, where the parameter 'c' dictating the properties of the specific scheme recovered is chosen such that it spans the entire class of ESFR methods, also referred to as VCJH schemes, proposed in [2]. In particular, we used five values of 'c', two that correspond to its lower and upper bounds and the others that identify three schemes that are linked to common high-order methods, namely the ESFR recovering two versions of discontinuous Galerkin methods and one recovering the spectral difference scheme. The performance of each scheme is assessed when using different numerical intercell fluxes (e.g. different levels of upwinding), ranging from "under-" to "over-upwinding". In contrast to the more common temporal analysis, the spatial eigensolution analysis framework adopted here allows one to grasp crucial insights into the diffusion and dispersion properties of FR schemes for problems involving non-periodic boundary conditions, typically found in open-flow problems, including turbulence, unsteady aerodynamics and aeroacoustics.
Crystal growth of emerald by flux method
International Nuclear Information System (INIS)
Inoue, Mikio; Narita, Eiichi; Okabe, Taijiro; Morishita, Toshihiko.
1979-01-01
Emerald crystals have been grown in two binary fluxes, Li₂O-MoO₃ and Li₂O-V₂O₅, using the slow cooling method and the temperature gradient method under various conditions. In the Li₂O-MoO₃ flux, investigated in the range of molar ratios (MoO₃/Li₂O) of 2-5, emerald was crystallized in the temperature range from 750 to 950 °C, and the most suitable crystallization conditions were found to be a molar ratio of 3-4 and a temperature of about 900 °C. In the Li₂O-V₂O₅ flux, investigated in the range of molar ratios (V₂O₅/Li₂O) of 1.7-5, emerald was crystallized in the temperature range from 900 to 1150 °C. The most suitable crystals were obtained at a molar ratio of 3 and temperatures of 1000-1100 °C. The crystallization temperature rose with an increase in the molar ratio in both fluxes. The emeralds grown in the two binary fluxes were transparent green, with a density of 2.68, a refractive index of 1.56, and two distinct bands in the visible spectrum at 430 and 600 nm. The emerald grown in the Li₂O-V₂O₅ flux was more bluish green than that grown in the Li₂O-MoO₃ flux. The spontaneously nucleated emeralds grown in the former flux were larger than those grown in the latter when crystallized by the slow cooling method. As for the solubility of beryl in the two fluxes, the Li₂O-V₂O₅ flux was superior to the Li₂O-MoO₃ flux, whose small solubility of SiO₂ posed an experimental problem for the temperature gradient method. The suitability of the two fluxes for the crystal growth of emerald by the flux method is discussed from the viewpoint of the above-mentioned properties of the two fluxes. (author)
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs), counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file updated. If the iterative image reconstruction option is selected, one implementation is to compute a grid using Siddon ray tracing, and to perform maximum likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
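The iterative branch above rests on the multiplicative MLEM update, lam <- (lam / A^T 1) * A^T (y / A lam). A minimal sketch on a toy system matrix (not the patented detector geometry or ray-tracing scheme) illustrates the update and its count-preserving property:

```python
import numpy as np

# Toy MLEM sketch (illustrative, not the patented implementation): a small
# emission image reconstructed from counts y = A @ lam_true, where the
# hypothetical matrix A maps image pixels to lines of response (LORs).
rng = np.random.default_rng(0)
n_lor, n_pix = 40, 16
A = rng.random((n_lor, n_pix))          # system matrix (LOR x pixel), positive
lam_true = rng.random(n_pix) + 0.1      # true activity, strictly positive
y = A @ lam_true                        # noiseless measured counts

lam = np.ones(n_pix)                    # uniform initial estimate
sens = A.sum(axis=0)                    # sensitivity image A^T 1
for _ in range(500):
    proj = A @ lam                      # forward projection of current estimate
    lam = lam / sens * (A.T @ (y / proj))  # multiplicative MLEM update

err = np.linalg.norm(A @ lam - y) / np.linalg.norm(y)  # projection-space misfit
```

Each update keeps the estimate nonnegative and exactly preserves the total measured counts, which is one reason MLEM is attractive for low-count emission data.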
On the properties of energy stable flux reconstruction schemes for implicit large eddy simulation
Vermeire, B. C.; Vincent, P. E.
2016-12-01
We begin by investigating the stability, order of accuracy, and dispersion and dissipation characteristics of the extended range of energy stable flux reconstruction (E-ESFR) schemes in the context of implicit large eddy simulation (ILES). We proceed to demonstrate that subsets of the E-ESFR schemes are more stable than collocation nodal discontinuous Galerkin methods recovered with the flux reconstruction approach (FRDG) for marginally-resolved ILES simulations of the Taylor-Green vortex. These schemes are shown to have reduced dissipation and dispersion errors relative to FRDG schemes of the same polynomial degree and, simultaneously, have increased Courant-Friedrichs-Lewy (CFL) limits. Finally, we simulate turbulent flow over an SD7003 aerofoil using two of the most stable E-ESFR schemes identified by the aforementioned Taylor-Green vortex experiments. Results demonstrate that subsets of E-ESFR schemes appear more stable than the commonly used FRDG method, have increased CFL limits, and are suitable for ILES of complex turbulent flows on unstructured grids.
Apparatus and method for reconstructing data
International Nuclear Information System (INIS)
1981-01-01
A method and apparatus are described for constructing a two-dimensional picture of an object slice from linear projections of radiation not absorbed or scattered by the object, using convolution methods of data reconstruction, useful in the fields of medical radiology, microscopy, and non-destructive testing. (U.K.)
Finite difference applied to the reconstruction method of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2016-01-01
Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization of the 2D neutron diffusion equation by finite differences. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in the reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-averaged fluxes in homogeneous nodes the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three adjacent nodes and two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
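The central discretization step — a 5-point finite-difference stencil for the 2D diffusion equation on fuel-cell-sized meshes — can be sketched for a one-group analogue with illustrative (hypothetical) cross sections and a zero-flux boundary:

```python
import numpy as np

# Hedged one-group analogue of the finite-difference step described above:
# -D * laplacian(phi) + siga * phi = S on a uniform n x n interior grid with
# phi = 0 on the boundary. D, siga, S are illustrative, not from the paper.
n, h = 10, 1.0                 # interior nodes per side, mesh pitch (cm)
D, siga, S = 1.4, 0.02, 1.0    # hypothetical diffusion parameters and source

N = n * n
A = np.zeros((N, N))
b = np.full(N, S)
for i in range(n):
    for j in range(n):
        k = i * n + j
        A[k, k] = 4 * D / h**2 + siga          # diagonal of 5-point stencil
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ii, jj = i + di, j + dj
            if 0 <= ii < n and 0 <= jj < n:
                A[k, ii * n + jj] = -D / h**2  # coupling to neighbour node
            # else: phi = 0 on the boundary, so nothing is added to b

phi = np.linalg.solve(A, b).reshape(n, n)      # discrete flux distribution
```

The solution is positive, symmetric, and peaked at the centre, as expected for a uniform source with a vacuum-like boundary.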
Methods and applications in high flux neutron imaging
International Nuclear Information System (INIS)
Ballhausen, H.
2007-01-01
This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as the influence of gamma background radiation. The spatial resolution of neutron radiographs, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized space-resolved. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in the case of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
Reconstructing Heat Fluxes Over Lake Erie During the Lake Effect Snow Event of November 2014
Fitzpatrick, L.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Spence, C.; Chen, J.; Shao, C.; Posselt, D. J.; Wright, D. M.; Lofgren, B. M.; Schwab, D. J.
2017-12-01
The extreme North American winter storm of November 2014 triggered a record lake-effect snowfall (LES) event in southwest New York. This study examined the evaporation from Lake Erie during the record lake-effect snowfall event of November 17th-20th, 2014, by reconstructing heat fluxes and evaporation rates over Lake Erie using the unstructured-grid Finite-Volume Community Ocean Model (FVCOM). Nine different model runs were conducted using combinations of three different flux algorithms: the Met Flux Algorithm (COARE), a method routinely used at NOAA's Great Lakes Environmental Research Laboratory (SOLAR), and the Los Alamos Sea Ice Model (CICE); and three different meteorological forcings: the Climate Forecast System version 2 Operational Analysis (CFSv2), interpolated observations (Interp), and the High Resolution Rapid Refresh (HRRR). A few non-FVCOM model outputs were also included in the evaporation analysis, from an atmospheric reanalysis (CFSv2) and the large lake thermodynamic model (LLTM). Model-simulated water temperature and meteorological forcing data (wind direction and air temperature) were validated with buoy data at three locations in Lake Erie. The simulated sensible and latent heat fluxes were validated with eddy covariance measurements at two offshore sites: Long Point Lighthouse in north-central Lake Erie and the Toledo water crib intake in western Lake Erie. The evaluation showed a significant increase in heat fluxes over three days, with the peak on the 18th of November. Snow water equivalent data from the National Snow Analyses at the National Operational Hydrologic Remote Sensing Center showed a spike in water content on the 20th of November, two days after the peak heat fluxes. The ensemble runs showed variation in the spatial pattern of evaporation, the lake-wide average evaporation, and the resulting cooling of the lake. Overall, the evaporation tended to be larger in deep water than in the shallow water near the shore. The lake-wide average evaporations
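Flux algorithms such as COARE, SOLAR and CICE all build on the bulk-aerodynamic form LE = rho * Lv * C_E * U * (q_s - q_a). A heavily simplified sketch (fixed transfer coefficient, no stability correction, illustrative constants — far cruder than the algorithms compared in the study) shows why cold, dry air over a still-warm lake drives large evaporative fluxes:

```python
import math

# Hedged bulk-aerodynamic sketch of over-lake latent heat flux. The study's
# COARE/SOLAR/CICE algorithms include stability corrections and roughness
# iteration; all constants and coefficients here are illustrative only.
def saturation_vapor_pressure(t_c):
    """Saturation vapour pressure (Pa), Magnus form, t_c in deg C."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def latent_heat_flux(u10, t_water, t_air, rh, p=101325.0, ce=1.3e-3):
    """Latent heat flux (W m^-2), positive upward (evaporation)."""
    rho = 1.25                # air density, kg/m^3 (cold air, approximate)
    lv = 2.5e6                # latent heat of vaporization, J/kg
    qs = 0.622 * saturation_vapor_pressure(t_water) / p      # saturated at surface
    qa = rh * 0.622 * saturation_vapor_pressure(t_air) / p   # ambient humidity
    return rho * lv * ce * u10 * (qs - qa)

# Cold, dry air advected over a warm lake: a strong evaporative flux results.
le = latent_heat_flux(u10=15.0, t_water=8.0, t_air=-5.0, rh=0.7)
```

With these illustrative inputs the flux lands in the hundreds of W m^-2, the order of magnitude characteristic of lake-effect snow events.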
Comparison between Evapotranspiration Fluxes Assessment Methods
Casola, A.; Longobardi, A.; Villani, P.
2009-11-01
Knowledge of the hydrological processes acting in the water balance is determinant for a rational water resources management plan. Among these, water losses as vapour, in the form of evapotranspiration, play an important role in the water balance and in the heat transfers between the land surface and the atmosphere. Mass and energy interactions between soil, atmosphere and vegetation, in fact, influence all hydrological processes, modifying rainfall interception, infiltration, evapotranspiration, surface runoff and groundwater recharge. A number of methods have been developed in the scientific literature for modelling evapotranspiration. They can be divided into three main groups: i) traditional meteorological models, ii) energy flux balance models, considering the interaction between vegetation and the atmosphere, and iii) remote-sensing-based models. The present analysis first performs a study of flux directions and an evaluation of energy balance closure in a typical Mediterranean short-vegetation area, using data series recorded by an eddy covariance station located in the Campania region, Southern Italy. The analysis was performed on different seasons of the year with the aim of assessing the impact of climatic forcing features on the flux balance, evaluating the smallest imbalance, and highlighting influencing factors and sampling errors affecting balance closure. The present study also concerns evapotranspiration flux assessment at the point scale. Evapotranspiration is evaluated both from empirical relationships (Penman-Monteith, Penman FAO, Priestley-Taylor) calibrated with measured energy fluxes at the mentioned experimental site, and from measured latent heat data scaled by the latent heat of vaporization. These results are compared with traditional and reliable well-known models at the plot scale (Coutagne, Turc, Thornthwaite).
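Of the empirical relationships cited, Priestley-Taylor is the simplest: lambda*ET = alpha * Delta/(Delta + gamma) * (Rn - G). A sketch with textbook default parameters (not the values calibrated at the Campania site) is:

```python
import math

# Hedged sketch of the Priestley-Taylor relationship mentioned above.
# alpha and gamma are textbook defaults, not site-calibrated values.
def priestley_taylor_et(rn, g, t_c, alpha=1.26, gamma=66.0):
    """Daily ET (mm/day) from net radiation Rn and soil heat flux G (MJ m^-2 d^-1)."""
    # Slope Delta of the saturation vapour pressure curve (Pa/K), Magnus-based
    es = 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))
    delta = 17.625 * 243.04 / (t_c + 243.04) ** 2 * es
    lam = 2.45  # latent heat of vaporization, MJ/kg
    return alpha * delta / (delta + gamma) * (rn - g) / lam

# Typical Mediterranean summer day: Rn = 15 MJ/m^2/day, G = 1.5, T = 25 deg C.
et = priestley_taylor_et(rn=15.0, g=1.5, t_c=25.0)
```

For these inputs the estimate falls in the usual 4-6 mm/day range for well-watered summer conditions.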
Maximum entropy method in momentum density reconstruction
International Nuclear Information System (INIS)
Dobrzynski, L.; Holas, A.
1997-01-01
The Maximum Entropy Method (MEM) is applied to the reconstruction of 3-dimensional electron momentum density distributions observed through sets of Compton profiles measured along various crystallographic directions. It is shown that the reconstruction of the electron momentum density may be reliably carried out with the aid of a simple iterative algorithm suggested originally by Collins. A number of distributions have been simulated in order to check the performance of MEM. It is shown that MEM can be recommended as a model-free approach. (author). 13 refs, 1 fig
Image reconstruction methods in positron tomography
International Nuclear Information System (INIS)
Townsend, D.W.; Defrise, M.
1993-01-01
In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-ray but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)
A New Method for Coronal Magnetic Field Reconstruction
Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung
2017-08-01
A precise way of reconstructing (extrapolating) the coronal magnetic field is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed so far and are available to researchers, but each has its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary, to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of the current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a numerical instability that occasionally arises in codes using A. In real reconstruction problems, information for the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing steps, brings about a diversity of resulting solutions. We impose the source-surface condition at the top boundary to accommodate the flux imbalance that always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwined flux tubes exist before the flare and that their entanglement is released after the CME, with one of them opened up. We suggest that the erupting loop is created by magnetic reconnection between
New method for initial density reconstruction
Shi, Yanlong; Cautun, Marius; Li, Baojiu
2018-01-01
A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution given a late-time density field. This is a long-standing question with a revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, which is based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. This is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation and does not assume any specific cosmological model. Our tests show that it has a performance comparable to that of state-of-the-art algorithms very recently put forward in the literature, with the reconstructed density field over ∼80% (50%) correlated with the initial condition at k ≲ 0.6 h/Mpc (1.0 h/Mpc). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.
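The Gauss-Seidel relaxation at the heart of such multigrid solvers can be illustrated on a linear analogue: plain Gauss-Seidel sweeps for a periodic Poisson equation laplacian(phi) = delta, as for a Zel'dovich-type displacement potential (the paper's actual equation is nonlinear, and a production code would add the multigrid hierarchy):

```python
import numpy as np

# Hedged linear analogue of the relaxation step: solve laplacian(phi) = delta
# on a small periodic 2D grid by in-place Gauss-Seidel sweeps. Grid size and
# source are synthetic; the paper's equation is nonlinear and 3D.
def gauss_seidel(phi, delta, h=1.0, sweeps=200):
    n = phi.shape[0]
    for _ in range(sweeps):
        for i in range(n):
            for j in range(n):
                nb = (phi[(i + 1) % n, j] + phi[(i - 1) % n, j]
                      + phi[i, (j + 1) % n] + phi[i, (j - 1) % n])
                phi[i, j] = (nb - h**2 * delta[i, j]) / 4.0  # local 5-point update
    return phi

rng = np.random.default_rng(1)
n = 16
delta = rng.normal(size=(n, n))
delta -= delta.mean()            # periodic Poisson problem needs a zero-mean source
phi = gauss_seidel(np.zeros((n, n)), delta)

# Maximum residual of the discrete Poisson equation after relaxation.
residual = np.abs(np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi - delta).max()
```

Multigrid accelerates exactly these sweeps by eliminating the slowly-converging long-wavelength error modes on coarser grids.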
On an image reconstruction method for ECT
Sasamoto, Akira; Suzuki, Takayuki; Nishimura, Yoshihiro
2007-04-01
An image produced by eddy current testing (ECT) is blurred relative to the original flaw shape. In order to reconstruct a fine flaw image, a new image reconstruction method has been proposed. This method is based on the assumption that a very simple relationship between the measured data and the source can be described by a convolution of a response function with the flaw shape. This assumption leads to a simple inverse analysis method using deconvolution. In this method, the point spread function (PSF) and line spread function (LSF) play a key role in the deconvolution processing. This study proposes a simple data processing procedure to determine the PSF and LSF from ECT data of a machined hole and a line flaw. In order to verify its validity, ECT data for a SUS316 plate (200 x 200 x 10 mm) with an artificial machined hole and a notch flaw were acquired by differential coil type sensors (produced by ZETEC Inc.). The data were analyzed by the proposed method. The proposed method restored a sharp image of discrete multiple holes from data in which the signals of the holes interfered. The estimated width of the line flaw was also much improved compared with the original experimental data. Although the proposed inverse analysis strategy is simple and easy to implement, its validity for hole and line flaws has been shown by many results in which a much finer image than the original was reconstructed.
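The PSF-based deconvolution idea can be sketched in one dimension with a synthetic Gaussian PSF (the paper's measured PSF/LSF and exact processing will differ): a regularized, Wiener-type division in Fourier space separates two flaws that the blurred signal merges into one blob.

```python
import numpy as np

# Hedged 1D sketch of PSF deconvolution: the measurement is modelled as the
# flaw shape convolved with a PSF, and the flaw is restored by regularized
# division in Fourier space. PSF and flaw positions are synthetic.
n = 64
x = np.zeros(n)
x[20] = 1.0
x[24] = 1.0                                  # two nearby "flaws"

t = np.arange(n) - n // 2
psf = np.exp(-0.5 * (t / 2.0) ** 2)          # hypothetical Gaussian PSF
psf /= psf.sum()
psf = np.roll(psf, -(n // 2))                # center the PSF at index 0

H = np.fft.fft(psf)
blurred = np.real(np.fft.ifft(np.fft.fft(x) * H))   # simulated blurred measurement

eps = 1e-4                                   # regularization against noise blow-up
restored = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H)
                               / (np.abs(H) ** 2 + eps)))
```

The regularization constant eps trades resolution against amplification of measurement noise; with real ECT data it would be tuned to the noise level.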
Apparatus and method for reconstructing data
International Nuclear Information System (INIS)
Pavkovich, J.M.
1977-01-01
The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
Advances on geometric flux optical design method
García-Botella, Ángel; Fernández-Balbuena, Antonio Álvarez; Vázquez, Daniel
2017-09-01
Nonimaging optics is focused on the study of methods to design concentrator or illuminator systems. It can be included in the area of photometry and radiometry, and it is governed by the laws of geometrical optics. The field vector method, which starts with the definition of the irradiance vector E, is one of the techniques used in nonimaging optics. Called the "geometrical flux vector" method, it has provided ideal designs. The main property of this model is its ability to estimate how radiant energy is transferred by the optical system, using the concepts of field line, flux tube and pseudopotential surface, overcoming traditional raytrace methods. Nevertheless, this model has been developed only at an academic level, where the characteristic optical parameters are ideal rather than real and the studied geometries are simple. The main objective of the present paper is the application of the vector field method to the analysis and design of real concentration and illumination systems. We propose the development of a calculation tool for optical simulation by vector field, using algorithms based on Fermat's principle, as an alternative to traditional raytrace simulation tools based on the laws of reflection and refraction. This new tool provides, first, the results of traditional simulations — efficiency, illuminance/irradiance calculations, angular distribution of light — with lower computation time: the photometric information requires only a few tens of field lines, compared with the millions of rays needed today. In addition, the tool provides new information, such as maps of the vector field produced by the system, composed of field lines and quasipotential surfaces. We show our first results with the vector field simulation tool.
Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team
2017-12-01
The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and the reconstruction of recycling from synthetic observations is demonstrated.
Object oriented reconstruction software for the Instrumented Flux Return of BABAR
Nardo, E D; Lista, L
2001-01-01
The BABAR experiment is the first high energy physics experiment to make extensive use of object oriented technology and the C++ programming language for online and offline software. Object orientation permits a high level of flexibility and maintainability of the code, which is a key point in a large project with many developers. These goals are reached by introducing reusable code elements, with abstraction of code behaviours and polymorphism. Software design, before code implementation, is the key task that determines the achievement of such a goal. We present the experience with the application of object oriented technology and design patterns to the reconstruction software of the Instrumented Flux Return detector of the BABAR experiment. The use of abstract interfaces improved the development of reconstruction code, permitted flexible modification of reconstruction strategies, and eventually reduced the maintenance load. The experience during the last years of development is presented.
A multiscale mortar multipoint flux mixed finite element method
Wheeler, Mary Fanett; Xue, Guangri; Yotov, Ivan
2012-01-01
In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method.
Analytical method for reconstruction pin to pin of the nuclear power density distribution
Energy Technology Data Exchange (ETDEWEB)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional (2D) neutron diffusion equation for the energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; both flux and power form functions are used. The results obtained with this method have good accuracy when compared with reference values. (author)
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional (2D) neutron diffusion equation for the energy groups in homogeneous nodes, is presented. The boundary conditions used for the analytical solution are the four currents or fluxes on the surfaces of the node, which are obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated as the product of the homogeneous flux distribution and a local heterogeneous form function; both flux and power form functions are used. The results obtained with this method have good accuracy when compared with reference values. (author)
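The dehomogenization step described in these records — detailed pin flux as the product of the smooth homogeneous intra-nodal flux and a heterogeneous form function — reduces to a pointwise product. A sketch with an illustrative 17x17 lattice and a synthetic form function (not real assembly data):

```python
import numpy as np

# Hedged sketch of the dehomogenization step: detailed pin flux equals the
# smooth homogeneous intra-nodal flux times a precomputed heterogeneous form
# function. Lattice size, flux shape and form function are all illustrative.
npin = 17
xi = np.linspace(-1.0, 1.0, npin)
X, Y = np.meshgrid(xi, xi, indexing="ij")

# Smooth homogeneous flux shape from the nodal solution (tilt plus bowing)
phi_hom = 1.0 + 0.1 * X + 0.05 * Y - 0.2 * (X**2 + Y**2)

rng = np.random.default_rng(2)
form = 1.0 + 0.05 * rng.standard_normal((npin, npin))  # synthetic form function
form /= form.mean()                                    # normalized to unit mean

phi_het = phi_hom * form                 # detailed pin-by-pin flux
power = phi_het / phi_het.mean()         # relative pin power distribution
```

In a real code the form function comes from lattice transport calculations of the heterogeneous assembly, and the power map is additionally normalized to the assembly power from the nodal solution.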
An alternative method for the measurement of neutron flux
Indian Academy of Sciences (India)
A simple and easy method for measuring the neutron flux is presented. This paper deals with the experimental verification of the neutron dose rate-flux relationship for a non-dissipative medium. Though the neutron flux cannot be obtained from the dose rate in a dissipative medium, experimental results show that for ...
International Nuclear Information System (INIS)
Kulacsy, K.; Lux, I.
1997-01-01
A new, approximate method is given to calculate the in-core flux from the current of SPNDs, with a delay of only a few seconds. The stability of this stepwise algorithm is proven to be satisfactory, and the results of tests performed both on synthetic and on real data are presented. The reconstructed flux is found to follow both steady state and transient fluxes well. (author)
Research of ART method in CT image reconstruction
International Nuclear Information System (INIS)
Li Zhipeng; Cong Peng; Wu Haifeng
2005-01-01
This paper studies the Algebraic Reconstruction Technique (ART) in CT image reconstruction, discusses the influence of the number of rays on image quality, and shows that adopting a smoothing method yields high-quality CT images. (authors)
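ART's core is the Kaczmarz row-action update, which projects the current image estimate onto the hyperplane of one ray measurement at a time. A toy sketch (random consistent system, not a real CT geometry):

```python
import numpy as np

# Hedged ART (Kaczmarz) sketch: for each ray i, project the current estimate x
# onto the hyperplane a_i . x = p_i. The system here is a random toy problem,
# not a real CT projection geometry.
rng = np.random.default_rng(3)
n_rays, n_pix = 60, 25
A = rng.standard_normal((n_rays, n_pix))   # toy projection matrix (ray x pixel)
x_true = rng.random(n_pix)
p = A @ x_true                             # consistent "measured" projections

x = np.zeros(n_pix)
for sweep in range(100):                   # one sweep = one pass over all rays
    for i in range(n_rays):
        ai = A[i]
        x += (p[i] - ai @ x) / (ai @ ai) * ai   # Kaczmarz projection step

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

For consistent data the sweeps converge to the exact solution; with noisy data one stops early (semiconvergence) or adds relaxation, which is where smoothing strategies such as the one studied here come in.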
A volume of fluid method based on multidimensional advection and spline interface reconstruction
International Nuclear Information System (INIS)
Lopez, J.; Hernandez, J.; Gomez, P.; Faura, F.
2004-01-01
A new volume of fluid method for tracking two-dimensional interfaces is presented. The method involves a multidimensional advection algorithm based on the use of edge-matched flux polygons to integrate the volume fraction evolution equation, and a spline-based reconstruction algorithm. The accuracy and efficiency of the proposed method are analyzed using different tests, and the results are compared with those obtained recently by other authors. Despite its simplicity, the proposed method represents a significant improvement, and compares favorably with other volume of fluid methods as regards the accuracy and efficiency of both the advection and reconstruction steps
Method of reconstructing a moving pulse
Energy Technology Data Exchange (ETDEWEB)
Howard, S J; Horton, R D; Hwang, D Q; Evans, R W; Brockington, S J; Johnson, J [UC Davis Department of Applied Science, Livermore, CA, 94551 (United States)
2007-11-15
We present a method of analyzing a set of N time signals f_i(t) that consist of local measurements of the same physical observable taken at N sequential locations z_i along the length of an experimental device. The result is an algorithm for reconstructing an approximation F(z,t) of the field f(z,t) in the inaccessible regions between the points of measurement. We also explore the conditions needed for this approximation to hold, and test the algorithm under a variety of conditions. We apply this method to analyze the magnetic field measurements taken on the Compact Toroid Injection eXperiment (CTIX) plasma accelerator, providing a direct means of visualizing experimental data, quantifying global properties, and benchmarking simulation.
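A minimal version of this idea, assuming a pulse of roughly constant speed and synthetic Gaussian probe signals (the paper's algorithm and conditions are more general): estimate the transit time from the cross-correlation of two probe signals, then shift-interpolate to a virtual probe between them.

```python
import numpy as np

# Hedged minimal sketch: if a pulse moves at roughly constant speed v, then
# F(z, t) ~= f_1(t - (z - z_1)/v). The probe signals here are synthetic
# Gaussians, not CTIX magnetic data.
t = np.linspace(0.0, 10.0, 2001)
z1, z2 = 0.0, 1.0                      # probe locations (m)
v_true = 0.5                           # pulse speed (m/s)
pulse = lambda tc: np.exp(-0.5 * ((t - tc) / 0.4) ** 2)
f1 = pulse(3.0)                        # signal at probe 1
f2 = pulse(3.0 + (z2 - z1) / v_true)   # same pulse arriving later at probe 2

# Estimate the transit time from the cross-correlation peak of the two signals.
lag = np.argmax(np.correlate(f2, f1, mode="full")) - (len(t) - 1)
dt = lag * (t[1] - t[0])
v_est = (z2 - z1) / dt

# Reconstruct the signal a virtual probe halfway between z1 and z2 would see.
z_mid = 0.5 * (z1 + z2)
f_mid = np.interp(t - (z_mid - z1) / v_est, t, f1)
```

The reconstruction is only as good as the constant-speed assumption; the paper's point is precisely to quantify when such interpolation between probes is valid.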
Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics
International Nuclear Information System (INIS)
Luo, Hong; Xia, Yidong; Nourgaliev, Robert
2011-01-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods with the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases, thus allowing a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, and that the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)
Image-reconstruction methods in positron tomography
Townsend, David W; CERN. Geneva
1993-01-01
Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance used for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances and small weights to inappropriate distances. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the unweighted method. We then reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by both the modified Tajima-Takezaki and the modified least-squares methods.
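The weighting idea can be illustrated with inverse-variance pooling of per-locus distances (the paper's modified least-squares weights are derived differently; all numbers here are illustrative):

```python
# Hedged sketch of weighted pooling of per-locus evolutionary distances:
# loci with small sampling variance receive large weights. This is plain
# inverse-variance weighting, a simpler scheme than the paper's methods.
def pooled_distance(distances, variances):
    """Inverse-variance weighted mean of per-locus distances."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * d for w, d in zip(weights, distances)) / total

# Three loci estimating the same true distance 0.10; the noisy third locus
# is strongly down-weighted relative to an unweighted mean.
d_pooled = pooled_distance([0.09, 0.11, 0.20], [0.0004, 0.0005, 0.01])
d_unweighted = sum([0.09, 0.11, 0.20]) / 3
```

The pooled estimate sits much closer to the true distance than the unweighted mean, which is the effect the paper exploits to improve topology recovery and bootstrap support.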
[Development and current situation of reconstruction methods following total sacrectomy].
Huang, Siyi; Ji, Tao; Guo, Wei
2018-05-01
To review the development of the reconstruction methods following total sacrectomy, and to provide reference for finding a better reconstruction method following total sacrectomy. The case reports and biomechanical and finite element studies of reconstruction following total sacrectomy at home and abroad were searched. Development and current situation were summarized. After developing for nearly 30 years, great progress has been made in the reconstruction concept and fixation techniques. The fixation methods can be summarized as the following three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF could improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the developmental direction to achieve mechanical stability. How to gain biological fixation to improve the long-term stability is an urgent problem to be solved.
Multicore Performance of Block Algebraic Iterative Reconstruction Methods
DEFF Research Database (Denmark)
Sørensen, Hans Henrik B.; Hansen, Per Christian
2014-01-01
Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner, and those that process the blocks in parallel. We use a fixed relaxation parameter in each method, namely, the one that leads to the fastest semiconvergence. Computational results show that for multicore computers, the sequential approach is preferable.
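The ART iteration referred to above (the Kaczmarz method) can be sketched in a few lines: each row of the system matrix defines a hyperplane, and the estimate is projected onto the hyperplanes in sequence, scaled by a relaxation parameter. The 2×2 system below is a toy stand-in for a real projection matrix.

```python
import numpy as np

def art(A, b, x0, iters=25, relax=1.0):
    """Algebraic Reconstruction Technique (Kaczmarz): sweep over the rows of A,
    projecting the current estimate onto each hyperplane a_i . x = b_i.
    `relax` is the relaxation parameter discussed in the abstract."""
    x = x0.astype(float).copy()
    row_norms = np.sum(A * A, axis=1)        # squared row norms ||a_i||^2
    for _ in range(iters):
        for i in range(A.shape[0]):
            r = b[i] - A[i] @ x              # residual of equation i
            x += relax * (r / row_norms[i]) * A[i]
    return x

# Tiny consistent system standing in for a tomographic projection problem
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(np.round(art(A, b, np.zeros(2)), 4))   # -> [2. 1.]
```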
The higher order flux mapping method in large size PHWRs
International Nuclear Information System (INIS)
Kulkarni, A.K.; Balaraman, V.; Purandare, H.D.
1997-01-01
A new higher order method is proposed for obtaining the flux map using a single set of expansion modes. In this procedure, one can make use of the difference between the predicted detector readings and their actual values to determine the strength of the local fluxes around each detector site. The local fluxes arise from constant perturbations (both extrinsic and intrinsic) taking place in the reactor. (author)
Evaluation of proxy-based millennial reconstruction methods
Energy Technology Data Exchange (ETDEWEB)
Lee, Terry C.K.; Tsao, Min [University of Victoria, Department of Mathematics and Statistics, Victoria, BC (Canada); Zwiers, Francis W. [Environment Canada, Climate Research Division, Toronto, ON (Canada)
2008-08-15
A range of existing statistical approaches for reconstructing historical temperature variations from proxy data are compared using both climate model data and real-world paleoclimate proxy data. We also propose a new method for reconstruction that is based on a state-space time series model and Kalman filter algorithm. The state-space modelling approach and the recently developed RegEM method generally perform better than their competitors when reconstructing interannual variations in Northern Hemispheric mean surface air temperature. On the other hand, a variety of methods are seen to perform well when reconstructing surface air temperature variability on decadal time scales. An advantage of the new method is that it can incorporate additional, non-temperature, information into the reconstruction, such as the estimated response to external forcing, thereby permitting a simultaneous reconstruction and detection analysis as well as future projection. An application of these extensions is also demonstrated in the paper. (orig.)
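The state-space/Kalman filter approach can be illustrated with a deliberately simple scalar model: a random-walk temperature state observed through a noisy proxy. This is a sketch of the filtering principle only; the method in the paper uses a richer state-space model with proxy observation equations and external-forcing terms.

```python
import numpy as np

def kalman_filter(obs, proc_var, obs_var, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter: random-walk state x_t (temperature),
    noisy observations y_t = x_t + noise (proxy). Returns filtered states."""
    x, p = x0, p0
    out = []
    for y in obs:
        p = p + proc_var               # predict: random walk, uncertainty grows
        k = p / (p + obs_var)          # Kalman gain
        x = x + k * (y - x)            # update with the proxy observation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0, 0.1, 200))      # synthetic temperature history
proxy = truth + rng.normal(0, 0.5, 200)         # noisy proxy record
est = kalman_filter(proxy, proc_var=0.01, obs_var=0.25)
print(np.mean((est - truth) ** 2) < np.mean((proxy - truth) ** 2))  # filtering reduces error
```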
International Nuclear Information System (INIS)
Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell; Knutsen, Bjoern Helge; Roeislien, Jo; Olsen, Dag Rune
2007-01-01
The purpose of this study is to investigate whether the method of applicator reconstruction and/or the applicator orientation influence the dose calculation to points around the applicator for brachytherapy of cervical cancer with CT-based treatment planning. A phantom, containing a fixed ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles related to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR) (2) reconstruction in multiplanar reconstructed images (MPR) and (3) library plans, using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points. The relative SD for the LIB method was significantly lower (p < 0.05) than for the DR and MPR methods for all but two points. All applicator orientations had similar dose calculation reproducibility. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator reconstruction the uncertainties for all methods are low compared to other factors influencing the accuracy of brachytherapy
Software Architecture Reconstruction Method, a Survey
Zainab Nayyar; Nazish Rafique
2014-01-01
Architecture reconstruction belongs to the reverse engineering process, in which we move from code to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, depicting the external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture, because sometimes, in order to enhance the functionality of a system, the software deviates from its documented specifications; some new modules a...
Particle fluxes above forests: Observations, methodological considerations and method comparisons
International Nuclear Information System (INIS)
Pryor, S.C.; Larsen, S.E.; Sorensen, L.L.; Barthelmie, R.J.
2008-01-01
This paper reports a study designed to test, evaluate and compare micro-meteorological methods for determining the particle number flux above forest canopies. Half-hour average particle number fluxes above a representative broad-leaved forest in Denmark derived using eddy covariance range from -7 × 10^7 m^-2 s^-1 (1st percentile) to 5 × 10^7 m^-2 s^-1 (99th percentile), and have a median value of -1.6 × 10^6 m^-2 s^-1. The statistical uncertainties associated with the particle number flux estimates are larger than those for momentum fluxes and imply that in this data set approximately half of the particle number fluxes are not statistically different from zero. Particle number fluxes from relaxed eddy accumulation (REA) and eddy covariance are highly correlated and of almost identical magnitude. Flux estimates from the co-spectral and dissipation methods are also correlated with those from eddy covariance but exhibit higher absolute magnitude of fluxes. - Number fluxes of ultra-fine particles over a forest computed using four micro-meteorological techniques are highly correlated but vary in magnitude
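The eddy covariance estimate underlying these fluxes is simply the covariance between vertical wind fluctuations and scalar (here, particle number concentration) fluctuations over the averaging period. A minimal sketch with synthetic data follows; all numbers are hypothetical and only illustrate the sign convention (negative flux = deposition).

```python
import numpy as np

def eddy_covariance_flux(w, c):
    """Eddy covariance: flux = mean(w' * c'), the covariance between vertical
    wind fluctuations w' and scalar concentration fluctuations c'."""
    wp = w - np.mean(w)
    cp = c - np.mean(c)
    return float(np.mean(wp * cp))

# Synthetic half-hour record at 10 Hz: concentration anti-correlated with
# updrafts, i.e. net downward particle transport (deposition).
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, 18000)                   # vertical wind, m/s
c = 1e9 - 2e8 * w + rng.normal(0, 1e8, 18000)     # particle number conc., m^-3
print(eddy_covariance_flux(w, c) < 0)             # True: negative (downward) flux
```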
Zwanenburg, Philip; Nadarajah, Siva
2016-02-01
The aim of this paper is to demonstrate the equivalence between filtered Discontinuous Galerkin (DG) schemes and the Energy Stable Flux Reconstruction (ESFR) schemes, expanding on previous demonstrations in 1D [1] and for straight-sided elements in 3D [2]. We first derive the DG and ESFR schemes in strong form and compare the respective flux penalization terms while highlighting the implications of the fundamental assumptions for stability in the ESFR formulations, notably that all ESFR scheme correction fields can be interpreted as modally filtered DG correction fields. We present the result in the general context of all higher dimensional curvilinear element formulations. Through a demonstration that there exists a weak form of the ESFR schemes which is both discretely and analytically equivalent to the strong form, we then extend the results obtained for the strong formulations to demonstrate that ESFR schemes can be interpreted as a DG scheme in weak form where discontinuous edge flux is substituted for numerical edge flux correction. Theoretical derivations are then verified with numerical results obtained from a 2D Euler test case with curved boundaries. Given the current choice of high-order DG-type schemes and the question as to which might be best to use for a specific application, the main significance of this work is the bridge that it provides between them. Clearly outlining the similarities between the schemes results in the important conclusion that it is always less efficient to use ESFR schemes, as opposed to the weak DG scheme, when solving problems implicitly.
International Nuclear Information System (INIS)
Nascimento, F.M.; Sergeenkov, S.; Araujo-Moreira, F.M.
2012-01-01
By using a specially designed algorithm (based on utilizing the so-called Hierarchical Data Format), we report on successful reconstruction of 3D profiles of local flux distribution within artificially prepared arrays of unshunted Nb-AlOx-Nb Josephson junctions from 2D surface images obtained via the scanning SQUID microscope. The analysis of the obtained results suggests that for large sweep areas, the local flux distribution significantly deviates from the conventional picture and exhibits a more complicated avalanche-type behavior with a prominent dendritic structure. -- Highlights: ► The penetration of external magnetic field into an array of Nb-AlOx-Nb Josephson junctions is studied. ► Using a scanning SQUID microscope, 2D images of the local flux distribution within the array are obtained. ► Using a specially designed pattern recognition algorithm, 3D flux profiles are reconstructed from the 2D images.
Testing an inversion method for estimating electron energy fluxes from all-sky camera images
Directory of Open Access Journals (Sweden)
N. Partamies
2004-06-01
Full Text Available An inversion method for reconstructing the precipitating electron energy flux from a set of multi-wavelength digital all-sky camera (ASC) images has recently been developed. Preliminary tests suggested that the inversion is able to reconstruct the position and energy characteristics of the aurora with reasonable accuracy. This study carries out a thorough testing of the method and a few improvements for its emission physics equations. We compared the precipitating electron energy fluxes as estimated by the inversion method to the energy flux data recorded by the Defense Meteorological Satellite Program (DMSP) satellites during four passes over auroral structures. When the aurorae appear very close to the local zenith, the fluxes inverted from the blue (427.8 nm) filtered ASC images, or blue and green line (557.7 nm) images together, give the best agreement with the measured flux values. The fluxes inverted from green line images alone are clearly larger than the measured ones. Closer to the horizon the quality of the inversion results from blue images deteriorates to the level of the ones from green images. In addition to the satellite data, the precipitating electron energy fluxes were estimated from the electron density measurements by the EISCAT Svalbard Radar (ESR). These energy flux values were compared to the ones of the inversion method applied to over 100 ASC images recorded at the nearby ASC station in Longyearbyen. The energy fluxes deduced from these two types of data are in general of the same order of magnitude. In 35% of all of the blue and green image inversions the relative errors were less than 50%, and in 90% of the blue and green image inversions less than 100%. This kind of systematic testing of the inversion method is the first step toward using all-sky camera images in the way in which global UV images have recently been used to estimate the energy fluxes. The advantages of ASCs, compared to the space-borne imagers, are
Blind compressed sensing image reconstruction based on alternating direction method
Liu, Qinan; Guo, Shuxu
2018-04-01
In order to solve the problem of reconstructing the original image when the sparse basis is unknown, this paper proposes an image reconstruction method based on a blind compressed sensing model. In this model, the image signal is regarded as the product of a sparse coefficient matrix and a dictionary matrix. Based on the existing blind compressed sensing theory, the optimal solution is found by an alternating minimization method. The proposed method addresses the difficulty of specifying the sparse basis in compressed sensing, suppresses noise, and improves the quality of the reconstructed image. The method ensures that the blind compressed sensing problem has a unique solution and can recover the original image signal from a complex environment with stronger self-adaptability. The experimental results show that the proposed blind compressed sensing reconstruction algorithm can recover high quality image signals under under-sampling conditions.
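A toy version of the alternating minimization described above can be sketched as follows. It assumes nothing about the paper's exact formulation: it simply alternates a least-squares coefficient step with soft-thresholding (for sparsity) and a least-squares dictionary update with column renormalization. All matrices and parameters are hypothetical.

```python
import numpy as np

def blind_cs(Y, n_atoms, thresh=0.1, iters=20, seed=0):
    """Toy alternating minimization for Y ≈ D @ X with an unknown dictionary D
    and sparse coefficients X. Alternates: (1) least-squares X plus a soft
    threshold enforcing sparsity, (2) least-squares D with unit-norm columns."""
    rng = np.random.default_rng(seed)
    D = rng.normal(size=(Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    for _ in range(iters):
        X, *_ = np.linalg.lstsq(D, Y, rcond=None)            # coefficient step
        X = np.sign(X) * np.maximum(np.abs(X) - thresh, 0.0) # soft threshold
        D_new, *_ = np.linalg.lstsq(X.T, Y.T, rcond=None)    # dictionary step
        D = D_new.T
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    X, *_ = np.linalg.lstsq(D, Y, rcond=None)                # final coefficients
    return D, X

Y = np.random.default_rng(2).normal(size=(16, 40))           # stand-in measurements
D, X = blind_cs(Y, n_atoms=8)
print(np.linalg.norm(Y - D @ X) < np.linalg.norm(Y))         # residual shrinks
```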
High resolution x-ray CMT: Reconstruction methods
Energy Technology Data Exchange (ETDEWEB)
Brown, J.K.
1997-02-01
This paper qualitatively discusses the primary characteristics of methods for reconstructing tomographic images from a set of projections. These reconstruction methods can be categorized as either "analytic" or "iterative" techniques. Analytic algorithms are derived from the formal inversion of equations describing the imaging process, while iterative algorithms incorporate a model of the imaging process and provide a mechanism to iteratively improve image estimates. Analytic reconstruction algorithms are typically computationally more efficient than iterative methods; however, analytic algorithms are available for a relatively limited set of imaging geometries and situations. Thus, the framework of iterative reconstruction methods is better suited for high accuracy, tomographic reconstruction codes.
Limb reconstruction with the Ilizarov method
Oostenbroek, H.J.
2014-01-01
In chapter 1, the background and origins of this study are explained. The aims of the study are defined. In chapter 2, an analysis of the complications rate of limb reconstruction in a cohort of 37 consecutive growing children was done. Several patient and deformity factors were investigated by
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods
DEFF Research Database (Denmark)
Hansen, Per Christian; Saxild-Hansen, Maria
2012-01-01
We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
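To complement the row-action ART family, the SIRT family applies one simultaneous correction per iteration built from all rows at once. The sketch below is a generic Cimmino-like iteration in Python, not code from the MATLAB package itself; the tiny system is a stand-in for one of the package's tomography test problems.

```python
import numpy as np

def sirt(A, b, x0, iters=60, relax=1.0):
    """SIRT-type simultaneous iteration (Cimmino-like): every row of A
    contributes one normalized correction per sweep, averaged together.
    `relax` plays the role of the relaxation parameter the package exposes."""
    x = x0.astype(float).copy()
    row_norms = np.sum(A * A, axis=1)        # squared row norms ||a_i||^2
    for _ in range(iters):
        r = (b - A @ x) / row_norms          # per-row normalized residuals
        x += (relax / A.shape[0]) * (A.T @ r)
    return x

# Tiny consistent system standing in for a tomography test problem
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
print(np.round(sirt(A, b, np.zeros(2)), 4))  # -> [2. 1.]
```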
Adaptive multiresolution method for MAP reconstruction in electron tomography
Energy Technology Data Exchange (ETDEWEB)
Acar, Erman, E-mail: erman.acar@tut.fi [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland); Peltonen, Sari; Ruotsalainen, Ulla [Department of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere (Finland); BioMediTech, Tampere University of Technology, Biokatu 10, 33520 Tampere (Finland)
2016-11-15
3D image reconstruction with electron tomography is challenging due to the severely limited range of projection angles and the low signal to noise ratio of the acquired projection images. The maximum a posteriori (MAP) reconstruction methods have been successful in compensating for the missing information and suppressing noise with their intrinsic regularization techniques. There are two major problems in MAP reconstruction methods: (1) selection of the regularization parameter that controls the balance between the data fidelity and the prior information, and (2) long computation time. One aim of this study is to provide an adaptive solution to the regularization parameter selection problem without having additional knowledge about the imaging environment and the sample. The other aim is to realize the reconstruction using sequences of resolution levels to shorten the computation time. The reconstructions were analyzed in terms of accuracy and computational efficiency using a simulated biological phantom and publicly available experimental datasets of electron tomography. The numerical and visual evaluations of the experiments show that the adaptive multiresolution method can provide more accurate results than the weighted back projection (WBP), simultaneous iterative reconstruction technique (SIRT), and sequential MAP expectation maximization (sMAPEM) method. The method is superior to sMAPEM also in terms of computation time and usability since it can reconstruct 3D images significantly faster without requiring any parameter to be set by the user. - Highlights: • An adaptive multiresolution reconstruction method is introduced for electron tomography. • The method provides more accurate results than the conventional reconstruction methods. • The missing wedge and noise problems can be compensated by the method efficiently.
Geometric reconstruction methods for electron tomography
Energy Technology Data Exchange (ETDEWEB)
Alpers, Andreas, E-mail: alpers@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Gardner, Richard J., E-mail: Richard.Gardner@wwu.edu [Department of Mathematics, Western Washington University, Bellingham, WA 98225-9063 (United States); König, Stefan, E-mail: koenig@ma.tum.de [Zentrum Mathematik, Technische Universität München, D-85747 Garching bei München (Germany); Pennington, Robert S., E-mail: robert.pennington@uni-ulm.de [Center for Electron Nanoscopy, Technical University of Denmark, DK-2800 Kongens Lyngby (Denmark); Boothroyd, Chris B., E-mail: ChrisBoothroyd@cantab.net [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Houben, Lothar, E-mail: l.houben@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Dunin-Borkowski, Rafal E., E-mail: rdb@fz-juelich.de [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Joost Batenburg, Kees, E-mail: Joost.Batenburg@cwi.nl [Centrum Wiskunde and Informatica, NL-1098XG, Amsterdam, The Netherlands and Vision Lab, Department of Physics, University of Antwerp, B-2610 Wilrijk (Belgium)
2013-05-15
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand.
Geometric reconstruction methods for electron tomography
International Nuclear Information System (INIS)
Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees
2013-01-01
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which also in principle considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. - Highlights: ► Four algorithms for electron tomography are introduced that utilize prior knowledge. ► Objects are assumed to be homogeneous; convexity and regularity is also discussed. ► We are able to reconstruct slices of a nanowire from as few as four projections. ► Algorithms should be selected based on the specific reconstruction task at hand
Assessing the Accuracy of Ancestral Protein Reconstruction Methods
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-01-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolu...
Advances in the Surface Renewal Flux Measurement Method
Shapland, T. M.; McElrone, A.; Paw U, K. T.; Snyder, R. L.
2011-12-01
The measurement of ecosystem-scale energy and mass fluxes between the planetary surface and the atmosphere is crucial for understanding geophysical processes. Surface renewal is a flux measurement technique based on analyzing the turbulent coherent structures that interact with the surface. It is a less expensive technique because it does not require fast-response velocity measurements, but only a fast-response scalar measurement. It is therefore also a useful tool for the study of the global cycling of trace gases. Currently, surface renewal requires calibration against another flux measurement technique, such as eddy covariance, to account for the linear bias of its measurements. We present two advances in the surface renewal theory and methodology that bring the technique closer to becoming a fully independent flux measurement method. The first advance develops the theory of turbulent coherent structure transport associated with the different scales of coherent structures. A novel method was developed for identifying the scalar change rate within structures at different scales. Our results suggest that for canopies less than one meter in height, the second smallest coherent structure scale dominates the energy and mass flux process. Using the method for resolving the scalar exchange rate of the second smallest coherent structure scale, calibration is unnecessary for surface renewal measurements over short canopies. This study forms the foundation for analysis over more complex surfaces. The second advance is a sensor frequency response correction for measuring the sensible heat flux via surface renewal. Inexpensive fine-wire thermocouples are frequently used to record high frequency temperature data in the surface renewal technique. The sensible heat flux is used in conjunction with net radiation and ground heat flux measurements to determine the latent heat flux as the energy balance residual. The robust thermocouples commonly used in field experiments
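The surface renewal computation described above can be sketched with the commonly used ramp model, in which the sensible heat flux follows H = α ρ c_p z (a/τ), with ramp amplitude a and period τ obtained from structure-function analysis of the fast temperature record, and α the calibration factor whose removal the abstract discusses. All input values below are hypothetical.

```python
def surface_renewal_flux(amp, period, height, alpha=0.5, rho=1.2, cp=1004.0):
    """Surface renewal ramp model: H = alpha * rho * cp * z * (a / tau).

    amp    -- ramp amplitude a (K), from structure-function analysis
    period -- ramp period tau (s)
    height -- measurement height z (m)
    alpha  -- calibration factor (the quantity the abstract seeks to eliminate)
    rho,cp -- air density (kg m^-3) and specific heat (J kg^-1 K^-1)
    Returns the sensible heat flux in W m^-2.
    """
    return alpha * rho * cp * height * (amp / period)

# Hypothetical ramp statistics over a short canopy
print(round(surface_renewal_flux(amp=0.5, period=60.0, height=2.0), 1))  # -> 10.0
```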
How to choose methods for lake greenhouse gas flux measurements?
Bastviken, David
2017-04-01
Lake greenhouse gas (GHG) fluxes are increasingly recognized as important for lake ecosystems as well as for large-scale carbon and GHG budgets. However, many of our flux estimates are uncertain, and it is debatable whether the presently available data are representative of the systems studied. Data are also very limited for some important flux pathways. Hence, many ongoing efforts try to better constrain fluxes and understand flux regulation. A fundamental challenge towards improved knowledge, and when starting new studies, is which methods to choose. A variety of approaches to measure aquatic GHG exchange are used, and data from different methods and methodological approaches have often been treated as equally valid to create large datasets for extrapolations and syntheses. However, data from different approaches may cover different flux pathways or spatio-temporal domains and are thus not always comparable. Method inter-comparisons and critical method evaluations addressing these issues are rare. Emerging efforts to organize systematic multi-lake monitoring networks for GHG fluxes lead to method choices that may set the foundation for decades of data generation and therefore require fundamental evaluation of different approaches. The method choices concern not only the equipment but also, for example, the overall measurement design and field approaches, the relevant spatial and temporal resolution for different flux components, and the accessory variables to measure. In addition, consideration of how to design monitoring approaches that are affordable, suitable for widespread (global) use, and comparable across regions is needed. Inspired by discussions with Prof. Dr. Cristian Blodau during the EGU General Assembly 2016, this presentation aims to (1) illustrate fundamental pros and cons for a number of common methods, (2) show how common methodological approaches originally adapted for other environments can be improved for lake flux measurements, (3) suggest
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
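The kernel method represents the image as x = Kα, with K built from anatomical features, and applies the standard multiplicative ML-EM update to the coefficients α instead of the voxels. The sketch below illustrates that structure on a toy nonnegative system; the matrices are random stand-ins, not real projection data or the authors' kernel construction.

```python
import numpy as np

def kernel_mlem(y, P, K, iters=300):
    """Kernelized ML-EM sketch: image x = K @ a, so the effective system
    matrix is A = P @ K; the usual MLEM update is applied to a."""
    A = P @ K                              # effective system matrix
    a = np.ones(K.shape[1])                # nonnegative initial coefficients
    sens = A.sum(axis=0)                   # sensitivity (all entries positive here)
    for _ in range(iters):
        proj = A @ a                       # forward projection
        a *= (A.T @ (y / proj)) / sens     # multiplicative MLEM update
    return K @ a                           # return the image, not the coefficients

# Random positive stand-ins for the projector P and kernel matrix K
rng = np.random.default_rng(3)
P = rng.uniform(0.1, 1.0, (12, 8))
K = rng.uniform(0.1, 1.0, (8, 6))
y = P @ K @ rng.uniform(0.5, 2.0, 6)       # noise-free toy data
x = kernel_mlem(y, P, K)
print(float(np.linalg.norm(P @ x - y) / np.linalg.norm(y)))  # small relative residual
```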
A comparison of ancestral state reconstruction methods for quantitative characters.
Royer-Carenzi, Manuela; Didier, Gilles
2016-09-07
Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods show good performance on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
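For intuition on the maximum likelihood reconstruction under Brownian motion, consider the simplest case of a star phylogeny: each tip value is Normal(root, σ²tᵢ), so the ML root state reduces to an inverse-branch-length weighted mean of the tip values. This is a minimal sketch of that special case, not any of the compared methods in full generality.

```python
import numpy as np

def root_state_star_tree(tip_values, branch_lengths):
    """ML ancestral (root) state under Brownian motion on a star phylogeny.

    Tip i ~ Normal(root, sigma^2 * t_i), so the likelihood is maximized by
    the inverse-branch-length (inverse-variance) weighted mean of the tips.
    """
    x = np.asarray(tip_values, dtype=float)
    t = np.asarray(branch_lengths, dtype=float)
    w = 1.0 / t                        # short branches carry more information
    return float(np.sum(w * x) / np.sum(w))

# The distant tip (branch length 8) is heavily down-weighted
print(round(root_state_star_tree([1.0, 2.0, 9.0], [1.0, 1.0, 8.0]), 4))  # -> 1.9412
```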
Magnetic flux concentration methods for magnetic energy harvesting module
Directory of Open Access Journals (Sweden)
Wakiwaka Hiroyuki
2013-01-01
Full Text Available This paper presents magnetic flux concentration methods for a magnetic energy harvesting module. The purpose of this study is to harvest 1 mW of energy with a Brooks coil 2 cm in diameter from the environmental magnetic field at 60 Hz. Because the harvested power is proportional to the square of the magnetic flux density, we consider the use of a magnetic flux concentration coil and a magnetic core. The magnetic flux concentration coil consists of an air-core Brooks coil and a resonant capacitor. When a uniform magnetic field crossed the coil, the magnetic flux distribution around the coil was changed. It is found that the magnetic field in an area is concentrated by a factor of more than 20 compared with the uniform magnetic field. Compared with the air-core coil, our designed magnetic core increases the harvested energy tenfold. According to the ICNIRP 2010 guideline, the acceptable level of magnetic field is 0.2 mT in the frequency range between 25 Hz and 400 Hz. Without the two magnetic flux concentration methods, the corresponding energy is limited to 1 µW. In contrast, our experimental results successfully demonstrate energy harvesting of 1 mW from a magnetic field of 0.03 mT at 60 Hz.
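The key scaling claim above (harvested power proportional to B²) can be illustrated with a back-of-envelope model: an RMS EMF from Faraday's law delivered to a matched resistive load. This is a generic sketch, not the paper's module; the coil parameters are hypothetical.

```python
import math

def harvested_power(f, turns, area, b_rms, coil_resistance):
    """Back-of-envelope harvest estimate: RMS EMF from Faraday's law,
    e = 2*pi*f*N*A*B_rms, delivered to a matched resistive load,
    P = e^2 / (4*R). Shows the quadratic dependence on flux density B."""
    emf = 2.0 * math.pi * f * turns * area * b_rms   # volts (RMS)
    return emf ** 2 / (4.0 * coil_resistance)        # watts

# Doubling the flux density quadruples the harvested power
p1 = harvested_power(60.0, 1000, 3.1e-4, 0.03e-3, 50.0)
p2 = harvested_power(60.0, 1000, 3.1e-4, 0.06e-3, 50.0)
print(round(p2 / p1, 1))  # -> 4.0
```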
Alternative method for reconstruction of antihydrogen annihilation vertices
Amole, C; Andresen, G B; Baquero-Ruiz, M; Bertsche, W; Bowe, P D; Butler, E; Cesar, C L; Chapman, S; Charlton, M; Deller, A; Eriksson, S; Fajans, J; Friesen, T; Fujiwara, M C; Gill, D R; Gutierrez, A; Hangst, J S; Hardy, W N; Hayano, R S; Hayden, M E; Humphries, A J; Hydomako, R; Jonsell, S; Kurchaninov, L; Madsen, N; Menary, S; Nolan, P; Olchanski, K; Olin, A; Povilus, A; Pusa, P; Robicheaux, F; Sarid, E; Silveira, D M; So, C; Storey, J W; Thompson, R I; van der Werf, D P; Wurtele, J S; Yamazaki, Y
2012-01-01
The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three-layer silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with good resolution and is more efficient than the standard method currently used for the same purpose.
Alternative method for reconstruction of antihydrogen annihilation vertices
Energy Technology Data Exchange (ETDEWEB)
Amole, C., E-mail: chanpreet.amole@cern.ch [York University, Department of Physics and Astronomy (Canada); Ashkezari, M. D. [Simon Fraser University, Department of Physics (Canada); Andresen, G. B. [Aarhus University, Department of Physics and Astronomy (Denmark); Baquero-Ruiz, M. [University of California, Department of Physics (United States); Bertsche, W. [Swansea University, Department of Physics (United Kingdom); Bowe, P. D. [Aarhus University, Department of Physics and Astronomy (Denmark); Butler, E. [CERN, Physics Department (Switzerland); Cesar, C. L. [Universidade Federal do Rio de Janeiro, Instituto de Fisica (Brazil); Chapman, S. [University of California, Department of Physics (United States); Charlton, M.; Deller, A.; Eriksson, S. [Swansea University, Department of Physics (United Kingdom); Fajans, J. [University of California, Department of Physics (United States); Friesen, T.; Fujiwara, M. C. [University of Calgary, Department of Physics and Astronomy (Canada); Gill, D. R. [TRIUMF (Canada); Gutierrez, A. [University of British Columbia, Department of Physics and Astronomy (Canada); Hangst, J. S. [Aarhus University, Department of Physics and Astronomy (Denmark); Hardy, W. N. [University of British Columbia, Department of Physics and Astronomy (Canada); Hayano, R. S. [University of Tokyo, Department of Physics (Japan); Collaboration: ALPHA Collaboration; and others
2012-12-15
The ALPHA experiment, located at CERN, aims to compare the properties of antihydrogen atoms with those of hydrogen atoms. The neutral antihydrogen atoms are trapped using an octupole magnetic trap. The trap region is surrounded by a three-layer silicon detector used to reconstruct the antiproton annihilation vertices. This paper describes a method we have devised that can be used for reconstructing annihilation vertices with good resolution and is more efficient than the standard method currently used for the same purpose.
Review of unfolding methods for neutron flux dosimetry
International Nuclear Information System (INIS)
Stallmann, F.W.; Kam, F.B.K.
1975-01-01
The primary method in reactor dosimetry is the foil activation technique. To translate the activation measurements into neutron fluxes, a special data processing technique called unfolding is needed. Some general observations about the problems and the reliability of this approach to reactor dosimetry are presented. Current unfolding methods are reviewed. 12 references. (auth)
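As a minimal illustration of what "unfolding" means in the linear case: a response matrix relates group fluxes to foil activities, and the fluxes can be recovered by least squares. The response matrix and fluxes below are invented for illustration; production unfolding codes of the kind reviewed in the abstract also fold in prior spectra and measurement covariances.

```python
import numpy as np

# Hypothetical 4-foil, 3-group response matrix: R[i, g] is the activity
# produced in foil i per unit group-g flux (all numbers illustrative).
R = np.array([[5.0, 1.0, 0.1],
              [1.0, 4.0, 0.5],
              [0.2, 1.5, 3.0],
              [0.5, 0.5, 2.0]])
true_flux = np.array([2.0, 1.0, 3.0])
activities = R @ true_flux          # simulated foil activation measurements

# Unfolding: recover the group fluxes from the measured activities by
# linear least squares.
flux, *_ = np.linalg.lstsq(R, activities, rcond=None)
print(flux)
```

With noisy activities and a nearly singular response matrix the plain least-squares solution becomes unstable, which is why the reliability questions discussed in the abstract arise.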
Reconstruction methods for phase-contrast tomography
Energy Technology Data Exchange (ETDEWEB)
Raven, C.
1997-02-01
Phase contrast imaging with coherent x-rays can be divided into outline imaging and holography, depending on the wavelength λ, the object size d and the object-to-detector distance r. When r ≪ d²/λ, phase contrast occurs only in regions where the refractive index changes rapidly, i.e. at interfaces and edges in the sample. With increasing object-to-detector distance we enter the regime of holographic imaging. The image contrast outside the shadow region of the object is due to interference of the direct, undiffracted beam and a beam diffracted by the object, or, in terms of holography, the interference of a reference wave with the object wave. Both outline imaging and holography offer the possibility to obtain three-dimensional information about the sample in conjunction with a tomographic technique, but the data treatment and the kind of information one can obtain from the reconstruction are different.
Filter-based reconstruction methods for tomography
Pelt, D.M.
2016-01-01
In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally efficient, while iterative methods typically cope better with noisy or limited data at a higher computational cost.
Comparison of Force Reconstruction Methods for a Lumped Mass Beam
Directory of Open Access Journals (Sweden)
Vesta I. Bateman
1997-01-01
Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT, are presented in this article. SWAT requires the use of the structure’s elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input). The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time-eliminated elastic modes). All three methods are used to reconstruct forces for a simple structure.
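The rigid-body core of the idea can be sketched on a two-mass lumped model: with the masses themselves as weights, the weighted sum of accelerations cancels internal (elastic) forces and returns the external force. The model parameters below are invented for illustration; the article's SWAT variants additionally estimate the weights from calibration or free-decay data.

```python
import numpy as np

# Two lumped masses joined by a spring; an external force acts on mass 1.
# With rigid-body SWAT weights (the masses themselves), the weighted sum
# m1*a1 + m2*a2 cancels the internal spring force and recovers F(t).
m1, m2, k = 2.0, 3.0, 500.0
dt, steps = 1e-4, 2000
x = np.zeros(2)
v = np.zeros(2)
t = np.arange(steps) * dt
applied = 10.0 * np.sin(40.0 * t)         # hypothetical input force
reconstructed = np.zeros(steps)
for i in range(steps):
    spring = k * (x[0] - x[1])
    a1 = (applied[i] - spring) / m1
    a2 = spring / m2
    reconstructed[i] = m1 * a1 + m2 * a2  # SWAT with rigid-body weights
    v += np.array([a1, a2]) * dt          # semi-implicit Euler integration
    x += v * dt
print(np.max(np.abs(reconstructed - applied)))  # ~0: internal forces cancel
```

In practice the accelerations come from noisy sensors and the weights must also suppress the elastic modes, which is exactly what the SWAT-CAL and SWAT-TEEM estimation schemes provide.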
Choosing the best ancestral character state reconstruction method.
Royer-Carenzi, Manuela; Pontarotti, Pierre; Didier, Gilles
2013-03-01
Despite its intrinsic difficulty, ancestral character state reconstruction is an essential tool for testing evolutionary hypotheses. Two major classes of approaches to this question can be distinguished: parsimony- or likelihood-based approaches. We focus here on the second class of methods, more specifically on approaches based on continuous-time Markov modeling of character evolution. Among them, we consider the most-likely-ancestor reconstruction, the posterior-probability reconstruction, the likelihood-ratio method, and the Bayesian approach. We discuss and compare the above-mentioned methods over several phylogenetic trees, adding the maximum-parsimony method's performance to the comparison. Under the assumption that the character evolves according to a continuous-time Markov process, we compute and compare the expectations of success of each method for a broad range of model parameter values. Moreover, we show how knowledge of the evolution model parameters allows one to compute upper bounds of reconstruction performance, which are provided as references. The results of all these reconstruction methods are quite close to one another, and the expectations of success are not far from their theoretical upper bounds. But the performance ranking heavily depends on the topology of the studied tree, on the ancestral node that is to be inferred and on the parameter values. Consequently, we propose a protocol providing, for each parameter value, the best method in terms of expectation of success, with regard to the phylogenetic tree and the ancestral node to infer. Copyright © 2012 Elsevier Inc. All rights reserved.
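For intuition, the posterior-probability reconstruction mentioned above can be sketched for a symmetric two-state continuous-time Markov model on a star tree. The branch lengths and leaf states below are invented; real analyses handle arbitrary topologies via pruning.

```python
import numpy as np

def p_same(t, rate=1.0):
    # Symmetric two-state CTMC: probability that the state at distance t
    # equals the starting state is (1 + exp(-2*rate*t)) / 2.
    return 0.5 * (1.0 + np.exp(-2.0 * rate * t))

def root_posterior(leaf_states, branch_lengths, prior=(0.5, 0.5)):
    # Posterior of the root state on a star tree: prior times the product
    # of per-branch transition probabilities, renormalized.
    post = np.array(prior, dtype=float)
    for s, t in zip(leaf_states, branch_lengths):
        ps = p_same(t)
        for root_state in (0, 1):
            post[root_state] *= ps if s == root_state else 1.0 - ps
    return post / post.sum()

# Three leaves; two observed in state 0 sit on short branches, so they
# dominate the reconstruction.
post = root_posterior([0, 0, 1], [0.1, 0.2, 1.5])
print(post)  # state 0 clearly favoured
```

The most-likely-ancestor method would report argmax of this vector, while the posterior-probability and Bayesian approaches retain the full distribution, which is what the expectations of success in the abstract compare.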
Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction
International Nuclear Information System (INIS)
Baek, Seung Gyou; Joo, Han Gyu; Lee, Un Chul
2007-01-01
A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a two-dimensional 13-term source expansion. In order to achieve a better approximation of the source distribution, a least-squares fitting method is employed. The eight exponential terms represent a part of the analytically obtained homogeneous solution, and the eight coefficients are determined by imposing constraints on the four surface average currents and four corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly, whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that realizes the corner point balance condition. An outgoing-current-based corner point flux determination scheme is newly introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)
Assessing the accuracy of ancestral protein reconstruction methods.
Directory of Open Access Journals (Sweden)
Paul D Williams
2006-06-01
Full Text Available The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
Assessing the accuracy of ancestral protein reconstruction methods.
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-06-23
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
New method to analyze internal disruptions with tomographic reconstructions
Energy Technology Data Exchange (ETDEWEB)
Tanzi, C.P. [EURATOM-FOM Association, FOM-Instituut voor Plasmafysica Rijnhuizen, P.O. BOX 1207, 3430 BE Nieuwegein (The Netherlands); de Blank, H.J. [Max-Planck-Institut fuer Plasmaphysik, EURATOM-IPP Association, 85740 Garching (Germany)
1997-03-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Würzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. © 1997 American Institute of Physics.
New method to analyze internal disruptions with tomographic reconstructions
International Nuclear Information System (INIS)
Tanzi, C.P.; de Blank, H.J.
1997-01-01
Sawtooth crashes have been investigated on the Rijnhuizen Tokamak Project (RTP) [N. J. Lopes Cardozo et al., Proceedings of the 14th International Conference on Plasma Physics and Controlled Nuclear Fusion Research, Wuerzburg, 1992 (International Atomic Energy Agency, Vienna, 1993), Vol. 1, p. 271]. Internal disruptions in tokamak plasmas often exhibit an m=1 poloidal mode structure prior to the collapse which can be clearly identified by means of multicamera soft x-ray diagnostics. In this paper tomographic reconstructions of such m=1 modes are analyzed with a new method, based on magnetohydrodynamic (MHD) invariants computed from the two-dimensional emissivity profiles, which quantifies the amount of profile flattening not only after the crash but also during the precursor oscillations. The results are interpreted by comparing them with two models which simulate the measurements of the m=1 redistribution of soft x-ray emissivity prior to the sawtooth crash. One model is based on the magnetic reconnection model of Kadomtsev. The other involves ideal MHD motion only. In cases where differences in magnetic topology between the two models cannot be seen in the tomograms, the analysis of profile flattening has an advantage. The analysis shows that in RTP the clearly observed m=1 displacement of some sawteeth requires the presence of convective ideal MHD motion, whereas other precursors are consistent with magnetic reconnection of up to 75% of the magnetic flux within the q=1 surface. The possibility of ideal interchange combined with enhanced cross-field transport is not excluded. copyright 1997 American Institute of Physics
Extension of the heat flux method to subatmospheric pressures
Bosschaart, K.J.; Goey, de L.P.H.
2004-01-01
The heat flux method for measuring laminar burning velocities has been extended to subatmospheric pressures, down to 80 mbar. The new setup is described and the adaptations necessary for the new conditions are analyzed. This includes a new burner plate to compensate for the decrease of sensitivity of the method at reduced pressures.
Directory of Open Access Journals (Sweden)
Feng Zhao
2014-10-01
Full Text Available A method for canopy Fluorescence Spectrum Reconstruction (FSR) is proposed in this study, which can be used to retrieve the solar-induced canopy fluorescence spectrum over the whole chlorophyll fluorescence emission region from 640 to 850 nm. Firstly, the radiance of the solar-induced chlorophyll fluorescence (Fs) at five absorption lines of the solar spectrum was retrieved by a Spectral Fitting Method (SFM). The Singular Value Decomposition (SVD) technique was then used to extract three basis spectra from a training dataset simulated by the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes). Finally, these basis spectra were linearly combined to reconstruct the Fs spectrum, with their coefficients determined by Weighted Linear Least Squares (WLLS) fitting to the five retrieved Fs values. Results for simulated datasets indicate that the FSR method could accurately reconstruct the Fs spectra from hyperspectral measurements acquired by instruments of high Spectral Resolution (SR) and Signal to Noise Ratio (SNR). The FSR method was also applied to an experimental dataset acquired in a diurnal experiment. The diurnal change of the reconstructed Fs spectra shows that the Fs radiance around noon was higher than that in the morning and afternoon, which is consistent with former studies. Finally, the potential and limitations of this method are discussed.
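The reconstruction pipeline described above, basis spectra from SVD of a training set plus a least-squares fit at a handful of retrieved points, can be sketched as follows. The two-Gaussian training set stands in for SCOPE simulations, and the five sample indices are illustrative, not actual absorption-line positions.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(640.0, 850.0, 200)

# Hypothetical training set standing in for SCOPE simulations: random
# mixtures of two Gaussian emission peaks (the typical Fs spectrum shape).
def peak(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

train = np.array([a * peak(685, 10) + b * peak(740, 25)
                  for a, b in rng.uniform(0.5, 2.0, size=(50, 2))])

# Basis spectra = leading right singular vectors of the training matrix.
_, _, vt = np.linalg.svd(train, full_matrices=False)
basis = vt[:3]                        # 3 basis spectra, shape (3, 200)

# "Retrieved" fluorescence at 5 sample positions; fit the basis
# coefficients by linear least squares and rebuild the full spectrum.
target = 1.2 * peak(685, 10) + 0.8 * peak(740, 25)
idx = [20, 60, 95, 130, 180]
coef, *_ = np.linalg.lstsq(basis[:, idx].T, target[idx], rcond=None)
reconstructed = coef @ basis
print(np.max(np.abs(reconstructed - target)))
```

The paper's WLLS step additionally weights the five points by their retrieval uncertainties; the unweighted fit above is the simplest special case.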
Image reconstruction in computerized tomography using the convolution method
International Nuclear Information System (INIS)
Oliveira Rebelo, A.M. de.
1984-03-01
In the present work an algorithm was derived, using the analytical convolution method (filtered back-projection), for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. The model consists of a discrete system formed by an NxN array of cells (pixels). The attenuation of a collimated gamma-ray beam in the object under study was determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to the beam attenuation was determined using the weight function W_ij, which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0.2 to 0.7 cm⁻¹ were carried out using cell arrays of up to 25x25. One application was carried out in the medical area, simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0.2 to 0.5 cm⁻¹ using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained when the number of projections equals the reconstruction matrix dimension; otherwise, a good reconstruction is obtained with fewer projections. (author) [pt
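The filtered back-projection idea the abstract builds on can be sketched in a few lines: filter each parallel projection with a ramp filter in Fourier space, then smear the filtered profiles back across the grid. The phantom (a centred disk, whose projections are known analytically) and the grid size are invented for illustration.

```python
import numpy as np

# Minimal filtered back-projection sketch: reconstruct a centred disk of
# unit attenuation from its (analytic) parallel projections.
N = 64                                  # reconstruction grid N x N
s = np.arange(N) - N / 2 + 0.5          # detector coordinates
radius = 12.0
proj = np.where(np.abs(s) < radius,     # chord length through the disk
                2 * np.sqrt(np.clip(radius**2 - s**2, 0, None)), 0.0)

freqs = np.fft.fftfreq(N)               # cycles per sample
ramp = np.abs(freqs)                    # ideal ramp filter
filtered = np.real(np.fft.ifft(np.fft.fft(proj) * ramp))

angles = np.deg2rad(np.arange(0, 180, 1.0))
x = s[np.newaxis, :].repeat(N, axis=0)
y = s[:, np.newaxis].repeat(N, axis=1)
image = np.zeros((N, N))
for theta in angles:
    # The disk is rotationally symmetric, so every view has the same
    # projection; in general `filtered` depends on theta.
    t = x * np.cos(theta) + y * np.sin(theta)
    image += np.interp(t, s, filtered)  # back-project along each view
image *= np.pi / len(angles)
print(image[N // 2, N // 2])            # close to 1 inside the disk
```

Interior values land near the true attenuation of 1, with the usual small offset from truncating the ramp filter on a finite detector.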
A multiscale mortar multipoint flux mixed finite element method
Wheeler, Mary Fanett
2012-02-03
In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method that reduces to cell-centered finite differences on irregular grids. The subdomain grids do not have to match across the interfaces. Continuity of flux between coarse elements is imposed via a mortar finite element space on a coarse grid scale. With an appropriate choice of polynomial degree of the mortar space, we derive optimal order convergence on the fine scale for both the multiscale pressure and velocity, as well as the coarse scale mortar pressure. Some superconvergence results are also derived. The algebraic system is reduced via a non-overlapping domain decomposition to a coarse scale mortar interface problem that is solved using a multiscale flux basis. Numerical experiments are presented to confirm the theory and illustrate the efficiency and flexibility of the method. © EDP Sciences, SMAI, 2012.
Flux form Semi-Lagrangian methods for parabolic problems
Directory of Open Access Journals (Sweden)
Bonaventura Luca
2016-09-01
Full Text Available A semi-Lagrangian method for parabolic problems is proposed, that extends previous work by the authors to achieve a fully conservative, flux-form discretization of linear and nonlinear diffusion equations. A basic consistency and stability analysis is proposed. Numerical examples validate the proposed method and display its potential for consistent semi-Lagrangian discretization of advection diffusion and nonlinear parabolic problems.
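The defining property of a flux-form discretization, conservation by construction, can be sketched with a plain explicit finite-volume diffusion step (a simpler Eulerian stand-in, not the semi-Lagrangian scheme of the paper): every interface flux leaves one cell and enters its neighbour, so the total mass is preserved to rounding error.

```python
import numpy as np

def diffuse_step(u, D, dt, dx):
    # Flux-form step for u_t = (D u_x)_x on a periodic-free 1D grid:
    # each interior interface flux is subtracted from the left cell and
    # added to the right cell, so sum(u)*dx cannot change.
    flux = -D * np.diff(u) / dx          # flux at interior interfaces
    du = np.zeros_like(u)
    du[:-1] -= flux * dt / dx            # flux leaving the left cell ...
    du[1:] += flux * dt / dx             # ... enters the right cell
    return u + du

dx, dt, D = 0.01, 2e-5, 1.0              # dt < dx^2/(2D) for stability
u = np.exp(-((np.arange(100) * dx - 0.5) ** 2) / 0.005)
mass0 = u.sum() * dx
for _ in range(500):
    u = diffuse_step(u, D, dt, dx)
print(abs(u.sum() * dx - mass0))         # conserved up to rounding
```

A semi-Lagrangian variant replaces the explicit interface fluxes with integrals along backward trajectories, which relaxes the time-step restriction while keeping exactly this telescoping-flux conservation structure.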
A Multifactorial Analysis of Reconstruction Methods Applied After Total Gastrectomy
Directory of Open Access Journals (Sweden)
Oktay Büyükaşık
2010-12-01
Full Text Available Aim: The aim of this study was to evaluate the reconstruction methods applied after total gastrectomy in terms of postoperative symptomatology and nutrition. Methods: This retrospective study was conducted on 31 patients who underwent total gastrectomy due to gastric cancer in the 2nd Clinic of General Surgery, SSK Ankara Training Hospital. Six different reconstruction methods were used and analyzed in terms of age, sex and postoperative complications. One biopsy specimen from the esophagus and two from the jejunum were taken by upper gastrointestinal endoscopy in all cases, and late-period morphological and microbiological changes were examined. Postoperative weight change, dumping symptoms, reflux esophagitis, solid/liquid dysphagia, early satiety, postprandial pain, diarrhea and anorexia were assessed. Results: Of the 31 patients, 18 were male and 13 female; the youngest was 33 years old and the oldest 69. Reconstruction without a pouch was performed in 22 cases and with a pouch in 9 cases. Early satiety, postprandial pain, dumping symptoms, diarrhea and anemia were found most commonly in cases reconstructed without a pouch. The rate of bacterial colonization of the jejunal mucosa was identical in both groups. Reflux esophagitis was seen most commonly after omega esophagojejunostomy (EJ) and least commonly after Roux-en-Y, Tooley and Tanner 19 EJ. Conclusion: Reconstruction with a pouch after total gastrectomy is still a preferable method. (The Medical Bulletin of Haseki 2010; 48:126-31)
Fingerprint image reconstruction for swipe sensor using Predictive Overlap Method
Directory of Open Access Journals (Sweden)
Mardiansyah Ahmad Zafrullah
2018-01-01
Full Text Available Swipe sensors are one of many biometric authentication sensor types widely applied to embedded devices. The sensor produces an overlap on every pixel block of the image, so the image requires reconstruction before feature extraction. Conventional reconstruction methods require extensive computation, making them difficult to apply to embedded devices with limited computing resources. In this paper, image reconstruction using a predictive overlap method is proposed, which determines the image block shift from the previous set of change data. The experiments were performed using 36 images generated by a swipe sensor with an area of 128 x 8 pixels, where each image has an overlap in each block. The results reveal that computation speed can increase by up to 86.44% compared with conventional methods, with accuracy decreasing by only 0.008% on average.
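The core operation, estimating the row shift between consecutive overlapping blocks, and the predictive trick of searching only around the shift seen previously, can be sketched as below. The synthetic strip and the fixed prediction are invented for illustration; the paper's method updates the prediction from earlier blocks.

```python
import numpy as np

rng = np.random.default_rng(1)

def block_shift(prev_block, block, predicted, window=2):
    # Search only a small window around the shift predicted from earlier
    # blocks (the "predictive" part); score each candidate by normalized
    # correlation of the rows the two blocks would share.
    best, best_score = predicted, -np.inf
    for shift in range(predicted - window, predicted + window + 1):
        if shift <= 0 or shift >= len(block):
            continue
        a, b = prev_block[shift:], block[:len(block) - shift]
        score = np.sum(a * b) / (np.linalg.norm(a) * np.linalg.norm(b))
        if score > best_score:
            best, best_score = shift, score
    return best

# Synthetic swipe: consecutive 8-row blocks of a long strip, each shifted
# down by `true_shift` rows relative to the previous one.
strip = rng.normal(size=(200, 128))
true_shift = 3
blocks = [strip[i:i + 8] for i in range(0, 120, true_shift)]
est = [block_shift(blocks[k - 1], blocks[k], predicted=3)
       for k in range(1, len(blocks))]
print(est[:5])
```

Restricting the search window is where the computational saving comes from: a conventional method would score every possible shift for every block.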
Matrix-based image reconstruction methods for tomography
International Nuclear Information System (INIS)
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator (MLE) methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
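The inversion-free character of such matrix-based maximum-likelihood reconstruction can be illustrated with the standard MLEM multiplicative update, which uses only forward and back projections with the system matrix. The toy system matrix below is invented; it is a sketch of the general technique, not the Berkeley implementation.

```python
import numpy as np

# MLEM for y = A x with Poisson-like data: no matrix inversion, only
# forward projections (A @ x) and back projections (A.T @ ...).
rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(12, 6))       # toy system matrix
x_true = np.array([1.0, 4.0, 0.5, 2.0, 3.0, 1.5])
y = A @ x_true                                # noise-free measurements

x = np.ones(6)                                # positive initial image
sens = A.sum(axis=0)                          # sensitivity term A^T 1
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens         # multiplicative EM update
print(np.max(np.abs(A @ x - y)))              # data residual shrinks
```

The multiplicative form keeps the image nonnegative at every iteration, one reason EM-type methods behave well on the ill-conditioned matrices the abstract mentions.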
COMPARISON OF HOLOGRAPHIC AND ITERATIVE METHODS FOR AMPLITUDE OBJECT RECONSTRUCTION
Directory of Open Access Journals (Sweden)
I. A. Shevkunov
2015-01-01
Full Text Available An experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and recovery algorithms. The first two methods do not use a reference wave in the recording scheme, which relaxes the stability requirements on the installation. A major role in phase information reconstruction by such methods is played by a set of spatial intensity distributions recorded as the recording matrix is moved along the optical axis. The obtained data are used consistently for wavefront reconstruction by an iterative procedure, in the course of which the wavefront is numerically propagated between the planes. Thus, phase information of the wavefront is stored in every plane, and the calculated amplitude distributions are replaced by the measured ones in these planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located data registration planes. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. It is also shown that, among those considered, the holographic method is the best at reconstructing the complex amplitude of the object.
Flux-weakening control methods for hybrid excitation synchronous motor
Directory of Open Access Journals (Sweden)
Mingming Huang
2015-09-01
Full Text Available The hybrid excitation synchronous motor (HESM), which aims to combine the advantages of permanent magnet and wound-field excitation motors, offers low-speed high-torque hill climbing and a wide speed range. Firstly, a new kind of HESM is presented in the paper, and its structure and mathematical model are illustrated. Then, based on space voltage vector control, a novel flux-weakening method for speed adjustment in the high-speed region is presented. The unique feature of the proposed control method is that the HESM drive system keeps the q-axis back-EMF component invariant during flux-weakening operation. Moreover, a copper loss minimization algorithm is adopted to reduce the copper loss of the HESM in the high-speed region. Lastly, the proposed method is validated by simulation and experimental results.
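The voltage-limit geometry that motivates flux weakening can be sketched in a few lines: above base speed, a negative d-axis current offsets the excitation flux so the back-EMF stays inside the inverter limit. All machine parameters below are illustrative, not the HESM of the paper, and stator resistance is neglected.

```python
import math

def flux_weakening_id(psi, Ld, Lq, iq, v_max, omega_e):
    # Solve the steady-state voltage-limit equation for the d-axis current:
    # (omega*(psi + Ld*id))^2 + (omega*Lq*iq)^2 = v_max^2
    room = (v_max / omega_e) ** 2 - (Lq * iq) ** 2
    if room < 0:
        raise ValueError("iq alone already exceeds the voltage limit")
    return (math.sqrt(room) - psi) / Ld

psi, Ld, Lq = 0.1, 2e-3, 3e-3      # flux linkage [Wb], inductances [H]
iq, v_max, omega_e = 20.0, 100.0, 1500.0
id_ = flux_weakening_id(psi, Ld, Lq, iq, v_max, omega_e)
v = omega_e * math.hypot(psi + Ld * id_, Lq * iq)
print(id_, v)                       # id_ is negative; |v| sits on the limit
```

In a hybrid-excitation machine the field winding offers a second knob: part of the required flux reduction can come from lowering the excitation current instead of injecting negative id, which is what enables the copper loss minimization mentioned in the abstract.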
Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.
2017-07-01
Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.
Probability Density Function Method for Observing Reconstructed Attractor Structure
Institute of Scientific and Technical Information of China (English)
陆宏伟; 陈亚珠; 卫青
2004-01-01
Probability density function (PDF) method is proposed for analysing the structure of the reconstructed attractor in computing the correlation dimensions of RR intervals of ten normal old men. The PDF contains important information about the spatial distribution of the phase points in the reconstructed attractor. To the best of our knowledge, this is the first time the PDF method has been put forward for analysing the reconstructed attractor structure. Numerical simulations demonstrate that the cardiac systems of healthy old men are about 6-6.5 dimensional complex dynamical systems. It is found that the PDF is not symmetrically distributed when the time delay is small, while it satisfies a Gaussian distribution when the time delay is large enough. A cluster effect mechanism is presented to explain this phenomenon. Studying the shape of the PDFs clearly indicates that the time delay plays a more important role than the embedding dimension in the reconstruction. The results demonstrate that the PDF method represents a promising numerical approach for observing the reconstructed attractor structure and may provide more information and new diagnostic potential for the analysed cardiac system.
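The two ingredients the abstract relies on, delay-coordinate reconstruction and the distribution of distances between phase points from which correlation dimensions are estimated, can be sketched as follows. The signal is a synthetic stand-in, not RR-interval data.

```python
import numpy as np

def delay_embed(x, dim, tau):
    # Phase-space reconstruction: rows are the delay vectors
    # (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

# Hypothetical "RR-interval" series: a noisy sine stands in for real data.
rng = np.random.default_rng(3)
t = np.arange(3000)
x = np.sin(0.07 * t) + 0.05 * rng.normal(size=t.size)

emb = delay_embed(x, dim=3, tau=10)
# PDF of pairwise distances between a sample of phase points: the raw
# material from which correlation dimensions are estimated.
sample = emb[rng.choice(len(emb), size=300, replace=False)]
d = np.linalg.norm(sample[:, None, :] - sample[None, :, :], axis=-1)
pdf, edges = np.histogram(d[np.triu_indices(300, k=1)], bins=40,
                          density=True)
print(emb.shape)
```

How the shape of this distance PDF changes with `tau` and `dim` is exactly what the paper examines; here both parameters are arbitrary choices for the toy signal.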
Least Squares Methods for Equidistant Tree Reconstruction
Fahey, Conor; Hosten, Serkan; Krieger, Nathan; Timpe, Leslie
2008-01-01
UPGMA is a heuristic method for identifying the least squares equidistant phylogenetic tree given empirical distance data among $n$ taxa. We study this classic algorithm using the geometry of the space of all equidistant trees with $n$ leaves, also known as the Bergman complex of the graphical matroid of the complete graph $K_n$. We show that UPGMA performs an orthogonal projection of the data onto a maximal cell of the Bergman complex. We also show that the equidistant tree with the least (Eucl...
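For reference, a minimal sketch of the classic UPGMA merge rule discussed above: repeatedly merge the closest pair of clusters, place the merge at half the merged distance (the ultrametric height), and update distances by a size-weighted average. The toy distance matrix is invented for illustration.

```python
def upgma(dist):
    """UPGMA agglomerative clustering on a symmetric distance matrix.
    Returns a list of merges (cluster_i, cluster_j, height); leaves are
    numbered 0..n-1 and each merge creates the next cluster id."""
    n = len(dist)
    d = {frozenset((i, j)): dist[i][j] for i in range(n) for j in range(i)}
    size = {i: 1 for i in range(n)}
    active = list(range(n))
    merges, next_id = [], n
    while len(active) > 1:
        i, j = min(((a, b) for a in active for b in active if a < b),
                   key=lambda p: d[frozenset(p)])
        height = d[frozenset((i, j))] / 2.0   # ultrametric: half the distance
        for m in active:
            if m not in (i, j):
                # size-weighted average linkage keeps distances consistent
                d[frozenset((next_id, m))] = (
                    size[i] * d[frozenset((i, m))] + size[j] * d[frozenset((j, m))]
                ) / (size[i] + size[j])
        size[next_id] = size[i] + size[j]
        merges.append((i, j, height))
        active = [m for m in active if m not in (i, j)] + [next_id]
        next_id += 1
    return merges

# Three taxa: 0 and 1 are close; 2 is equidistant from both.
tree = upgma([[0, 2, 4], [2, 0, 4], [4, 4, 0]])
```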
Variationally derived coarse mesh methods using an alternative flux representation
International Nuclear Information System (INIS)
Wojtowicz, G.; Holloway, J.P.
1995-01-01
Investigation of a previously reported variational technique for the solution of the 1-D, 1-group neutron transport equation in reactor lattices has inspired the development of a finite element formulation of the method. Compared to conventional homogenization methods, in which node-homogenized cross sections are used, the coefficients describing this system take on greater spatial dependence. However, the methods employ an alternative flux representation which allows the transport equation to be cast into a form whose solution has only a slow spatial variation and, hence, requires relatively few variables to describe. This alternative flux representation and the stationary property of a variational principle define a class of coarse mesh discretizations of transport theory capable of achieving order-of-magnitude reductions in eigenvalue and pointwise scalar flux errors compared with diffusion theory, while retaining diffusion theory's relatively low cost. Initial results of a 1-D spectral element approach are reviewed and used to motivate the finite element implementation, which is more efficient and almost as accurate; one- and two-group results of this method are described
Comparison of four surgical methods for eyebrow reconstruction
Directory of Open Access Journals (Sweden)
Omranifard Mahmood
2007-01-01
Background: The eyebrow plays an important role in facial harmony and eye protection. Eyebrows can be injured by burn, trauma, tumour, tattooing and alopecia. Eyebrow reconstruction has been performed via several techniques. Here, our experience with a fairly new method for eyebrow reconstruction is presented. Materials and Methods: This is a descriptive-analytical study of 76 patients at the Al-Zahra and Imam Mousa Kazem hospitals of Isfahan University of Medical Sciences, Isfahan, Iran, from 1994 to 2004. In total, 86 eyebrows were reconstructed. All patients were examined before and after the operation. Methods commonly applied in eyebrow reconstruction are as follows: 1. Superficial Temporal Artery Flap (Island), 2. Interpolation Scalp Flap, 3. Graft. Our method, named the Forehead Facial Island Flap with inferior pedicle, provides an easier approach for the surgeon and a more ideal hair growth direction for the patient. Results: Significantly lower complication rates along with greater patient satisfaction were obtained with the Forehead Facial Island Flap. Conclusions: According to the results, this method appears more technically practical and aesthetically favourable than the others.
A method for neutron dosimetry in ultrahigh flux environments
International Nuclear Information System (INIS)
Ougouag, A.M.; Wemple, C.A.; Rogers, J.W.
1996-01-01
A method for neutron dosimetry in ultrahigh flux environments is developed, and devices embodying it are proposed and simulated using a Monte Carlo code. The new approach no longer assumes a linear relationship between the fluence and the activity of the nuclides formed by irradiation. It accounts for depletion of the original "foil" material and for decay and depletion of the formed nuclides. In facilities where very high fluences are possible, the fluences inferred from activity measurements may be ambiguous. A method for resolving these ambiguities is also proposed and simulated. The new method and proposed devices should make possible the use of materials not traditionally considered desirable for neutron activation dosimetry
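The nonlinear fluence-activity relationship the abstract describes can be illustrated with the closed-form two-nuclide solution below: the target is depleted at rate σ_a·φ while the product is removed by decay plus burn-up. The cross sections, decay constant, and flux are invented round numbers, not dosimetry data from the paper.

```python
import math

def activity(phi, t, n0=1.0e20, sigma_a=1e-22, sigma_b=5e-23, lam=1e-5):
    """Product activity at time t under constant flux phi, from the
    closed-form solution of
        dN0/dt = -sigma_a*phi*N0
        dN/dt  =  sigma_a*phi*N0 - (lam + sigma_b*phi)*N,
    i.e. target depletion plus decay and burn-up of the product."""
    a = sigma_a * phi              # target removal rate
    mu = lam + sigma_b * phi       # product removal rate
    n = a * n0 * (math.exp(-a * t) - math.exp(-mu * t)) / (mu - a)
    return lam * n

phi = 1e15  # n/cm^2/s, an "ultrahigh" flux for illustration
short = activity(phi, 1e3)
linear_extrapolation = short * 1e3   # what a linear activity-fluence law predicts
actual = activity(phi, 1e6)          # true activity at 1000x the irradiation time
```

At short times the response is nearly linear in fluence; at high fluence the measured activity falls far below the linear extrapolation, which is exactly why a linear dosimetry relation becomes ambiguous.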
Neutron flux calculation by means of Monte Carlo methods
International Nuclear Information System (INIS)
Barz, H.U.; Eichhorn, M.
1988-01-01
This report surveys modern neutron flux calculation procedures based on Monte Carlo methods. Owing to progress in the development of variance reduction techniques and improvements in computational techniques, this method is of increasing importance. The basic ideas in the application of Monte Carlo methods are briefly outlined. Various possibilities of non-analog games and estimation procedures are presented in more detail, and problems in optimizing variance reduction techniques are discussed. In the last part, some important international Monte Carlo codes and the authors' own codes are listed and special applications are described. (author)
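A toy contrast between an analog game and a simple non-analog (expected-value, or implicit-capture) estimator, for transmission through a purely absorbing slab where the exact answer is e^(-tau). The setup is illustrative only; for a pure absorber the non-analog estimator is trivially zero-variance, which is the point of the comparison.

```python
import math, random

def analog_transmission(tau, n, rng):
    """Analog game: sample an exponential free path (in optical-thickness
    units); score 1 if the neutron crosses a purely absorbing slab."""
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0) > tau)
    return hits / n

def expected_value_transmission(tau, n):
    """Non-analog game: every history carries weight exp(-tau) across the
    slab (implicit capture); zero variance for a pure absorber."""
    return sum(math.exp(-tau) for _ in range(n)) / n

rng = random.Random(42)
tau = 1.0
est_analog = analog_transmission(tau, 100_000, rng)
est_nonanalog = expected_value_transmission(tau, 100_000)
exact = math.exp(-tau)
```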
Reconstruction of CT images by the Bayes- back projection method
Haruyama, M; Takase, M; Tobita, H
2002-01-01
In the course of research on the quantitative assay of non-destructive measurements of radioactive waste, the authors have developed a unique program based on Bayesian theory for the reconstruction of transmission computed tomography (TCT) images. Reconstruction of cross-section images in CT usually employs the Filtered Back Projection method. The new image reconstruction program reported here is based on the Bayesian Back Projection method and iteratively improves the image at every measurement step. That is, the method can promptly display a cross-section image corresponding to each angled projection acquired, so an improved cross-section view can be observed, reflecting each projection in almost real time. From the basic theory of the Bayesian Back Projection method, it can be applied to first-, second-, and third-generation CT systems. This report deals with a reconstruction program of cross-section images in the CT of ...
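The abstract does not give the update rule, so the sketch below uses a standard multiplicative ML-EM-style update applied one projection at a time, which mimics the per-measurement refresh described; the 2x2 "image" and ray geometry are invented and are not the authors' system.

```python
def mlem_update(x, a_rows, b_vals):
    """One multiplicative ML-EM-style pass over the given projection rows:
    x_j <- x_j * (sum_i a_ij * b_i / (A x)_i) / (sum_i a_ij)."""
    num = [0.0] * len(x)
    den = [0.0] * len(x)
    for row, b in zip(a_rows, b_vals):
        fwd = sum(aij * xj for aij, xj in zip(row, x))
        for j, aij in enumerate(row):
            num[j] += aij * b / fwd
            den[j] += aij
    # pixels not seen by these rays (den == 0) are left unchanged
    return [xj * (n / d if d > 0 else 1.0) for xj, n, d in zip(x, num, den)]

# 2x2 "image" with true activities [1, 2, 3, 4]; each ray sums two pixels
# (a toy system matrix, for illustration only).
rays = [[1, 1, 0, 0], [0, 0, 1, 1], [1, 0, 1, 0], [0, 1, 0, 1]]
meas = [3.0, 7.0, 4.0, 6.0]
x = [1.0, 1.0, 1.0, 1.0]
for _ in range(200):
    # updating after each single projection mimics the per-measurement
    # image refresh described in the abstract
    for row, b in zip(rays, meas):
        x = mlem_update(x, [row], [b])
```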
A Total Variation-Based Reconstruction Method for Dynamic MRI
Directory of Open Access Journals (Sweden)
Germana Landi
2008-01-01
In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution, so the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
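A one-dimensional stand-in for the TV-regularized problem: gradient descent on a smoothed TV penalty rather than the Vogel-Oman fixed point iteration used in the paper. The signal (a piecewise-constant edge plus ringing-like oscillations) and all parameters are invented.

```python
import math

def tv(x, eps=1e-2):
    """Smoothed total variation of a 1-D signal."""
    return sum(math.sqrt((x[i + 1] - x[i]) ** 2 + eps) for i in range(len(x) - 1))

def tv_denoise(y, lam=0.3, step=0.05, iters=1000, eps=1e-2):
    """Minimise 0.5*||x - y||^2 + lam*TV(x) by plain gradient descent on
    the smoothed TV penalty (a simple stand-in for the paper's solver)."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]          # data-fidelity gradient
        for i in range(len(x) - 1):
            d = x[i + 1] - x[i]
            t = lam * d / math.sqrt(d * d + eps)       # smoothed-TV gradient
            g[i] -= t
            g[i + 1] += t
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Piecewise-constant signal with Gibbs-ringing-like oscillations added.
y = [(1.0 if 10 <= i < 30 else 0.0) + 0.2 * math.sin(2.5 * i) for i in range(40)]
x = tv_denoise(y)
```

The step size is chosen below the inverse Lipschitz constant of the smoothed objective, so the iteration decreases the objective monotonically and the output has strictly lower TV than the input.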
Comparing 3-dimensional virtual methods for reconstruction in craniomaxillofacial surgery.
Benazzi, Stefano; Senck, Sascha
2011-04-01
In the present project, the virtual reconstruction of digitally osteotomized zygomatic bones was simulated using different methods. A total of 15 skulls were scanned using computed tomography, and a virtual osteotomy of the left zygomatic bone was performed. Next, virtual reconstructions of the missing part using mirror imaging (with and without best-fit registration) and thin plate spline interpolation functions were compared with the original left zygomatic bone. In general, reconstructions using thin plate spline warping showed better results than the mirroring approaches. Nevertheless, when dealing with skulls characterized by a low degree of asymmetry, mirror imaging and subsequent registration can be considered a valid and easy solution for zygomatic bone reconstruction. The mirroring tool is one of the possible alternatives in reconstruction, but it might not always be the optimal solution (i.e., when the hemifaces are asymmetrical). In the present pilot study, we have verified that best-fit registration of the mirrored unaffected hemiface and thin plate spline warping achieved better results in terms of fitting accuracy, overcoming the evident limits of the mirroring approach. Copyright © 2011 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Akın Ata
2007-12-01
Background: It is a daunting task to identify all the metabolic pathways of brain energy metabolism and develop a dynamic simulation environment covering a time scale ranging from seconds to hours. To simplify this task and make it more practicable, we undertook stoichiometric modeling of brain energy metabolism with the major aim of including the main interacting pathways in and between astrocytes and neurons. Model: The constructed model includes central metabolism (glycolysis, pentose phosphate pathway, TCA cycle), lipid metabolism, reactive oxygen species (ROS) detoxification, amino acid metabolism (synthesis and catabolism), the well-known glutamate-glutamine cycle, other coupling reactions between astrocytes and neurons, and neurotransmitter metabolism. This is, to our knowledge, the most comprehensive attempt at stoichiometric modeling of brain metabolism to date in terms of its coverage of a wide range of metabolic pathways. We then attempted to model the basal physiological behaviour and hypoxic behaviour of the brain cells, where astrocytes and neurons are tightly coupled. Results: The reconstructed stoichiometric reaction model included 217 reactions (184 internal, 33 exchange) and 216 metabolites (183 internal, 33 external) distributed in and between astrocytes and neurons. Flux balance analysis (FBA) techniques were applied to the reconstructed model to elucidate the underlying cellular principles of neuron-astrocyte coupling. Simulation of resting conditions under the constraints of maximization of glutamate/glutamine/GABA cycle fluxes between the two cell types, with subsequent minimization of the Euclidean norm of fluxes, resulted in a flux distribution in accordance with literature-based findings. As a further validation of our model, the effect of oxygen deprivation (hypoxia) on fluxes was simulated using an FBA derivative known as minimization of metabolic adjustment (MOMA). The results show the power of the ...
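Flux balance analysis as used above reduces to a linear program: maximize an objective flux subject to steady-state stoichiometry S·v = 0 and flux bounds. The sketch below runs FBA on an invented four-reaction toy network, not on the brain model, and assumes SciPy is available.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (an illustration, not the model from the paper):
#   v1: -> A      (uptake, capped at 10)
#   v2: A -> B
#   v3: B ->      (objective flux, e.g. "biomass")
#   v4: A ->      (alternative drain)
# Rows of S are the internal metabolites A and B; columns are reactions.
S = np.array([[1.0, -1.0, 0.0, -1.0],
              [0.0,  1.0, -1.0, 0.0]])
bounds = [(0, 10), (0, None), (0, None), (0, None)]

# linprog minimises, so negate the objective to maximise v3.
res = linprog(c=[0, 0, -1, 0], A_eq=S, b_eq=[0, 0], bounds=bounds)
v = res.x   # optimal flux distribution
```

At the optimum all uptake is routed through v2 and v3, so the objective flux hits the uptake cap of 10 while the steady-state constraint S·v = 0 holds.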
DG TOMO: A new method for tomographic reconstruction
International Nuclear Information System (INIS)
Freitas, D. de; Feschet, F.; Cachin, F.; Geissler, B.; Bapt, A.; Karidioula, I.; Martin, C.; Kelly, A.; Mestas, D.; Gerard, Y.; Reveilles, J.P.; Maublant, J.
2006-01-01
Aim: FBP and OSEM are the most popular tomographic reconstruction methods in scintigraphy. FBP is a simple method, but reconstruction artifacts are generated whose corrections degrade the spatial resolution. OSEM takes account of statistical fluctuations, but noise increases strongly after a certain number of iterations. We compare a new method of tomographic reconstruction based on discrete geometry (DG TOMO) to FBP and OSEM. Materials and methods: Acquisitions were performed on a three-head gamma camera (Philips) with a NEMA phantom containing six spheres with inner diameters from 10 to 37 mm, filled with around 325 MBq/l of technetium-99m. The spheres were positioned in water containing 3 MBq/l of technetium-99m. Acquisitions were realized during a 180° rotation around the phantom in 25-s steps. DG TOMO has been developed in our laboratory in order to minimize the number of projections at acquisition. Tomographic reconstructions using 32 and 16 projections with FBP, OSEM and DG TOMO were performed and transverse slices were compared. Results: FBP with 32 projections detects only the activity in the three largest spheres (diameter ≥22 mm). With 16 projections, the star effect is predominant and the contrast of the third sphere is very low. OSEM with 32 projections provides a better image, but the three smallest spheres (diameter ≤17 mm) are difficult to distinguish. With 16 projections, the three smallest spheres are not detectable. The results of DG TOMO are similar to those of OSEM. Conclusion: Since the parameters of DG TOMO can be further optimized, this method appears to be a promising alternative for tomoscintigraphy reconstruction
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity-encoded data (SENSE reconstruction) using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total variation) and sparsity-promoting (e.g., l1-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and the state-of-the-art MFISTA.
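The variable-splitting AL strategy can be sketched on a small l1-regularized least squares problem: split x = z, then alternate a quadratic solve, a soft-thresholding step, and a multiplier update (the ADMM form of the AL framework). Matrix sizes and parameters are invented; this is not the SENSE system of the paper.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_l1(A, b, lam=0.1, rho=1.0, iters=200):
    """Augmented Lagrangian (ADMM) solver for
    min_x 0.5*||Ax - b||^2 + lam*||x||_1, via the split x = z."""
    n = A.shape[1]
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))   # small-problem shortcut
    Atb = A.T @ b
    x = z = u = np.zeros(n)
    for _ in range(iters):
        x = Q @ (Atb + rho * (z - u))   # quadratic subproblem
        z = soft(x + u, lam / rho)      # l1 subproblem
        u = u + x - z                   # dual (multiplier) update
    return z

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10); x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true + 0.01 * rng.standard_normal(30)
x_hat = admm_l1(A, b)
```

Each subproblem is cheap and closed-form, which is the practical appeal of the variable-splitting approach over a monolithic nonlinear solve.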
Method and Apparatus of Implementing a Magnetic Shield Flux Sweeper
Sadleir, John E. (Inventor)
2018-01-01
The present invention relates to a method and apparatus for protecting magnetically sensitive devices with a shield, including: a non-superconducting metal, or a material of lower transition temperature (T_c) compared to a higher-transition-temperature material, disposed in a magnetic field; and means for creating a spatially varying order parameter |Ψ(r,T)|² in the non-superconducting metal or lower-transition-temperature material. The spatially varying order parameter is created by the proximity effect, such that the non-superconducting metal or lower-transition-temperature material becomes superconducting as the temperature is lowered, creating a flux-free Meissner state at its center and sweeping magnetic flux lines to the periphery.
Chan, Harley; Gilbert, Ralph W.; Pagedar, Nitin A.; Daly, Michael J.; Irish, Jonathan C.; Siewerdsen, Jeffrey H.
2010-02-01
Esthetic appearance is one of the most important factors in reconstructive surgery. The current practice of maxillary reconstruction uses radial forearm, fibula or iliac crest osteocutaneous flaps to recreate the three-dimensional complex structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and result in a less satisfactory esthetic outcome. Considering similarity factors and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and palate based on a diagnostic volumetric computed tomography (CT) image. This quantitative result is further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit to the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans including the maxilla and chest on the same day. Based on this image set, we simulated total, subtotal and hemi-palate reconstruction. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvature between STL models. Across the 10 patients' data, we found an overall root-mean-square (RMS) conformance of 3.71 ± 0.16 mm
International Nuclear Information System (INIS)
Fraysse, F.; Redondo, C.; Rubio, G.; Valero, E.
2016-01-01
This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a bench of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.
Quartet-based methods to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng
2014-02-20
Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performance and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species, and an influenza data set related to the recently emerging H7N9 low-pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios, from simple branching trees to complicated networks containing many reticulate events. These methods could provide insights into the understanding of complicated biological evolutionary processes such as bacterial taxonomy and the reassortment of influenza viruses.
Two-Dimensional Impact Reconstruction Method for Rail Defect Inspection
Directory of Open Access Journals (Sweden)
Jie Zhao
2014-01-01
The safety of train operation is seriously threatened by rail defects, so it is of great significance to inspect for rail defects dynamically while the train is operating. This paper presents a two-dimensional impact reconstruction method to realize on-line inspection of rail defects. The proposed method uses preprocessing technology to convert time-domain vertical vibration signals, acquired by a wireless sensor network, into space signals. A modern time-frequency analysis method is improved to reconstruct the obtained multisensor information. Then, image fusion processing technology based on spectrum threshold processing and node color labeling is proposed to reduce the noise and to blank the periodic impact signals caused by rail joints and locomotive running gear. This method can convert the aperiodic impact signals caused by rail defects into partially periodic impact signals and locate the rail defects. An application shows that the two-dimensional impact reconstruction method clearly displays the impacts caused by rail defects and is an effective on-line rail defect inspection method.
Directory of Open Access Journals (Sweden)
C. Möstl
2009-05-01
We analyze a magnetic signature associated with the leading edge of a bursty bulk flow observed by Cluster at −19 RE downtail on 22 August 2001. A distinct rotation of the magnetic field was seen by all four spacecraft. This event was previously examined by Slavin et al. (2003b) using both linear force-free modeling and a curlometer technique. Extending this work, we apply single- and multi-spacecraft Grad-Shafranov (GS) reconstruction techniques to the Cluster observations and find good evidence that the structure encountered is indeed a magnetic flux rope containing helical magnetic field lines. We find that the flux rope has a diameter of approximately 1 RE, an axial field of 26.4 nT, a velocity of ≈650 km/s, a total axial current of 0.16 MA and magnetic fluxes of order 10^5 Wb. The field line twist is estimated as half a turn per RE. The invariant axis is inclined at 40° to the ecliptic plane and 10° to the GSM equatorial plane. The flux rope has a force-free core and non-force-free boundaries. When we compare and contrast our results with those obtained from minimum variance, single-spacecraft force-free fitting and curlometer techniques, we find in general fair agreement, but also clear differences such as a higher inclination of the axis to the ecliptic. We further conclude that single-spacecraft methods have limitations which should be kept in mind when applied to THEMIS observations, and that non-force-free GS and curlometer techniques are to be preferred in their analysis. Some properties we derived for this earthward-moving structure are similar to those inferred by Lui et al. (2007), using a different approach, for a tailward-moving flux rope observed during the expansion phase of the same substorm.
The gridding method for image reconstruction by Fourier transformation
International Nuclear Information System (INIS)
Schomberg, H.; Timmer, J.
1995-01-01
This paper explores a computational method for reconstructing an n-dimensional signal f from a sampled version of its Fourier transform f̂. The method involves a window function w and proceeds in three steps. First, the convolution ĝ = ŵ * f̂ is computed numerically on a Cartesian grid, using the available samples of f̂. Then, g = w·f is computed from ĝ via the inverse discrete Fourier transform, and finally f is obtained as g/w. Due to the smoothing effect of the convolution, evaluating ŵ * f̂ is much less error prone than merely interpolating f̂. The method was originally devised for image reconstruction in radio astronomy, but is actually applicable to a broad range of reconstructive imaging methods, including magnetic resonance imaging and computed tomography. In particular, it provides a fast and accurate alternative to the filtered backprojection. The basic method has several variants with other applications, such as the equidistant resampling of arbitrarily sampled signals or the fast computation of the Radon (Hough) transform
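The three steps are easy to verify in the degenerate case where the Fourier samples already lie on the Cartesian grid: convolve with the window, inverse-FFT, then divide out the window's transform ("deapodization") recovers the signal exactly. Real gridding applies the same steps to nonuniform samples with density-compensation weights; the 1-D signal and Gaussian window below are invented for the demonstration.

```python
import numpy as np

n = 64
x = np.exp(-0.5 * ((np.arange(n) - 32) / 4.0) ** 2)   # test signal f
fhat = np.fft.fft(x)                                   # its Fourier samples

# Gaussian window in the frequency domain, centred at index 0 (wrapped).
k = (np.arange(n) + n // 2) % n - n // 2
what = np.exp(-0.5 * (k / 1.0) ** 2)

# Step 1: ghat = what (*) fhat, a circular convolution on the Cartesian grid.
ghat = np.fft.ifft(np.fft.fft(fhat) * np.fft.fft(what))
# Step 2: g via the inverse DFT; by the convolution theorem
# g = n * IDFT(fhat) * IDFT(what) = n * f * w.
g = np.fft.ifft(ghat)
# Step 3: deapodize, i.e. divide by the (scaled) transform of the window.
recon = g / (n * np.fft.ifft(what))
```

Because the on-grid case is an exact identity, the reconstruction matches the original to rounding error; the payoff of the window appears when the samples are off-grid and plain interpolation would be much less accurate.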
DEFF Research Database (Denmark)
Ravn, Ib
FLUX denotes a flowing or streaming, i.e. dynamics. If one understands life as process and development, rather than as things and mechanics, one arrives at a different picture of the good life than the one suggested by the familiar Western mechanicism. Understood dynamically, the good life involves the best possible... channelling of the flux or energy that streams through us and makes itself known in our daily activities. Should our thoughts, actions, work, social life and political life be organised according to tight, fixed sets of rules, with no room for deviation? Or should they, on the contrary, proceed quite unhindered by rules and ties...
Total variation superiorized conjugate gradient method for image reconstruction
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ε. It is proved that, for any given ε that is greater than the half-squared residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared residual is less than or equal to ε. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ε of the half-squared residual.
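For reference, the baseline this work builds on, CG applied to the least squares normal equations, can be sketched as follows; the superiorized variants of the paper interleave TV-reducing perturbations between such iterations. The test matrix is random and illustrative.

```python
import numpy as np

def cg_normal_equations(A, b, iters=50, tol=1e-28):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    the fast least squares baseline that the paper superiorizes."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)      # residual of the normal equations
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        if rs < tol:           # converged; avoid a 0/0 step
            break
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 8))
b = rng.standard_normal(40)
x = cg_normal_equations(A, b)
```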
Using the SAND-II and MLM methods to reconstruct fast neutron spectra
International Nuclear Information System (INIS)
Bondars, Kh.Ya.; Kamnev, V.A.; Lapenas, A.A.; Troshin, V.S.
1981-01-01
The reconstruction of fast neutron spectra from measured reaction rates may be reduced to the solution of a Fredholm integral equation of the first kind. This problem falls into the category of incorrectly formulated (ill-posed) problems, and so additional information is required concerning the unknown function, i.e. the differential energy dependence of the neutron flux density φ(E). There are various methods for seeking a solution to the problem as formulated above. One of the best-known methods used in the USSR is the maximum likelihood method (MLM) (or directional difference method (DDM)), whereas SAND-II is commonly used abroad. The purpose of this paper is to compare the MLM and SAND-II methods, taking as an example the processing of measurement data obtained in the B-2 beam line at the BR-10 reactor in order to determine the composition of shielding for a fast reactor
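A sketch of a SAND-II-style multiplicative spectrum adjustment: each group flux is corrected by a weighted geometric mean of measured-to-calculated rate ratios, with weights given by that group's contribution to each reaction rate. The group structure and cross sections are invented, and this is a simplification of the actual code.

```python
import math

def sandii_adjust(phi, sigma, r_meas, iters=200):
    """SAND-II-style iterative adjustment of a group flux spectrum phi so
    that computed reaction rates R_i = sum_g sigma[i][g]*phi[g] approach
    the measured rates r_meas (consistent data assumed)."""
    phi = list(phi)
    for _ in range(iters):
        r_calc = [sum(s * p for s, p in zip(row, phi)) for row in sigma]
        for g in range(len(phi)):
            wsum = num = 0.0
            for i, row in enumerate(sigma):
                w = row[g] * phi[g] / r_calc[i]   # group g's share of rate i
                num += w * math.log(r_meas[i] / r_calc[i])
                wsum += w
            if wsum > 0:
                phi[g] *= math.exp(num / wsum)
    return phi

# Two dosimetry reactions, three energy groups (illustrative numbers).
sigma = [[1.0, 0.5, 0.1],    # threshold-like response
         [0.1, 0.4, 2.0]]    # resonance-like response
true_phi = [2.0, 1.0, 0.5]
r_meas = [sum(s * p for s, p in zip(row, true_phi)) for row in sigma]
phi = sandii_adjust([1.0, 1.0, 1.0], sigma, r_meas)
r_fit = [sum(s * p for s, p in zip(row, phi)) for row in sigma]
```

With two rates and three groups the problem is underdetermined, which is exactly the ill-posedness noted in the abstract: the iteration converges to one spectrum consistent with the measured rates, shaped by the starting guess.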
Transfer matrix method for four-flux radiative transfer.
Slovick, Brian; Flom, Zachary; Zipp, Lucas; Krishnamurthy, Srini
2017-07-20
We develop a transfer matrix method for four-flux radiative transfer, which is ideally suited for studying transport through multiple scattering layers. The model predicts the specular and diffuse reflection and transmission of multilayer composite films, including interface reflections, for diffuse or collimated incidence. For spherical particles in the diffusion approximation, we derive closed-form expressions for the matrix coefficients and show remarkable agreement with numerical Monte Carlo simulations for a range of absorption values and film thicknesses, and for an example multilayer slab.
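A reduced two-flux (Kubelka-Munk type) analogue of the four-flux transfer-matrix idea can show the mechanics: each layer gets a 2x2 matrix (here a closed-form matrix exponential), layer matrices multiply, and boundary conditions yield the diffuse reflectance and transmittance of the stack. The coefficients are illustrative, and the real four-flux model uses 4x4 matrices with specular channels.

```python
import math

def layer_matrix(s, k, d):
    """Transfer matrix of one layer in a two-flux model,
        dI+/dz = -(s + k) I+ + s I-,   dI-/dz = (s + k) I- - s I+,
    with s the back-scattering and k the absorption coefficient.
    Uses expm(A d) = cosh(qd) I + sinh(qd)/q * A, since A^2 = q^2 I."""
    a, b = s + k, s
    q = math.sqrt(max(a * a - b * b, 1e-30))
    c, sh = math.cosh(q * d), math.sinh(q * d) / q
    return [[c - a * sh, b * sh],
            [-b * sh, c + a * sh]]

def mat_mul(m1, m2):
    return [[sum(m1[i][k] * m2[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def reflect_transmit(layers):
    """Multiply layer matrices (entry side first) and apply the boundary
    condition I-(exit) = 0 to extract reflectance and transmittance."""
    m = [[1.0, 0.0], [0.0, 1.0]]
    for s, k, d in layers:
        m = mat_mul(layer_matrix(s, k, d), m)
    r = -m[1][0] / m[1][1]            # diffuse reflectance
    t = m[0][0] + m[0][1] * r         # diffuse transmittance
    return r, t

r, t = reflect_transmit([(2.0, 0.0, 0.5), (1.0, 0.0, 1.0)])  # non-absorbing stack
```

For a non-absorbing stack energy is conserved (r + t = 1), while adding absorption makes r + t < 1; this is a quick sanity check on the matrix algebra.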
Filtering of SPECT reconstructions made using Bellini's attenuation correction method
International Nuclear Information System (INIS)
Glick, S.J.; Penney, B.C.; King, M.A.
1991-01-01
This paper evaluates a three-dimensional (3D) Wiener filter used to restore SPECT reconstructions made with Bellini's method of attenuation correction. Its performance is compared to that of several pre-reconstruction filters: the one-dimensional (1D) Butterworth, the two-dimensional (2D) Butterworth, and a 2D Wiener filter. A simulation study is used to compare the four filtering methods. An approximation to a clinical liver-spleen study was used as the source distribution, and an algorithm that accounts for the depth- and distance-dependent blurring in SPECT was used to compute noise-free projections. To study the effect of the filtering method on tumor detection accuracy, a 2 cm diameter, cool spherical tumor (40% contrast) was placed at a known, but random, location within the liver. Projection sets for ten tumor locations were computed, and five noise realizations of each set were obtained by introducing Poisson noise. The simulated projections were either filtered with the 1D or 2D Butterworth or the 2D Wiener and then reconstructed using Bellini's intrinsic attenuation correction, or reconstructed first and then filtered with the 3D Wiener. The criteria used for comparison were: normalized mean square error (NMSE), cold-spot contrast, and accuracy of tumor detection with an automated numerical method. Results indicate that restorations obtained with 3D Wiener filtering yielded significantly higher lesion contrast and lower NMSE values compared to the other methods. The Wiener restoration filters and the 2D Butterworth all provided similar measures of detectability, noticeably higher than that obtained with 1D Butterworth smoothing
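The core of any Wiener restoration is a frequency-domain gain S/(S + N). The 1-D sketch below uses oracle spectra (both signal and noise spectra assumed known, unlike in practice, where they must be estimated); the signal and noise level are invented.

```python
import numpy as np

def wiener_restore(noisy, signal_psd, noise_psd):
    """Frequency-domain Wiener filter: attenuate each frequency bin by
    S/(S + N), the MMSE gain when the signal and noise spectra are known."""
    gain = signal_psd / (signal_psd + noise_psd)
    return np.real(np.fft.ifft(np.fft.fft(noisy) * gain))

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)
noisy = clean + 0.5 * rng.standard_normal(n)

# Oracle spectra for illustration only.
signal_psd = np.abs(np.fft.fft(clean)) ** 2 / n
noise_psd = np.full(n, 0.25)          # white noise of variance 0.5^2
restored = wiener_restore(noisy, signal_psd, noise_psd)
```

Because the signal energy sits in a handful of frequency bins, the gain passes those bins and suppresses the rest, cutting the mean square error well below that of the noisy input.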
Improving automated 3D reconstruction methods via vision metrology
Toschi, Isabella; Nocerino, Erica; Hess, Mona; Menna, Fabio; Sargeant, Ben; MacDonald, Lindsay; Remondino, Fabio; Robson, Stuart
2015-05-01
This paper aims to provide a procedure for improving automated 3D reconstruction methods via vision metrology. The 3D reconstruction problem is generally addressed using two different approaches. On the one hand, vision metrology (VM) systems try to accurately derive 3D coordinates of few sparse object points for industrial measurement and inspection applications; on the other, recent dense image matching (DIM) algorithms are designed to produce dense point clouds for surface representations and analyses. This paper strives to demonstrate a step towards narrowing the gap between traditional VM and DIM approaches. Efforts are therefore intended to (i) test the metric performance of the automated photogrammetric 3D reconstruction procedure, (ii) enhance the accuracy of the final results and (iii) obtain statistical indicators of the quality achieved in the orientation step. VM tools are exploited to integrate their main functionalities (centroid measurement, photogrammetric network adjustment, precision assessment, etc.) into the pipeline of 3D dense reconstruction. Finally, geometric analyses and accuracy evaluations are performed on the raw output of the matching (i.e. the point clouds) by adopting a metrological approach. The latter is based on the use of known geometric shapes and quality parameters derived from VDI/VDE guidelines. Tests are carried out by imaging the calibrated Portable Metric Test Object, designed and built at University College London (UCL), UK. It allows assessment of the performance of the image orientation and matching procedures within a typical industrial scenario, characterised by poor texture and known 3D/2D shapes.
Image reconstruction methods for the PBX-M pinhole camera
International Nuclear Information System (INIS)
Holland, A.; Powell, E.T.; Fonck, R.J.
1990-03-01
This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
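The first (least squares) approach amounts to solving an overdetermined linear system relating the discretized emission profile to the recorded projections. A minimal sketch, using a random stand-in for the actual pinhole geometry matrix:

```python
import numpy as np

# Hypothetical discretization: y = P @ e, with P a stand-in projection matrix
# (the real P encodes the pinhole camera geometry).
rng = np.random.default_rng(4)
n_pix, n_det = 16, 40
P = rng.uniform(size=(n_det, n_pix))            # stand-in geometry matrix
e_true = np.linspace(1.0, 2.0, n_pix)           # emission profile to recover
y = P @ e_true + rng.normal(scale=0.005, size=n_det)  # noisy projections
e_ls, *_ = np.linalg.lstsq(P, y, rcond=None)    # least squares fit
```

With low noise the fit recovers the profile closely; the maximum entropy variant instead trades fit quality against a default profile and guarantees positivity.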
A two-way regularization method for MEG source reconstruction
Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin
2012-01-01
The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
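The two penalties can be written schematically as below; the symbols are illustrative, not the authors' exact notation (Y: sensor-by-time measurements, G: lead field matrix, s_i: time course of source location i, Ω: a roughness matrix built from second differences):

```latex
\min_{S}\; \|Y - G S\|_F^2
\;+\; \lambda_1 \sum_{i=1}^{p} \|s_i\|_2          % sparsity across locations (focality)
\;+\; \lambda_2 \sum_{i=1}^{p} s_i^{\top} \Omega\, s_i  % roughness of each time course (smoothness)
```

The group penalty on whole rows of S drives most source locations to zero, while the quadratic roughness term smooths the surviving time courses.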
A Family of Multipoint Flux Mixed Finite Element Methods for Elliptic Problems on General Grids
Wheeler, Mary F.; Xue, Guangri; Yotov, Ivan
2011-01-01
In this paper, we discuss a family of multipoint flux mixed finite element (MFMFE) methods on simplicial, quadrilateral, hexahedral, and triangular-prismatic grids. The MFMFE methods are locally conservative with continuous normal fluxes, since
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error
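The standard trick for sampling from a signed density, sketched below, is to draw from its absolute value and carry the sign (with the normalization) as a statistical weight; this is a generic illustration, not the paper's specific estimator:

```python
import numpy as np

def sample_signed_density(values, signed_weights, n, rng):
    """Sample from a signed 'density' by drawing from |w| and attaching
    the sign times the total |w| mass as a statistical weight."""
    w = np.asarray(signed_weights, dtype=float)
    p = np.abs(w) / np.abs(w).sum()               # a proper probability density
    idx = rng.choice(len(values), size=n, p=p)
    wts = np.sign(w[idx]) * np.abs(w).sum()       # restores sign and normalization
    return np.asarray(values)[idx], wts

rng = np.random.default_rng(1)
x = np.array([0.0, 1.0, 2.0, 3.0])
w = np.array([0.5, -0.25, 0.5, 0.25])             # signed source density
samples, wts = sample_signed_density(x, w, 200_000, rng)
est = (samples * wts).mean()                      # estimates sum_i x_i * w_i = 1.5
```

The weighted estimator remains unbiased for the signed integral even though every random walk is launched from a nonnegative density.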
Reconstructing dust fluxes and paleoproductivity at the southern Agulhas Plateau since MIS-6
Frenkel, M. M.; Anderson, R. F.; Winckler, G.
2017-12-01
Understanding the mechanisms underlying glacial-interglacial cycles requires characterizing the role of oceanic feedbacks in climatic changes. For example, increased aeolian iron fluxes to Fe-limited regions of the ocean and corresponding changes in marine productivity could have improved biological pump efficiency and resulted in CO2 drawdown. Here we explore these feedbacks using marine sediment core MD02-2588, collected from the southern Agulhas Plateau (SAP; 41°S, 26°E), located beneath the modern subtropical front. Today, diatom productivity in this region is Si-limited because high Si utilization south of the polar front (PF) means that water advected northward to our study site is Si-depleted. However, previous work has suggested that extended sea ice cover during glacial periods may have limited diatom productivity south of the PF while frontal systems shifted northward, allowing more Si to reach the thermocline of the SAP. Meanwhile, increased glacial dust flux to the SAP may have simultaneously supplied more Fe, contributing to higher glacial productivity. This hypothesis has been supported by observations of higher LGM and MIS-6 productivity at MD02-2588 using bulk biogenic content and diatom assemblages (Romero et al., Paleoceanography, 30 (2015) 118-132). Gradients in δ13C between benthic and planktic foraminifera have also been used to support Fe fertilization at this site on millennial timescales (Ziegler et al., Nature Geoscience, 6 (2013) 457-461). Yet, studies have yet to produce coordinated records of dust flux and export production for the SAP. Here, we present records of dust, based on 230Th-normalized 232Th fluxes, and export production, using 230Th-normalized excess-Ba and opal fluxes and authigenic U, through MIS-6. Preliminary results show that lithogenic fluxes to MD02-2588 were approximately twice as high during MIS-6 as MIS-5e and were concurrent with a two-fold increase in excess-Ba flux. However, this relative increase in lithogenic flux
Sediment core and glacial environment reconstruction - a method review
Bakke, Jostein; Paasche, Øyvind
2010-05-01
Alpine glaciers are often located in remote and high-altitude regions of the world, areas that only rarely are covered by instrumental records. Reconstructions of glaciers have therefore proven useful for understanding past climate dynamics on both shorter and longer time-scales. One major drawback with glacier reconstructions based solely on moraine chronologies (by far the most common) is that, due to selective preservation of moraine ridges, such records do not exclude the possibility of multiple Holocene glacier advances. This problem holds regardless of whether cosmogenic isotopes or lichenometry have been used to date the moraines, or radiocarbon dating of mega-fossils buried in till or underneath the moraines themselves. To overcome this problem, Karlén (1976) initially suggested that glacial erosion and the associated production of rock flour deposited in downstream lakes could provide a continuous record of glacier fluctuations, hence overcoming the problem of incomplete reconstructions. We want to discuss the methods used to reconstruct past glacier activity based on sediments deposited in distal glacier-fed lakes. By quantifying physical properties of glacial and extra-glacial sediments deposited in catchments, and in downstream lakes and fjords, it is possible to isolate and identify past glacier activity - size and production rate - which subsequently can be used to reconstruct changing environmental shifts and trends. Changes in average sediment evacuation from alpine glaciers are mainly governed by glacier size and the mass turnover gradient, which determines the deformation rate at any given time. The amount of solid precipitation (mainly winter accumulation) versus loss due to melting during the ablation season (mainly summer temperature) pushes the mass turnover gradient in either the positive or negative direction. A prevailing positive net balance will lead to higher sedimentation rates and vice versa, which in turn can be recorded in downstream
Skin sparing mastectomy: Technique and suggested methods of reconstruction
International Nuclear Information System (INIS)
Farahat, A.M.; Hashim, T.; Soliman, H.O.; Manie, T.M.; Soliman, O.M.
2014-01-01
To demonstrate the feasibility and accessibility of performing adequate mastectomy to extirpate the breast tissue, along with en-bloc formal axillary dissection performed from within the same incision. We also compared different methods of immediate breast reconstruction used to fill the skin envelope to achieve the best aesthetic results. Methods: 38 patients with breast cancer underwent skin-sparing mastectomy with formal axillary clearance, through a circum-areolar incision. Immediate breast reconstruction was performed using different techniques to fill in the skin envelope. Two reconstruction groups were assigned; group 1: autologous tissue transfer only (n=24), and group 2: implant augmentation (n=14). Autologous tissue transfer: the techniques used included filling in the skin envelope using an Extended Latissimus Dorsi flap (18 patients) or a pedicled TRAM flap (6 patients). Augmentation with implants: subpectoral implants (4 patients), a rounded implant placed under the pectoralis major muscle to augment an LD-reconstructed breast; LD pocket (10 patients), an anatomical implant placed over the pectoralis major muscle within a pocket created by the LD flap. No contra-lateral procedure was performed in any of the cases to achieve symmetry. Results: All cases underwent adequate excision of the breast tissue along with en-bloc complete axillary clearance (when indicated), without the need for an additional axillary incision. Eighteen patients underwent reconstruction using extended LD flaps only, six had TRAM flaps, four had augmentation using implants placed below the pectoralis muscle along with LD flaps, and ten had implants placed within the LD pocket. Breast shape, volume and contour were successfully restored in all patients. An adequate degree of ptosis was achieved, to ensure maximal symmetry. Conclusions: Skin-sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian
Use of regularized algebraic methods in tomographic reconstruction
International Nuclear Information System (INIS)
Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.
1997-01-01
Algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of a regularization (a priori introduction of information), as well as the taking into account of the spatial resolution variation with depth (SRVD). Hence, we have compared the performances of two back-projection filtering (BPF) methods and of two algebraic methods (AM) in terms of FWHM (by means of a point source), of the reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom, and of reconstruction speed (time unit = BPF). The BPF methods make use of a ramp filter (maximal resolution, no noise treatment), alone or combined with a Hann low-pass filter (fc = 0.4), as well as an attenuation correction. The AM, which embody attenuation and scattering corrections, are, on one side, OS EM (Ordered Subsets, partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other side, OS MAP EM (Maximum A Posteriori), regularized and embodying the SRVD correction. A table is given containing, for each method used (ramp, Hann, OS EM and OS MAP EM), the values of FWHM, σ/m and time, respectively. One can observe that the OS MAP EM algebraic method improves both resolution, by taking the SRVD into account in the reconstruction process, and noise, through regularization. In addition, thanks to the OS technique, the reconstruction times remain acceptable
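The multiplicative EM update underlying OS EM can be sketched as follows (plain MLEM shown; OS EM applies the same update over ordered subsets of the projection rows, and OS MAP EM adds a regularizing prior term). The tiny system matrix here is purely illustrative:

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """Plain MLEM iteration: x <- x * A^T(y / Ax) / A^T 1.
    A is a hypothetical system matrix, y the measured projections."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])              # sensitivity image
    for _ in range(n_iter):
        ratio = y / np.clip(A @ x, 1e-12, None)   # measured / predicted projections
        x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
    return x

# Tiny example: 3 detector bins, 2 pixels, noise-free data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_rec = mlem(A, y)
```

The update keeps the image nonnegative by construction, which is one reason EM-type methods pair naturally with attenuation and scatter models folded into A.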
Reconstruction of Banknote Fragments Based on Keypoint Matching Method.
Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing
2015-07-01
Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent the local patterns around them. Then, similarity is computed to find the keypoint pairs between each fragment and the reference banknote, from which the fragment's coordinates are determined and its orientation corrected. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method causes, on average, a deviation of 0.12457 ± 0.12810° for each fragment, while the SIFT method deviates 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, the proposed method can estimate relatively precisely the orientation of the banknote fragments to be assembled. © 2015 American Academy of Forensic Sciences.
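Keypoint pairing of this kind is commonly done with nearest-neighbour descriptor matching plus a ratio test, sketched below; this is a generic illustration, not the authors' exact similarity measure or descriptor:

```python
import numpy as np

def match_keypoints(desc_frag, desc_ref, ratio=0.8):
    """Match fragment descriptors to reference-banknote descriptors by
    nearest neighbour, keeping only unambiguous matches (Lowe-style ratio test)."""
    matches = []
    for i, d in enumerate(desc_frag):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        j, k = np.argsort(dists)[:2]              # best and second-best candidates
        if dists[j] < ratio * dists[k]:           # accept only clearly best matches
            matches.append((i, j))
    return matches

rng = np.random.default_rng(2)
ref = rng.normal(size=(20, 32))                   # reference-banknote descriptors
# Fragment descriptors: reference rows 3, 7, 11 with slight imaging noise
frag = ref[[3, 7, 11]] + rng.normal(scale=0.01, size=(3, 32))
pairs = match_keypoints(frag, ref)
```

From such pairs the fragment's translation and rotation relative to the reference note can be estimated and then used by the assembly stage.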
Track and vertex reconstruction: From classical to adaptive methods
International Nuclear Information System (INIS)
Strandlie, Are; Fruehwirth, Rudolf
2010-01-01
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
Critical node treatment in the analytic function expansion method for Pin Power Reconstruction
International Nuclear Information System (INIS)
Gao, Z.; Xu, Y.; Downar, T.
2013-01-01
Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, as with all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The modal flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, using modified trigonometric or hyperbolic sine functions which are the ratio of the trigonometric or hyperbolic sine and its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
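The modified basis function sin(Bx)/B can be sketched as follows; it tends to the linear function x as the buckling B goes to zero, so a critical node keeps a well-defined, non-degenerate basis. The switch-over threshold is an illustrative implementation detail:

```python
import numpy as np

def msin(B, x):
    """Modified sine sin(B*x)/B. As B -> 0 this tends to x, so a critical
    node (B = 0) retains a linear basis function instead of collapsing to 0."""
    B = np.asarray(B, dtype=float)
    safe_B = np.where(B == 0, 1.0, B)            # guard the division at B = 0
    return np.where(np.abs(B) < 1e-8, x, np.sin(B * x) / safe_B)
```

The conventional basis sin(Bx) would be identically zero at B = 0, which is exactly the degeneracy the modified functions remove.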
Reconstruction of the energy flux and search for squarks and gluinos in D0 experiment
International Nuclear Information System (INIS)
Ridel, M.
2002-04-01
The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work, I focused on their decays that lead to a signature with jets and missing transverse energy. Before data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. Energy deposits in the calorimeter have been clustered with cellNN at the cell level instead of the tower level. Efforts have been made to take advantage of the calorimeter granularity and aim at the reconstruction of individual particle showers. CellNN starts from the third layer, which has four times the granularity of the other layers. The longitudinal information has been used to detect overlaps between electromagnetic and hadronic showers. Then, clusters and reconstructed tracks from the central detectors are combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers has been determined; they have been used to perform a Monte Carlo search analysis for squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with 100 pb-1 of integrated luminosity has been predicted. Using the energy flow instead of the standard reconstruction tools will improve this lower limit. (author)
Methods of total spectral radiant flux realization at VNIIOFI
Ivashin, Evgeniy; Lalek, Jan; Rybczyński, Andrzej; Ogarev, Sergey; Khlevnoy, Boris; Dobroserdov, Dmitry; Sapritsky, Victor
2018-02-01
VNIIOFI carries out work on the realization of independent methods for measurement of the total spectral radiant flux (TSRF) of incoherent optical radiation sources - reference high-temperature blackbodies (BB), halogen lamps, and LEDs with a quasi-Lambertian spatial distribution of radiance. The paper describes three schemes for measuring facilities using photometers, spectroradiometers and a computer-controlled high-class goniometer. It presents different approaches for TSRF realization at the VNIIOFI National radiometric standard on the basis of high-temperature BB and LED sources, and a gonio-spectroradiometer. These approaches are planned to be compared, and the use of fixed-point cells (in particular, based on the high-temperature δ(MoC)-C metal-carbon eutectic with a phase transition temperature of 2583 °C, corresponding to the metrological optical “source-A”) as an option instead of the BB is considered in order to enhance calibration accuracy.
A Robust Shape Reconstruction Method for Facial Feature Point Detection
Directory of Open Access Journals (Sweden)
Shuqiu Tan
2017-01-01
Full Text Available Facial feature point detection has seen great research advances in recent years. Numerous methods have been developed and applied in practical face analysis systems. However, it is still a quite challenging task because of the large variability in expression and gestures and the existence of occlusions in real-world photo shoots. In this paper, we present a robust sparse reconstruction method for face alignment problems. Instead of a direct regression between the feature space and the shape space, the concept of shape increment reconstruction is introduced. Moreover, a set of coupled overcomplete dictionaries, termed the shape increment dictionary and the local appearance dictionary, are learned in a regressive manner to select robust features and fit shape increments. Additionally, to make the learned model more generalized, we select the best-matched parameter set through extensive validation tests. Experimental results on three public datasets demonstrate that the proposed method achieves better robustness than the state-of-the-art methods.
Fast nonconvex nonsmooth minimization methods for image restoration and reconstruction.
Nikolova, Mila; Ng, Michael K; Tam, Chi-Pan
2010-12-01
Nonconvex nonsmooth regularization has advantages over convex regularization for restoring images with neat edges. However, its practical interest used to be limited by the difficulty of the computational stage, which requires a nonconvex nonsmooth minimization. In this paper, we deal with nonconvex nonsmooth minimization methods for image restoration and reconstruction. Our theoretical results show that the solution of the nonconvex nonsmooth minimization problem is composed of constant regions surrounded by closed contours and neat edges. The main goal of this paper is to develop fast minimization algorithms to solve the nonconvex nonsmooth minimization problem. Our experimental results show the effectiveness and efficiency of the proposed algorithms.
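Objectives of this type are commonly written in the form below; the particular potential function shown is one standard nonconvex choice, used here for illustration rather than as the paper's exact formulation (A: observation operator, D_i: finite-difference operators over pixel neighbourhoods):

```latex
\min_{x}\; \|Ax - y\|_2^2 \;+\; \beta \sum_{i} \varphi\!\left(\|D_i x\|\right),
\qquad \varphi(t) = \frac{\alpha t}{1 + \alpha t}
```

The nonsmoothness of φ at zero drives neighbouring pixels to exactly equal values (constant regions), while its boundedness avoids over-penalizing large jumps, preserving neat edges.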
Optical wedge method for spatial reconstruction of particle trajectories
International Nuclear Information System (INIS)
Asatiani, T.L.; Alchudzhyan, S.V.; Gazaryan, K.A.; Zograbyan, D.Sh.; Kozliner, L.I.; Krishchyan, V.M.; Martirosyan, G.S.; Ter-Antonyan, S.V.
1978-01-01
A technique of optical wedges allowing the full reconstruction of pictures of events in space is considered. The technique is used for the detection of particle tracks in optical wide-gap spark chambers by photographing in one projection. The optical wedges are refracting right-angle plastic prisms positioned between the camera and the spark chamber so that through them both ends of the track are photographed. A method for calibrating measurements is given, and an estimate made of the accuracy of the determination of the second projection with the help of the optical wedges
An Optimized Method for Terrain Reconstruction Based on Descent Images
Directory of Open Access Journals (Sweden)
Xu Xinchao
2016-02-01
Full Text Available An optimization method is proposed to perform high-accuracy terrain reconstruction of the landing area of Chang’e III. First, feature matching is conducted using geometric model constraints. Then, the initial terrain is obtained and the initial normal vector of each point is solved on the basis of the initial terrain. By changing the vector around the initial normal vector in small steps, a set of new vectors is obtained. By combining these vectors with the direction of light and camera, functions are set up on the basis of a surface reflection model. Then, a series of gray values is derived by solving the equations. The new optimized vector is recorded when the obtained gray value is closest to that of the corresponding pixel. Finally, the optimized terrain is obtained after iteration of the vector field. Experiments were conducted using laboratory images and descent images of Chang’e III. The results showed that the performance of the proposed method was better than that of the classical feature matching method. It can provide a reference for terrain reconstruction of the landing area in subsequent moon exploration missions.
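The per-point refinement step can be sketched with a simple Lambertian stand-in for the surface reflection model: perturb the normal in small steps and keep the candidate whose predicted gray value best matches the observed pixel. All parameters here are illustrative assumptions, not the paper's model:

```python
import numpy as np

def refine_normal(n0, light, observed, albedo=1.0, step=0.05, n_trials=200, rng=None):
    """Keep the perturbed unit normal whose Lambertian prediction
    albedo * max(0, n . light) is closest to the observed gray value."""
    rng = rng or np.random.default_rng(0)
    best_n = n0
    best_err = abs(albedo * max(0.0, n0 @ light) - observed)
    for _ in range(n_trials):
        n = n0 + step * rng.normal(size=3)        # small random perturbation
        n /= np.linalg.norm(n)
        err = abs(albedo * max(0.0, n @ light) - observed)
        if err < best_err:
            best_n, best_err = n, err
    return best_n

light = np.array([0.0, 0.0, 1.0])                 # illumination direction
n_init = np.array([0.2, 0.0, 0.98])
n_init /= np.linalg.norm(n_init)                  # initial normal from the terrain
observed = 0.95                                   # observed gray value at the pixel
n_opt = refine_normal(n_init, light, observed)
```

Iterating such updates over the whole normal field, then re-integrating the normals, yields the optimized terrain.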
Extension of local front reconstruction method with controlled coalescence model
Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.
2018-02-01
The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed-grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed-grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM is better at predicting the droplet collisions, especially at high velocity, in comparison with other fixed-grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.
Digital module for neutron flux measurement by Campbell method
International Nuclear Information System (INIS)
Baratte, G.
1987-02-01
The study reported here concerns a wide-range measurement channel for reactor control instrumentation, but it may also be useful for specific measurements requiring the Campbell method. A wide-range measurement channel allows the processing of the signal issued from a single fission chamber, so it is possible to ensure control of nuclear reactors in three different running modes: pulse processing, fluctuation, and current. The study described in this note includes three parts. The analog wide-range neutron measurement channel is presented in the first chapter; the fluctuation mode is thoroughly studied, and the results of tests and the inherent limitations of analog processing are summarized. A theoretical study of neutron flux measurement by numerical calculation of the variance of the fluctuation signal is given in the second chapter. The digital module is described in the third chapter, and the results of experiments are analysed. The validity of the digital method is proved by means of a practical realisation. The performances obtained with the digital fluctuation test model may be compared with those given by the analog fluctuation channel, which can be used for the control of lower fission rates. The digital module may also be used for any fluctuation measurement where a very short response time and a broad spectral band of analysis are not strictly necessary [fr]
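The fluctuation (Campbell) mode rests on Campbell's theorem: for a Poisson train of identical pulses, the signal variance is proportional to the event rate times the integral of the squared pulse shape. A toy numerical illustration, with made-up pulse and rate values:

```python
import numpy as np

def campbell_rate(signal, dt, pulse):
    """Estimate the event (fission) rate from the signal variance via
    Campbell's theorem: var = rate * integral(pulse^2) dt (idealized)."""
    return signal.var() / (np.sum(pulse ** 2) * dt)

rng = np.random.default_rng(3)
dt, n = 1e-6, 2_000_000                       # 1 us bins, 2 s of signal
rate = 5e4                                    # true events per second
pulse = np.exp(-np.arange(50) * dt / 1e-5)    # unit-amplitude exponential pulse
counts = rng.poisson(rate * dt, size=n)       # Poisson events per time bin
signal = np.convolve(counts, pulse, mode="same")
rate_est = campbell_rate(signal, dt, pulse)
```

Because the variance grows linearly with the rate, this mode bridges the gap between pulse counting at low flux and current measurement at high flux.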
Virtanen, Iiro; Virtanen, Ilpo; Pevtsov, Alexei; Yeates, Anthony; Mursula, Kalevi
2017-04-01
We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. We test the model by running simulations with different values of the meridional circulation and supergranular diffusion parameters, and study how the flux distribution inside active regions and the initial magnetic field affect the simulation. We compare the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion and input data. We also compare the simulated magnetic field with observations. We find that there is generally good agreement between simulations and observations. While the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model, the effects of the uncertainties are rather minor or temporary, typically lasting one solar cycle.
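A surface flux transport model evolves the photospheric field under meridional flow, supergranular diffusion, and (here) a linear decay term. A toy one-dimensional sketch of that structure, with explicit Euler time stepping and invented parameters (this is not the authors' code and omits the spherical geometry and source terms of a real model):

```python
def sft_step(B, v, eta, tau, dx, dt):
    """One explicit Euler step of a toy 1-D surface flux transport model:
    advection by a meridional flow v, supergranular diffusion eta, and the
    linear decay term (timescale tau) discussed in the abstract."""
    n = len(B)
    new = B[:]
    for i in range(n):
        im, ip = max(i - 1, 0), min(i + 1, n - 1)
        adv = -v * (B[ip] - B[im]) / (2 * dx)
        dif = eta * (B[ip] - 2 * B[i] + B[im]) / dx ** 2
        new[i] = B[i] + dt * (adv + dif - B[i] / tau)
    return new

# Bipolar active-region field on a latitude strip (invented parameters)
n, dx, dt = 100, 0.02, 1e-4
B = [1.0 if 40 <= i < 50 else (-1.0 if 50 <= i < 60 else 0.0)
     for i in range(n)]
for _ in range(1000):
    B = sft_step(B, v=0.5, eta=0.3, tau=0.05, dx=dx, dt=dt)
peak = max(abs(b) for b in B)
print(peak)  # well below the initial unit amplitude: decay plus diffusion
```

The decay term damps the field on the timescale tau regardless of the other parameters, which illustrates why the abstract finds uncertainties to be "rather minor or temporary".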
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
Iterative reconstruction methods for Thermo-acoustic Tomography
International Nuclear Information System (INIS)
Marinesque, Sebastien
2012-01-01
We define, study and implement various iterative reconstruction methods for Thermo-acoustic Tomography (TAT): the Back and Forth Nudging (BFN), easy to implement and to use, a variational technique (VT) and the Back and Forth SEEK (BF-SEEK), more sophisticated, and a coupling method between Kalman filter (KF) and Time Reversal (TR). A unified formulation is explained for the sequential techniques aforementioned that defines a new class of inverse problem methods: the Back and Forth Filters (BFF). In addition to existence and uniqueness (particularly for backward solutions), we study many frameworks that ensure and characterize the convergence of the algorithms. Thus we give a general theoretical framework for which the BFN is a well-posed problem. Then, in application to TAT, existence and uniqueness of its solutions and geometrical convergence of the algorithm are proved, and an explicit convergence rate and a description of its numerical behaviour are given. Next, theoretical and numerical studies of more general and realistic framework are led, namely different objects, speeds (with or without trapping), various sensor configurations and samplings, attenuated equations or external sources. Then optimal control and best estimate tools are used to characterize the BFN convergence and converging feedbacks for BFF, under observability assumptions. Finally, we compare the most flexible and efficient current techniques (TR and an iterative variant) with our various BFF and the VT in several experiments. Thus, robust, with different possible complexities and flexible, the methods that we propose are very interesting reconstruction techniques, particularly in TAT and when observations are degraded. (author) [fr
DEFF Research Database (Denmark)
Kandel, Tanka P; Lærke, Poul Erik; Elsgaard, Lars
2016-01-01
One of the shortcomings of closed chamber methods for soil respiration (SR) measurements is the decreased CO2 diffusion rate from soil to chamber headspace that may occur due to increased chamber CO2 concentrations. This feedback on diffusion rate may lead to underestimation of pre-deployment fluxes. The chamber was placed on fixed collars, and the CO2 concentration in the chamber headspace was recorded at 1-s intervals for 45 min. Fluxes were measured in different soil types (sandy, sandy loam and organic soils), and for various manipulations (tillage, rain and drought) and soil conditions (temperature and moisture) to obtain a range of fluxes with different shapes of flux curves. The linear method provided more stable flux results during short enclosure times (few min) but underestimated initial fluxes by 15–300% after 45 min deployment time. Non-linear models reduced the underestimation, as average underestimation...
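The linear-versus-non-linear point can be illustrated with a synthetic chamber curve whose true pre-deployment flux is known. A hedged sketch (the exponential saturation model and all parameter values are illustrative, not the study's data):

```python
import math

def linear_slope(ts, cs):
    """Ordinary least-squares slope: the 'linear method' flux estimate."""
    n = len(ts)
    mt, mc = sum(ts) / n, sum(cs) / n
    num = sum((t - mt) * (c - mc) for t, c in zip(ts, cs))
    den = sum((t - mt) ** 2 for t in ts)
    return num / den

# Synthetic closed-chamber curve C(t) = Cs - (Cs - C0)*exp(-k*t); its true
# pre-deployment slope is k*(Cs - C0). Parameter values are invented.
C0, Cs, k = 400.0, 900.0, 0.05              # ppm, ppm, 1/min
true_slope = k * (Cs - C0)                  # 25 ppm/min at t = 0
ts = [s / 60.0 for s in range(0, 45 * 60, 15)]   # 45 min, 15-s samples
cs = [Cs - (Cs - C0) * math.exp(-k * t) for t in ts]
full = linear_slope(ts, cs)                 # fit over the whole 45 min
short = linear_slope(ts[:9], cs[:9])        # fit over the first ~2 min
print(short, full)  # short fit nearly recovers 25; full fit underestimates
```

The long linear fit averages over the flattening part of the curve, reproducing the deployment-time-dependent underestimation the abstract reports.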
A new method for depth profiling reconstruction in confocal microscopy
Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe
2018-05-01
Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave and cause an image to be captured of a small volume around the desired illuminated focal point within the specimen rather than an image of the focal point itself. The size of this small volume increases with depth, thus causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows for a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach has been adapted for describing the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitters planes belonging to the illuminated volume, weighed by the emitters concentration. The true distribution of the emitters concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
Features of the method of large-scale paleolandscape reconstructions
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstruction was tested in the key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main time periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and of aerial and satellite images, stock materials of geological and hydrological surveys and prospecting of peat deposits, archaeological evidence on ancient settlements, palynological and osteological analyses, and complex landscape and archaeological studies. 2. Mapping of factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of the restored paleolakes were determined from the thickness and spatial extent of sapropel (decay ooze) deposits. 5. On the basis of the landscape-edaphic method, paleolandscape reconstructions for the main periods of the Holocene were performed. In reconstructing the original, indigenous flora we relied on data from palynological studies conducted in the studied area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
MO-DE-209-02: Tomosynthesis Reconstruction Methods
International Nuclear Information System (INIS)
Mainprize, J.
2016-01-01
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and the practical physics of DBT systems are a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and a summary of the underlying image theory of DBT, thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support
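The link between classical tomography and DBT can be made concrete with the simplest reconstruction operator, shift-and-add backprojection: each projection is shifted in proportion to the height of the plane of interest and averaged, so in-plane detail reinforces while out-of-plane detail blurs. A toy sketch with parallel-beam point objects and invented geometry (real DBT uses cone-beam geometry and filtered or iterative reconstruction):

```python
import math

def project(points, angle, nx):
    """Toy parallel-beam projection of 2-D (z, x, weight) point objects
    onto a 1-D detector; lateral shift grows with height z."""
    det = [0.0] * nx
    for z, x, w in points:
        xi = round(x + z * math.tan(angle))
        if 0 <= xi < nx:
            det[xi] += w
    return det

def shift_and_add(projs, angles, z, nx):
    """Reconstruct the plane at height z by undoing each projection's
    z-dependent shift and averaging (classic tomosynthesis idea)."""
    plane = [0.0] * nx
    for det, a in zip(projs, angles):
        shift = round(z * math.tan(a))
        for x in range(nx):
            src = x + shift
            if 0 <= src < nx:
                plane[x] += det[src] / len(angles)
    return plane

nx = 64
points = [(10, 30, 1.0), (-10, 40, 1.0)]   # unit points at z = +10, -10
angles = [math.radians(a) for a in range(-20, 25, 5)]
projs = [project(points, a, nx) for a in angles]
plane = shift_and_add(projs, angles, z=10, nx=nx)
print(plane[30], plane[40])  # in-plane point sharp, out-of-plane blurred
```

The in-plane point at (z=10, x=30) sums coherently across all nine angles, while the out-of-plane point at z=-10 is smeared over neighbouring pixels, which is exactly the depth discrimination that the limited angular range of DBT provides.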
Li, ZhaoYu; Chen, Tao; Yan, GuangQing
2016-10-01
A new method for determining the central axial orientation of a two-dimensional coherent magnetic flux rope (MFR) via multipoint analysis of the magnetic-field structure is developed. The method is devised under the following geometrical assumptions: (1) on its cross section, the structure is left-right symmetric; (2) the projected structure velocity is perpendicular to the line of symmetry. The two conditions are naturally satisfied for cylindrical MFRs and are expected to be satisfied for MFRs that are flattened within current sheets. The model test demonstrates that, for determining the axial orientation of such structures, the new method is more efficient and reliable than traditional techniques such as minimum-variance analysis of the magnetic field, Grad-Shafranov (GS) reconstruction, and the more recent method based on the cylindrically symmetric assumption. A total of five flux transfer events observed by Cluster are studied using the proposed approach, and the application results indicate that the observed structures, regardless of their actual physical properties, fit the assumed geometrical model well. For these events, the inferred axial orientations are all in excellent agreement with those obtained using the multi-GS reconstruction technique.
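Among the traditional techniques the abstract compares against is minimum-variance analysis (MVA) of the magnetic field: the eigenvector of the field covariance matrix with the smallest eigenvalue estimates an invariant direction. A pure-Python sketch on synthetic data, using power iteration with deflation in place of a library eigensolver; the time series and all numbers are invented:

```python
import math
import random

random.seed(1)

def covariance(bs):
    n = len(bs)
    m = [sum(b[i] for b in bs) / n for i in range(3)]
    return [[sum((b[i] - m[i]) * (b[j] - m[j]) for b in bs) / n
             for j in range(3)] for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def power_iter(M, iters=500):
    """Dominant eigenpair of a symmetric 3x3 matrix by power iteration."""
    v = [1.0, 0.7, 0.3]
    for _ in range(iters):
        w = matvec(M, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    lam = sum(v[i] * matvec(M, v)[i] for i in range(3))
    return lam, v

# Synthetic series: strong variance along x, weaker along y, almost none
# along z, so the minimum-variance direction should recover the z axis.
bs = [(2 * math.sin(t / 10), math.cos(t / 10), 0.5 + 0.01 * random.random())
      for t in range(200)]
M = covariance(bs)
lam1, v1 = power_iter(M)
M2 = [[M[i][j] - lam1 * v1[i] * v1[j] for j in range(3)] for i in range(3)]
lam2, v2 = power_iter(M2)
M3 = [[M2[i][j] - lam2 * v2[i] * v2[j] for j in range(3)] for i in range(3)]
lam3, v3 = power_iter(M3)
print([round(abs(c), 2) for c in v3])  # minimum-variance direction ~ z axis
```

MVA degrades when the variance ellipsoid is nearly degenerate, which is one motivation for the symmetry-based method proposed in the paper.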
A novel mechanochemical method for reconstructing the moisture-degraded HKUST-1.
Sun, Xuejiao; Li, Hao; Li, Yujie; Xu, Feng; Xiao, Jing; Xia, Qibin; Li, Yingwei; Li, Zhong
2015-07-11
A novel mechanochemical method was proposed to rapidly reconstruct moisture-degraded HKUST-1. The degraded HKUST-1 can be restored within minutes. The reconstructed samples were characterized and confirmed to retain 95% of the surface area and 92% of the benzene capacity of fresh HKUST-1. This is a simple and effective strategy for the reconstruction of degraded MOFs.
Nagler, Pamela L.; Glenn, Edward P.; Morino, Kiyomi; Neale, Christopher M.U; Cosh, Michael H.
2010-01-01
Riparian evapotranspiration (ET) was measured on a salt cedar (Tamarix spp.) dominated river terrace on the Lower Colorado River from 2007 to 2009 using tissue-heat-balance sap flux sensors at six sites representing very dense, medium dense, and sparse stands of plants. Salt cedar ET varied markedly across sites, and sap flux sensors showed that plants were subject to various degrees of stress, detected as mid-day depression of transpiration and stomatal conductance. Sap flux results were scaled from the leaf level of measurement to the stand level by measuring plant-specific leaf area index and fractional ground cover at each site. Results were compared to Bowen ratio moisture tower data available for three of the sites. Sap flux sensors and flux tower results ranked the sites the same and had similar estimates of ET. A regression equation, relating measured ET of salt cedar and other riparian plants and crops on the Lower Colorado River to the Enhanced Vegetation Index from the MODIS sensor on the Terra satellite and reference crop ET measured at meteorological stations, was able to predict actual ET with an accuracy or uncertainty of about 20%, despite between-site differences for salt cedar. Peak summer salt cedar ET averaged about 6 mm d-1 across sites and methods of measurement.
Efficient parsimony-based methods for phylogenetic network reconstruction.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2007-01-15
Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (the rbcL gene in bacteria) and obtain very promising results.
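For a fixed tree, the parsimony score that the authors build on is computable in linear time per site with Fitch's algorithm; it is the generalization to networks that the paper proves NP-hard. A minimal tree-case sketch (a binary tree as nested tuples, one character per leaf):

```python
def fitch(node):
    """Fitch small parsimony: return (state_set, mutation_count)
    for the subtree rooted at node."""
    if isinstance(node, str):           # leaf carrying one character state
        return {node}, 0
    (ls, lc), (rs, rc) = fitch(node[0]), fitch(node[1])
    inter = ls & rs
    if inter:
        return inter, lc + rc           # child sets agree: no mutation
    return ls | rs, lc + rc + 1         # disagreement: charge one mutation

# Six leaves at a single site; the minimum substitution count is the score
tree = ((("A", "A"), ("A", "G")), ("G", "G"))
states, score = fitch(tree)
print(score)
```

Scoring a network under the same criterion requires choosing, for each site, which tree "displayed" by the network explains it best, and that choice is what drives the NP-hardness result.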
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features as drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find there is a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. Then, we try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. But, on the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
Benchmarking burnup reconstruction methods for dynamically operated research reactors
Energy Technology Data Exchange (ETDEWEB)
Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The burnup of an HEU-fueled, dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating the burnup of research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample's burnup. The individual, or sets of, isotopes include ^{148}Nd, ^{137}Cs+^{137}Ba, ^{139}La, and ^{145}Nd+^{146}Nd. The storage documentation for the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated from the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM and 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods evaluated, the results were within 11.3% of either reference burnup. The results were mixed in their closeness to the two reference burnups; however, consistent results were achieved from all three experimental samples.
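Of the signature isotopes listed, ^{148}Nd is the classic burnup indicator: the number of fissions follows from the measured ^{148}Nd inventory and its effective cumulative fission yield, and the fission fraction converts to GWd/MTHM through the energy per fission. A hedged sketch of that arithmetic; the atom densities, the yield value, and the uranium molar mass are illustrative assumptions, not the benchmark's data.

```python
def burnup_fima(n_nd148, n_hm_initial, y148=0.0167):
    """FIMA (fissions per initial heavy-metal atom) from the 148Nd density,
    using an assumed effective cumulative fission yield y148."""
    fissions = n_nd148 / y148
    return fissions / n_hm_initial

def fima_to_gwd_per_mthm(fima, mev_per_fission=200.0, molar_mass=238.0):
    """Convert FIMA to GWd/MTHM (roughly 9.4 GWd/MTHM per %FIMA for U)."""
    atoms_per_mthm = 1e6 / molar_mass * 6.022e23   # g -> mol -> atoms
    joules = fima * atoms_per_mthm * mev_per_fission * 1.602e-13
    return joules / 8.64e13                        # 1 GWd = 8.64e13 J

# Invented atom densities for illustration only (not the benchmark's data)
fima = burnup_fima(n_nd148=1.0e20, n_hm_initial=1.5e22)
bu = fima_to_gwd_per_mthm(fima)
print(round(100 * fima, 1), round(bu, 1))  # %FIMA and GWd/MTHM
```

The dynamic operating history of a research reactor complicates the choice of the effective yield and energy per fission, which is part of what the benchmark set out to quantify.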
International Nuclear Information System (INIS)
Shafii, Mohammad Ali; Meidianti, Rahma; Wildian; Fitriyani, Dian; Tongkukut, Seni H. J.; Arkundato, Artoto
2014-01-01
Theoretical analysis of the integral neutron transport equation using the collision probability (CP) method with a quadratic flux approach has been carried out. In general, the solution of the neutron transport equation using the CP method is performed with a flat flux approach. In this research, the CP method is implemented in a cylindrical nuclear fuel cell with the spatial mesh treated with a non-flat flux approach. This means that the neutron flux at each point in the nuclear fuel cell is allowed to differ, following the distribution pattern of a quadratic flux. The result, presented here in the form of a quadratic flux, gives a better representation of the real conditions in the cell calculation and serves as a starting point for computational implementation
Analysis of the neutron flux in an annular pulsed reactor by using finite volume method
Energy Technology Data Exchange (ETDEWEB)
Silva, Mário A.B. da; Narain, Rajendra; Bezerra, Jair de L., E-mail: mabs500@gmail.com, E-mail: narain@ufpe.br, E-mail: jairbezerra@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociências. Departamento de Energia Nuclear
2017-07-01
Production of very intense neutron sources is important for basic nuclear physics and for material testing and isotope production. Nuclear reactors have been used as sources of intense neutron fluxes, although the achievable levels are limited by the inability to remove fission heat. Periodic pulsed reactors provide very intense fluxes by means of a rotating modulator near a subcritical core. A concept for the production of very intense neutron fluxes that combines features of periodic pulsed reactors and steady-state reactors was proposed by Narain (1997). This concept is known as the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR) and was analyzed using the diffusion equation with moving boundary conditions and the Finite Difference Method with the Crank-Nicolson formalism. This research aims to analyze the flux distribution in the VICHFPR by using the Finite Volume Method and compares its results with those obtained by the previous computational method. (author)
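The finite-volume discretization applied in the study can be illustrated on its simplest relative: steady one-group diffusion with a uniform source on a slab, integrated over control volumes and solved with the Thomas algorithm. A hedged sketch; the cross sections and geometry are invented, not VICHFPR parameters, and the real problem is time-dependent with moving boundaries.

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system: a = sub-, b = main, c = super-diagonal."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Finite volumes for -D*phi'' + sig_a*phi = S on a slab of width L,
# zero flux at both faces (half-cell coupling on the boundary cells).
n, L, D, sig_a, S = 50, 10.0, 1.0, 0.2, 1.0
h = L / n
a = [-D / h] * n
c = [-D / h] * n
b = [2 * D / h + sig_a * h] * n
b[0] = b[-1] = 3 * D / h + sig_a * h   # face at h/2 from the cell centre
d = [S * h] * n
phi = thomas(a, b, c, d)
print(round(max(phi), 3))  # flux peaks at the slab centre, near S/sig_a
```

The analytic centreline flux for these numbers is (S/sig_a)·(1 − 1/cosh(L/(2√(D/sig_a)))) ≈ 3.94, which the 50-cell finite-volume solution reproduces closely.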
International Nuclear Information System (INIS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-01-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherence of the sampling. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)
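The paper's EM variant handles complex MR data; the real-valued non-negative core that such methods extend is the standard MLEM update x ← x · Aᵀ(y / Ax) / Aᵀ1, which has the monotonic likelihood-increase property the abstract alludes to. A toy sketch of that iteration on a tiny invented system (not an MRI model):

```python
def mlem(A, y, iters=5000):
    """Multiplicative EM/MLEM update for a non-negative model y = A x:
    x <- x * A^T(y / Ax) / A^T 1."""
    n_rows, n_cols = len(A), len(A[0])
    x = [1.0] * n_cols                     # strictly positive start
    col_sum = [sum(A[i][j] for i in range(n_rows)) for j in range(n_cols)]
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n_cols))
                for i in range(n_rows)]
        ratio = [y[i] / proj[i] for i in range(n_rows)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_rows))
                for j in range(n_cols)]
        x = [x[j] * back[j] / col_sum[j] for j in range(n_cols)]
    return x

# Consistent toy system: MLEM should recover the generating vector
A = [[1.0, 0.2, 0.1],
     [0.3, 1.0, 0.2],
     [0.1, 0.4, 1.0],
     [0.5, 0.5, 0.5]]
x_true = [2.0, 1.0, 3.0]
y = [sum(A[i][j] * x_true[j] for j in range(3)) for i in range(4)]
x = mlem(A, y)
print([round(v, 2) for v in x])
```

Adapting this to complex-valued MR images and coil sensitivities is precisely the remodeling step the paper contributes; the sketch only shows the iteration's skeleton.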
Estimating and localizing the algebraic and total numerical errors using flux reconstructions
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Strakoš, Z.; Vohralík, M.
2018-01-01
Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016
A Spatial-Temporal Comparison of Lake Mendota CO2 Fluxes and Collection Methods
Baldocchi, A. K.; Reed, D. E.; Desai, A. R.; Loken, L. C.; Schramm, P.; Stanley, E. H.
2017-12-01
Monitoring of carbon fluxes at the lake/atmosphere interface can help us determine baselines from which to understand responses in both space and time that may result from our warming climate or increasing nutrient inputs. Since recent research has shown lakes to be hotspots of global carbon cycling, it is important to quantify carbon sink and source dynamics as well as to verify observations between multiple methods in the context of long-term data collection efforts. Here we evaluate a new method for measuring spatial and temporal variation in CO2 fluxes based on a novel speedboat-based collection of aquatic greenhouse gas concentrations and a flux computation and interpolation algorithm. Two hundred forty-nine consecutive days of spatial flux maps over the 2016 open-ice period were compared to ongoing eddy covariance tower flux measurements on the shore of Lake Mendota, Wisconsin, US, using a flux footprint analysis. Spatial and temporal alignments of the fluxes from these two observational datasets revealed both similar trends from daily to seasonal timescales and biases between the methods. For example, throughout the spring the carbon fluxes showed strong correlation, although they differed by an order of magnitude. Isolating physical patterns of agreement between the two methods of measuring lake/atmosphere CO2 fluxes allows us to pinpoint where biological and physical drivers contribute to the global carbon cycle and helps improve the modelling of lakes and the use of lakes as leading indicators of climate change.
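Boat-based surface concentration measurements are converted to air-water fluxes through a gas-transfer velocity; one widely used wind-based parameterization is k600 = 2.07 + 0.215·U10^1.7 (cm/h) from Cole and Caraco (1998). A hedged sketch of that conversion; the concentration values are invented and the Schmidt-number correction is omitted for brevity.

```python
def co2_flux(c_water, c_equil, u10):
    """Air-water CO2 flux (mmol m^-2 d^-1) from surface and equilibrium
    concentrations (mmol m^-3) and 10-m wind speed (m/s), using the
    Cole & Caraco (1998) k600 wind parameterization."""
    k600_cm_per_h = 2.07 + 0.215 * u10 ** 1.7
    k_m_per_day = k600_cm_per_h / 100.0 * 24.0
    return k_m_per_day * (c_water - c_equil)

# Supersaturated lake water outgasses CO2 (positive flux to the atmosphere)
flux = co2_flux(c_water=30.0, c_equil=15.0, u10=4.0)
print(round(flux, 1))
```

Because k is a bulk parameterization while eddy covariance measures the turbulent flux directly, systematic offsets of the kind reported above are a known feature of such comparisons.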
Quantitative comparison of in situ soil CO2 flux measurement methods
Jennifer D. Knoepp; James M. Vose
2002-01-01
Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...
Energy Technology Data Exchange (ETDEWEB)
Ballhausen, H.
2007-02-07
This treatise develops new methods for high-flux neutron radiography and high-flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To best use the increased amount of information, new estimators are proposed which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and, in addition, severely affected by systematic errors such as the influence of gamma background radiation. The spatial resolution of neutron radiographs, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized space-resolved. New applications of high-flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in cases of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high-flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
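Algebraic reconstruction remains usable when projections are missing because it solves the (possibly underdetermined) linear system row by row rather than inverting a transform. The core Kaczmarz sweep, shown here on a tiny "limited-angle" toy system, converges to the minimum-norm consistent solution; this is a sketch of the family of algorithms, not the facility's implementation.

```python
def kaczmarz(A, b, sweeps=200):
    """Cyclic Kaczmarz/ART: project the estimate onto each row's
    hyperplane in turn; from x = 0 it converges to the minimum-norm
    solution of a consistent system."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for ai, bi in zip(A, b):
            dot = sum(a * xj for a, xj in zip(ai, x))
            nrm = sum(a * a for a in ai)
            lam = (bi - dot) / nrm
            x = [xj + lam * a for a, xj in zip(ai, x)]
    return x

# Underdetermined toy: two "projections", three unknowns. A limited-angle
# scan is exactly this situation, only much larger.
A = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
b = [2.0, 3.0]
x = kaczmarz(A, b)
print([round(v, 2) for v in x])  # minimum-norm consistent solution
```

In full-scale tomography each row of A encodes one detector ray, and regularization or prior knowledge replaces the information the missing angles cannot supply.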
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
Abdel Krim, M.S.
2001-01-01
A transport theory method for calculating flux distributions in a slab fuel cell is described. Two coupled integral equations for the flux in fuel and moderator are obtained, assuming partial reflection at the moderator's external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and for the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes. (orig.)
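The Galerkin step described above, expanding the unknown flux in basis functions and projecting the residual of the integral equation onto the same basis, can be sketched for a single generic Fredholm equation. The kernel, source term, and basis below are illustrative stand-ins, not the cell's actual transport kernels:

```python
import numpy as np

def galerkin_fredholm(kernel, f, basis, a=0.0, b=1.0, n_quad=200):
    """Galerkin solution of phi(x) = f(x) + int_a^b K(x, y) phi(y) dy.

    The unknown is expanded in the given basis functions and the residual
    is projected onto the same basis, reducing the integral equation to a
    small linear system.
    """
    x, w = np.polynomial.legendre.leggauss(n_quad)
    x = 0.5 * (b - a) * x + 0.5 * (b + a)        # quadrature nodes on [a, b]
    w = 0.5 * (b - a) * w
    B = np.array([[p(xi) for p in basis] for xi in x])   # basis at the nodes
    K = kernel(x[:, None], x[None, :])                   # kernel on the grid
    M = B.T @ (w[:, None] * B)                           # Gram matrix <b_i, b_j>
    P = B.T @ (w[:, None] * ((K * w[None, :]) @ B))      # <b_i, int K b_j>
    g = B.T @ (w * f(x))                                 # load vector <b_i, f>
    c = np.linalg.solve(M - P, g)                        # expansion coefficients
    return lambda t: sum(ci * p(t) for ci, p in zip(c, basis))

# Toy check: K(x, y) = 0.5*x*y with f(x) = x has the exact solution
# phi(x) = 1.2*x on [0, 1], which lies in the span of the basis {1, t}.
phi = galerkin_fredholm(lambda x, y: 0.5 * x * y, lambda x: x,
                        basis=[lambda t: 1.0 + 0.0 * t, lambda t: t])
```

Because the exact solution lies in the span of the chosen basis, the Galerkin answer here is exact up to quadrature error; for the coupled fuel/moderator pair the same projection is applied to a block system of two such equations.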
Energy Technology Data Exchange (ETDEWEB)
Nakos, James Thomas
2010-12-01
The purpose of this report is to describe the methods commonly used to measure heat flux in fire applications at Sandia National Laboratories in both hydrocarbon (JP-8 jet fuel, diesel fuel, etc.) and propellant fires. Because these environments are very severe, many commercially available heat flux gauges do not survive the test, so alternative methods had to be developed. Specially built sensors include 'calorimeters' that use a temperature measurement to infer heat flux by use of a model (a heat balance on the sensing surface) or by using an inverse heat conduction method. These specially built sensors are made rugged so they will survive the environment, and so are not optimally designed for ease of use or accuracy. Other methods include radiometers, co-axial thermocouples, directional flame thermometers (DFTs), Sandia 'heat flux gauges', transpiration radiometers, and transverse Seebeck coefficient heat flux gauges. Typical applications are described and the pros and cons of each method are listed.
Managing dense nonaqueous phase liquid (DNAPL) contaminated sites continues to be among the most pressing environmental problems currently faced. One approach that has recently been investigated for use in DNAPL site characterization and remediation is mass flux (mass per unit ar...
Directory of Open Access Journals (Sweden)
Marc Aubinet
1997-01-01
Different methods of measuring momentum and sensible heat flux densities are presented and compared above a grass-covered fallow. The aerodynamic (AD) and eddy covariance (EC) methods are presented and compared for both momentum and sensible heat measurements. In addition, the temperature fluctuation (TF) method is compared to the EC method for the sensible heat flux measurement. The AD and EC methods are in good agreement for the momentum flux measurements. For the sensible heat flux, the AD method is very sensitive to temperature errors, so it is unusable at night and gives biased estimates during the day. The TF method gives only estimates of the sensible heat flux. It is in good agreement with the EC method during the day but diverges completely at night, being unable to discern positive from negative fluxes. Of the three methods, the EC method is the only one that allows continuous measurement of both momentum and sensible heat fluxes, but it requires substantial data treatment. We present in this paper the algorithm used for this treatment.
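The core eddy covariance computation compared above fits in a few lines: the sensible heat flux is air density times heat capacity times the covariance of vertical wind and temperature fluctuations over the averaging period. The synthetic series, density, and heat capacity below are illustrative, not data from the fallow site:

```python
import numpy as np

def sensible_heat_flux(w, t, rho=1.2, cp=1005.0):
    """Eddy covariance sensible heat flux H = rho * cp * mean(w' * T'),
    the covariance of vertical wind and temperature fluctuations about
    their means over the averaging period (W m^-2)."""
    return rho * cp * np.mean((w - w.mean()) * (t - t.mean()))

# Synthetic 30-minute, 10 Hz series with correlated w and T fluctuations.
rng = np.random.default_rng(4)
common = rng.standard_normal(18000)
w = 0.10 * common + 0.05 * rng.standard_normal(18000)           # vertical wind, m/s
t = 293.0 + 0.20 * common + 0.10 * rng.standard_normal(18000)   # air temperature, K
H = sensible_heat_flux(w, t)
```

The real data treatment the abstract alludes to (coordinate rotation, detrending, spectral corrections) happens before this covariance step; the covariance itself is the whole of the flux estimate.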
Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography
DEFF Research Database (Denmark)
Hoffmann, Kristoffer; Knudsen, Kim
2014-01-01
For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Pereira, N F; Sitek, A, E-mail: nfp4@bwh.harvard.ed, E-mail: asitek@bwh.harvard.ed [Department of Radiology, Brigham and Women' s Hospital-Harvard Medical School Boston, MA (United States)
2010-09-21
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, of the five point generation methods that are evaluated, three use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies has superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can out-perform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
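The maximum likelihood expectation maximization (MLEM) update used in these evaluations is the same regardless of whether the columns of the system matrix represent voxels or tetrahedral mesh basis functions. A minimal sketch on a toy system matrix (the matrix and data here are illustrative, not from the study):

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood expectation-maximization reconstruction:
    x <- x / (A^T 1) * A^T (y / (A x)).

    A is the system matrix (rows = projection bins, columns = basis
    functions, whether voxels or tetrahedral mesh points); y is the
    measured projection data."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection, kept positive
        x *= (A.T @ (y / proj)) / sens        # multiplicative EM update
    return x

# Tiny noiseless check: consistent data is recovered in the limit.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
x_true = np.array([2.0, 3.0])
x_hat = mlem(A, A @ x_true)
```

The update preserves nonnegativity automatically, which is why MLEM is the standard choice for Poisson-distributed emission data like the noise realizations described above.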
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and using reconstruction algorithms, for which error analysis of the reconstruction algorithm is a key step. Considering that traditional error analysis methods can only deal with static data, a new dynamic-data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of the smart FBG plate structure. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is performed for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is performed. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used with other data acquisition and data processing systems as a general error analysis method.
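The LMS parameter-identification step named above is a standard adaptive filter: the model weights take a stochastic gradient step along the instantaneous error at every sample. A minimal sketch on a synthetic system (the tap count, step size, and signals are illustrative, not from the FBG experiment):

```python
import numpy as np

def lms_identify(u, d, n_taps=4, mu=0.05):
    """Least-mean-square identification of an FIR model from input u and
    measured output d."""
    w = np.zeros(n_taps)
    for k in range(n_taps - 1, len(u)):
        x = u[k - n_taps + 1:k + 1][::-1]   # most recent samples first
        e = d[k] - w @ x                    # instantaneous model error
        w += mu * e * x                     # LMS weight update
    return w

# Identify a known 4-tap system from its noiseless response.
rng = np.random.default_rng(0)
u = rng.standard_normal(5000)
true_w = np.array([0.5, -0.3, 0.2, 0.1])
d = np.convolve(u, true_w)[:len(u)]
w_hat = lms_identify(u, d)
```

Because the update uses only the current sample, the identified model tracks slowly varying dynamics, which is what makes an LMS-based error model suitable for dynamic rather than static reconstruction data.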
Linogram and other direct Fourier methods for tomographic reconstruction
International Nuclear Information System (INIS)
Magnusson, M.
1993-01-01
Computed tomography (CT) is an outstanding breakthrough in technology as well as in medical diagnostics. The aim in CT is to produce an image with good image quality as fast as possible. The two most well-known methods for CT reconstruction are the Direct Fourier Method (DFM) and the Filtered Backprojection Method (FBM). This thesis is divided into four parts. In part 1 we give an introduction to the principles of CT as well as a basic treatise of the DFM and the FBM. We also present a short CT history as well as brief descriptions of techniques related to X-ray CT such as SPECT, PET and MRI. Part 2 is devoted to the Linogram Method (LM). The method is presented both intuitively and rigorously and a complete algorithm is given for the discrete case. The implementation has been done using the SNARK subroutine package with various parameters and phantom images. For comparison, the FBM has been applied to the same input projection data. The experiments show that the LM gives almost the same image quality, pixel for pixel, as the FBM. In part 3 we show that the LM is a close relative of the common DFM. We give a new extended explanation of artifacts in DFMs. The source of the problem is twofold: interpolation errors and circular convolution. By identifying the second effect as distinct from the first one, we are able to suggest and verify remedies for the DFM which bring the image quality on par with the FBM. One of these remedies is the LM. A slight difficulty with both LM and ordinary DFM techniques is that they require a special projection geometry, whereas most commercial CT scanners provide fan beam projection data. However, the wanted linogram projection data can be interpolated from fan beam projection data. In part 4, we show that it is possible to obtain good image quality with both LM and DFM techniques using fan beam projection input data. The thesis concludes that the computation cost can be essentially decreased by using LM or other DFMs instead of FBM.
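All direct Fourier methods, the linogram method included, rest on the projection-slice theorem: the 1D Fourier transform of a parallel projection equals a central slice of the object's 2D Fourier transform. The theorem can be verified numerically in a few lines (the phantom here is an arbitrary box, and only the zero-degree projection is checked):

```python
import numpy as np

# Projection-slice theorem: the 1D FFT of a parallel projection equals a
# central slice of the object's 2D FFT.
n = 64
img = np.zeros((n, n))
img[20:40, 24:44] = 1.0             # simple box phantom

proj = img.sum(axis=0)              # 0-degree parallel projection
slice_1d = np.fft.fft(proj)         # FFT of the projection
slice_2d = np.fft.fft2(img)[0, :]   # row 0 of the 2D FFT = central slice
```

A DFM fills Fourier space with such slices taken at many angles and inverts; the interpolation from polar slices onto the Cartesian FFT grid is exactly where the artifacts discussed in part 3 arise.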
A scalar flux - oriented method for the transport equation in slab geometry
International Nuclear Information System (INIS)
Budd, C.
1981-01-01
A new method for solving the neutron transport equation is described. An unusual feature of this method is that it deals principally with scalar fluxes rather than angular fluxes. An alternative approach in slab geometry promises to be cheaper to run and does not suffer from many of the problems of the discrete ordinates method. It also appears possible to extend the method to several dimensions and this is discussed. (U.K.)
A simulation of portable PET with a new geometric image reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan): Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)
2006-12-20
A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of line of response whereby two lines of response, which originate from radioactive isotopes in the same position, lie within a few millimeters distance of each other. The method differs from the filtered back projection method and the iterative reconstruction method. The method is applied to a simulation of portable positron emission tomography.
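The elementary geometric quantity the method rests on, the minimum distance between two lines of response, is easy to compute from a point and a unit direction for each line. A minimal sketch (the coordinates are illustrative, in millimetres):

```python
import numpy as np

def lor_distance(p1, d1, p2, d2):
    """Minimum distance between two lines of response, each given by a
    point p and a unit direction d."""
    n = np.cross(d1, d2)
    nn = np.linalg.norm(n)
    if nn < 1e-12:                                   # parallel LORs
        return np.linalg.norm(np.cross(p2 - p1, d1))
    return abs((p2 - p1) @ n) / nn                   # skew or intersecting LORs

# Two skew LORs passing 0.5 mm apart near the origin.
d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
dist = lor_distance(np.array([0.0, 0.0, 0.5]), d1, np.zeros(3), d2)
```

Pairs of LORs whose distance falls below a few millimetres are taken to originate from the same source region, which is the property the proposed reconstruction exploits.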
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. The algorithm discretizes the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradients (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ‖b − Ax‖_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction on commodity hardware. Tikhonov regularization can also be implemented in this fashion, and can produce significant improvement in the reconstructed images.
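The blockwise idea can be sketched with CGLS, the CG-on-normal-equations cousin of LSQR that is mathematically equivalent in exact arithmetic: every product with A or A^T is accumulated over row blocks, so the full matrix never has to be held in memory at once. This is a sketch of the storage pattern, not the paper's implementation:

```python
import numpy as np

def blockwise_cgls(blocks, b_blocks, n_iter=50):
    """CGLS for min_x ||b - A x||_2 with A stored as a list of row
    blocks (and b split to match)."""
    x = np.zeros(blocks[0].shape[1])
    s = sum(B.T @ bb for B, bb in zip(blocks, b_blocks))   # A^T b = A^T r at x=0
    p, gamma = s.copy(), s @ s
    for _ in range(n_iter):
        if gamma < 1e-28:                                  # converged
            break
        q = [B @ p for B in blocks]                        # A p, blockwise
        alpha = gamma / sum(qi @ qi for qi in q)
        x += alpha * p
        s = sum(B.T @ (bb - B @ x) for B, bb in zip(blocks, b_blocks))
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Consistent test problem partitioned into three row blocks.
rng = np.random.default_rng(1)
A = rng.standard_normal((60, 8))
x_true = rng.standard_normal(8)
blocks, b_blocks = np.array_split(A, 3), np.array_split(A @ x_true, 3)
x_hat = blockwise_cgls(blocks, b_blocks)
```

In a real CBCT setting each block would be a sparse chunk of the weighting matrix loaded from disk on demand; the algorithm only ever needs one block at a time.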
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments. These experiments are often very expensive and time-consuming. Hence, digital image analysis techniques are a very fast and low-cost methodology for physical properties prediction, requiring only geometrical parameters measured from thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the simulated annealing relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy, and the 3D model maintains porosity, spatial correlation, chord size distribution and d3-4 distance transform distribution for a pixel-based reconstruction, and spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is at an early stage, only the 2D results are presented. (author)
Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models
Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...
Accelerated gradient methods for total-variation-based CT image reconstruction
Energy Technology Data Exchange (ETDEWEB)
Joergensen, Jakob H.; Hansen, Per Christian [Technical Univ. of Denmark, Lyngby (Denmark). Dept. of Informatics and Mathematical Modeling; Jensen, Tobias L.; Jensen, Soeren H. [Aalborg Univ. (Denmark). Dept. of Electronic Systems; Sidky, Emil Y.; Pan, Xiaochuan [Chicago Univ., Chicago, IL (United States). Dept. of Radiology
2011-07-01
Total-variation (TV)-based CT image reconstruction has shown experimentally to be capable of producing accurate reconstructions from sparse-view data. In particular TV-based reconstruction is well suited for images with piecewise nearly constant regions. Computationally, however, TV-based reconstruction is demanding, especially for 3D imaging, and the reconstruction from clinical data sets is far from being close to real-time. This is undesirable from a clinical perspective, and thus there is an incentive to accelerate the solution of the underlying optimization problem. The TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction preclude the use of memory-intensive methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits prohibitively slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion to ensure that the TV reconstruction has indeed been found. An implementation of the methods (in C with interface to Matlab) is available for download from http://www2.imm.dtu.dk/~pch/TVReg/. We compare the proposed methods with the standard gradient method, applied to a 3D test problem with synthetic few-view data. We find experimentally that for realistic parameters the proposed methods significantly outperform the standard gradient method. (orig.)
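The Barzilai-Borwein step-size heuristic incorporated in GPBB is simple to state: each step length is chosen from the last displacement and gradient change. A minimal sketch on a plain strictly convex quadratic, not on the full 3D-TV problem of the paper:

```python
import numpy as np

def bb_gradient(grad, x0, n_iter=100):
    """Gradient descent with Barzilai-Borwein (BB1) step lengths,
    t_k = (s^T s)/(s^T y) with s = x_k - x_{k-1}, y = g_k - g_{k-1}."""
    x, g, t = x0.copy(), grad(x0), 1e-4
    for _ in range(n_iter):
        x_new = x - t * g
        g_new = grad(x_new)
        if np.linalg.norm(g_new) < 1e-10:   # converged
            return x_new
        s, y = x_new - x, g_new - g
        t = (s @ s) / (s @ y)               # BB1 step length
        x, g = x_new, g_new
    return x

# Strictly convex quadratic test problem: minimize 0.5 x^T Q x - c^T x.
Q = np.diag([1.0, 5.0, 25.0])
c = np.ones(3)
x_hat = bb_gradient(lambda x: Q @ x - c, np.zeros(3))
x_star = np.linalg.solve(Q, c)
```

BB iterations are nonmonotone, which is why GPBB pairs the step rule with a nonmonotone line search in the constrained TV setting; the memory footprint stays that of plain gradient descent.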
Online In-Core Thermal Neutron Flux Measurement for the Validation of Computational Methods
International Nuclear Information System (INIS)
Mohamad Hairie Rabir; Muhammad Rawi Mohamed Zin; Yahya Ismail
2016-01-01
In order to verify and validate the computational methods for neutron flux calculation in RTP calculations, a series of thermal neutron flux measurement has been performed. The Self Powered Neutron Detector (SPND) was used to measure thermal neutron flux to verify the calculated neutron flux distribution in the TRIGA reactor. Measurements results obtained online for different power level of the reactor. The experimental results were compared to the calculations performed with Monte Carlo code MCNP using detailed geometrical model of the reactor. The calculated and measured thermal neutron flux in the core are in very good agreement indicating that the material and geometrical properties of the reactor core are modelled well. In conclusion one can state that our computational model describes very well the neutron flux distribution in the reactor core. Since the computational model properly describes the reactor core it can be used for calculations of reactor core parameters and for optimization of RTP utilization. (author)
A combined reconstruction-classification method for diffuse optical tomography
Energy Technology Data Exchange (ETDEWEB)
Hiltunen, P [Department of Biomedical Engineering and Computational Science, Helsinki University of Technology, PO Box 3310, FI-02015 TKK (Finland); Prince, S J D; Arridge, S [Department of Computer Science, University College London, Gower Street London, WC1E 6B (United Kingdom)], E-mail: petri.hiltunen@tkk.fi, E-mail: s.prince@cs.ucl.ac.uk, E-mail: s.arridge@cs.ucl.ac.uk
2009-11-07
We present a combined classification and reconstruction algorithm for diffuse optical tomography (DOT). DOT is a nonlinear ill-posed inverse problem. Therefore, some regularization is needed. We present a mixture of Gaussians prior, which regularizes the DOT reconstruction step. During each iteration, the parameters of a mixture model are estimated. These associate each reconstructed pixel with one of several classes based on the current estimate of the optical parameters. This classification is exploited to form a new prior distribution to regularize the reconstruction step and update the optical parameters. The algorithm can be described as an iteration between an optimization scheme with zeroth-order variable mean and variance Tikhonov regularization and an expectation-maximization scheme for estimation of the model parameters. We describe the algorithm in a general Bayesian framework. Results from simulated test cases and phantom measurements show that the algorithm enhances the contrast of the reconstructed images with good spatial accuracy. The probabilistic classifications of each image contain only a few misclassified pixels.
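The classification half of the iteration, fitting a mixture of Gaussians by expectation-maximization, can be sketched in one dimension; in the combined algorithm the resulting class statistics feed back as the prior for the next reconstruction step. The data and class count below are illustrative:

```python
import numpy as np

def em_gmm(x, k=2, n_iter=200):
    """EM for a 1D Gaussian mixture: the E-step assigns each sample a
    responsibility for every class, the M-step re-estimates the class
    means, variances and weights from those responsibilities."""
    mu = np.linspace(x.min(), x.max(), k)
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibilities r[i, j] = P(class j | sample i)
        r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-12
        pi = nk / len(x)
    return mu, var, pi

# Two well-separated classes, as for pixels of two optical-parameter types.
rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0.0, 0.5, 400), rng.normal(5.0, 0.5, 600)])
mu, var, pi = em_gmm(x)
```

In the paper's algorithm the samples are the current pixel estimates rather than raw data, and the fitted class means and variances define the variable-mean Tikhonov prior of the next reconstruction step.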
Measurement of absolute neutron flux in LWSCR based on the nuclear track method
International Nuclear Information System (INIS)
Sadeghzadeh, J.; Nassiri Mofakham, N.; Khajehmiri, Z.
2012-01-01
Highlights: ► Up to now, the spectral parameters of thermal neutrons have been measured with activation foils, which are not always reliable in low flux systems. ► We applied a solid state nuclear track detector to measure the absolute neutron flux in the light water sub-critical reactor (LWSCR). ► Experiments on fission track detection were performed and investigated using the Monte Carlo code MCNP. ► The neutron fluxes obtained in experiment are in fairly good agreement with the results obtained by MCNP. - Abstract: In the present paper, a solid state nuclear track detector is applied to measure the absolute neutron flux in the light water sub-critical reactor (LWSCR) at the Nuclear Science and Technology Research Institute (NSTRI). Up to now, the spectral parameters of thermal neutrons have been measured with activation foils, which are not always reliable in low flux systems. The method investigated here is the irradiation method. Experiments on fission track detection were performed. The experiment, including the neutron flux calculation method, has also been investigated using the Monte Carlo code MCNP. The analysis shows that the values of neutron flux obtained by experiment are in fairly good agreement with the results obtained by MCNP. Thus, this method may be able to predict the absolute value of the neutron flux in the LWSCR and other similar reactors.
Flux schemes based finite volume method for internal transonic flow with condensation
Czech Academy of Sciences Publication Activity Database
Halama, Jan; Benkhaldoun, F.; Fořt, J.
2011-01-01
Roč. 65, č. 8 (2011), s. 953-968 ISSN 0271-2091 Institutional research plan: CEZ:AV0Z20760514 Keywords : VFFC flux * SRNH flux * two-phase homogeneous flow * fractional step method * condensation Subject RIV: BK - Fluid Dynamics Impact factor: 1.176, year: 2011
Methods to assess high-resolution subsurface gas concentrations and gas fluxes in wetland ecosystems
DEFF Research Database (Denmark)
Elberling, Bo; Kühl, Michael; Glud, Ronnie Nøhr
2013-01-01
The need for measurements of soil gas concentrations and surface fluxes of greenhouse gases at high temporal and spatial resolution in wetland ecosystem has lead to the introduction of several new analytical techniques and methods. In addition to the automated flux chamber methodology for high-re...
Energy Technology Data Exchange (ETDEWEB)
Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)
2015-01-15
Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.
Jiang, Hongzhen; Liu, Xu; Liu, Yong; Li, Dong; Chen, Zhu; Zheng, Fanglan; Yu, Deqiang
2017-10-01
An effective approach is proposed for reconstructing on-axis lensless Fourier transform digital holograms by using the screen division method. Firstly, the on-axis Fourier transform digital hologram is divided into sub-holograms. Then the reconstruction result of every sub-hologram is obtained, according to the position of the corresponding sub-hologram in the hologram reconstruction plane, with a Fourier transform operation. Finally, the reconstruction image of the on-axis Fourier transform digital hologram is acquired by superposition of the reconstruction results of the sub-holograms. Compared with the traditional reconstruction method based on phase-shifting technology, in which multiple digital holograms must be recorded to obtain the reconstruction image, this method obtains the reconstruction image from only one digital hologram and therefore greatly simplifies the recording and reconstruction process of on-axis lensless Fourier transform digital holography. The effectiveness of the proposed method is demonstrated by the experimental results, and it has potential applications in the holographic measurement and display fields.
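In a lensless Fourier transform geometry the reconstruction is a single Fourier transform of the hologram, and the transform is linear. The superposition at the heart of the screen division method can therefore be sketched in one dimension: each sub-hologram, zero-padded at its original position, is reconstructed separately, and the partial reconstructions sum to the full one. The hologram here is a random stand-in, and the positional bookkeeping of the actual method is omitted:

```python
import numpy as np

rng = np.random.default_rng(3)
hologram = rng.standard_normal(256)

full_rec = np.fft.fft(hologram)                 # whole-hologram reconstruction

partial = np.zeros(256, dtype=complex)
for i in range(4):                              # four screen divisions
    sub = np.zeros(256)
    sub[i * 64:(i + 1) * 64] = hologram[i * 64:(i + 1) * 64]
    partial += np.fft.fft(sub)                  # reconstruct each sub-hologram
```

The practical gain of the division is not this identity itself but that each sub-hologram can be transformed and placed in the reconstruction plane independently.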
Nogrette, F.; Heurteau, D.; Chang, R.; Bouton, Q.; Westbrook, C. I.; Sellem, R.; Clément, D.
2015-11-01
We report on the development of a novel FPGA-based time-to-digital converter and its implementation in a detection chain that records the coordinates of single particles along three dimensions. The detector is composed of micro-channel plates mounted on top of a cross delay line and connected to fast electronics. We demonstrate continuous recording of the timing signals from the cross delay line at rates up to 4.1 × 10⁶ s⁻¹ and three-dimensional reconstruction of the coordinates up to 3.2 × 10⁶ particles per second. From the imaging of a calibrated structure we measure the in-plane resolution of the detector to be 140(20) μm at a flux of 3 × 10⁵ particles per second. In addition, we analyze a method to estimate the resolution without placing any structure under vacuum, a significant practical improvement. While we use UV photons here, the results of this work apply to the detection of other kinds of particles.
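A delay line encodes the hit coordinate in arrival times: the position along the wire is proportional to the difference of the signal propagation times to the two ends, and the hit time to their mean. A minimal decoding sketch for one axis; the propagation speed and wire length below are assumed round numbers, not the instrument's values:

```python
# Illustrative delay-line parameters (assumed, not from the paper).
V = 1.0e6     # signal propagation speed along the wire, mm/s
L = 80.0      # delay-line length, mm

def decode_hit(t1, t2):
    """Position (mm from wire centre) and hit time from the two end-arrival
    times of one delay-line axis."""
    x = 0.5 * V * (t1 - t2)              # time difference -> position
    t = 0.5 * (t1 + t2) - 0.5 * L / V    # mean minus fixed propagation delay
    return x, t

# A particle landing 10 mm from the centre at t = 0 produces end signals
# delayed by the distances to the two wire ends.
t1 = (L / 2 + 10.0) / V
t2 = (L / 2 - 10.0) / V
x, t = decode_hit(t1, t2)
```

A cross delay line repeats this for two orthogonal axes, and the third coordinate comes from the absolute arrival time, which is where the time-to-digital converter's continuous recording matters.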
Reconstruction method for data protection in telemedicine systems
Buldakova, T. I.; Suyatinov, S. I.
2015-03-01
In this report, an approach to protecting transmitted data by creating paired symmetric keys for the sensor and the receiver is proposed. Since biosignals are unique to each person, suitable processing of them yields the information needed to create cryptographic keys. The processing is based on reconstructing a mathematical model that generates time series diagnostically equivalent to the original biosignals. Information about the model is transmitted to the receiver, where the physiological time series are restored using the reconstructed model. Thus, the information about the structure and parameters of the biosystem model obtained during reconstruction can be used not only for diagnostics but also for protecting transmitted data in telemedicine systems.
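One way such shared keys could be realized is sketched below under loud assumptions: the abstract does not specify a key-derivation function, so the quantization step and the SHA-256 hash are hypothetical choices that merely illustrate how matched model parameters on both ends can yield identical symmetric keys:

```python
import hashlib
import numpy as np

def model_key(model_params, decimals=4):
    """Hash quantized biosystem-model parameters into a symmetric key.
    Rounding absorbs small estimation differences, so a sensor and a
    receiver that reconstruct the same model obtain the same key."""
    quantized = np.round(np.asarray(model_params, dtype=float), decimals)
    return hashlib.sha256(quantized.tobytes()).hexdigest()
```

Two parties whose parameter estimates agree to within the quantization step derive byte-identical keys; materially different parameters give different keys.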
The calculation of neutron flux using Monte Carlo method
Günay, Mehtap; Bardakçı, Hilal
2017-09-01
In this study, a hybrid reactor system was designed using 99-95% Li₂₀Sn₈₀ + 1-5% RG-Pu, 99-95% Li₂₀Sn₈₀ + 1-5% RG-PuF₄, and 99-95% Li₂₀Sn₈₀ + 1-5% RG-PuO₂ fluids, the ENDF/B-VII.0 evaluated nuclear data library, and the structural material 9Cr2WVTa. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of a fusion-fission hybrid reactor system. The neutron flux was calculated as a function of mixture composition, radial position and energy spectrum in the designed hybrid reactor system for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using MCNPX-2.7.0, the most recent version of the Monte Carlo code.
Energy Technology Data Exchange (ETDEWEB)
Stefanicki, G; Geissbuehler, P; Siegwolf, R [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
The eddy covariance technique allows measurement of different components of turbulent air fluxes, including the flow of water vapour. Sap flux measurements determine the water flow in tree stems directly. We compared the water flux just above the crowns of trees in a forest, measured by the eddy covariance technique, with the water flux obtained by the xylem sap flux method. These two completely different approaches showed good qualitative correspondence, with a correlation coefficient of 0.8. With an estimate of the crown diameter of the measured tree, we also find very good quantitative agreement. (author) 3 figs., 5 refs.
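The covariance at the heart of the eddy technique is just the mean product of the fluctuating parts of the vertical wind speed and the humidity (Reynolds decomposition). A minimal sketch of that computation (variable names are illustrative):

```python
import numpy as np

def eddy_covariance_flux(w, q):
    """Turbulent flux as the covariance of fluctuations: subtract the
    means (Reynolds decomposition) and average the product w'q'."""
    wp = w - np.mean(w)   # fluctuating vertical wind speed w'
    qp = q - np.mean(q)   # fluctuating humidity q'
    return float(np.mean(wp * qp))
```

The same two time series can then be compared against a sap-flux record with an ordinary correlation coefficient, as the abstract describes.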
Tabular method of critical heat flux description in square packing rod bundles
International Nuclear Information System (INIS)
Bobkov, V.P.; Smogalev, I.P.
2003-01-01
Developments of the tabular method for the description and calculation of critical heat fluxes in square-lattice rod bundles are presented. The tabular method, based on the basic table of critical heat fluxes for triangular fuel assemblies, demonstrates good results for triangular rod bundles. To apply the tabular method to square-lattice rod bundles, correction functions reflecting the specific geometry were found. Comparisons of the calculated critical heat fluxes with experimental values are presented; good agreement between calculations and experiments is noted over the whole parameter range. [ru]
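Structurally, the tabular method reduces to interpolation in a base table followed by a geometry correction. The sketch below uses entirely hypothetical table values and correction factor; only the structure (bilinear lookup for the triangular-lattice base table, times a square-lattice correction) reflects the described approach:

```python
import numpy as np

# Hypothetical base CHF table for a triangular lattice: rows indexed by
# pressure (MPa), columns by mass flux (kg m^-2 s^-1); values in MW m^-2.
pressures = np.array([7.0, 10.0, 14.0])
mass_fluxes = np.array([1000.0, 2000.0, 3000.0])
chf_table = np.array([[3.2, 2.8, 2.4],
                      [2.9, 2.5, 2.1],
                      [2.4, 2.0, 1.7]])

def chf_square_lattice(p, g, k_geom=0.95):
    """Bilinear lookup in the triangular-lattice base table, then a
    geometry correction factor k_geom for the square lattice."""
    # interpolate along mass flux at each tabulated pressure...
    col = np.array([np.interp(g, mass_fluxes, row) for row in chf_table])
    # ...then along pressure, and apply the geometry correction
    return k_geom * np.interp(p, pressures, col)
```

In the real method the base table and the correction functions are fitted to experiment; here they are placeholders.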
International Nuclear Information System (INIS)
Kim, Yeung Chan
2016-01-01
A study on the measurement of critical heat flux using the transient inverse heat conduction method in spray cooling was performed. The inverse heat conduction method estimates the surface heat flux or temperature from a measured interior temperature history. The effects of the temperature-measurement time interval and of the measurement location on the measured critical heat flux were investigated. The following results were obtained. The estimated critical heat flux decreased as the time interval of temperature measurement increased, while the effect of the measurement location on the critical heat flux was not significant. It was also found from the experimental results that the critical superheat increased as the measurement location of the thermocouple approached the heat transfer surface.
Quantitative method for measuring heat flux emitted from a cryogenic object
Duncan, R.V.
1993-03-16
The present invention is a quantitative method for measuring the total heat flux, and of deriving the total power dissipation, of a heat-fluxing object which includes the steps of placing an electrical noise-emitting heat-fluxing object in a liquid helium bath and measuring the superfluid transition temperature of the bath. The temperature of the liquid helium bath is thereafter reduced until some measurable parameter, such as the electrical noise, exhibited by the heat-fluxing object or a temperature-dependent resistive thin film in intimate contact with the heat-fluxing object, becomes greatly reduced. The temperature of the liquid helium bath is measured at this point. The difference between the superfluid transition temperature of the liquid helium bath surrounding the heat-fluxing object, and the temperature of the liquid helium bath when the electrical noise emitted by the heat-fluxing object becomes greatly reduced, is determined. The total heat flux from the heat-fluxing object is determined as a function of this difference between these temperatures. In certain applications, the technique can be used to optimize thermal design parameters of cryogenic electronics, for example, Josephson junction and infrared sensing devices.
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR
International Nuclear Information System (INIS)
Kurosawa, M.
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulties with fine geometrical modelling and demands huge computer resources. On the other hand, the MCNP code enables calculation of the neutron flux with a detailed geometry model, but requires very long sampling times to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface, and the MCNP code calculates the neutron spectrum at the points of interest using that flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for ⁵⁴Mn and ⁶⁰Co, and radioactivity calculations based on the neutron flux obtained with this method were compared with the measured data. (authors)
Theoretical simulation of the dual-heat-flux method in deep body temperature measurements.
Huang, Ming; Chen, Wenxi
2010-01-01
Deep body temperature reveals individual physiological states and is important in patient monitoring and chronobiological studies. An innovative dual-heat-flux method has been shown experimentally to be competitive with the conventional zero-heat-flow method in terms of measurement accuracy and step response to changes in the deep temperature. We have used a finite element method to model and simulate the dynamic behaviour of a dual-heat-flux probe in deep body temperature measurement, to validate the fundamental principles of the dual-heat-flux method theoretically, and to acquire a detailed quantitative description of the thermal profile of the probe. The simulation results show that the estimated deep body temperature is influenced by the ambient temperature (linearly, at a maximum rate of 0.03 °C/°C) and by the blood perfusion rate. The depth in the skin and subcutaneous tissue layer to which the estimated temperature corresponds is consistent when using the dual-heat-flux probe. Insights into improving the performance of the dual-heat-flux method are discussed for further studies of dual-heat-flux probes, taking structural and geometric considerations into account.
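The core of the dual-heat-flux principle can be written in a few lines: two probe channels with different insulation see the same deep temperature through (assumed) identical tissue resistance, so the unknown resistance can be eliminated between the two channel equations. A sketch under that textbook assumption (variable names are illustrative, and real probes add calibration constants):

```python
def deep_body_temperature(t1, q1, t2, q2):
    """Dual-heat-flux estimate: each channel i measures skin temperature
    ti and heat flux qi, and Tb = ti + qi*Rt for the same deep
    temperature Tb and tissue resistance Rt. Eliminating Rt gives
        Rt = (t1 - t2) / (q2 - q1)
        Tb = t1 + q1 * (t1 - t2) / (q2 - q1)
    """
    return t1 + q1 * (t1 - t2) / (q2 - q1)
```

The two channels must produce different fluxes (q1 != q2) for the elimination to be well-posed, which is why the probe uses two different insulation paths.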
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
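As a toy illustration of recovering a source from window fluxes (not the paper's matched-asymptotics formulas), assume the flux captured by each window decays like the reciprocal of its distance to the source; normalizing the fluxes removes the unknown source strength, and a grid search over candidate positions recovers the source:

```python
import numpy as np

def locate_source(windows, fluxes, grid):
    """Pick the candidate source position whose predicted flux ratios
    (toy 1/distance model) best match the measured flux ratios; the
    normalization eliminates the unknown source strength."""
    windows = np.asarray(windows, dtype=float)
    meas = np.asarray(fluxes, dtype=float)
    meas = meas / meas.sum()
    best, best_err = None, np.inf
    for cand in grid:
        d = np.linalg.norm(windows - np.asarray(cand, dtype=float), axis=1)
        pred = 1.0 / d
        pred = pred / pred.sum()
        err = float(np.sum((pred - meas) ** 2))
        if err < best_err:
            best, best_err = cand, err
    return np.asarray(best, dtype=float)
```

The paper's actual reconstruction uses asymptotic flux formulas for small absorbing windows; the 1/distance model here only illustrates the inversion logic.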
Wheeler, Mary; Xue, Guangri; Yotov, Ivan
2013-01-01
We study the numerical approximation on irregular domains with general grids of the system of poroelasticity, which describes fluid flow in deformable porous media. The flow equation is discretized by a multipoint flux mixed finite element method
Proposal for a new method of reactor neutron flux distribution determination
Energy Technology Data Exchange (ETDEWEB)
Popic, V R [Institute of nuclear sciences Boris Kidric, Vinca, Beograd (Serbia and Montenegro)
1964-01-15
A method, based on the measurements of the activity produced in a medium flowing with variable velocity through a reactor, for the determination of the neutron flux distribution inside a reactor is considered theoretically (author)
Simple method of modelling of digital holograms registering and their optical reconstruction
International Nuclear Information System (INIS)
Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G
2016-01-01
A technique for modelling the recording of digital holograms and the optical reconstruction of images from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor, and the spatial light modulator used to display the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted. (paper)
A Method for Interactive 3D Reconstruction of Piecewise Planar Objects from Single Images
Sturm , Peter; Maybank , Steve
1999-01-01
We present an approach for 3D reconstruction of objects from a single image. Obviously, constraints on the 3D structure are needed to perform this task. Our approach is based on user-provided coplanarity, perpendicularity and parallelism constraints. These are used to calibrate the image and perform 3D reconstruction. The method is described in detail and results are provided.
Skin sparing mastectomy: Technique and suggested methods of reconstruction
Directory of Open Access Journals (Sweden)
Ahmed M. Farahat
2014-09-01
Conclusions: Skin-sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and optimal cosmetic outcome through preservation of the skin envelope of the breast whenever indicated. Our patients can benefit from safe surgery and a good cosmetic outcome achieved by applying different reconstructive techniques.
Deep Learning Methods for Particle Reconstruction in the HGCal
Arzi, Ofir
2017-01-01
The High Granularity end-cap Calorimeter is part of the phase-2 CMS upgrade (see Figure 1) [1]. Its goal is to provide measurements with high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of deep neural networks for the tasks of particle and trajectory reconstruction, identification and energy estimation, carried out during my participation in the CERN Summer Student Programme.
Heat flux estimation in an infrared experimental furnace using an inverse method
International Nuclear Information System (INIS)
Le Bideau, P.; Ploteau, J.P.; Glouannec, P.
2009-01-01
Infrared emitters are widely used in industrial furnaces for thermal treatment. In these processes, knowledge of the incident heat flux on the surface of the product is a primary step in optimising the control of the emitters and in scheduling maintenance. For these reasons, it is necessary to develop autonomous flux meters that can meet these requirements. These sensors must give an in-line distribution of the infrared irradiation in the tunnel furnace and must be able to measure high heat fluxes in severe thermal environments. In this paper we present a method for in-line assessment by solving an inverse heat conduction problem. A metallic mass is instrumented with thermocouples, and an inverse method allows the incident heat flux to be estimated. In the first part, attention is focused on a new design tool, a numerical code, for evaluating potential options during sensor design. In the second part we present the realization and testing of this 'indirect' flux meter and its associated inverse problem. 'Direct' detectors based on thermoelectric devices are compared with the new flux meter under the same conditions in the same furnace. The results prove that this technique is reliable and appropriate for high-temperature environments. The technique can be applied to furnaces where the heat flux is inaccessible to 'direct' measurement.
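A full inverse conduction solver is beyond a short sketch, but the zeroth-order idea behind such an 'indirect' flux meter can be shown with a lumped-capacitance estimate: the absorbed flux follows from the instrumented mass's temperature history. All parameter names below are illustrative, and the optional loss term is a hypothetical simplification of the real inverse problem:

```python
import numpy as np

def incident_flux_lumped(times, temps, m, c, area, h=0.0, t_amb=20.0):
    """Lumped-capacitance flux estimate for a metallic mass of mass m,
    specific heat c and exposed area:
        q(t) = (m*c/area) * dT/dt + h*(T - t_amb)
    where the second term is an optional convective-loss correction."""
    times = np.asarray(times, dtype=float)
    temps = np.asarray(temps, dtype=float)
    return (m * c / area) * np.gradient(temps, times) + h * (temps - t_amb)
```

The paper's inverse method goes further by resolving conduction through the mass, which this lumped sketch deliberately ignores.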
An automated 3D reconstruction method of UAV images
Liu, Jun; Wang, He; Liu, Xiaoyang; Li, Feng; Sun, Guangtong; Song, Ping
2015-10-01
In this paper a novel, fully automated 3D reconstruction approach based on images from a low-altitude unmanned aerial vehicle (UAV) system is presented, which requires neither prior camera calibration nor any other external knowledge. Dense 3D point clouds are generated by integrating orderly feature extraction, image matching, structure from motion (SfM) and multi-view stereo (MVS) algorithms, overcoming many of the cost and time limitations of rigorous photogrammetry techniques. An image topology analysis strategy is introduced to speed up large-scene reconstruction by taking advantage of the flight-control data acquired by the UAV. The image topology map can significantly reduce the running time of feature matching by limiting the combinations of images to be matched. A high-resolution digital surface model of the study area is produced from the UAV point clouds by constructing a triangulated irregular network. Experimental results show that the proposed approach is robust and feasible for automatic 3D reconstruction from low-altitude UAV images and has great potential for the acquisition of spatial information in large-scale mapping, being especially suitable for rapid response and precise modelling in disaster emergencies.
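The image-topology idea, using flight-control positions to limit which image pairs are fed to feature matching, can be sketched as a simple distance filter; the threshold and the position format are assumptions, not details from the paper:

```python
import numpy as np
from itertools import combinations

def candidate_pairs(cam_positions, max_dist):
    """Image-topology pruning sketch: only images whose flight-control
    (GPS) positions lie within max_dist of each other are candidate
    matches, cutting feature matching from all O(n^2) pairs down to
    the pairs that can actually overlap."""
    pos = np.asarray(cam_positions, dtype=float)
    return [(i, j) for i, j in combinations(range(len(pos)), 2)
            if np.linalg.norm(pos[i] - pos[j]) <= max_dist]
```

Only the surviving pairs would then be passed to the (much more expensive) feature-matching stage.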
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
A shape-based quality evaluation and reconstruction method for electrical impedance tomography.
Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen
2015-06-01
Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.
International Nuclear Information System (INIS)
Yuan, Lan Qin; Yang, Jun; Harrison, Noel
2014-01-01
Fuel irradiation experiments to study fuel behaviors have been performed in the experimental loops of the National Research Universal (NRU) Reactor at Atomic Energy of Canada Limited (AECL) Chalk River Laboratories (CRL) in support of the development of new fuel technologies. Before initiating a fuel irradiation experiment, the experimental proposal must be approved to ensure that the test fuel strings put into the NRU loops meet safety margin requirements in critical heat flux (CHF). The fuel strings in irradiation experiments can have varying degrees of fuel enrichment and burnup, resulting in large variations in radial heat flux distribution (RFD). CHF experiments performed in Freon flow at CRL for full-scale bundle strings with a number of RFDs showed a strong effect of RFD on CHF. A prediction method was derived based on experimental CHF data to account for the RFD effect on CHF. It provides good CHF predictions for various RFDs as compared to the data. However, the range of the tested RFDs in the CHF experiments is not as wide as that required in the fuel irradiation experiments. The applicability of the prediction method needs to be examined for the RFDs beyond the range tested by the CHF experiments. The Canadian subchannel code ASSERT-PV was employed to simulate the CHF behavior for RFDs that would be encountered in fuel irradiation experiments. The CHF predictions using the derived method were compared with the ASSERT simulations. It was observed that the CHF predictions agree well with the ASSERT simulations in terms of CHF, confirming the applicability of the prediction method in fuel irradiation experiments. (author)
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernels calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as scaling factor and utilizing it into the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Directory of Open Access Journals (Sweden)
Songjun Zeng
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted functions (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedral symmetrical macromolecules, the heat shock protein DegP24 and red-cell L-ferritin, were used as examples of reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then noise at different levels (signal-to-noise ratios of 0.1, 0.5, and 0.8) was added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even at the highest noise level. These results show that the OSAF method is a feasible and efficient approach to reconstructing macromolecular structures and is able to suppress the influence of noise.
International Nuclear Information System (INIS)
Khaled, M; Garnier, B; Peerhossaini, H; Harambat, F
2010-01-01
A new experimental technique is presented that allows simultaneous measurement of convective and radiative heat flux in the underhood. The goal is to devise an easily implemented and accurate experimental method for application in the vehicle underhood compartment. The new method is based on a technique for heat-flux measurement developed by the authors (Heat flow (flux) sensors for measurement of convection, conduction and radiation heat flow 27036-2, © Rhopoint Components Ltd, Hurst Green, Oxted, RH8 9AX, UK) that uses several thermocouples embedded in the thickness of a thermally resistive layer (foil heat-flux sensor). The method proposed here uses a pair of these thermocouples with different radiative properties. Measurements validating this novel technique were carried out on a flat plate held at a constant temperature in both natural- and forced-convection flow regimes. The test flat plate was instrumented with this new technique and also with a different, intrusive but very accurate technique used here as a reference (Bardon J P and Jarny Y 1994 Procédé et dispositif de mesure transitoire de température et flux surfacique Brevet n°94.011996, 22 February). Discrepancies between the measurements of the two techniques are less than 10% for both convective and radiative heat flux. Error identification and a sensitivity analysis of the new method are also presented.
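The two-thermocouple trick can be reduced to a 2×2 linear system: both sensors see the same convective flux but absorb radiation in proportion to their different emissivities. A minimal, idealized sketch (it ignores any temperature difference between the sensors and all spectral effects):

```python
def split_conv_rad(q1, q2, eps1, eps2):
    """Two co-located flux sensors with emissivities eps1 != eps2:
        q1 = q_conv + eps1 * q_rad
        q2 = q_conv + eps2 * q_rad
    Solving the pair separates the convective and radiative parts."""
    q_rad = (q1 - q2) / (eps1 - eps2)
    q_conv = q1 - eps1 * q_rad
    return q_conv, q_rad
```

The larger the emissivity contrast between the pair, the better conditioned the separation, which is why the two thermocouples are given deliberately different radiative coatings.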
The feasibility of images reconstructed with the method of sieves
International Nuclear Information System (INIS)
Veklerov, E.; Llacer, J.
1990-01-01
The concept of sieves has been applied with the maximum likelihood estimator (MLE) to image reconstruction. While it makes it possible to recover smooth images consistent with the data, the degree of smoothness provided by it is arbitrary. It is shown that the concept of feasibility is able to resolve this arbitrariness. By varying the values of parameters determining the degree of smoothness, one can generate images on both sides of the feasibility region, as well as within the region. Feasible images recovered by using different sieve parameters are compared with feasible results of other procedures. One- and two-dimensional examples using both simulated and real data sets are considered
International Nuclear Information System (INIS)
Song, Xizi; Xu, Yanbin; Dong, Feng
2017-01-01
Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, is adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, relating the power density change to the change in conductivity, the Jacobian matrix is employed to linearize the nonlinear problem. The analytic formulation of this Jacobian matrix is derived and its effectiveness is also verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Also, multiple power density distributions are combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
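The final reconstruction step described above, linear back-projection, amounts to smearing each measurement change back through its row of the sensitivity (Jacobian) matrix. A minimal 1-D sketch, with a hypothetical boxcar sensitivity matrix standing in for the UMEIT Jacobian:

```python
import numpy as np

n_pix, width = 60, 8
n_meas = n_pix - width + 1                  # one "measurement" per boxcar window
J = np.zeros((n_meas, n_pix))               # hypothetical sensitivity (Jacobian)
for i in range(n_meas):
    J[i, i:i + width] = 1.0                 # each row senses a contiguous window

x_true = np.zeros(n_pix)
x_true[30] = 1.0                            # conductivity perturbation (impulse)
dy = J @ x_true                             # linearized data change: dy = J dx

# Linear back-projection: smear each measurement back along its sensitivity
# row, normalized by coverage so a uniform perturbation maps to itself.
x_lbp = (J.T @ dy) / (J.T @ np.ones(n_meas))
```

LBP localizes but blurs the impulse; the paper's point is that the power-density interior data make the Jacobian far better conditioned than in plain ERT.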
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time-frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves the kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
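The kernel idea (embed the prior in the forward model as x = Kα, then run standard ML-EM on the coefficients α) can be sketched in a few lines. The 1-D "system" and the Gaussian kernel built from a composite-style prior image below are illustrative assumptions, not the paper's HYPR kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                      # 1-D "image" size (toy stand-in for PET)
P = np.eye(n) + 0.1 * np.roll(np.eye(n), 1, axis=1)   # toy nonnegative system matrix
x_true = np.ones(n)
x_true[20:30] = 4.0                         # hot region
y = rng.poisson(20 * P @ x_true) / 20.0     # noisy "sinogram"

# Kernel matrix from a composite-style prior image: nearby pixels with similar
# prior values get larger weights (Gaussian kernel restricted to a neighborhood).
prior = x_true + 0.05 * rng.normal(size=n)
idx = np.arange(n)
K = np.exp(-(prior[:, None] - prior[None, :])**2 / 0.5)
K *= np.abs(idx[:, None] - idx[None, :]) <= 3
K /= K.sum(axis=1, keepdims=True)

# Kernelized ML-EM: update the coefficients alpha; the image is x = K alpha.
alpha = np.ones(n)
sens = K.T @ P.T @ np.ones(n)               # sensitivity term
for _ in range(50):
    ybar = P @ (K @ alpha)
    alpha *= (K.T @ P.T @ (y / np.maximum(ybar, 1e-12))) / sens
x_rec = K @ alpha
```

Because the prior enters only through K, any kinetic model can still be fitted to the reconstructed series afterwards, which is the workflow the abstract describes.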
Development of High Flux Isotope Reactor (HFIR) subcriticality monitoring methods
International Nuclear Information System (INIS)
Rothrock, R.B.
1991-01-01
Use of subcritical source multiplication measurements during refueling has been investigated as a possible replacement for out-of-reactor subcriticality measurements formerly made on fresh HFIR fuel elements at the ORNL Critical Experiment Facility. These measurements have been used in the past for preparation of estimated critical rod positions, and as a partial verification, prior to reactor startup, that the requirements for operational shutdown margin would be met. Results of subcritical count rate data collection during recent HFIR refuelings and supporting calculations are described illustrating the intended measurement method and its expected uncertainty. These results are compared to historical uses of the out-of-reactor core measurements and their accuracy requirements, and a planned in-reactor test is described which will establish the sensitivity of the method and calibrate it for future routine use during HFIR refueling. 2 refs., 1 fig., 2 tabs
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection, drug development and delivery, etc. To image those parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noises together and are statistically more efficient. The direct reconstruction methods in dynamic FMT have attracted a lot of attention recently. However, the coupling of tomographic image reconstruction and the nonlinearity of kinetic parameter estimation due to the compartment modeling has imposed a huge computational burden on the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step for a combined dynamic PET and FMT imaging in the future.
MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions
Novosad, Philip; Reader, Andrew J.
2016-06-01
Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral
Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.
2017-12-01
Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach, that related
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field and then an object wave is produced, while a plane wave is used as a reference wave. For three wavelengths, the interference patterns of two coherent waves are recorded as digital holograms on an image sensor. Speckle fields are changed by moving a ground glass plate in an in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process of images reconstructed from multiple holograms, we use the Wiener estimation method for obtaining spectral transmittance curves in reconstructed images. The color reproducibility in this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
Directory of Open Access Journals (Sweden)
Jing Wang
2013-01-01
Full Text Available The image reconstruction problem for electrical impedance tomography (EIT) is mathematically a typical nonlinear ill-posed inverse problem. In this paper, a novel iteration regularization scheme based on the homotopy perturbation technique, namely, the homotopy perturbation inversion method, is applied to investigate the EIT image reconstruction problem. To verify its feasibility and effectiveness, simulations of image reconstruction have been performed considering different locations, sizes, and numbers of inclusions, as well as robustness to data noise. Numerical results indicate that this method can overcome the numerical instability and is robust to data noise in EIT image reconstruction. Moreover, compared with the classical Landweber iteration method, our approach improves the convergence rate. The results are promising.
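The classical Landweber iteration used as the baseline here is simply gradient descent on the data misfit, x_{k+1} = x_k + τ Jᵀ(b − Jx_k) with τ < 2/‖J‖²; its slow convergence is what the homotopy perturbation scheme aims to improve. A linear toy version (EIT itself is nonlinear, so this only conveys the flavor of the iteration):

```python
import numpy as np

rng = np.random.default_rng(2)
J = rng.normal(size=(30, 20))           # stand-in linearized sensitivity matrix
x_true = rng.normal(size=20)
b = J @ x_true                          # consistent "measurements"

tau = 1.0 / np.linalg.norm(J, 2)**2     # step below 2/||J||^2 ensures convergence
x = np.zeros(20)
for _ in range(5000):                   # classical Landweber iteration
    x = x + tau * J.T @ (b - J @ x)
```

Thousands of iterations for a tiny consistent system illustrate the slow convergence the abstract contrasts against.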
Cox, Christopher
Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large-scale three-dimensional problems with a high-order polynomial basis remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the
A Family of Multipoint Flux Mixed Finite Element Methods for Elliptic Problems on General Grids
Wheeler, Mary F.
2011-01-01
In this paper, we discuss a family of multipoint flux mixed finite element (MFMFE) methods on simplicial, quadrilateral, hexahedral, and triangular-prismatic grids. The MFMFE methods are locally conservative with continuous normal fluxes, since they are developed within a variational framework as mixed finite element methods with special approximating spaces and quadrature rules. The latter allows for local flux elimination giving a cell-centered system for the scalar variable. We study two versions of the method: with a symmetric quadrature rule on smooth grids and a non-symmetric quadrature rule on rough grids. Theoretical and numerical results demonstrate first order convergence for problems with full-tensor coefficients. Second order superconvergence is observed on smooth grids. © 2011 Published by Elsevier Ltd.
Improved vertical streambed flux estimation using multiple diurnal temperature methods in series
Irvine, Dylan J.; Briggs, Martin A.; Cartwright, Ian; Scruggs, Courtney; Lautz, Laura K.
2017-01-01
Analytical solutions that use diurnal temperature signals to estimate vertical fluxes between groundwater and surface water based on either amplitude ratios (Ar) or phase shifts (Δϕ) produce results that rarely agree. Analytical solutions that simultaneously utilize Ar and Δϕ within a single solution have more recently been derived, decreasing uncertainty in flux estimates in some applications. Benefits of combined (ArΔϕ) methods also include that thermal diffusivity and sensor spacing can be calculated. However, poor identification of either Ar or Δϕ from raw temperature signals can lead to erratic parameter estimates from ArΔϕ methods. An add-on program for VFLUX 2 is presented to address this issue. Using thermal diffusivity selected from an ArΔϕ method during a reliable time period, fluxes are recalculated using an Ar method. This approach maximizes the benefits of the Ar and ArΔϕ methods. Additionally, sensor spacing calculations can be used to identify periods with unreliable flux estimates, or to assess streambed scour. Using synthetic and field examples, the use of these solutions in series was particularly useful for gaining conditions where fluxes exceeded 1 m/d.
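The amplitude-ratio (Ar) step can be sketched as a forward model plus root finding. The functional form below follows the Hatch-type solution commonly implemented in VFLUX, but the exact constants and sign conventions are an assumption here, and the parameter values are invented:

```python
import numpy as np
from scipy.optimize import brentq

def amp_ratio(v, dz=0.05, kappa=7e-7, P=86400.0):
    """Diurnal amplitude ratio between two sensors a distance dz [m] apart.

    Hatch-type analytical solution; v is the vertical thermal-front velocity
    [m/s] (positive downward), kappa the effective thermal diffusivity [m^2/s],
    P the signal period [s]. Constants/signs here are an illustrative assumption.
    """
    alpha = np.sqrt(v**4 + (8 * np.pi * kappa / P)**2)
    return np.exp(dz / (2 * kappa) * (v - np.sqrt((alpha + v**2) / 2.0)))

def invert_Ar(Ar, **kw):
    """Recover the velocity from an observed amplitude ratio by root finding."""
    return brentq(lambda v: amp_ratio(v, **kw) - Ar, -5e-5, 5e-5)

v_true = 1.2e-5                 # about 1 m/day downward
Ar_obs = amp_ratio(v_true)      # "observed" ratio from the forward model
v_est = invert_Ar(Ar_obs)
```

The combined ArΔϕ solutions add the phase shift to this picture, which is what lets thermal diffusivity and sensor spacing be solved for as well.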
International Nuclear Information System (INIS)
Chen, Zhenmao; Aoto, Kazumi; Kato, Syoichi
1999-07-01
In this report, reconstruction of magnetic charges induced by mechanical damage in a test piece of SUS304 stainless steel is performed as part of efforts to establish a passive nondestructive testing method based on the inspection of the leakage magnetic field. The approach for solving this typical ill-posed inverse problem is a least-squares one. Concerning the ill-posedness of the system of equations, an iteration algorithm is adopted in which the designation of the initial profile, the weight coefficients and the total number of iterations are taken as means of regularization. From examples using simulated input data, it is verified that the approach gives good reconstruction results for signals with a relatively high S/N ratio. To improve the robustness of the proposed method, a Galerkin procedure with base functions chosen as Daubechies wavelets is also introduced for discretizing the governing equation. By comparing the reconstruction results of the least-squares method with those using the wavelet discretization, it is found that the wavelet-based approach is more feasible for the inversion of noise-polluted signals. Reconstruction of 1-D and 2-D magnetic charges with the least-squares strategy and reconstruction of a 1-D problem with the wavelet-based method are carried out from both simulated and measured magnetic field signals as validation of the proposed inversion strategy. (author)
Neutron flux measurement with 6Li and 7Li dual glass scintillators by γ compensation method
International Nuclear Information System (INIS)
Ji Changsong; Zhang Shulan; Zhang Shuheng
1996-01-01
Based on the characteristics of 6Li glass scintillator, which is sensitive to both neutron and gamma rays, and 7Li glass scintillator, which is sensitive to gamma rays only, a new method of detecting weak neutron flux under the interference of strong gamma radiation has been investigated by means of the 6Li-7Li paired glass scintillator gamma compensation method. With this method, neutron flux has been measured with an error of about 1% even when the gamma ray interference is up to 18.7%
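The compensation itself reduces to a subtraction once the relative gamma sensitivity of the two glasses is calibrated; all numbers below are invented for illustration:

```python
# Gamma compensation with a 6Li/7Li scintillator pair. All values are
# hypothetical; k would come from a gamma-only calibration exposure.
c6_total = 1250.0    # 6Li glass count rate: neutrons + gammas  [counts/s]
c7_gamma = 240.0     # 7Li glass count rate: gammas only        [counts/s]
k = 0.95             # relative gamma sensitivity of the 6Li vs 7Li glass

neutron_rate = c6_total - k * c7_gamma          # compensated neutron count rate
gamma_fraction = k * c7_gamma / c6_total        # interference level (cf. 18.7%)
```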
Neutron flux measurement with 6Li and 7Li dual glass scintillators by γ compensation method
International Nuclear Information System (INIS)
Ji Changsong; Zhang Shulan; Zhang Shuheng
1998-01-01
Based on the characteristics of 6Li glass scintillator, which is sensitive to both neutron and gamma rays, and 7Li glass scintillator, which is sensitive to gamma rays only, a new method of detecting weak neutron flux under the interference of strong gamma radiation has been investigated by means of the 6Li-7Li dual glass scintillator gamma compensation method. With this method, neutron flux has been measured with an error of about 1% even when the gamma ray interference is up to 18.7%
Energy Technology Data Exchange (ETDEWEB)
Schaaf, S.; Daemmgen, U.; Burkart, S. [Federal Agricultural Research Centre, Inst. of Agroecology, Braunschweig (Germany); Gruenhage, L. [Justus-Liebig-Univ., Inst. for Plant Ecology, Giessen (Germany)
2005-04-01
Vertical fluxes of water vapour and carbon dioxide obtained from gradient, eddy covariance (closed and open path systems) and chamber measurements above arable crops were compared with the directly measured energy balance and the harvested net biomass carbon. The gradient and chamber measurements were of the correct order of magnitude, whereas the closed path eddy covariance system showed unacceptably small fluxes. Correction methods based on power spectra analysis yielded increased fluxes. However, the energy balance could not be closed satisfactorily. The application of the open path system proved to be successful. The SVAT model PLATIN, which had been adapted to various arable crops, was able to depict the components of the energy balance adequately. Net carbon fluxes determined with the corrected closed path data sets, chamber, and SVAT model equal those of the harvested carbon. (orig.)
Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics
Xu, Kun
1998-01-01
A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and the accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on gas-kinetic theory is presented. The flux construction strategy may shed some light on the possible modification of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as on the development of new schemes for non-strictly hyperbolic systems.
International Nuclear Information System (INIS)
Huang, C.-H.; Wu, H.-H.
2006-01-01
In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) to estimate the unknown boundary heat flux from boundary temperature measurements. The results obtained for this inverse problem are justified through numerical experiments in which three different heat flux distributions are determined. Results show that the inverse solutions can always be obtained with arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of the previous study of this similar inverse problem, namely that (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, are avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study
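In a linearized setting, the CGM estimate of the boundary flux amounts to conjugate gradients on the normal equations, which indeed converges from an arbitrary initial guess. A toy sketch with a random sensitivity matrix standing in for the hyperbolic heat-conduction model:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(60, 30))          # toy sensitivity: boundary flux -> temperatures
q_true = np.sin(np.linspace(0, np.pi, 30))   # "unknown" heat flux distribution
T = A @ q_true                          # measured boundary temperatures (noise-free)

# Conjugate gradients on the normal equations (CGLS); any initial guess works.
q = np.zeros(30)
r = A.T @ (T - A @ q)                   # gradient-space residual
p = r.copy()
for _ in range(60):
    Ap = A @ p
    alpha = (r @ r) / (Ap @ Ap)
    q = q + alpha * p
    r_new = A.T @ (T - A @ q)
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```

With noisy data, the iteration count itself acts as the regularization parameter, which is why early stopping matters in inverse heat conduction.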
Effect of flux discontinuity on spatial approximations for discrete ordinates methods
International Nuclear Information System (INIS)
Duo, J.I.; Azmy, Y.Y.
2005-01-01
This work presents advances in error analysis of the spatial approximation of the discrete ordinates method for solving the neutron transport equation. Error norms for different non-collided flux problems over a two-dimensional pure absorber medium are evaluated using three numerical methods. The problems are characterized by the incoming flux boundary conditions to obtain solutions with different levels of differentiability. The three methods considered are the Diamond Difference (DD) method and the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The last two methods are employed in constant, linear and quadratic orders of spatial approximation. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value; then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that the level of differentiability of the exact solution profoundly affects the rate of convergence of the numerical methods' solutions. Furthermore, in the case of a discontinuous exact flux, the methods fail to converge in the maximum error norm, or in the pointwise sense, in accordance with previous local error analysis. (authors)
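The reported convergence behavior is quantified with the standard observed-order formula on successively halved meshes; a small sketch with hypothetical error values:

```python
import numpy as np

# Observed order of accuracy from errors on successively halved meshes:
#   p ≈ log(e_h / e_{h/2}) / log(2)
# A smooth exact flux lets p approach the design order of the scheme; for a
# discontinuous flux, convergence in the max norm stalls, as the study reports.
errors_smooth = np.array([1.6e-2, 4.1e-3, 1.03e-3])   # hypothetical L2-norm errors
p = np.log(errors_smooth[:-1] / errors_smooth[1:]) / np.log(2.0)
```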
An External Wire Frame Fixation Method of Skin Grafting for Burn Reconstruction.
Yoshino, Yukiko; Ueda, Hyakuzoh; Ono, Simpei; Ogawa, Rei
2017-06-28
The skin graft is a prevalent reconstructive method for burn injuries. We have been applying external wire frame fixation methods in combination with skin grafts since 1986 and have achieved better outcomes in the percentage of successful graft take. The overall purpose of this method is to further secure skin graft adherence to wound beds in hard-to-stabilize areas. There are also location-specific benefits to this technique, such as eliminating the need for tarsorrhaphy in the periorbital area, allowing immediate food intake after surgery in the perioral area, and permitting less invasive fixation in the digits. The purpose of this study was to clarify its benefits and applicable locations. We reviewed 22 postburn patients with skin graft reconstructions using the external wire frame method at our institution from December 2012 through September 2016. Details of the surgical technique and individual reports are also discussed. Of the 22 cases, 15 (68%) were split-thickness skin grafts and 7 (32%) were full-thickness skin grafts. Five cases (23%) involved periorbital reconstruction, 5 (23%) involved perioral reconstruction, 2 (9%) involved lower limb reconstruction, and 10 (45%) involved digital reconstruction. Complete (100%) survival of the skin graft was attained in all cases. No signs of complication were observed. Drawing on 30 years of combined experience, we summarize recommendations for successful graft survival, with an emphasis on the locations of application.
A modified sparse reconstruction method for three-dimensional synthetic aperture radar image
Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin
2018-03-01
There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses the hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in both reconstruction quality and reconstruction time.
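A compact sketch of the SL0 family shows where the paper's modification enters: the smooth l0 surrogate (here the tanh form) is minimized over a shrinking σ schedule, alternating gradient steps with projection back onto the data constraint. This sketch uses plain gradient steps rather than the paper's Newton direction, and the problem sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n, k = 20, 40, 3                        # measurements, signal length, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
s_true = np.zeros(n)
s_true[rng.choice(n, size=k, replace=False)] = 3.0 + rng.normal(size=k)
y = A @ s_true

proj = A.T @ np.linalg.inv(A @ A.T)        # helper for projecting onto {s : A s = y}
s = proj @ y                               # minimum-l2-norm feasible start
sigma, mu = 2 * np.abs(s).max(), 2.0
while sigma > 1e-4:                        # shrinking sigma schedule
    for _ in range(10):
        # tanh-based smooth-l0 surrogate: the gradient of sum tanh(s^2/(2 sigma^2))
        # is proportional to s * sech^2(s^2/(2 sigma^2))
        d = s / np.cosh(s**2 / (2 * sigma**2))**2
        s = s - mu * d                     # steepest-descent step (paper uses Newton)
        s = s - proj @ (A @ s - y)         # project back onto the data constraint
    sigma *= 0.7
```

With noiseless data and modest sparsity, the sparse signal is recovered nearly exactly; the Newton direction mainly reduces the number of iterations needed.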
Variability in CT lung-nodule volumetry: Effects of dose reduction and reconstruction methods.
Young, Stefano; Kim, Hyun J Grace; Ko, Moe Moe; Ko, War War; Flores, Carlos; McNitt-Gray, Michael F
2015-05-01
Measuring the size of nodules on chest CT is important for lung cancer staging and measuring therapy response. 3D volumetry has been proposed as a more robust alternative to 1D and 2D sizing methods. There have also been substantial advances in methods to reduce radiation dose in CT. The purpose of this work was to investigate the effect of dose reduction and reconstruction methods on variability in 3D lung-nodule volumetry. Reduced-dose CT scans were simulated by applying a noise-addition tool to the raw (sinogram) data from clinically indicated patient scans acquired on a multidetector-row CT scanner (Definition Flash, Siemens Healthcare). Scans were simulated at 25%, 10%, and 3% of the dose of their clinical protocol (CTDIvol of 20.9 mGy), corresponding to CTDIvol values of 5.2, 2.1, and 0.6 mGy. Simulated reduced-dose data were reconstructed with both conventional filtered backprojection (B45 kernel) and iterative reconstruction methods (SAFIRE: I44 strength 3 and I50 strength 3). Three lab technologist readers contoured "measurable" nodules in 33 patients under each of the different acquisition/reconstruction conditions in a blinded study design. Of the 33 measurable nodules, 17 were used to estimate repeatability with their clinical reference protocol, as well as interdose and inter-reconstruction-method reproducibilities. The authors compared the resulting distributions of proportional differences across dose and reconstruction methods by analyzing their means, standard deviations (SDs), and t-test and F-test results. The clinical-dose repeatability experiment yielded a mean proportional difference of 1.1% and SD of 5.5%. The interdose reproducibility experiments gave mean differences ranging from -5.6% to -1.7% and SDs ranging from 6.3% to 9.9%. The inter-reconstruction-method reproducibility experiments gave mean differences of 2.0% (I44 strength 3) and -0.3% (I50 strength 3), and SDs were identical at 7.3%. 
For the subset of repeatability cases, inter-reconstruction-method
On the kinematic reconstruction of deep inelastic scattering at HERA: the Σmethod
International Nuclear Information System (INIS)
Bassler, U.; Bernardi, G.
1994-12-01
We review and compare the reconstruction methods for the inclusive deep inelastic scattering variables used at HERA. We introduce a new prescription, the Sigma (Σ) method, which allows one to measure the structure function of the proton F2(x, Q2) in a large kinematic domain, and in particular in the low-x, low-Q2 region, with small systematic errors and small radiative corrections. A detailed comparison between the Σ method and the other methods is shown. Extensions of the Σ method are presented. The effect of QED radiation on the kinematic reconstruction and on the structure function measurement is discussed. (orig.)
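The defining formulas of the Σ method are short enough to state in code: with z along the proton beam, y_Σ = Σ/(Σ + E'(1 − cosθ)), Q²_Σ = E'²sin²θ/(1 − y_Σ), x_Σ = Q²_Σ/(s·y_Σ), where Σ is the summed E − p_z of the hadronic final state. The toy event below is built from exact, radiation-free kinematics, where the method reproduces the true variables:

```python
import numpy as np

E_e, E_p = 27.5, 820.0                      # HERA beam energies [GeV]
s = 4 * E_e * E_p                           # squared centre-of-mass energy

def sigma_method(E_prime, theta, Sig):
    """Sigma-method kinematics; theta is the scattered-electron polar angle
    w.r.t. the proton beam, Sig the summed E - p_z of the hadronic final state."""
    y = Sig / (Sig + E_prime * (1 - np.cos(theta)))
    Q2 = (E_prime * np.sin(theta))**2 / (1 - y)
    x = Q2 / (s * y)
    return x, y, Q2

# Build one toy event from chosen (x, Q2) with exact, radiation-free kinematics.
x_true, Q2_true = 0.01, 100.0
y_true = Q2_true / (s * x_true)
Epz = 2 * E_e * (1 - y_true)                # scattered-electron E - p_z
Eppz = Q2_true / (2 * E_e)                  # scattered-electron E + p_z (massless)
E_prime = 0.5 * (Epz + Eppz)
cos_t = (Eppz - Epz) / (Epz + Eppz)
Sig_had = 2 * E_e * y_true                  # from E - p_z conservation
x_rec, y_rec, Q2_rec = sigma_method(E_prime, np.arccos(cos_t), Sig_had)
```

The method's robustness to initial-state radiation comes from using the measured Σ + E'(1 − cosθ) in place of the nominal 2E_e, which this idealized event does not exercise.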
Accelerated gradient methods for total-variation-based CT image reconstruction
DEFF Research Database (Denmark)
Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian
2011-01-01
Total-variation-based reconstruction can in principle be computed by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements but exhibits slow convergence. The accelerated gradient methods presented here incorporate several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step size selection and nonmonotone line search, and use a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion.
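The BB step size that drives these methods is cheap to compute: τ_k = sᵀs / sᵀy with s = x_k − x_{k−1} and y = ∇f(x_k) − ∇f(x_{k−1}). A minimal illustration on an unregularized least-squares objective, without the nonmonotone line search safeguard the paper adds:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(40, 25))
b = A @ rng.normal(size=25)             # consistent least-squares problem

def grad(x):                            # gradient of f(x) = 0.5*||A x - b||^2
    return A.T @ (A @ x - b)

x = np.zeros(25)
g = grad(x)
tau = 1e-4                              # conservative first step
for _ in range(500):
    if np.linalg.norm(g) < 1e-10:
        break
    x_new = x - tau * g
    g_new = grad(x_new)
    s_, y_ = x_new - x, g_new - g
    tau = (s_ @ s_) / (s_ @ y_)         # Barzilai-Borwein step size
    x, g = x_new, g_new
```

BB iterates are nonmonotone in the objective, which is exactly why the methods pair the step with a nonmonotone line search rather than a strict descent condition.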
International Nuclear Information System (INIS)
Wasastjerna, F.; Lux, I.
1980-03-01
A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)
Multiobjective flux balancing using the NISE method for metabolic network analysis.
Oh, Young-Gyun; Lee, Dong-Yup; Lee, Sang Yup; Park, Sunwon
2009-01-01
Flux balance analysis (FBA) is well acknowledged as an analysis tool of metabolic networks in the framework of metabolic engineering. However, FBA has a limitation in solving multiobjective optimization problems that consider multiple conflicting objectives. In this study, we propose a novel multiobjective flux balance analysis method, which adapts the noninferior set estimation (NISE) method (Solanki et al., 1993) for multiobjective linear programming (MOLP) problems. The NISE method can generate an approximation of the Pareto curve for conflicting objectives without redundant iterations of single objective optimization. Furthermore, the flux distributions at each Pareto optimal solution can be obtained for understanding the internal flux changes in the metabolic network. The functionality of this approach is shown by applying it to a genome-scale in silico model of E. coli. Multiple objectives for poly(3-hydroxybutyrate) [P(3HB)] production are considered simultaneously, and relationships among them are identified. The Pareto curve for maximizing succinic acid production vs. maximizing biomass production is used for the in silico analysis of various combinatorial knockout strains. The proposed method accelerates strain improvement in metabolic engineering by reducing the computation time for obtaining the Pareto curve and the analysis time for the flux distribution at each Pareto optimal solution. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
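The NISE idea, sketched: start from the two single-objective optima, then for each adjacent pair of Pareto points re-optimize with weights normal to the connecting segment; a new point splits the segment, and the loop stops when no segment produces one. The two-variable LP below is a made-up stand-in for the genome-scale model:

```python
import numpy as np
from scipy.optimize import linprog

# Toy MOLP stand-in for two conflicting flux objectives:
# maximize f1 = x1 and f2 = x2  s.t.  x1 + 2*x2 <= 4,  2*x1 + x2 <= 4,  x >= 0.
A_ub = [[1, 2], [2, 1]]
b_ub = [4, 4]

def solve(w1, w2):
    """Maximize the weighted sum w1*f1 + w2*f2 (linprog minimizes, hence the signs)."""
    res = linprog(c=[-w1, -w2], A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2)
    return tuple(np.round(res.x, 9))

pareto = {solve(1, 0), solve(0, 1)}         # single-objective optima anchor the curve
changed = True
while changed:                               # NISE refinement loop
    changed = False
    pts = sorted(pareto)
    for (a1, a2), (b1, b2) in zip(pts, pts[1:]):
        w1, w2 = abs(b2 - a2), abs(b1 - a1)  # weights normal to the connecting segment
        p = solve(w1, w2)
        if p not in pareto:                  # a new point: the segment overestimated
            pareto.add(p)
            changed = True
```

Each solve is a single LP, so the frontier is traced with far fewer optimizations than a dense sweep of weight values.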
A method to calculate flux distribution in reactor systems containing materials with grain structure
International Nuclear Information System (INIS)
Stepanek, J.
1980-01-01
A method is proposed to compute the neutron flux spatial distribution in slab, spherical or cylindrical systems containing zones with a close-grained material structure. Several different types of equally distributed particles embedded in the matrix material are allowed in one or more zones. The multi-energy-group structure of the flux is considered. The collision probability method is used to compute the fluxes in the grains and in an ''effective'' part of the matrix material. The overall structure of the flux distribution in the zones with homogenized materials is then determined using the DPN ''surface flux'' method. The two computations are coupled through the balance equation during the outer iterations. The proposed method is implemented in the code SURCU-DH. Two test cases are computed and discussed: the eigenvalue computation in simplified slab geometry of an LWR container with one zone of boral grains equally distributed in an aluminium matrix, and the eigenvalue computation in spherical geometry of an HTR pebble-bed cell with spherical particles embedded in a graphite matrix. The results are compared to those obtained by repeated use of the WIMS code. (author)
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. Such point sets can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces, and evaluates and contrasts the three alternatives. (Article in English)
Two-wavelength Method Estimates Heat fluxes over Heterogeneous Surface in North-China
Zhang, G.; Zheng, N.; Zhang, J.
2017-12-01
Heat flux is a key process in the hydrological and heat transfer cycle of the soil-plant-atmosphere continuum (SPAC), and it is becoming an important topic in meteorology, hydrology, ecology and related research areas. Because the temporal and spatial variation of fluxes at the regional scale is very complicated, it is still difficult to measure fluxes at the kilometer scale over a heterogeneous surface. A technique called the "two-wavelength method", which combines an optical scintillometer with a microwave scintillometer, is able to measure both sensible and latent heat fluxes over large spatial scales at the same time. The main purpose of this study is to investigate the fluxes over non-uniform terrain in North China. Estimation of heat fluxes was carried out with the optical-microwave scintillometer and an eddy covariance (EC) system over a heterogeneous surface in the Taihang Mountains, China, with the EC method set as the benchmark. Structure parameters obtained from the scintillometers showed typical Cn2 values of around 10^-13 m^-2/3 for the microwave scintillometer and around 10^-15 m^-2/3 for the optical scintillometer. The sensible heat fluxes (H) derived from the scintillometer and the EC system correlated with a slope of 1.05 (R2 = 0.75), while the latent heat fluxes (LE) correlated with a slope of 1.29 (R2 = 0.67). Fluxes derived from the two systems agreed well (R2 = 0.97 for H, R2 = 0.9 for LE) when the Bowen ratio (β) was 1.03, whereas significant discrepancies appeared when β = 0.75, with RMSDs of 139.22 W/m2 in H and 230.85 W/m2 in LE. The experimental results show that the two-wavelength method gives larger heat fluxes over the study area, and a deeper study should be conducted. We expect that this investigation and analysis will promote the application of the scintillometry method in regional evapotranspiration measurements and related disciplines.
Shang, Shang; Bai, Jing; Song, Xiaolei; Wang, Hongkai; Lau, Jaclyn
2007-01-01
The conjugate gradient method is known to be efficient for nonlinear optimization problems involving large-dimensional data. In this paper, a penalized linear and nonlinear combined conjugate gradient method for the reconstruction of fluorescence molecular tomography (FMT) is presented. The algorithm combines the linear conjugate gradient method and the nonlinear conjugate gradient method based on a restart strategy, in order to take advantage of both kinds of conjugate gradient methods while compensating for their disadvantages. A quadratic penalty method is adopted to enforce a nonnegativity constraint and reduce the ill-posedness of the problem. Simulation studies show that the presented algorithm is accurate, stable, and fast. It performs better than conventional conjugate gradient-based reconstruction algorithms and offers an effective approach to reconstructing fluorochrome information for FMT.
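A minimal sketch of the restart-plus-penalty idea, assuming a generic least-squares forward model rather than the paper's FMT operator: a Fletcher-Reeves conjugate gradient loop with periodic restarts minimizes a quadratically penalized objective that discourages negative values.

```python
import numpy as np

def penalized_cg(A, b, mu=10.0, iters=200, restart=20):
    """Fletcher-Reeves CG with periodic restarts for
    min ||Ax - b||^2 + mu*||min(x, 0)||^2 (quadratic nonnegativity penalty)."""
    def grad(x):
        return 2.0 * A.T @ (A @ x - b) + 2.0 * mu * np.minimum(x, 0.0)
    def cost(x):
        r, p = A @ x - b, np.minimum(x, 0.0)
        return r @ r + mu * (p @ p)
    x = np.zeros(A.shape[1])
    g = grad(x)
    d = -g
    for k in range(iters):
        if g @ d >= 0.0:              # safeguard: ensure a descent direction
            d = -g
        t, c0 = 1.0, cost(x)
        while cost(x + t * d) > c0 + 1e-4 * t * (g @ d) and t > 1e-12:
            t *= 0.5                  # backtracking (Armijo) line search
        x = x + t * d
        g_new = grad(x)
        if np.linalg.norm(g_new) < 1e-10:
            break                     # gradient vanished: done
        if (k + 1) % restart == 0:    # restart strategy: steepest descent
            d = -g_new
        else:                         # Fletcher-Reeves update
            d = -g_new + ((g_new @ g_new) / (g @ g)) * d
        g = g_new
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(20, 10))         # hypothetical forward operator
x_true = rng.uniform(size=10)         # nonnegative ground truth
x = penalized_cg(A, A @ x_true)
```

The periodic reset to steepest descent is the simplest form of the restart strategy the abstract refers to; the paper's actual switching rule between linear and nonlinear CG phases is more elaborate.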
Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states
Directory of Open Access Journals (Sweden)
Grünewald Stefan
2011-01-01
Full Text Available Abstract Background As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, studying the reconstruction accuracy of the Fitch method has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study of higher-state models is needed, since DNA sequences take the form of 4-state series and protein sequences even have 20 states. Results In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies with respect to balance, we focus on the reconstruction accuracies for these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that adding taxa does not necessarily increase the reconstruction accuracy under 2-state models; this result is also tested under N-state models. Conclusions In a large tree with many leaves, the reconstruction accuracy of using all taxa is sometimes lower than that of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of conservation probability in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases with an increasing number of states, and it appears to converge. When the conservation probability is greater than b, the reconstruction accuracy of the Fitch method increases rapidly.
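The Fitch small-parsimony pass itself is short enough to show directly. This is a minimal illustrative sketch (not the paper's recurrence system for accuracies): a tree is either a leaf state string or a `(left, right)` tuple, and the bottom-up pass returns the candidate state set at the root together with the substitution count.

```python
def fitch(tree):
    """Bottom-up Fitch pass: return (candidate state set, substitution count)."""
    if isinstance(tree, str):          # leaf: observed character state
        return {tree}, 0
    (ls, lc), (rs, rc) = fitch(tree[0]), fitch(tree[1])
    inter = ls & rs
    if inter:                          # non-empty intersection: no new change
        return inter, lc + rc
    return ls | rs, lc + rc + 1        # empty: take union, count one change

# Four leaves with states A, C, C, G on the tree ((A,C),(C,G)).
states, cost = fitch((("A", "C"), ("C", "G")))
```

Any state in `states` is a most-parsimonious root assignment; when the set has more than one element the root reconstruction is ambiguous, which is exactly the distinction between the ambiguous and unambiguous accuracies studied in the paper.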
The analysis of RPV fast neutron flux calculation for PWR with three-dimensional SN method
International Nuclear Information System (INIS)
Yang Shouhai; Chen Yixue; Wang Weijin; Shi Shengchun; Lu Daogang
2011-01-01
The discrete ordinates (SN) method is one of the most widely used methods for reactor pressure vessel (RPV) design. With the rapid growth of computer CPU speed and memory capacity and the maturation of three-dimensional discrete-ordinates codes, the 3-D SN method has become practical for the engineering design of nuclear facilities. This work was done specifically for a PWR model: using the results of a 3-D core neutron transport calculation as the source, the 3-D RPV fast neutron flux distribution obtained by the 3-D SN method was compared with that obtained by 1-D and 2-D SN methods and by the 3-D Monte Carlo (MC) method. In this paper, the application of the three-dimensional SN method to calculating the RPV fast neutron flux distribution for a pressurized water reactor (PWR) is presented and discussed. (authors)
A two-step Hilbert transform method for 2D image reconstruction
International Nuclear Information System (INIS)
Noo, Frederic; Clackdoyle, Rolf; Pack, Jed D
2004-01-01
The paper describes a new accurate two-dimensional (2D) image reconstruction method consisting of two steps. In the first step, the backprojected image is formed after taking the derivative of the parallel projection data. In the second step, a Hilbert filtering is applied along certain lines in the differentiated backprojection (DBP) image. Formulae for performing the DBP step in fan-beam geometry are also presented. The advantage of this two-step Hilbert transform approach is that in certain situations, regions of interest (ROIs) can be reconstructed from truncated projection data. Simulation results are presented that illustrate very similar reconstructed image quality using the new method compared to standard filtered backprojection, and that show the capability to correctly handle truncated projections. In particular, a simulation is presented of a wide patient whose projections are truncated laterally yet for which highly accurate ROI reconstruction is obtained
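The Hilbert filtering in the second step can be illustrated in isolation. The following is a sketch of a periodic discrete Hilbert transform implemented through its FFT multiplier, checked on the classical pair cos → sin; it illustrates the filter only, not the authors' finite-interval inversion formulae or the fan-beam DBP step.

```python
import numpy as np

def hilbert_transform(f):
    """Periodic discrete Hilbert transform via the FFT multiplier -i*sign(k)."""
    k = np.fft.fftfreq(len(f))
    return np.real(np.fft.ifft(np.fft.fft(f) * (-1j) * np.sign(k)))

# On a full period, the Hilbert transform of cos(t) is sin(t).
t = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
g = hilbert_transform(np.cos(t))
```

In the two-step method this filtering is applied along selected lines of the differentiated backprojection image; the finite support of real projections is what makes the truncated-data ROI results nontrivial.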
Evaluation of image reconstruction methods for 123I-MIBG-SPECT. A rank-order study
International Nuclear Information System (INIS)
Soederberg, Marcus; Mattsson, Soeren; Oddstig, Jenny; Uusijaervi-Lizana, Helena; Leide-Svegborn, Sigrid; Valind, Sven; Thorsson, Ola; Garpered, Sabine; Prautzsch, Tilmann; Tischenko, Oleg
2012-01-01
Background: There is an opportunity to improve the image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of equivalent iteration (EI) number (number of subsets times the number of iterations) and to compare with two recently developed reconstruction algorithms, ReSPECT and orthogonal polynomial expansion on disc (OPED), for application on 123I-metaiodobenzylguanidine (MIBG)-SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq 123I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The obtained optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D 32 > ReSPECT > Flash 3D 64 > OPED, and after 24 h: Flash 3D 16 > ReSPECT > Flash 3D 32 > OPED. A fair level of inter-observer agreement concerning optimal EI number and reconstruction algorithm was obtained, which may be explained by different individual preferences as to what constitutes appropriate image quality. Conclusion: Using the Siemens Symbia T6 SPECT/CT and the specified acquisition parameters, Flash 3D 32 (4 h) and Flash 3D 16 (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of 123I-MIBG images
A path flux analysis method for the reduction of detailed chemical kinetic mechanisms
Energy Technology Data Exchange (ETDEWEB)
Sun, Wenting; Ju, Yiguang [Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 (United States); Chen, Zheng [State Key Laboratory for Turbulence and Complex Systems, College of Engineering, Peking University, Beijing 100871 (China); Gou, Xiaolong [School of Power Engineering, Chongqing University, Chongqing 400044 (China)
2010-07-15
A direct path flux analysis (PFA) method for kinetic mechanism reduction is proposed and validated using high-temperature ignition, perfectly stirred reactors, and steady and unsteady flame propagation of n-heptane and n-decane/air mixtures. The formation and consumption fluxes of each species at multiple reaction path generations are analyzed and used to identify the important reaction pathways and the associated species. The formation and consumption path fluxes used in this method retain flux conservation information and are used to define path indexes for the first- and second-generation reaction paths related to a targeted species. Based on the indexes of each reaction path for the first and second generations, reduced chemical mechanisms of different sizes, containing different numbers of species, are generated. The reduced mechanisms of n-heptane and n-decane obtained using the present method are compared to those generated by the directed relation graph (DRG) method. A reaction path analysis for n-decane is conducted to demonstrate the validity of the present method. Comparisons of ignition delay times, flame propagation speeds, flame structures, and unsteady spherical flame propagation processes show that, with either the same or significantly fewer species, the reduced mechanisms generated by the present PFA are more accurate than those of DRG over a broad range of initial pressures and temperatures. The method is also integrated with the dynamic multi-timescale method, and a further increase in computational efficiency is achieved. (author)
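The index structure can be sketched on a toy flux table (hypothetical numbers, production fluxes only; the real PFA uses both production and consumption fluxes). `P[a][b]` is the flux from species `a` into species `b`; the first-generation index normalizes by the total flux through `a`, and the second-generation index chains through one intermediate species.

```python
# Hypothetical production fluxes between four species (arbitrary units).
P = {
    "F": {"A": 8.0, "B": 2.0},
    "A": {"P": 6.0, "B": 2.0},
    "B": {"P": 4.0},
    "P": {},
}

def total_flux(a):
    return sum(P[a].values())

def r1(a, b):
    """First-generation interaction coefficient of species b for species a."""
    t = total_flux(a)
    return P[a].get(b, 0.0) / t if t else 0.0

def r2(a, b):
    """Second-generation coefficient: flux from a to b via one intermediate."""
    return sum(r1(a, m) * r1(m, b) for m in P if m not in (a, b))
```

Species whose first- or second-generation indices relative to the target fall below a threshold are dropped, which is how different-sized reduced mechanisms are generated.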
A simple method to take urethral sutures for neobladder reconstruction and radical prostatectomy
Directory of Open Access Journals (Sweden)
B Satheesan
2007-01-01
Full Text Available For the reconstruction of the urethro-vesical anastomosis after radical prostatectomy and for neobladder reconstruction, taking adequate sutures that include the urethral mucosa is vital. Due to retraction of the urethra and an unfriendly pelvis, the process of taking satisfactory urethral sutures may be laborious. Here, we describe a simple method by which we could overcome such technical problems during surgery, using a Foley catheter as the guide for the suture.
Directory of Open Access Journals (Sweden)
S Singh
2008-11-01
Full Text Available We describe herein a modified technique for reconstruction of chronic rupture of the quadriceps tendon in a patient with bilateral total knee replacement and distal realignment of the patella. The surgery involved the application of a Dacron graft and the ‘double eights’ technique. The patient achieved satisfactory results after surgery and we believe that this technique of reconstruction offers advantages over other methods.
Comparing and improving reconstruction methods for proxies based on compositional data
Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.
2017-12-01
Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their means and their uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data
International Nuclear Information System (INIS)
Schuerrer, F.
1980-01-01
For characterizing heterogeneous configurations of pebble-bed reactors, the fine structure of the flux distribution as well as the determination of the macroscopic neutron-physics quantities are of interest. When calculating system parameters of Wigner-Seitz cells, the usual codes for neutron spectrum calculation neglect the modulation of the neutron flux by the influence of neighbouring spheres. To judge the error arising from that procedure, it is necessary to determine the flux distribution in the surroundings of a spherical fuel element. In the present paper, an approximation method to calculate the flux distribution in the two-sphere model is developed. This method is based on the exactly solvable problem of determining the flux of a point source of neutrons in an infinite medium which contains a spherical perturbation zone eccentric to the point source. An iteration method, which superposes secondary fields and alternately satisfies the continuity conditions on the surface of each of the two fuel elements, yields successively improved approximations. (orig.)
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
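The first step of the LR formulation can be sketched with synthetic data: a toy dictionary of exponential decays (standing in for simulated signal evolutions, not real fingerprinting signals) is compressed by a truncated SVD, so that only `k` coefficient images, one per retained singular value, would need to be reconstructed.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.01, 3.0, 200)            # 200 time points
T = rng.uniform(0.1, 2.0, size=500)        # 500 hypothetical relaxation times
D = np.exp(-t[:, None] / T[None, :])       # 200 x 500 signal dictionary

# Truncated SVD: keep k singular components of the signal evolution.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
k = 5
D_k = (U[:, :k] * s[:k]) @ Vt[:k, :]       # rank-k approximation

rel_err = np.linalg.norm(D - D_k) / np.linalg.norm(D)
```

Because smooth relaxation curves have rapidly decaying singular values, a handful of components captures the dictionary almost exactly, which is what cuts the number of Fourier transformations in the reconstruction.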
Analysis of fracture surface of CFRP material by three-dimensional reconstruction methods
International Nuclear Information System (INIS)
Lobo, Raquel M.; Andrade, Arnaldo H.P.
2009-01-01
Fracture surfaces of CFRP (carbon fiber reinforced polymer) materials used in the nuclear fuel cycle present elevated roughness, mainly due to the fracture mode known as pull-out, which leaves pieces of carbon fiber exposed after debonding between fiber and matrix. Fractographic analysis based on two-dimensional images is deficient because it neglects the vertical resolution, which is as important as the horizontal resolution. Knowledge of the height distribution produced during fracture can lead to calculation of the energies involved in the process, which would allow a better understanding of the fracture mechanisms of the composite material. An important solution for characterizing materials whose surfaces present high roughness due to height variation is to reconstruct these fracture surfaces three-dimensionally. In this work, the 3D reconstruction was done by two different methods: variable-focus reconstruction, using a stack of images obtained by optical microscopy (OM), and parallax reconstruction, carried out with images acquired by scanning electron microscopy (SEM). Both methods produce an elevation map of the reconstructed image that determines the height of the surface pixel by pixel. The results obtained by these reconstruction methods for CFRP surfaces were compared with those for other materials, such as aluminum and copper, that present ductile fracture surfaces with lower roughness. (author)
Directory of Open Access Journals (Sweden)
D. J. Bolinius
2016-04-01
Full Text Available Semi-volatile persistent organic pollutants (POPs) cycle between the atmosphere and terrestrial surfaces; however, measuring fluxes of POPs between the atmosphere and other media is challenging. Sampling times of hours to days are required to accurately measure trace concentrations of POPs in the atmosphere, which rules out the use of the eddy covariance techniques that are used to measure gas fluxes of major air pollutants. An alternative, the modified Bowen ratio (MBR) method, has been used instead. In this study we used data from FLUXNET for CO2 and water vapor (H2O) to compare fluxes measured by eddy covariance to fluxes measured with the MBR method, using vertical concentration gradients in air derived from averaged data that simulate the long sampling times typically required to measure POPs. When concentration gradients are strong and fluxes are unidirectional, the MBR method and the eddy covariance method agree within a factor of 3 for CO2, and within a factor of 10 for H2O. To remain within the range of applicability of the MBR method, field studies should be carried out under conditions such that the direction of the net flux does not change during the sampling period. If that condition is met, the performance of the MBR method is not strongly affected by either the length of the sampling duration or the use of a fixed value for the transfer coefficient.
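The MBR calculation itself is a one-line scaling; a sketch with hypothetical numbers (variable names and units are illustrative, not values from the study):

```python
def mbr_flux(flux_ref, dc_ref, dc_x):
    """Modified Bowen ratio: scale a reference flux (e.g. CO2, measured by
    eddy covariance) by the ratio of vertical concentration gradients.
    Assumes both gases share the same turbulent transfer coefficient."""
    return flux_ref * dc_x / dc_ref

# Hypothetical example: CO2 flux of -4.0 umol m-2 s-1 with a -0.5 ppm
# vertical gradient, and a trace-gas gradient of 2.0e-6 (same height pair).
f = mbr_flux(-4.0, -0.5, 2.0e-6)
```

The shared-transfer-coefficient assumption is exactly what breaks down when the net flux changes direction within the long averaging window, which is why the abstract restricts the method to unidirectional-flux conditions.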
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
AIR Tools II: algebraic iterative reconstruction methods, improved implementation
DEFF Research Database (Denmark)
Hansen, Per Christian; Jørgensen, Jakob Sauer
2017-01-01
with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...
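The row-action family that AIR Tools implements can be illustrated with the classical Kaczmarz method (ART). This is a generic numpy sketch of the iteration, not the package's MATLAB implementation: each step projects the iterate onto the hyperplane defined by one row of A x = b.

```python
import numpy as np

def kaczmarz(A, b, sweeps=50):
    """Cyclic Kaczmarz (ART): sweep the rows, projecting onto each hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + ((b[i] - a @ x) / (a @ a)) * a
    return x

# Consistent overdetermined toy system with a known solution.
rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))
x_true = rng.normal(size=10)
x = kaczmarz(A, A @ x_true)
```

For a consistent system the iterates converge to the solution; for noisy tomographic data the practical questions are the semi-convergence behavior and stopping rules, which is what the package's convergence analysis tools address.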
Inter-comparison of different direct and indirect methods to determine radon flux from soil
International Nuclear Information System (INIS)
Grossi, C.; Vargas, A.; Camacho, A.; Lopez-Coto, I.; Bolivar, J.P.; Xia Yu; Conen, F.
2011-01-01
The physical and chemical characteristics of radon gas make it a good tracer for use in the application of atmospheric transport models. For this purpose the radon source needs to be known on a global scale, and this is difficult to achieve by direct experimental methods alone. Indirect methods can provide radon flux maps on larger scales, but their reliability has to be carefully checked. The aim of this work is to compare radon flux values obtained by direct and indirect methods in a measurement campaign performed in the summer of 2008. Different systems to directly measure radon flux from the soil surface, and to measure the related parameters terrestrial γ dose and 226Ra activity in soil for indirect estimation of radon flux, were tested. Four eastern Spanish sites with different geological and soil characteristics were selected: Teruel, Los Pedrones, Quintanar de la Orden and Madrid. The study shows the usefulness of both direct and indirect methods for obtaining radon flux data. Direct radon flux measurements by continuous and integrated monitors showed a coefficient of variation between 10% and 23%. At the same time, indirect methods based on correlations between 222Rn and terrestrial γ dose rate, or 226Ra activity in soil, provided results similar to the direct measurements when these proxies were directly measured at the site. Larger discrepancies were found when proxy values were extracted from existing databases. The participating members involved in the campaign were the Institute of Energy Technology (INTE) of the Technical University of Catalonia (UPC), Huelva University (UHU), and Basel University (BASEL).
International Nuclear Information System (INIS)
Wang, Jinguo; Zhao, Zhiqin; Song, Jian; Chen, Guoping; Nie, Zaiping; Liu, Qing-Huo
2015-01-01
Purpose: An iterative reconstruction method has been previously reported by the authors of this paper. However, it was demonstrated solely using numerical simulations, and it is essential to apply it under practical conditions. The objective of this work is to validate the capability of the iterative reconstruction method for reducing the effects of acoustic heterogeneity using experimental data in microwave-induced thermoacoustic tomography. Methods: Most existing reconstruction methods need to incorporate ultrasonic measurement technology to quantitatively measure the velocity distribution of the heterogeneity, which increases system complexity. Unlike existing reconstruction methods, the iterative reconstruction method combines the time reversal mirror technique, the fast marching method, and the simultaneous algebraic reconstruction technique to iteratively estimate the velocity distribution of heterogeneous tissue using only the measured data. The estimated velocity distribution is then used to reconstruct a highly accurate image of the microwave absorption distribution. Experiments in which a target is placed in an acoustically heterogeneous environment are performed to validate the method. Results: By using the estimated velocity distribution, the target in an acoustically heterogeneous environment can be reconstructed with better shape and higher image contrast than targets reconstructed with a homogeneous velocity distribution. Conclusions: The distortions caused by acoustic heterogeneity can be efficiently corrected by utilizing the velocity distribution estimated by the iterative reconstruction method. The advantage of this method over existing correction methods is that it improves the quality of the image of the microwave absorption distribution without increasing system complexity
Energy Technology Data Exchange (ETDEWEB)
Ridel, M
2002-04-01
The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work, I focused on their decays that lead to a signature with jets and missing transverse energy. Before data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. Energy deposits in the calorimeter have been clustered with cellNN at the cell level instead of the tower level. Efforts have been made to take advantage of the calorimeter granularity and aim at reconstructing individual particle showers. CellNN starts from the third layer, which has four times finer granularity than the other layers. The longitudinal information has been used to detect overlaps between electromagnetic and hadronic showers. Clusters and reconstructed tracks from the central detectors are then combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers has been determined, and these triggers were used to perform a Monte Carlo search analysis for squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with an integrated luminosity of 100 pb{sup -1} has been predicted; using the energy flow instead of the standard reconstruction tools will improve this lower limit. (author)
On-line reconstruction of in-core power distribution by harmonics expansion method
International Nuclear Information System (INIS)
Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping
2011-01-01
Highlights: → A harmonics expansion method for on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → The method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are most suitable for real-time response to in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded in the harmonics of a reference case, and the expansion coefficients are calculated using signals provided by fixed in-core detectors. To save computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. When reconstructing the in-core power distribution on-line, the two closest reference cases are retrieved from the harmonics data library to produce the expanded harmonics by interpolation. The Unit 1 reactor of the Daya Bay Nuclear Power Plant (Daya Bay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
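The expansion step can be sketched with synthetic data: detector signals are matched by a few precomputed harmonic modes via least squares, and the full power map is rebuilt from the fitted coefficients. The 1-D "core", sine modes, and detector positions below are all hypothetical stand-ins for the reference-case harmonics and fixed in-core detectors.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100)               # 1-D stand-in for core positions
modes = np.stack([np.sin((k + 1) * np.pi * x) for k in range(4)], axis=1)

coeff_true = np.array([1.0, 0.3, -0.2, 0.05])
power_true = modes @ coeff_true              # "true" power distribution

detectors = np.arange(5, 100, 10)            # fixed in-core detector positions
d = power_true[detectors]                    # detector signals (noise-free)

# Fit expansion coefficients from the detector readings, then reconstruct
# the power everywhere from the full mode shapes.
coeff, *_ = np.linalg.lstsq(modes[detectors], d, rcond=None)
power_rec = modes @ coeff
```

With more detectors than retained harmonics the fit is overdetermined, which is what gives the method some robustness to individual detector noise; the pre-generated library supplies `modes` per reference case so the on-line cost is just this small least-squares solve.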
International Nuclear Information System (INIS)
Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.
1983-01-01
The necessity of developing real-time computerized tomography (CT), aiming at the dynamic observation of organs such as the heart, has lately been advocated. Its realization requires reconstructing images markedly faster than present CTs do. Although various reconstruction methods have been proposed, the only method practically employed at present is the filtered back-projection (FBP) method, which gives high-quality image reconstruction but takes much computing time. In the past, the two-dimensional Fourier transform (TFT) method was regarded as unsuitable for practical use because the image quality obtained was poor, despite being a promising method for high-speed reconstruction because of its low computing time. However, since it was revealed that the image quality of the TFT method depends greatly on the interpolation accuracy in two-dimensional Fourier space, the authors have developed a high-speed calculation algorithm that obtains high-quality images, by pursuing the relationship between image quality and the interpolation method. In this algorithm, the radial data sampling points in Fourier space are increased by a factor of 2{sup β}, and linear or spline interpolation is used. Comparison of this method with the present FBP method leads to the conclusion that the image quality is almost the same for practical image matrices, the computation time of the TFT method becomes about 1/10 of that of the FBP method, and the memory capacity is also reduced by about 20%. (Wakatsuki, Y.)
Energy Technology Data Exchange (ETDEWEB)
Seiz, Julie Burger [Union College, Schenectady, NY (United States)
1997-04-01
This paper presents a review of the Direct Stator Flux Field Orientation control method. This method can be used to control an induction motor's torque and flux directly and is the application of interest for this thesis. The control method is implemented without the traditional feedback loops and associated hardware. Predictions of the stator voltage vector are made by mathematical calculation. The voltage vector is determined twice per switching period, and the switching period is fixed throughout the analysis. The three-phase inverter duty cycle necessary to control the torque and flux of the induction machine is determined by the voltage space-vector Pulse Width Modulation (PWM) technique. Transient performance of either the flux or the torque requires an alternate modulation scheme, which is also addressed in this thesis. A block diagram of this closed-loop system is provided. 22 figs., 7 tabs.
Methods and Simulations of Muon Tomography and Reconstruction
Schreiner, Henry Fredrick, III
This dissertation investigates imaging with cosmic ray muons using scintillator-based portable particle detectors, and covers a variety of the elements required for the detectors to operate and take data, from the detector internal communications and software algorithms to a measurement that allows accurate predictions of the attenuation of physical targets. A discussion of the tracking process for the three-layer helical design developed at UT Austin is presented, with details of the data acquisition system and the highly efficient data format. Upgrades to this system provide a stable system for taking images in harsh or inaccessible environments, such as a remote jungle in Belize. A Geant4 Monte Carlo simulation was used to develop our understanding of the efficiency of the system, as well as to make predictions for a variety of different targets. The projection process is discussed, with a high-speed algorithm for sweeping a plane through data in near real time, to be used in applications requiring a search through space for target discovery. Several other projections and a foundation for high-fidelity 3D reconstructions are covered. A variable binning scheme for rapidly varying statistics over portions of an image plane is also presented and used. A discrepancy between our predictions and the observed attenuation through smaller targets is shown, and it is resolved with a new measurement of the low-energy spectrum, using a specially designed enclosure to make a series of measurements underwater. This provides a better basis for understanding the images of small amounts of material, such as thin cover materials.
Pellet by pellet neutron flux calculations coupled with nodal expansion method
International Nuclear Information System (INIS)
Aldo, Dall'Osso
2003-01-01
We present a technique whose aim is to replace the 2-dimensional pin-by-pin de-homogenization currently done in reactor core calculations with the nodal expansion method (NEM) by a 3-dimensional finite difference diffusion calculation. This fine calculation is performed as a zoom in each node, taking as boundary conditions the results of the NEM calculation. The size of the fine mesh is of the order of a fuel pellet. The coupling between the fine and NEM calculations is realised by an albedo-like boundary condition. Some examples are presented showing the fine neutron flux shape near control rods or assembly grids. Other fine-flux behaviour, such as the thermal flux rise in the fuel near the reflector, is emphasised. In general the results show the interest of the method in conditions where the separability of the radial and axial directions is not granted. (author)
Method to eliminate flux linkage DC component in load transformer for static transfer switch.
He, Yu; Mao, Chengxiong; Lu, Jiming; Wang, Dan; Tian, Bing
2014-01-01
Many industrial and commercial sensitive loads are subject to voltage sags and interruptions. The static transfer switch (STS), based on thyristors, is applied to improve power quality and reliability. However, the transfer can result in a severe inrush current in the load transformer because of the DC component of the magnetic flux generated in the transfer process. The inrush current, which can reach 2 to 30 p.u., can cause maloperation of relay protective devices and bring potential damage to the transformer. The way to eliminate the DC component is to transfer the related phases when the residual flux linkage of the load transformer and the prospective flux linkage of the alternate source are equal. This paper analyzes how the flux linkage of each winding in the load transformer changes during the transfer process. Based on the residual flux linkage when the preferred source is completely disconnected, a method to calculate the proper time point to close each phase of the alternate source is developed. Simulation and laboratory experiment results are presented to show the effectiveness of the transfer method.
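The matching condition can be illustrated with a toy calculation, using per-unit values and an assumed residual flux linkage (this is a sketch of the idea, not the authors' algorithm): the alternate source should be closed at the instant its prospective flux linkage equals the transformer's residual flux linkage, so no DC flux offset, and hence no inrush, is produced.

```python
import numpy as np

# Per-unit 50 Hz source; the residual flux linkage after disconnection
# is an assumed illustrative value.
f = 50.0
omega = 2 * np.pi * f
V_m = 1.0
lam_residual = 0.5 * V_m / omega      # assumed residual flux linkage

# Prospective flux linkage of the alternate source v(t) = V_m sin(wt):
# lambda(t) = -(V_m/omega) cos(omega t)  (steady state, resistance neglected)
t = np.linspace(0.0, 1.0 / f, 20001)
lam_prospective = -(V_m / omega) * np.cos(omega * t)

# Proper closing instant: first time the two flux linkages coincide
idx = np.argmin(np.abs(lam_prospective - lam_residual))
t_close = t[idx]
```

For this residual value the crossing falls at omega·t = 2π/3, i.e. one third of a cycle after the voltage zero, which is the kind of per-phase timing the paper computes.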
Guan, Huifeng; Anastasio, Mark A.
2017-03-01
It is well-known that properly designed image reconstruction methods can facilitate reductions in imaging doses and data-acquisition times in tomographic imaging. The ability to do so is particularly important for emerging modalities such as differential X-ray phase-contrast tomography (D-XPCT), which are currently limited by these factors. An important application of D-XPCT is high-resolution imaging of biomedical samples. However, reconstructing high-resolution images from few-view tomographic measurements remains a challenging task. In this work, a two-step sub-space reconstruction strategy is proposed and investigated for use in few-view D-XPCT image reconstruction. It is demonstrated that the resulting iterative algorithm can mitigate the high-frequency information loss caused by data incompleteness and produce images that have better preserved high spatial frequency content than those produced by use of a conventional penalized least squares (PLS) estimator.
DEFF Research Database (Denmark)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.
2017-01-01
Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
Fast multiview three-dimensional reconstruction method using cost volume filtering
Lee, Seung Joo; Park, Min Ki; Jang, In Yeop; Lee, Kwan H.
2014-03-01
As the number of customers who want to record three-dimensional (3-D) information using a mobile electronic device increases, it becomes more and more important to develop a method which quickly reconstructs a 3-D model from multiview images. A fast multiview-based 3-D reconstruction method is presented, which is suitable for the mobile environment by constructing a cost volume of the 3-D height field. This method consists of two steps: the construction of a reliable base surface and the recovery of shape details. In each step, the cost volume is constructed using photoconsistency and then it is filtered according to the multiscale. The multiscale-based cost volume filtering allows the 3-D reconstruction to maintain the overall shape and to preserve the shape details. We demonstrate the strength of the proposed method in terms of computation time, accuracy, and unconstrained acquisition environment.
Cini Castagnoli, G.; Cane, D.; Taricco, C.; Bhandari, N.
2003-04-01
In a previous work [1] we deduced that during the prolonged minima of solar activity since 1700 the galactic cosmic ray (GCR) flux was much higher (~2 times) than what we can infer from the GCR modulation deduced solely from the Sunspot Number series. This flux was higher than what we have observed in the last decades with neutron monitors and balloon- and spacecraft-borne detectors, as confirmed by the three fresh-fall meteorites that we measured during solar cycle 22. Recently we deduced the GCR annual mean spectra for the last 300 years [2], starting from the open solar magnetic flux proposed by Solanki et al. [3]. Using the GCR flux we calculated the 44Ti (T1/2 = 59.2 y) activity in meteorites, taking into account the cross sections for its production from the main target elements Fe and Ni. We compare the calculated activity with our measurements of cosmogenic 44Ti in different chondrites that fell in the period 1810-1997. The results are in close agreement both in phase and in amplitude. The same procedure has been adopted for calculating the production rate of 10Be in the atmosphere. Normalizing to the concentration in ice during solar cycles 20 and 21, we obtain good agreement with the 10Be profile in the Dye3 core [4]. These results demonstrate that our inference of the GCR flux over the past 300 years is reliable. [1] Bonino G., Cini Castagnoli G., Bhandari N., Taricco C., Science, 270, 1648, 1995. [2] Bonino G., Cini Castagnoli G., Cane D., Taricco C. and Bhandari N., Proc. XXVII Intern. Cosmic Ray Conf. (Hamburg, 2001) 3769-3772. [3] Solanki S.K., Schüssler M. and Fligge M., Nature, 408, 445, 2000. [4] Beer J. et al., private communication.
A low error reconstruction method for confocal holography to determine 3-dimensional properties
Energy Technology Data Exchange (ETDEWEB)
Jacquemin, P.B., E-mail: pbjacque@nps.edu [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada); Herring, R.A. [Mechanical Engineering, University of Victoria, EOW 548,800 Finnerty Road, Victoria, BC (Canada)
2012-06-15
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or subject to significant error. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as 'wily'. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the
A low error reconstruction method for confocal holography to determine 3-dimensional properties
International Nuclear Information System (INIS)
Jacquemin, P.B.; Herring, R.A.
2012-01-01
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three-dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position. A collimated beam produced the same information for scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope which restricted the scan angle to the cone angle of the probe beam. Limited viewing angle scanning from a single view point window produced a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned making reconstruction with significant error or impossible. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary
A multigrid Newton-Krylov method for flux-limited radiation diffusion
International Nuclear Information System (INIS)
Rider, W.J.; Knoll, D.A.; Olson, G.L.
1998-01-01
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques
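The Picard linearization that serves as the preconditioner can be sketched in a few lines. The following numpy-only toy (not the authors' code; the gradient-limited diffusion coefficient is a simplified illustrative stand-in for a true flux limiter) freezes the coefficient at the previous iterate, solves the resulting linear diffusion system, and repeats until the iteration settles:

```python
import numpy as np

# Steady nonlinear diffusion -d/dx( D(u_x) du/dx ) = S on (0,1), u(0)=u(1)=0,
# solved by Picard iteration: linearize by evaluating D at the old iterate.
n = 64
h = 1.0 / (n + 1)
S = np.ones(n)                                 # uniform source

def face_D(u):
    up = np.concatenate(([0.0], u, [0.0]))     # add Dirichlet boundary values
    ux = (up[1:] - up[:-1]) / h                # gradients at the n+1 cell faces
    return 1.0 / (3.0 + np.abs(ux))            # gradient-limited coefficient (illustrative)

u = np.zeros(n)
for _ in range(200):                           # Picard (frozen-coefficient) loop
    D = face_D(u)
    A = (np.diag(D[:-1] + D[1:])
         - np.diag(D[1:-1], 1) - np.diag(D[1:-1], -1)) / h**2
    u_new = np.linalg.solve(A, S)
    converged = np.max(np.abs(u_new - u)) < 1e-12
    u = u_new
    if converged:
        break
```

In the paper's scheme the analogue of this frozen-coefficient operator is inverted approximately by multigrid and used to precondition Newton-Krylov, rather than iterated to convergence on its own.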
Lower Lip Reconstruction after Tumor Resection; a Single Author's Experience with Various Methods
International Nuclear Information System (INIS)
Rifaat, M.A.
2006-01-01
Background: Squamous cell carcinoma is the most frequently seen malignant tumor of the lower lip. The more tissue is lost from the lip after tumor resection, the more challenging the reconstruction. Many methods have been described, but each has its own advantages and disadvantages. The author presents, through his own clinical experience with lower lip reconstruction at the NCI, an evaluation of the commonly practiced techniques. Patients and Methods: Over a 3-year period from May 2002 till May 2005, 17 cases presented at the National Cancer Institute, Cairo University, with lower lip squamous cell carcinoma. The lesions involved various regions of the lower lip, excluding the commissures. Following resection, the resulting defects ranged from 1/3 of the lip to total lip loss. The age of the patients ranged from 28 to 67 years; 13 were males and 4 females. With regard to the reconstructive procedures used, the Karapandzic technique (orbicularis oris myocutaneous flaps) was used in 7 patients, 3 of whom underwent secondary lower lip augmentation with upper lip switch flaps. Primary Abbe (lip switch) flap reconstruction was used in two patients, while 2 other patients were reconstructed with bilateral fan flaps, with vermilion reconstruction by mucosal advancement in one case and a tongue flap in the other. The radial forearm free flap was used in only 2 cases, and direct wound closure was achieved in three cases. All patients were evaluated for early postoperative results, with emphasis on flap viability and wound problems, and for late results, with emphasis on oral continence, microstomia, and aesthetic outcome, in addition to the usual oncological follow-up. Results: All flaps used in this study survived completely, including the 2 free flaps. In the early postoperative period, minor wound breakdown occurred in all three cases reconstructed by utilizing adjacent cheek skin flaps, but all wounds healed spontaneously. The latter three cases involved defects greater than 2
Energy Technology Data Exchange (ETDEWEB)
Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic in vivo imaging of small animals, the inverse reconstruction is still a tough problem that has plagued researchers in the area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the inverse problem cannot be solved directly. In this study, an l{sub 1/2} regularization-based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT is cast as an l{sub 1/2} regularization problem, and the weighted interior-point algorithm (WIPA) is then applied, transforming the problem into the solution of a series of l{sub 1} regularization problems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
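The idea of attacking an l{sub 1/2} problem through a sequence of weighted l{sub 1} problems can be sketched with a simple iteratively reweighted ISTA loop. This is an illustrative stand-in for the paper's weighted interior-point algorithm, with all problem sizes and parameters assumed:

```python
import numpy as np

# Toy underdetermined sparse-recovery problem standing in for the BLT inverse.
rng = np.random.default_rng(1)
m, n = 40, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 37, 80]] = [1.0, -0.8, 0.6]          # sparse "source" to recover
b = A @ x_true

def ista_weighted(A, b, w, lam=1e-3, n_iter=500):
    """Solve min ||Ax-b||^2/2 + lam*sum(w_i |x_i|) by ISTA (proximal gradient)."""
    L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L           # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft threshold
    return x

w = np.ones(n)
for _ in range(3):                              # outer reweighting loop
    x = ista_weighted(A, b, w)
    w = 1.0 / (np.sqrt(np.abs(x)) + 1e-2)       # l_{1/2}-style weights
```

Each outer pass penalizes small coefficients more heavily, which is how the sequence of l{sub 1} solves approximates the nonconvex l{sub 1/2} penalty.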
A Method for the neutron flux determination during the activation process
International Nuclear Information System (INIS)
Maayouf, R.M.A.; Khalil, M.I.
2000-01-01
The present work deals with an accurate method for determining the neutron flux coming from a neutron source during experimental measurements. A suitable detector, followed by a preamplifier and an amplifier, is connected to a data acquisition system designed specially for this purpose, and the number of neutrons detected during each sampling period is stored on the PC. The historical file can be used to compute the average or the integral flux over any time period, taking into account the detector efficiency, the geometrical arrangement and the amplification gain
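The bookkeeping described above can be sketched in a few lines; every name and factor below is an illustrative assumption, not a value from the paper:

```python
# Convert logged counts per sampling period into an average flux estimate,
# correcting for detector efficiency and geometrical acceptance.
counts = [1023, 998, 1051, 1010]   # counts per sampling period (historical file)
dt = 10.0                          # sampling period, s
efficiency = 0.05                  # assumed detector efficiency
geometry = 0.01                    # assumed solid-angle (geometrical) fraction

total = sum(counts)
total_time = dt * len(counts)
avg_rate = total / total_time                  # detected neutrons per second
avg_flux = avg_rate / (efficiency * geometry)  # estimated source emission rate, n/s
```

Summing over any sub-range of the log instead of the whole list gives the integral flux for that time window, which is the point of keeping the historical file.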
Direct fourier method reconstruction based on unequally spaced fast fourier transform
International Nuclear Information System (INIS)
Wu Xiaofeng; Zhao Ming; Liu Li
2003-01-01
First, we present an Unequally Spaced Fast Fourier Transform (USFFT) method, which is more exact and theoretically more comprehensible than its former counterpart. Then, with an interesting interpolation scheme, we discuss how to apply USFFT to Direct Fourier Method (DFM) reconstruction of parallel projection data. Finally, a simulation experiment result is given. (authors)
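The projection-slice theorem that any direct Fourier method builds on is easy to verify numerically. The sketch below (illustrative, not the authors' USFFT) shows that the 1-D FFT of a parallel projection equals a central slice of the object's 2-D FFT, which is why the method's accuracy hinges on interpolating those radial slices onto a Cartesian grid:

```python
import numpy as np

# Simple rectangular phantom
img = np.zeros((64, 64))
img[20:44, 28:36] = 1.0

# Parallel projection at angle 0: integrate (sum) along the y axis
proj = img.sum(axis=0)
slice_1d = np.fft.fft(proj)        # 1-D FFT of the projection

# The ky = 0 row of the 2-D FFT is the matching central slice
F2 = np.fft.fft2(img)
central_row = F2[0, :]
```

For other projection angles the slices fall on rotated radial lines in Fourier space, and the interpolation onto the Cartesian grid (linear, spline, or USFFT-based) is what determines the reconstruction quality discussed in the abstract.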
Reconstruction of prehistoric plant production and cooking practices by a new isotopic method
Energy Technology Data Exchange (ETDEWEB)
Hastorf, C A [California Univ., Los Angeles (USA). Dept. of Anthropology; DeNiro, M J [California Univ., Los Angeles (USA). Dept. of Earth and Space Sciences
1985-06-06
A new method is presented based on isotopic analysis of burnt organic matter, allowing the characterization of previously unidentifiable plant remains extracted from archaeological contexts. The method is used to reconstruct prehistoric production, preparation and consumption of plant foods, as well as the use of ceramic vessels, in the Upper Mantaro Valley region of the central Peruvian Andes.
Dockray, Samantha; Grant, Nina; Stone, Arthur A.; Kahneman, Daniel; Wardle, Jane; Steptoe, Andrew
2010-01-01
Measurement of affective states in everyday life is of fundamental importance in many types of quality of life, health, and psychological research. Ecological momentary assessment (EMA) is the recognized method of choice, but the respondent burden can be high. The day reconstruction method (DRM) was developed by Kahneman and colleagues ("Science,"…
System and method for image reconstruction, analysis, and/or de-noising
Laleg-Kirati, Taous-Meriem; Kaisserli, Zineb
2015-01-01
A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter
Robust method for stator current reconstruction from DC link in a ...
African Journals Online (AJOL)
Using the switching signals and dc link current, this paper presents a new algorithm for the reconstruction of stator currents of an inverter-fed, three-phase induction motor drive. Unlike the classical and improved methods available in literature, the proposed method is neither based on pulse width modulation pattern ...
An assessment of particle filtering methods and nudging for climate state reconstructions
S. Dubinkina (Svetlana); H. Goosse
2013-01-01
Using the climate model of intermediate complexity LOVECLIM in an idealized framework, we assess three data-assimilation methods for reconstructing the climate state. The methods are nudging, a particle filter with sequential importance resampling, and a nudging proposal particle
Phase microscopy using light-field reconstruction method for cell observation.
Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2015-08-01
The refractive index (RI) distribution can serve as a natural label for undyed cell imaging. However, the majority of images obtained through quantitative phase microscopy are integrated along the illumination angle and cannot reflect additional information about the refractive map on a given plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase-shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focal plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for use in cytobiology studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
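The four-step phase-shifting recovery that the recording step relies on can be sketched as follows; this is the standard textbook formula (shifts of 0, π/2, π, 3π/2), with the array sizes chosen for illustration:

```python
import numpy as np

# "True" phase-delay map (1-D here for brevity), background A, modulation B
phase = np.linspace(-np.pi + 0.01, np.pi - 0.01, 50)
A, B = 1.0, 0.5

# Four intensity images with phase shifts k*pi/2, k = 0..3
I = [A + B * np.cos(phase + k * np.pi / 2) for k in range(4)]

# Standard four-step recovery: phase = atan2(I4 - I2, I1 - I3)
phase_rec = np.arctan2(I[3] - I[1], I[0] - I[2])
```

The arctangent cancels both the background A and the modulation depth B, which is why four shifted frames suffice to recover the quantitative phase delay per pixel.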
Method and apparatus for reconstructing in-cylinder pressure and correcting for signal decay
Huang, Jian
2013-03-12
A method comprises steps for reconstructing in-cylinder pressure data from a vibration signal collected from a vibration sensor mounted on an engine component where it can generate a signal with a high signal-to-noise ratio, and correcting the vibration signal for errors introduced by vibration signal charge decay and sensor sensitivity. The correction factors are determined as a function of estimated motoring pressure and the measured vibration signal itself with each of these being associated with the same engine cycle. Accordingly, the method corrects for charge decay and changes in sensor sensitivity responsive to different engine conditions to allow greater accuracy in the reconstructed in-cylinder pressure data. An apparatus is also disclosed for practicing the disclosed method, comprising a vibration sensor, a data acquisition unit for receiving the vibration signal, a computer processing unit for processing the acquired signal and a controller for controlling the engine operation based on the reconstructed in-cylinder pressure.
The e/h method of energy reconstruction for combined calorimeter
International Nuclear Information System (INIS)
Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.
1999-01-01
A new simple method of energy reconstruction for a combined calorimeter, which we call the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. The method has been tested on the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated correct reconstruction of the mean values of the energies. The obtained fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for fast energy reconstruction in the first-level trigger
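As an illustration of how such a resolution parameterization is evaluated (the coefficients are the quoted central values; the code is a sketch, not the authors' reconstruction), the stochastic and constant terms add linearly and the noise term is combined in quadrature, denoted ⊕ above:

```python
import math

def resolution(E, a=0.58, b=0.025, c=1.7):
    """Fractional resolution sigma/E = (a/sqrt(E) + b) (+) c/E,
    with (+) denoting addition in quadrature; E in GeV."""
    return math.sqrt((a / math.sqrt(E) + b) ** 2 + (c / E) ** 2)

res_20 = resolution(20.0)     # fractional resolution at 20 GeV
res_300 = resolution(300.0)   # the stochastic and noise terms shrink with energy
```

At high energy the constant term b dominates, so the fractional resolution flattens toward 2.5%, the usual behavior for hadronic calorimeters.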
A Method to Assess Flux Hazards at CSP Plants to Reduce Avian Mortality
Energy Technology Data Exchange (ETDEWEB)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke; Yellowhair, Julius
2017-06-27
A method to evaluate avian flux hazards at concentrating solar power (CSP) plants has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and the assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
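The rapid heating described by the probabilistic results can be illustrated with a lumped-capacitance sketch; every number below is an assumed, illustrative value, not one from the Sandia model, and radiative losses are neglected for brevity:

```python
# Thin absorbing layer exposed to concentrated irradiance, with convective
# cooling: dT/dt = (alpha*q - h*(T - T_air)) / (rho_c * thickness)
q_irr = 50e3        # concentrated irradiance, W/m^2 (assumed flight-path value)
absorptance = 0.7   # assumed feather absorptance
thickness = 0.5e-3  # assumed layer thickness, m
rho_c = 1.2e6       # assumed volumetric heat capacity, J/(m^3 K)
h_conv = 20.0       # assumed convective loss coefficient, W/(m^2 K)
T_air = 25.0        # ambient temperature, deg C

T, dt = T_air, 0.01
for _ in range(int(5.0 / dt)):                 # 5 s exposure, explicit Euler
    q_net = absorptance * q_irr - h_conv * (T - T_air)
    T += dt * q_net / (rho_c * thickness)
```

Even with these modest assumptions the layer gains a couple of hundred degrees in five seconds, consistent with the "hundreds of degrees Celsius in seconds" sensitivity noted in the abstract.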
HNO3 fluxes to a deciduous forest derived using gradient and REA methods
DEFF Research Database (Denmark)
Pryor, S.C.; Barthelmie, R.J.; Jensen, B.
2002-01-01
Summertime nitric acid concentrations over a deciduous forest in the midwestern United States are reported, which range between 0.36 and 3.3 mug m(-3). Fluxes to the forest are computed using the relaxed eddy accumulation technique and gradient methods. In accord with previous studies, the results...... indicate substantial uncertainties in the gradient-based calculations. The relaxed eddy accumulation (REA) derived fluxes are physically reasonable and are shown to be of similar magnitude to dry deposition estimates from gradient sampling. The REA derived mean deposition velocity is approximately 3 cm s......(-1), which is also comparable to growing season estimates derived by Meyers et al. for a similar deciduous forest. Occasional inverted concentration gradients and fluxes are observed but most are not statistically significant. Data are also presented that indicate substantial through canopy...
Convergence analysis for column-action methods in image reconstruction
DEFF Research Database (Denmark)
Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj
2016-01-01
Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data....... We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from...
Miéville, Frédéric A.; Ayestaran, Paul; Argaud, Christophe; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Gudinchet, François; Bochud, François; Verdun, Francis R.
2010-04-01
Adaptive Statistical Iterative Reconstruction (ASIR) is a new image reconstruction technique recently introduced by General Electric (GE). This technique, when combined with a conventional filtered back-projection (FBP) approach, is able to improve image noise reduction. To quantify the benefits provided by the ASIR method with respect to pure FBP, in terms of image quality and dose reduction, the standard deviation (SD), the modulation transfer function (MTF), the noise power spectrum (NPS), the image uniformity and the noise homogeneity were examined. Measurements were performed on a quality control phantom while varying the CT dose index (CTDIvol) and the reconstruction kernels. A 64-MDCT was employed and raw data were reconstructed with different percentages of ASIR on a CT console dedicated to ASIR reconstruction. Three radiologists also assessed a pediatric cardiac exam reconstructed with different ASIR percentages using the visual grading analysis (VGA) method. For the standard, soft and bone reconstruction kernels, the SD is reduced as the ASIR percentage increases up to 100%, with a higher benefit at low CTDIvol. Mid-frequency MTF values were slightly enhanced and modifications of the NPS curve shape were observed. However, for the pediatric cardiac CT exam, the VGA scores indicate an upper limit to the ASIR benefit: 40% ASIR was found to be the best trade-off between noise reduction and the clinical realism of organ images. Using the phantom results, 40% ASIR corresponded to an estimated dose reduction of 30% under pediatric cardiac protocol conditions. In spite of this discrepancy between phantom and clinical results, the ASIR method is an important option when considering the reduction of radiation dose, especially for pediatric patients.
METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY
Directory of Open Access Journals (Sweden)
Petreneva Ol’ga Vladimirovna
2016-03-01
Full Text Available The demand for comfortable housing has always been very high. Building density varies between regions, and sometimes there is no land for new housing construction, especially in the central districts of cities. Moreover, many cities retain cultural and historical centers that create the historical appearance of the city, so new construction is impossible in these districts. At the same time, taking into account physical depreciation and obsolescence, the operational life of many buildings comes to an end and they fall into disrepair. In these cases the question arises of reconstructing the existing residential, public and industrial buildings. The aim of reconstruction is to bring the existing worn-out building stock into correspondence with technical, social and sanitary requirements and with living standards and conditions. The authors consider the relevance of and reasons for the reconstruction of residential buildings, and attempt to answer the question of which is more economically efficient: new construction or reconstruction of residential buildings. The article offers a method for calculating the efficiency of residential building reconstruction.
Standard Test Method for Measuring Heat Flux Using Flush-Mounted Insert Temperature-Gradient Gages
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method describes the measurement of the net heat flux normal to a surface using gages inserted flush with the surface. The geometry is the same as heat-flux gages covered by Test Method E 511, but the measurement principle is different. The gages covered by this standard all use a measurement of the temperature gradient normal to the surface to determine the heat that is exchanged to or from the surface. Although in a majority of cases the net heat flux is to the surface, the gages operate by the same principles for heat transfer in either direction. 1.2 This general test method is quite broad in its field of application, size and construction. Two different gage types that are commercially available are described in detail in later sections as examples. A summary of common heat-flux gages is given by Diller (1). Applications include both radiation and convection heat transfer. The gages used for aerospace applications are generally small (0.155 to 1.27 cm diameter), have a fast time response ...
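The measurement principle behind these gages — net heat flux inferred from the temperature gradient normal to the surface — is Fourier's law applied across a thin layer. A minimal sketch follows; the conductivity and layer thickness are invented illustrative values, not figures from the ASTM standard.

```python
# Fourier's law: the gage infers net heat flux from the temperature
# drop across a layer of known conductivity. k and dx below are
# invented illustrative values, not taken from the ASTM standard.
def heat_flux(k, dT, dx):
    """q'' = k * dT / dx, in W/m^2 for SI inputs."""
    return k * dT / dx

# 2.5 K drop across a 1 mm layer with k = 1.4 W/(m*K):
q = heat_flux(k=1.4, dT=2.5, dx=0.001)  # -> 3500.0 W/m^2
```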
Gaining insight into food webs reconstructed by the inverse method
Kones, J.; Soetaert, K.E.R.; Van Oevelen, D.; Owino, J.; Mavuti, K.
2006-01-01
The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of food web flows describing the food web and satisfying biological constraints are generated, from which one
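An inverse food-web calculation of this kind reduces to an underdetermined mass-balance system subject to non-negative flows. A toy sketch with SciPy follows; the three-equation, four-flow web and its numbers are invented for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import nnls

# Toy mass balance: 4 unknown flows, 3 equations (underdetermined, as
# food-web inversions typically are). Rows: compartment 1 balance,
# compartment 2 balance, one measured rate.
A = np.array([
    [1.0, -1.0, -1.0,  0.0],   # import - grazing - respiration = 0
    [0.0,  1.0,  0.0, -1.0],   # grazing - export = 0
    [1.0,  0.0,  0.0,  0.0],   # measured import rate
])
b = np.array([0.0, 0.0, 10.0])

# Non-negative least squares picks one feasible flow pattern with all
# flows >= 0, mirroring the biological constraints.
flows, residual = nnls(A, b)
```

Real applications sample or optimize over the whole feasible set rather than returning a single vertex solution, but the constrained linear structure is the same.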
Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method
Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing
2017-01-01
Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
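The kernel construction can be sketched as follows. This is a simplified adaptation of the kernel idea (cf. Wang and Qi's kernel method for PET, on which such FMT work builds); the 1-D "anatomical" image, the Gaussian similarity, and the row normalisation are all chosen for illustration.

```python
import numpy as np

# Each pixel gets a feature from the anatomical image, and
# K_ij = exp(-||f_i - f_j||^2 / (2 sigma^2)) encodes similarity.
anat = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # two anatomical regions
sigma = 0.5
K = np.exp(-(anat[:, None] - anat[None, :]) ** 2 / (2.0 * sigma**2))
K /= K.sum(axis=1, keepdims=True)   # row-normalise

# The forward model becomes A @ K @ alpha = y: smoothing is guided by
# anatomy inside the projection model instead of by a Laplacian-type
# penalty, and no segmentation of targets is needed.
```

Pixels in the same anatomical region end up strongly coupled, while pixels across the region boundary are only weakly coupled, which is what preserves edges in the guided reconstruction.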
International Nuclear Information System (INIS)
Zeile, Christian; Maione, Ivan A.
2015-01-01
Highlights: • An in operation force measurement system for the ITER EU HCPB TBM has been developed. • The force reconstruction methods are based on strain measurements on the attachment system. • An experimental setup and a corresponding mock-up have been built. • A set of test cases representing ITER relevant excitations has been used for validation. • The influence of modeling errors on the force reconstruction has been investigated. - Abstract: In order to reconstruct forces on the test blanket modules in ITER, two force reconstruction methods, the augmented Kalman filter and a model predictive controller, have been selected and developed to estimate the forces based on strain measurements on the attachment system. A dedicated experimental setup with a corresponding mock-up has been designed and built to validate these methods. A set of test cases has been defined to represent possible excitation of the system. It has been shown that the errors in the estimated forces mainly depend on the accuracy of the identified model used by the algorithms. Furthermore, it has been found that a minimum of 10 strain gauges is necessary to allow for a low error in the reconstructed forces.
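The augmented-state idea — treating the unknown force as an extra state driven by process noise and estimating it from strain readings — can be sketched with a scalar Kalman filter. All numbers below (strain/force gain, noise levels) are invented stand-ins, not the identified model of the TBM attachment system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Scalar sketch: the unknown force is modelled as a random-walk state
# and estimated from noisy strain readings.
true_force = 50.0          # kN
gain = 0.02                # strain per kN (hypothetical calibration)
q, r = 1e-4, 1e-3          # process / measurement noise variances

f_est, p = 0.0, 1e3        # initial estimate and covariance
for _ in range(200):
    z = gain * true_force + rng.normal(0.0, r**0.5)  # strain sample
    p = p + q                                        # predict
    k = p * gain / (gain**2 * p + r)                 # Kalman gain
    f_est = f_est + k * (z - gain * f_est)           # measurement update
    p = (1.0 - k * gain) * p
```

As the abstract notes, the quality of such an estimate hinges on the accuracy of the identified strain-to-force model (here the single `gain` constant); a model error biases the reconstructed force directly.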
Environment-based pin-power reconstruction method for homogeneous core calculations
International Nuclear Information System (INIS)
Leroyer, H.; Brosselard, C.; Girardi, E.
2012-01-01
Core calculation schemes are usually based on a classical two-step approach associated with assembly and core calculations. During the first step, infinite lattice assemblies calculations relying on a fundamental mode approach are used to generate cross-sections libraries for PWRs core calculations. This fundamental mode hypothesis may be questioned when dealing with loading patterns involving several types of assemblies (UOX, MOX), burnable poisons, control rods and burn-up gradients. This paper proposes a calculation method able to take into account the heterogeneous environment of the assemblies when using homogeneous core calculations and an appropriate pin-power reconstruction. This methodology is applied to MOX assemblies, computed within an environment of UOX assemblies. The new environment-based pin-power reconstruction is then used on various clusters of 3x3 assemblies showing burn-up gradients and UOX/MOX interfaces, and compared to reference calculations performed with APOLLO-2. The results show that UOX/MOX interfaces are much better calculated with the environment-based calculation scheme when compared to the usual pin-power reconstruction method. The power peak is always better located and calculated with the environment-based pin-power reconstruction method on every cluster configuration studied. This study shows that taking into account the environment in transport calculations can significantly improve the pin-power reconstruction so far as it is consistent with the core loading pattern. (authors)
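The classical pin-power reconstruction that the environment-based scheme improves on can be sketched in a few lines: modulate the smooth homogeneous intra-node power by lattice form factors, then renormalise so nodal power is conserved. All numbers are illustrative.

```python
import numpy as np

# Classical two-step baseline: infinite-lattice form factors applied
# to the smooth homogeneous intra-node power distribution.
form = np.array([[1.05, 0.98, 1.02],
                 [0.97, 1.00, 0.96],
                 [1.03, 0.99, 1.00]])          # lattice form factors
homog = np.full((3, 3), 120.0)                 # W per pin, flat node
homog = homog * np.linspace(0.95, 1.05, 3)     # smooth intra-node gradient

pin_power = homog * form
pin_power *= homog.sum() / pin_power.sum()     # conserve nodal power
```

The paper's point is that form factors computed in an infinite lattice are a poor fit at UOX/MOX interfaces; recomputing them in the actual assembly environment corrects the reconstruction there.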
Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.
Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth
2017-02-01
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
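The uncorrected k-mer distance that the paper shows can be statistically inconsistent is easy to state in code; the model-based correction itself is not reproduced here.

```python
from collections import Counter

def kmer_vector(seq, k=2):
    """Relative k-mer frequencies of a sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {km: c / total for km, c in counts.items()}

def sq_euclidean(u, v):
    """Squared Euclidean distance between two sparse frequency vectors."""
    keys = set(u) | set(v)
    return sum((u.get(x, 0.0) - v.get(x, 0.0)) ** 2 for x in keys)

d_close = sq_euclidean(kmer_vector("ACGTACGTAC"), kmer_vector("ACGTACGTAG"))
d_far = sq_euclidean(kmer_vector("ACGTACGTAC"), kmer_vector("GGGGGGGGGG"))
```

Feeding such distances to a distance-based tree method (e.g. neighbor joining) is the alignment-free pipeline in question; the paper's contribution is replacing the raw distance with a corrected one that converges to a true tree metric.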
An analog computer method for solving flux distribution problems in multi region nuclear reactors
Energy Technology Data Exchange (ETDEWEB)
Radanovic, L; Bingulac, S; Lazarevic, B; Matausek, M [Boris Kidric Institute of Nuclear Sciences Vinca, Beograd (Yugoslavia)
1963-04-15
The paper describes a method developed for determining criticality conditions and plotting flux distribution curves in multi region nuclear reactors on a standard analog computer. The method, which is based on the one-dimensional two group treatment, avoids iterative procedures normally used for boundary value problems and is practically insensitive to errors in initial conditions. The amount of analog equipment required is reduced to a minimum and is independent of the number of core regions and reflectors. (author)
Reconstruction of the limit cycles by the delays method
International Nuclear Information System (INIS)
Castillo D, R.; Ortiz V, J.; Calleros M, G.
2003-01-01
Boiling water reactors (BWRs) are usually designed to operate in a stable, linear regime. In a limit cycle, the system behavior is nonlinear. In a BWR, instabilities of a nuclear-thermohydraulic nature can drive the reactor into a limit cycle. Limit cycles should be avoided, since the power oscillations can cause thermal fatigue in the fuel and/or shroud. In this work, the use of the delays method is analyzed for application to the detection of limit cycles in a nuclear power plant. The foundations of the method and its application to power signals under different operating conditions are presented. The analyzed signals correspond to: steady state, nuclear-thermohydraulic instability, a nonlinear transient and, finally, the failure of a plant controller. Among the main results, it was found that the delays method can be applied to detect limit cycles in the power monitors of BWR reactors. It was also found that, for the analyzed cases, the first zero of the autocorrelation function is an appropriate criterion for selecting the delay when detecting limit cycles. (Author)
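A minimal sketch of the delays method as described — pick the delay at the first zero of the autocorrelation function, then plot the signal against its delayed copy — using a synthetic sine as a stand-in for a power signal:

```python
import numpy as np

def first_zero_of_acf(x):
    """Lag of the first non-positive autocorrelation value."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[x.size - 1:]
    acf /= acf[0]
    return int(np.argmax(acf <= 0.0))

# Synthetic stand-in for a power signal oscillating in a limit cycle.
t = np.linspace(0.0, 40.0 * np.pi, 4000)
signal = np.sin(t)

tau = first_zero_of_acf(signal)                         # delay choice
orbit = np.column_stack([signal[:-tau], signal[tau:]])  # 2-D delay embedding
# For a limit cycle, the embedded points trace a closed curve; a stable
# steady-state signal instead collapses toward a fixed point.
```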
One step linear reconstruction method for continuous wave diffuse optical tomography
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous wave diffuse optical tomography is proposed and demonstrated on a polyvinyl chloride based material and a breast phantom. The approximation used in this method consists of selecting a regularization coefficient and evaluating the difference between two states, corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object, and is demonstrated on both simulated and experimental data. A numerical object is used to produce the simulation data; polyvinyl chloride based material and a breast phantom sample are used to produce the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The images produced by the one-step linear reconstruction method closely resemble the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous wave diffuse optical tomography in the early diagnosis of breast cancer.
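A one-step difference reconstruction of this kind amounts to a single regularized linear solve on the change in boundary data. The sketch below makes stated assumptions: a random matrix stands in for the sensitivity (Jacobian) matrix of a diffusion forward model, and Tikhonov regularization plays the role of the regularization coefficient.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linearised difference imaging: dy = J @ dx, solved in one step.
n_meas, n_vox = 40, 100
J = rng.normal(size=(n_meas, n_vox))       # stand-in sensitivity matrix
dx_true = np.zeros(n_vox)
dx_true[45:50] = 1.0                       # localised optical change

dy = J @ dx_true                           # data "with" minus "without"
lam = 1e-2 * np.trace(J.T @ J) / n_vox     # regularisation coefficient
dx = np.linalg.solve(J.T @ J + lam * np.eye(n_vox), J.T @ dy)
```

Because only the difference data enter the solve, systematic errors common to both states (coupling, calibration) largely cancel, which is why difference imaging is attractive for detecting changes in optical properties.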
Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method
Kulchitskii, Yu A
2000-01-01
Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants, and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values and the fractional energy resolution is [(58±3)%·√GeV/√E + (2.5±0.3)%] ⊕ (1.7±0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74±0.04. Results of a study of the longitudinal hadronic shower development are also presented.
Determining Accuracy of Thermal Dissipation Methods-based Sap Flux in Japanese Cedar Trees
Su, Man-Ping; Shinohara, Yoshinori; Laplace, Sophie; Lin, Song-Jin; Kume, Tomonori
2017-04-01
The thermal dissipation method, a sap flux measurement technique that can estimate individual tree transpiration, has been widely used because of its low cost and uncomplicated operation. Although the thermal dissipation method is widespread, its accuracy has recently been questioned, because the tree species used in some previous studies were not suited to Granier's empirical formula owing to differences in wood characteristics. In Taiwan, Cryptomeria japonica (Japanese cedar) is one of the dominant species in mountainous areas, so quantifying the transpiration of Japanese cedar trees is indispensable for understanding water cycling there. However, no one has tested the accuracy of thermal dissipation based sap flux measurements for Japanese cedar trees in Taiwan. Thus, in this study we conducted a calibration experiment using twelve Japanese cedar stem segments from six trees. By pumping water through the segments from bottom to top while collecting probe data simultaneously, we compared sap flux densities calculated from real water uptake (Fd_actual) and from the empirical formula (Fd_Granier). The exact sapwood area and sapwood depth of each sample were obtained by dyeing the segment with safranin stain solution. Our results showed that Fd_Granier underestimated Fd_actual by 39% across sap flux densities ranging from 10 to 150 cm3 m-2 s-1; when the sapwood depth correction from Clearwater was applied, Fd_Granier became accurate, underestimating Fd_actual by only 0.01%. However, for sap flux densities ranging from 10 to 50 cm3 m-2 s-1, which is similar to field data for Japanese cedar trees in a mountainous area of Taiwan, Fd_Granier underestimated Fd_actual by 51%, and by 26% with the Clearwater sapwood depth correction. These results suggest that sapwood depth significantly impacts the accuracy of the thermal dissipation method.
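The formulas under test are Granier's published empirical calibration and the Clearwater et al. (1999) sapwood-depth correction; both are reproduced below, while the probe readings (dT, dT_max) and sapwood fraction are invented illustrative values.

```python
# Granier's calibration and the Clearwater sapwood-depth correction.
def granier_fd(dT, dT_max):
    """Sap flux density (1e-6 m3 m-2 s-1) from Granier's formula."""
    K = (dT_max - dT) / dT
    return 119.0 * K**1.231

def clearwater_dT(dT, dT_max, a):
    """Corrected dT when only a fraction `a` of the probe is in
    conducting sapwood (Clearwater et al. 1999)."""
    return (dT - (1.0 - a) * dT_max) / a

fd_raw = granier_fd(dT=8.0, dT_max=10.0)
# Same reading, but with only 70 % of the probe in sapwood:
fd_corr = granier_fd(clearwater_dT(8.0, 10.0, a=0.7), dT_max=10.0)
# The correction raises the estimate, consistent with the segment
# experiments above where uncorrected Fd_Granier underestimated flux.
```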
International Nuclear Information System (INIS)
Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.
1997-01-01
We have shown previously that the singular value decomposition (SVD) method allows image reconstruction in single-photon tomography with higher precision than the classical method of filtered back-projection. Establishing an elementary response matrix that incorporates the photon attenuation phenomenon, the scattering, the translation non-invariance principle and the detector response makes it possible to take into account all the physical parameters of acquisition. By a non-consecutive, optimized truncation of the singular values we obtained a significant improvement in the regularization of the bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition (the influence of the rotation center, the asymmetric disposition of the elementary-volume sources with respect to the detector, and the precision of the rotation angle); and secondly, those affecting the correspondence between the matrix and the space to be reconstructed (the partial volume effect and noise propagation in the experimental model). For the parameters that introduce a spatial distortion, the degradation of the reconstruction was, as expected, comparable to that observed with the classical reconstruction and proportional to the amplitude of the shift from the normal position. By contrast, for the effects of partial volume and of noise, the study of the truncation signature revealed a variation in the optimal choice of the conserved singular values, but with no effect on the global precision of the reconstruction
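Truncating singular values as a regularizer can be sketched directly; the response matrix below is a random ill-conditioned stand-in, not the elementary response matrix of the study, and the noise level is invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Random ill-conditioned stand-in for the elementary response matrix.
A = rng.normal(size=(30, 30)) @ np.diag(10.0 ** -np.linspace(0.0, 8.0, 30))
x_true = rng.normal(size=30)
y = A @ x_true + rng.normal(0.0, 1e-6, 30)      # noisy "projections"

U, s, Vt = np.linalg.svd(A)

def tsvd_solve(k):
    """Reconstruct keeping only the k largest singular values."""
    return Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

err = {k: np.linalg.norm(tsvd_solve(k) - x_true) for k in (5, 15, 30)}
# Keeping every singular value amplifies the noise; truncation regularises.
```

The study's non-consecutive truncation generalizes this: instead of keeping the first k values, it keeps whichever subset best balances noise amplification against lost signal.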
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, it is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstruction is crucial to improve our knowledge of primate paleobiology and paleoecology. This study investigates the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion, with a smooth and regular surface, and the apical penetrative portion, which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE) and the ratio of the penetrative portion over total root length (PPI) are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in the diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. Copyright © 2012 Wiley Periodicals, Inc.
Measurement of the epithermal neutron flux of the Argonauta reactor by the Sandwich method
International Nuclear Information System (INIS)
Nascimento, H.M.
1973-01-01
A common method of obtaining information about the neutron spectrum in the energy range from 1 eV to a few keV is the use of resonance sandwich detectors. A sandwich detector is usually made up of three foils placed one on top of the other, each having the same thickness and made of the same material, which has a pronounced absorption resonance. For an adequate evaluation, the sandwich method was compared with a method using an isolated detector. The results obtained from approximate theoretical calculations were checked experimentally, using In, Au and Mn foils, in an isotropic 1/E flux in the Argonaut Reactor at I.E.N. As a practical application of this method, the deviation of the epithermal neutron flux from a 1/E spectrum in the core and external graphite reflector of the Argonaut Reactor was measured with sandwich foils previously calibrated in a 1/E spectrum. (author)
A method for measuring element fluxes in an undisturbed soil: nitrogen and carbon from earthworms
International Nuclear Information System (INIS)
Bouche, M.B.
1984-01-01
Data on chemical cycles, such as the nitrogen and carbon cycles, are often extrapolated to fields or ecosystems without any possibility of checking the conclusions, i.e. they rest on indirect scientific knowledge (para-ecology). A new method is described in which an earthworm compartment is naturally introduced into an undisturbed soil, with the earthworms labelled both with isotopes (15N, 14C) and by staining. This method allows fluxes of chemical elements to be measured. The first results, gathered while the method was being refined under partly artificial conditions, are cross-checked against other data obtained by direct observation in the field. The measured flux (2.2 mg N per g fresh mass, empty gut, per day at 15 °C) is far larger than para-ecological estimates; animal metabolism plays a direct and important role in the nitrogen and carbon cycles. (author)
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method on point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. By contrast to explicit/discrete meshing-type schemes, their continuous representation is particularly advantageous for subsequent surface registration and motion tracking by eliminating the need for maintaining explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiment, the authors generated a series of surfaces each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched area had different degree of noise and missing levels, since VisionRT has difficulties in detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated reconstructed surfaces by comparing against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiment, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground-truth, the authors qualitatively validated reconstructed surfaces by comparing the local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
International Nuclear Information System (INIS)
Bosevski, T.
1986-01-01
An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integrating the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and cell reaction rate integration were condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for calculating the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)
Methods of reconstruction of multi-particle events in the new coordinate-tracking setup
Vorobyev, V. S.; Shutenko, V. V.; Zadeba, E. A.
2018-01-01
At the Unique Scientific Facility NEVOD (MEPhI), a large coordinate-tracking detector based on drift chambers for investigations of muon bundles generated by ultrahigh energy primary cosmic rays is being developed. One of the main characteristics of the bundle is muon multiplicity. Three methods of reconstruction of multiple events were investigated: the sequential search method, method of finding the straight line and method of histograms. The last method determines the number of tracks with the same zenith angle in the event. It is most suitable for the determination of muon multiplicity: because of a large distance to the point of generation of muons, their trajectories are quasiparallel. The paper presents results of application of three reconstruction methods to data from the experiment, and also first results of the detector operation.
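The histogram method for muon multiplicity — count the tracks sharing the same zenith angle, exploiting the quasi-parallelism of bundle tracks — can be sketched on simulated angles. The bundle angle, its spread, the number of tracks, and the bin width below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated zenith angles (degrees): a bundle of quasi-parallel muon
# tracks near 33 deg plus a few unrelated tracks.
bundle = rng.normal(33.0, 0.5, size=12)
noise = rng.uniform(0.0, 60.0, size=5)
angles = np.concatenate([bundle, noise])

# Histogram method: the multiplicity estimate is the count in the most
# populated zenith-angle bin.
counts, edges = np.histogram(angles, bins=np.arange(0.0, 62.0, 2.0))
multiplicity = int(counts.max())
peak_angle = float(edges[counts.argmax()])
```

Tracks outside the peak bin (reconstruction noise, unrelated muons) do not inflate the multiplicity estimate, which is the advantage over simply counting all reconstructed tracks.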
Phase derivative method for reconstruction of slightly off-axis digital holograms.
Guo, Cheng-Shan; Wang, Ben-Yi; Sha, Bei; Lu, Yu-Jie; Xu, Ming-Yuan
2014-12-15
A phase derivative (PD) method is proposed for the reconstruction of off-axis holograms. In this method, a phase distribution of the tested object wave constrained to the range 0 to π radians is first worked out by a simple analytical formula; it is then corrected to its proper range from -π to π according to the sign characteristics of its first-order derivative. A theoretical analysis indicates that this PD method is particularly suitable for reconstruction of slightly off-axis holograms, because in principle it only requires the spatial frequency of the reference beam to be larger than that of the tested object wave. In addition, because the PD method is a purely local method that needs no integral operation or phase-shifting algorithm in the process of phase retrieval, it can reduce the computational load and memory requirements of the image processing system. Some experimental results are given to demonstrate the feasibility of the method.
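The core sign-recovery step can be sketched in 1-D: a phase folded into [0, π] (here by arccos(cos(·))) is re-signed using the sign of its derivative, which is valid when the off-axis carrier keeps the total phase monotonic. This uses a synthetic carrier only and is not the authors' exact formulas.

```python
import numpy as np

# Folded-phase demo: where the folded phase increases, theta = +psi;
# where it decreases, theta = -psi.
x = np.linspace(0.0, 1.0, 2000, endpoint=False)
theta = np.angle(np.exp(1j * 2.0 * np.pi * 3.0 * x))  # wrapped carrier phase
psi = np.arccos(np.cos(theta))                        # folded into [0, pi]

sign = np.where(np.gradient(psi) >= 0.0, 1.0, -1.0)
theta_rec = sign * psi                                # re-signed phase

# Fraction of samples where the full-range phase is recovered exactly
# (isolated fold points may disagree).
frac_ok = float(np.mean(np.abs(theta_rec - theta) < 1e-6))
```

The operation is purely local (a pointwise fold plus a nearest-neighbour derivative), which is the source of the method's low computational cost compared with Fourier-filtering reconstruction.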
Zheng, N.
2017-12-01
degree of uncertainty with quantitative analysis. The study can provide theoretical basis and technical support for accurately measuring sensible heat fluxes of forest ecosystem with scintillometer method, and can also provide work foundation for further study on role of forest ecosystem in energy balance and climate change.
Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures
International Nuclear Information System (INIS)
Mejia-Barbosa, Y.
2000-03-01
We show a method for comparing and reconstructing two similar amplitude-only structures composed of the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm involving the auto-correlation and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)
McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.
2014-01-01
ABSTRACT A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, make isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be descended from a single “founder” virus. A priority target for the vaccine research, these transmitted/founder viruses are
International Nuclear Information System (INIS)
Menezes, Welton Alves; Alves Filho, Hermes; Barros, Ricardo C.
2009-01-01
In this paper the X,Y-geometry SD-SGF-CN spectral nodal method, i.e., the spectral diamond-spectral Green's function-constant nodal method, is used to determine the one-speed node-edge average angular fluxes in heterogeneous domains. This hybrid spectral nodal method uses the spectral diamond (SD) auxiliary equation for the multiplying regions and the spectral Green's function (SGF) auxiliary equation for the non-multiplying regions of the domain. Moreover, we consider constant approximations for the transverse-leakage terms in the transverse-integrated S_N nodal equations. We solve the SD-SGF-CN equations using the one-node block inversion (NBI) iterative scheme, which uses the most recent estimates available for the node-entering fluxes to evaluate the node-exiting fluxes in the directions that constitute the incoming fluxes for the adjacent node. Using these results, we offer an algorithm for analytical reconstruction of the coarse-mesh nodal solution within each spatial node, since localized numerical solutions are not generated by conventional accurate nodal methods. Numerical results are presented to illustrate the accuracy of the present algorithm. (author)
Anatomic and histological characteristics of vagina reconstructed by McIndoe method
Directory of Open Access Journals (Sweden)
Kozarski Jefta
2009-01-01
Full Text Available Background/Aim. Congenital absence of the vagina has been known since ancient Greek times. According to the literature, its incidence is 1/4,000 to 1/20,000. Treatment of this anomaly includes non-operative and operative procedures. The McIndoe procedure uses a split-thickness skin graft by Thiersch. The aim of this study was to establish the anatomic and histological characteristics of vaginas reconstructed by the McIndoe method in Mayer-Küster-Rokitansky-Hauser (MKRH) syndrome and compare them with normal vaginas. Methods. The study included 21 patients aged 18 years and over with the congenital anomaly known as aplasia vaginae within the Mayer-Küster-Rokitansky-Hauser syndrome. The patients were operated on by a plastic surgeon using the McIndoe method. The study was a retrospective review of data from the disease histories, objective and gynecological examinations, and cytological analysis of native preparations of vaginal smears (Papanicolaou). Comparatively, 21 females aged 18 years and over with normal vaginas were also studied. All the subjects were divided into groups R (reconstructed) and C (control) and into subgroups according to age: up to 30 years (1R, 1C), from 30 to 50 (2R, 2C), and over 50 (3R, 3C). Statistical data processing was performed using Student's t-test and the Mann-Whitney U-test. A value of p < 0.05 was considered statistically significant. Results. The results show that there are differences in the depth and width of a reconstructed vagina, but the obtained values are still within the normal range. Cytological differences between a reconstructed and a normal vagina were found. Conclusion. A reconstructed vagina is smaller than a normal one regarding depth and width, but within the range of normal values. A split-thickness skin graft used in the reconstruction keeps its own cytological, i.e., histological and thus biological, characteristics.
Knies, David; Wittmüß, Philipp; Appel, Sebastian; Sawodny, Oliver; Ederer, Michael; Feuer, Ronny
2015-10-28
The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subjected to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically-changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but the result of optimal resource management adapted to the diurnally-changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in proximity to literature data.
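The storage-aware optimization behind diuFBA can be illustrated with a toy linear program. The two-period day/night model, the flux names, and the bound of 10 on daytime photosynthesis below are invented for illustration (the published model has 410 reactions); only the idea — storage fluxes couple the periods while the problem stays a classical LP — is taken from the abstract.

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-period (day/night) flux balance with a storage metabolite,
# in the spirit of diurnal FBA: photosynthetic uptake is only possible
# during the day, and storage carries carbon into the night.
# Variables: [v_photo_day, v_bio_day, v_bio_night, v_storage]
c = np.array([0.0, -1.0, -1.0, 0.0])  # maximize total biomass flux

# Day balance: photosynthesis = biomass + storage fill
# Night balance: storage drain = night biomass
A_eq = np.array([
    [1.0, -1.0,  0.0, -1.0],
    [0.0,  0.0, -1.0,  1.0],
])
b_eq = np.zeros(2)
bounds = [(0, 10.0), (0, None), (0, None), (0, None)]  # light limits day uptake

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
total_biomass = -res.fun
print(total_biomass)  # all fixed carbon ends up as biomass: 10.0
```

Note that the split between day and night biomass production is left to the optimizer, mirroring the abstract's point that storage use is a result of optimal resource management rather than an external input.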
Energy Technology Data Exchange (ETDEWEB)
Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)
2010-08-15
Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models allows validation of the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows the inclusion of past changes in carbon dioxide values. As a new generation of dynamic vegetation models has become available, we have developed an inversion method for one such model, LPJ-GUESS. When this novel method is used with high-resolution sediment records, it allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation, and pollen samples. The inversion is realised using a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, due to the dimension considered (one precipitation variable per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
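The particle filter at the heart of the inversion can be sketched in a few lines. The 1-D random-walk state model and the noise levels below are invented stand-ins, not the climate-vegetation-pollen model of the study; only the propagate/weight/resample cycle is the point.

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal bootstrap particle filter for a 1D random-walk state observed
# in Gaussian noise -- the sequential inference pattern used to link
# climate, simulated vegetation and pollen (not the LPJ-GUESS setup).
T, N = 50, 2000
sig_x, sig_y = 0.3, 0.5

# simulate a "true" state trajectory and noisy observations
x_true = np.cumsum(rng.normal(0, sig_x, T))
y_obs = x_true + rng.normal(0, sig_y, T)

particles = rng.normal(0, 1, N)
estimates = []
for t in range(T):
    particles = particles + rng.normal(0, sig_x, N)      # propagate
    w = np.exp(-0.5 * ((y_obs[t] - particles) / sig_y) ** 2)
    w /= w.sum()                                         # weight by likelihood
    idx = rng.choice(N, N, p=w)                          # resample
    particles = particles[idx]
    estimates.append(particles.mean())

rmse = np.sqrt(np.mean((np.array(estimates) - x_true) ** 2))
```

In the paper's setting the "propagate" step would run the vegetation model forward and the "weight" step would compare simulated vegetation with the pollen sample.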
International Nuclear Information System (INIS)
Milechina, L.; Cederwall, B.
2003-01-01
Gamma-ray tracking, a new detection technique for nuclear spectroscopy, requires efficient algorithms for reconstructing the interaction paths of multiple γ rays in a detector volume. In the present work, we discuss the effect of the atomic electron momentum distribution in Ge, as well as the employment of different types of figures-of-merit, within the context of the so-called backtracking method.
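One common figure-of-merit for such tracking compares the Compton scattering angle inferred from deposited energies with the angle given by the interaction-point geometry; a consistent path scores near zero. The sketch below builds a synthetic, exactly consistent event (illustrative values, not detector data), so the figure-of-merit vanishes up to floating-point error.

```python
import numpy as np

ME_C2 = 510.999  # electron rest energy, keV

def compton_cos_theta(e_in, e_out):
    """cos(theta) from the Compton formula for incoming/outgoing photon energies."""
    return 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in)

def geometric_cos_theta(p0, p1, p2):
    """cos(theta) between the incoming (p0->p1) and outgoing (p1->p2) legs."""
    u = (p1 - p0) / np.linalg.norm(p1 - p0)
    v = (p2 - p1) / np.linalg.norm(p2 - p1)
    return float(u @ v)

# Build a perfectly consistent event: a 1 MeV photon scattering by 60 degrees.
e0 = 1000.0
theta = np.deg2rad(60.0)
e1 = e0 / (1.0 + (e0 / ME_C2) * (1.0 - np.cos(theta)))  # scattered photon energy

p0 = np.array([0.0, 0.0, -5.0])
p1 = np.array([0.0, 0.0, 0.0])
p2 = p1 + np.array([np.sin(theta), 0.0, np.cos(theta)])  # 60 deg off-axis

fom = (compton_cos_theta(e0, e1) - geometric_cos_theta(p0, p1, p2)) ** 2
```

Real backtracking evaluates such terms over candidate interaction orderings (and, per the abstract, must account for Doppler broadening from the electron momentum distribution, which this sketch ignores).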
Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR
Directory of Open Access Journals (Sweden)
Hou Liying
2016-10-01
Circular Synthetic Aperture Radar (CSAR) can acquire targets' scattering information in all directions through a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strongly directive scatterer. In this study, we examine three-dimensional circular SAR interferometry theory for a typical target and validate the theory in a darkroom experiment. We present a 3D reconstruction of an actual tank metal model from interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
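The ill-posedness mentioned above can be demonstrated on a linear toy problem: a strongly smoothing forward operator (a Gaussian-kernel stand-in for conduction, not CHAR's solver) maps a boundary flux history to interior temperatures, and naive inversion amplifies measurement noise while Tikhonov regularization, one standard remedy, does not. All sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# The forward map from surface heat flux to interior temperature is
# strongly smoothing, so naive inversion amplifies measurement noise.
n = 60
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)           # stand-in conduction operator

q_true = np.sin(2 * np.pi * t)              # "true" boundary flux history
y = A @ q_true + rng.normal(0, 1e-3, n)     # noisy interior measurements

q_naive = np.linalg.solve(A, y)             # unregularized: noise blows up
lam = 1e-4                                  # Tikhonov parameter (hand-tuned)
q_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(q_naive - q_true)
err_reg = np.linalg.norm(q_reg - q_true)
```

The hybrid methods the paper mentions differ in how they stabilize this inversion (regularization, future-time steps, filtering), but the underlying difficulty is the one shown here.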
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
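A one-level 2-D Haar transform shows the mechanics of wavelet-encoding a height field: the LL subband is a coarse level of detail, and the detail subbands restore the full-resolution mesh exactly. This is a generic sketch of the idea, not the patented encoder or its triangle-strip renderer.

```python
import numpy as np

def haar2d(h):
    """Split a height field (even dimensions) into LL, LH, HL, HH subbands."""
    a = (h[0::2, :] + h[1::2, :]) / 2.0   # row-pair average
    d = (h[0::2, :] - h[1::2, :]) / 2.0   # row-pair detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Invert haar2d exactly; the LL band alone gives a coarse level of detail."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = ll + lh, ll - lh
    d[:, 0::2], d[:, 1::2] = hl + hh, hl - hh
    h = np.empty((a.shape[0] * 2, a.shape[1]))
    h[0::2, :], h[1::2, :] = a + d, a - d
    return h

rng = np.random.default_rng(2)
height = rng.random((8, 8))           # toy terrain tile
bands = haar2d(height)
restored = ihaar2d(*bands)
```

Recursing on the LL band yields the multi-resolution pyramid from which a viewer-dependent level of detail can be selected per terrain block.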
Tomography reconstruction methods for damage diagnosis of wood structure in construction field
Qiu, Qiwen; Lau, Denvid
2018-03-01
The structural integrity of wood building elements plays a critical role in public safety, which requires effective methods for the diagnosis of internal damage inside the wood body. Conventionally, non-destructive testing (NDT) methods such as X-ray computed tomography, thermography, radar imaging reconstruction, ultrasonic tomography, nuclear magnetic imaging techniques, and sonic tomography have been used to obtain information about the internal structure of wood. In this paper, the applications, advantages, and disadvantages of these traditional tomography methods are reviewed. Additionally, the present article gives an overview of a recently developed tomography approach that relies on the use of mechanical and electromagnetic waves for assessing the structural integrity of wood buildings. This developed tomography reconstruction method is believed to provide a more accurate, reliable, and comprehensive assessment of wood structural integrity.
Directory of Open Access Journals (Sweden)
K.-M. Erkkilä
2018-01-01
Freshwaters make a notable contribution to the global carbon budget by emitting both carbon dioxide (CO2) and methane (CH4) to the atmosphere. Global estimates of freshwater emissions traditionally use a wind-speed-based gas transfer velocity, kCC (introduced by Cole and Caraco, 1998), for calculating diffusive flux with the boundary layer method (BLM). We compared CH4 and CO2 fluxes from the BLM with kCC and two other gas transfer velocities (kTE and kHE), which include the effects of water-side cooling on gas transfer besides shear-induced turbulence, with simultaneous eddy covariance (EC) and floating chamber (FC) fluxes during a 16-day measurement campaign in September 2014 at Lake Kuivajärvi in Finland. The measurements covered both lake stratification and water-column mixing periods. Results show that BLM fluxes were mainly lower than EC fluxes, with the more recent model kTE giving the best fit with EC fluxes, whereas FC measurements resulted in higher fluxes than simultaneous EC measurements. We highly recommend using up-to-date gas transfer models, instead of kCC, for better flux estimates. BLM CO2 flux estimates showed clear differences between daytime and night-time fluxes with all gas transfer models during both stratified and mixing periods, whereas EC measurements did not show diurnal behaviour in the CO2 flux. The CH4 flux had higher values in daytime than at night during the lake mixing period according to EC measurements, with the highest fluxes detected just before sunset. In addition, we found clear day-night differences in the concentration difference between the air and surface water for both CH4 and CO2. This might lead to biased flux estimates if only daytime values are used in BLM upscaling and flux measurements in general. FC measurements did not detect spatial variation in either the CH4 or the CO2 flux over Lake Kuivajärvi. EC measurements, on the other hand, did not show any spatial variation in CH4 fluxes but did show a clear difference
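The BLM calculation itself is a one-liner once a gas transfer velocity is chosen. The sketch below uses the Cole and Caraco (1998) k600 wind parameterization together with made-up concentration values; the units and the degree of supersaturation are illustrative, not Lake Kuivajärvi data.

```python
# Boundary layer method (BLM) sketch: diffusive gas flux across the
# air-water interface as F = k * (C_w - C_eq).
def k_cc_cm_per_h(u10):
    """Cole & Caraco (1998) k600 in cm/h from 10 m wind speed (m/s)."""
    return 2.07 + 0.215 * u10 ** 1.7

def blm_flux(c_water, c_equilibrium, k_cm_per_h):
    """Diffusive flux (mmol m^-2 d^-1) for concentrations in mmol m^-3."""
    k_m_per_d = k_cm_per_h * 24.0 / 100.0
    return k_m_per_d * (c_water - c_equilibrium)

# supersaturated CO2 in surface water -> efflux to the atmosphere
flux = blm_flux(c_water=30.0, c_equilibrium=15.0, k_cm_per_h=k_cc_cm_per_h(4.0))
```

The kTE and kHE models favoured by the study add buoyancy-flux (water-side cooling) terms to k; structurally the flux computation is unchanged.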
Numerical studies of the flux-to-current ratio method in the KIPT neutron source facility
International Nuclear Information System (INIS)
Cao, Y.; Gohar, Y.; Zhong, Z.
2013-01-01
The reactivity of a subcritical assembly has to be monitored continuously in order to assure its safe operation. In this paper, the flux-to-current ratio method is studied as an approach to provide on-line reactivity measurement of the subcritical system. Monte Carlo numerical simulations have been performed using the KIPT neutron source facility model. It is found that the reactivity obtained from the flux-to-current ratio method is sensitive to the detector position in the subcritical assembly. However, if multiple detectors are located about 12 cm above the graphite reflector and 54 cm out radially, the technique is shown to be very accurate in determining the k_eff of this facility in the range of 0.75 to 0.975. (authors)
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media such as biological tissues. Light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to efficiently compute the gradient of the objective function with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in simultaneously retrieving the absorption (μa) and scattering (μs) coefficients for both weakly and highly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty of this work is the reconstruction of g, which might open the possibility of imaging this parameter in tissues as an additional contrast agent in optical tomography.
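The payoff of the adjoint method — one extra linear solve yields the whole gradient, instead of one forward solve per parameter — can be checked on a generic linear state problem. This is not the radiative transfer equation, just the same adjoint structure, with all matrices synthetic; the gradient is verified against finite differences.

```python
import numpy as np

rng = np.random.default_rng(3)

# Linear state problem A u = b(p) with b(p) = B p, data misfit
# J(p) = 0.5 * ||u - d||^2. The adjoint solve A^T lam = (u - d)
# gives the full gradient dJ/dp = B^T lam in one solve.
n, m = 20, 5
A = np.eye(n) + 0.1 * rng.random((n, n))
B = rng.random((n, m))           # linearized source-term sensitivity
d = rng.random(n)                # "measured" data
p = rng.random(m)

u = np.linalg.solve(A, B @ p)    # forward solve
r = u - d                        # residual
lam = np.linalg.solve(A.T, r)    # single adjoint solve
grad_adjoint = B.T @ lam

# check against one-sided finite differences (m forward solves)
eps = 1e-6
grad_fd = np.empty(m)
for i in range(m):
    dp = p.copy()
    dp[i] += eps
    ui = np.linalg.solve(A, B @ dp)
    grad_fd[i] = (0.5 * (ui - d) @ (ui - d) - 0.5 * r @ r) / eps
```

In the paper's nonlinear setting the forward solve is the RTE and the adjoint equation involves the transposed transport operator, but the cost argument is identical.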
Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays
Directory of Open Access Journals (Sweden)
Catherine Roach
2016-11-01
Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer can acquire a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in color and spectral measurements, true-color image synthesis, military reconnaissance, and so on. In order to realize fast reconstruction of Fourier transform imaging spectrometer data, we designed an optimized reconstruction algorithm based on OpenMP parallel computing technology, which was further applied to the optimization process for the HyperSpectral Imager of the `HJ-1' Chinese satellite. The results show that the method based on multi-core parallel computing technology can competently manage the multi-core CPU hardware resources and significantly enhance the efficiency of spectrum reconstruction processing. If the technology is applied to a workstation with more cores for parallel computing, it should be possible to complete real-time processing of Fourier transform imaging spectrometer data with a single computer.
Han, Miaomiao; Guo, Zhirong; Liu, Haifeng; Li, Qinghua
2018-05-01
Tomographic Gamma Scanning (TGS) is a method used for the nondestructive assay of radioactive wastes. In TGS, the actual irregular edge voxels are treated as regular cubic voxels in the traditional method. In this study, in order to improve the performance of TGS, a novel edge treatment method is proposed that considers the actual shapes of these voxels. The two edge-voxel treatment methods were compared by computing the pixel-level relative errors and normalized mean square errors (NMSEs) between the reconstructed transmission images and the ideal images. Both methods were coupled with two different iterative algorithms: the Algebraic Reconstruction Technique (ART) with a non-negativity constraint and Maximum Likelihood Expectation Maximization (MLEM). The results demonstrated that the traditional edge-voxel treatment can introduce significant error, and that the real irregular edge-voxel treatment method can improve the performance of TGS by obtaining better transmission reconstruction images. With the real irregular edge-voxel treatment method, the MLEM and ART algorithms are comparable when assaying homogeneous matrices, but MLEM is superior to ART when assaying heterogeneous matrices.
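The MLEM iteration used above is compact enough to sketch in full. The tiny synthetic system below (random positive matrix, noise-free projections) is only an illustration of the multiplicative update; real TGS uses measured transmission data and a system matrix built from the voxel geometry, which is exactly where the edge-voxel treatment enters.

```python
import numpy as np

rng = np.random.default_rng(4)

# Minimal MLEM for a tiny system y = A x with positive A and x.
m, n = 12, 6
A = rng.random((m, n)) + 0.1           # system matrix, strictly positive
x_true = rng.random(n) + 0.5
y = A @ x_true                          # noise-free projections

x = np.ones(n)                          # positive initial image
sens = A.T @ np.ones(m)                 # sensitivity image (column sums)
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sens   # multiplicative MLEM update

residual = np.linalg.norm(A @ x - y) / np.linalg.norm(y)
```

The multiplicative form keeps the image non-negative automatically, which is why MLEM needs no explicit constraint, unlike the ART variant mentioned in the abstract.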
THE APPLICATION OF THE QRQC METHOD TO SOLVE PROBLEMS AND TO IMPROVE THE PRODUCTION FLUX (2)
Directory of Open Access Journals (Sweden)
Ancuta BALTEANU
2016-05-01
The proposed subject was developed in two parts. The first paper presented the initial situation within a production flow affected by the appearance of defects and nonconformities in obtaining the final products. In this second paper we show the use of the QRQC method in a situation that requires the elimination of a technological problem that appeared in the production flux, and we highlight its positive consequences.
Critical heat flux detection in rods simulating fuel elements by using dilation method
International Nuclear Information System (INIS)
Mesquita, A.Z.
1993-01-01
In out-of-reactor heat transfer experiments, fuel elements are often simulated by electrically heated rods. In order to prevent the heated rod from being damaged by burnout when the critical heat flux occurs, a safety system is provided which monitors the axial thermal expansion of the rod. In case of a sudden temperature increase, the corresponding elongation causes a fast interruption of the electrical power supply. The experiments presented here show that this method is more effective than one using thermocouples. (author)
Wheeler, Mary
2013-11-16
We study the numerical approximation on irregular domains with general grids of the system of poroelasticity, which describes fluid flow in deformable porous media. The flow equation is discretized by a multipoint flux mixed finite element method and the displacements are approximated by a continuous Galerkin finite element method. First-order convergence in space and time is established in appropriate norms for the pressure, velocity, and displacement. Numerical results are presented that illustrate the behavior of the method.
Direct fourier methods in 3D-reconstruction from cone-beam data
International Nuclear Information System (INIS)
Axelsson, C.
1994-01-01
The problem of 3D reconstruction is encountered in both medical and industrial applications of X-ray tomography. A method able to utilize a complete set of projections complying with Tuy's condition was proposed by Grangeat. His method is mathematically exact and consists of two distinct phases. In phase 1, cone-beam projection data are used to produce the derivative of the Radon transform. In phase 2, after interpolation, the Radon transform data are used to reconstruct the three-dimensional object function. To a large extent our method is an extension of the Grangeat method. Our aim is to reduce the computational complexity, i.e. to produce a faster method. The most taxing procedure during phase 1 is the computation of line integrals in the detector plane. By applying the direct Fourier method in reverse for this computation, we reduce the complexity of phase 1 from O(N^4) to O(N^3 log N). Phase 2 can be performed either as a straight 3D reconstruction or as a sequence of two 2D reconstructions in vertical and horizontal planes, respectively. Direct Fourier methods can be applied for the 2D and the 3D reconstruction, which reduces the complexity of phase 2 from O(N^4) to O(N^3 log N) as well. In both cases, linogram techniques are applied. For 3D reconstruction the inversion formula contains the second-derivative filter instead of the well-known ramp filter employed in the 2D case. The derivative filter is better behaved than the 2D ramp filter. This implies that less zero-padding is necessary, which brings about a further reduction of the computational effort. The method has been verified by experiments on simulated data. The image quality is satisfactory and independent of cone-beam angles. For a 512^3 volume we estimate that our method is ten times faster than Grangeat's method.
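The complexity gains above rest on the Fourier (projection-)slice theorem: the 1-D FFT of a parallel projection equals a central slice of the object's 2-D FFT, so reconstruction can be moved into FFT-land. The 2-D demonstration below uses a random toy object; the paper's method applies the same idea via linograms and, in 3-D, the Radon transform.

```python
import numpy as np

rng = np.random.default_rng(5)

# Fourier-slice theorem: FFT of a parallel projection == central slice
# of the 2-D spectrum.
f = rng.random((64, 64))               # toy 2-D object

projection = f.sum(axis=0)             # parallel projection along rows
slice_from_projection = np.fft.fft(projection)
central_slice = np.fft.fft2(f)[0, :]   # ky = 0 row of the 2-D spectrum

max_dev = np.max(np.abs(slice_from_projection - central_slice))
```

Gridding the slices for all projection angles onto a Cartesian spectrum (the interpolation step the linogram technique is designed to tame) and inverse-transforming yields the direct Fourier reconstruction.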
Directory of Open Access Journals (Sweden)
ROXANA VĂIDEAN
2015-10-01
Debris Flow Activity Reconstruction Using Dendrogeomorphological Methods: a Case Study (Piule-Iorgovanu Mountains). Debris flows are among the most destructive mass movements occurring in mountainous regions around the world. As they usually occur on the steep slopes of mountain streams where human settlements are scarce, they are rarely monitored. But when they do interact with built-up areas or transportation corridors, they cause enormous damage and even casualties. The rise of human pressure in hazardous regions has led to an increase in the severity of the negative consequences of debris flows. Consequently, a complete database for hazard assessment of areas showing evidence of debris flow activity is needed. Because of the lack of archival records, knowledge about their frequency remains poor. Among the most precise methods used in the reconstruction of past debris flow activity are dendrogeomorphological methods. Using growth anomalies of the affected trees, a valuable event chronology can be obtained. The purpose of this study is therefore to reconstruct debris flow activity in a small catchment located on the northern slope of the Piule-Iorgovanu Mountains. The trees growing near the transport channel and on the debris fan exhibit different types of disturbances. A total of 98 increment cores, 19 cross-sections, and 1 semi-transversal cross-section were used. Based on the growth anomalies identified in the samples, 19 events spanning a period of almost a century were reconstructed.
Influence of image reconstruction methods on statistical parametric mapping of brain PET images
International Nuclear Information System (INIS)
Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan
2007-01-01
Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm used for PET images could influence SPM analysis of the brain. Methods: PET imaging of the whole brain was performed in six normal volunteers. Each volunteer had two scans, with true and false acupuncture. The PET scans were reconstructed using ordered-subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized, and smoothed using the SPM program. The difference between true and false acupuncture scans was tested using a matched-pair t-test at every voxel. Results: (1) SPM corrected multiple comparison (P corrected uncorrected <0.001): SPMs derived from images with different reconstruction methods were different. The largest difference, in the number and position of activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: The method of PET image reconstruction could influence the results of SPM with uncorrected multiple comparisons. Attention should be paid when conclusions are drawn using SPM with uncorrected multiple comparisons. (authors)
A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-03-01
Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetric and remote sensing communities. Monitoring power engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of the self-supporting pylons widely used in high-voltage power-line systems from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented as polyhedrons based on stochastic geometry. Firstly, laser points belonging to pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) a framework for automatic pylon reconstruction; and (2) an efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments on a dataset of complex structure validated the proposed method and produced convincing results.
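The energy-minimization engine described above can be reduced to a bare-bones simulated annealing loop with Metropolis acceptance. Here the "configuration" is just a 2-D parameter vector and the energy a synthetic data-fit term; the paper samples full polyhedral pylon configurations with an MCMC sampler, but the accept/cool structure is the same.

```python
import numpy as np

rng = np.random.default_rng(6)

def energy(x):
    """Synthetic data-fit energy with a known minimum at (1, -2)."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

x = np.array([8.0, 8.0])
best, best_e = x.copy(), energy(x)
temperature = 1.0
for _ in range(3000):
    cand = x + rng.normal(0, 0.3, 2)        # local random move
    de = energy(cand) - energy(x)
    if de < 0 or rng.random() < np.exp(-de / temperature):
        x = cand                             # Metropolis acceptance
        if energy(x) < best_e:
            best, best_e = x.copy(), energy(x)
    temperature *= 0.998                     # geometric cooling
```

Accepting occasional uphill moves at high temperature is what lets the sampler escape local minima of the configuration energy before the cooling schedule freezes it near the global optimum.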
High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method
International Nuclear Information System (INIS)
Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.
1984-01-01
Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method has already been changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the filtered back projection (DFBP) method has been employed as a reconstruction method to directly process fan-beam projection data. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the FBP method, has been proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter uses the two-dimensional Fourier transform. With this method, although high speed is expected, the reconstructed images might be degraded due to the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, it is shown by numerical and visual evaluation based on simulated and actual data that the employment of spline interpolation allows the acquisition of high-quality images with fewer errors. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
International Nuclear Information System (INIS)
Guo, Siyang; Lin, Jiarui; Yang, Linghui; Ren, Yongjie; Guo, Yin
2017-01-01
The workshop Measurement Position System (wMPS) is a distributed measurement system suitable for large-scale metrology. However, there are some inevitable measurement problems in the shipbuilding industry, such as restriction by obstacles and a limited measurement range. To deal with these factors, this paper presents a method of reconstructing the spatial measurement network using a mobile transmitter. A high-precision coordinate control network with more than six target points is established. The mobile measuring transmitter can be added into the measurement network using this coordinate control network with the spatial resection method. This method reconstructs the measurement network and broadens the measurement scope efficiently. To verify the method, two comparison experiments were designed with a laser tracker as the reference. The results demonstrate that the accuracy of point-to-point length measurement is better than 0.4 mm and the accuracy of coordinate measurement is better than 0.6 mm. (paper)
A Borehole-Dilution Method for Quantifying Vertical Darcy Fluxes in the Hyporheic Zone
Augustine, S. D.; Annable, M. D.; Cho, J.
2017-12-01
The borehole dilution method has consistently and successfully been used for estimating local water fluxes; however, this method can be relatively labor-intensive and expensive. This research aims to develop a low-cost borehole dilution method for quantifying vertical water fluxes in the hyporheic zone at the surface water-groundwater interface. This would allow the deployment of multiple units within a targeted surface water body and thus produce high-resolution, spatially distributed data on infiltration rates over a short period of time with minimal set-up requirements. The device consists of a 2-inch inner-diameter PVC pipe containing short screened sections in its upper and lower segments. The working unit is driven into the sediment and acts as a continuous-flow reactor, creating a pathway between the subsurface pore water and the overlying surface water, where the presence of a hydraulic gradient facilitates vertical movement. We developed a simple electrode and tracer-injection system housed within the unit to inject and measure salt tracer concentrations at the desired intervals while monitoring and storing those measurements using open-source Arduino technology. Preliminary lab- and field-scale trials provided data that were fit to both zero- and first-order reaction rate functions for analysis. The field test was conducted over approximately one day within a wet retention basin. The initial results estimated a vertical Darcy flux of 113.5 cm/d. Additional testing over a range of expected Darcy fluxes will be presented, along with an evaluation considering enhanced water flow due to the high hydraulic conductivity of the device.
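The first-order analysis mentioned above amounts to fitting the log-slope of the tracer decay. In a well-mixed screened volume V flushed by a vertical Darcy flux q through cross-section A, the tracer obeys C(t) = C0 exp(-qAt/V). The geometry and noise values below are invented for illustration, not the field device's specifications.

```python
import numpy as np

rng = np.random.default_rng(7)

# Borehole-dilution sketch: recover the Darcy flux from the tracer
# decay rate, q = -slope * V / A.
area = 20.0       # cm^2, effective flow cross-section (illustrative)
volume = 500.0    # cm^3, mixed water volume in the screen (illustrative)
q_true = 100.0    # cm/d, vertical Darcy flux to recover

t = np.linspace(0, 0.5, 25)                       # days
c = np.exp(-q_true * area * t / volume)           # ideal relative concentration
c_meas = c * np.exp(rng.normal(0, 0.01, t.size))  # ~1% measurement noise

slope, _ = np.polyfit(t, np.log(c_meas), 1)       # log-linear fit
q_est = -slope * volume / area
```

On the real device, `c_meas` would come from the Arduino-logged electrode conductivity readings after each salt injection.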
System and method for image reconstruction, analysis, and/or de-noising
Laleg-Kirati, Taous-Meriem
2015-11-12
A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.
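The decomposition described in this record is the idea behind semi-classical signal analysis: a minimal sketch under an assumed grid, test signal, and semi-classical parameter h follows. The reconstruction formula y_h = 4h Σ κ_n ψ_n², with -κ_n² the negative eigenvalues, is the standard semi-classical form, not code from the patent.

```python
import numpy as np

# Sketch: interpret a non-negative signal y(x) as the potential of the
# Schrodinger operator H = -h^2 d^2/dx^2 - y(x), keep the negative
# eigenvalues -kappa_n^2, and rebuild the signal from the squared,
# L2-normalized eigenfunctions (grid and h are assumptions).

x = np.linspace(-10.0, 10.0, 600)
dx = x[1] - x[0]
y = 1.0 / np.cosh(x) ** 2                 # assumed test signal, y >= 0

h = 0.1                                   # semi-classical parameter
lap = (np.diag(np.full(599, 1.0), -1) - 2.0 * np.eye(600)
       + np.diag(np.full(599, 1.0), 1)) / dx ** 2
H = -h ** 2 * lap - np.diag(y)

vals, vecs = np.linalg.eigh(H)
neg = vals < 0
kappa = np.sqrt(-vals[neg])
psi = vecs[:, neg] / np.sqrt(dx)          # L2-normalized eigenfunctions

y_h = 4.0 * h * (psi ** 2) @ kappa        # reconstruction from discrete spectrum
err = np.linalg.norm(y_h - y) / np.linalg.norm(y)
```

Decreasing h (the "design parameter" of the operator) adds eigenvalues and drives the reconstruction error down, which is the tuning knob the abstract alludes to.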
An accelerated test method of luminous flux depreciation for LED luminaires and lamps
International Nuclear Information System (INIS)
Qian, C.; Fan, X.J.; Fan, J.J.; Yuan, C.A.; Zhang, G.Q.
2016-01-01
Light Emitting Diode (LED) luminaires and lamps are energy-saving and environmentally friendly alternatives to traditional lighting products. However, the current luminous flux depreciation test at the luminaire and lamp level requires a minimum of 6000 h of testing, which is even longer than the product development cycle time. This paper develops an accelerated test method for luminous flux depreciation that reduces the test time to within 2000 h at an elevated temperature. The method is based on a lumen maintenance boundary curve, obtained from a collection of LED source lumen depreciation data, known as LM-80 data. The exponential decay model and the Arrhenius acceleration relationship are used to determine the new threshold of lumen maintenance and the acceleration factor. The proposed method has been verified by a number of simulation studies and experimental data for a wide range of LED luminaire and lamp types from both internal and external experiments. The qualification results obtained by the accelerated test method agree well with traditional 6000 h tests. - Highlights: • We develop an accelerated test method for LED luminaires and lamps. • The method is proposed based on a “Boundary Curve” concept. • The parameters of the boundary curve are extracted from LM-80 test reports. • Qualification results from the proposed method agree with ES requirements.
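The two ingredients named in the abstract, an exponential lumen-decay model and an Arrhenius acceleration factor, can be sketched as follows. The activation energy, temperatures, and fitted constants are assumptions for illustration, not values from the paper or from any LM-80 report.

```python
import math

K_B = 8.617e-5  # Boltzmann constant, eV/K

def acceleration_factor(ea_ev, t_use_c, t_acc_c):
    """Arrhenius acceleration factor between use and test temperatures."""
    t_use = t_use_c + 273.15
    t_acc = t_acc_c + 273.15
    return math.exp(ea_ev / K_B * (1.0 / t_use - 1.0 / t_acc))

def time_to_l70(b, alpha):
    """Hours until lumen maintenance B*exp(-alpha*t) drops to 70%."""
    return math.log(b / 0.70) / alpha

# Assumed activation energy and an elevated-temperature test condition
af = acceleration_factor(ea_ev=0.7, t_use_c=55.0, t_acc_c=105.0)
t70_acc = time_to_l70(b=1.0, alpha=2.0e-4)   # hours at test temperature
t70_use = t70_acc * af                        # projected hours in use
```

With these assumed numbers the accelerated test completes in under 2000 h while projecting a much longer use-condition lifetime, which is the shape of the trade the abstract describes.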
International Nuclear Information System (INIS)
Tuna, U.; Johansson, J.; Ruotsalainen, U.
2014-01-01
The aim of the study was (1) to evaluate the reconstruction strategies with dynamic [11C]-raclopride human positron emission tomography (PET) studies acquired from the ECAT high-resolution research tomograph (HRRT) scanner and (2) to justify the selected gap-filling method for analytical reconstruction with simulated phantom data. A new transradial bicubic interpolation method has been implemented to enable faster analytical 3D-reprojection (3DRP) reconstructions for ECAT HRRT PET scanner data. The transradial bicubic interpolation method was compared to the other gap-filling methods visually and quantitatively using the numerical Shepp-Logan phantom. The performance of the analytical 3DRP reconstruction method with this new gap-filling method was evaluated in comparison with the iterative statistical methods: ordinary Poisson ordered subsets expectation maximization (OPOSEM) and resolution-modeled OPOSEM methods. The image reconstruction strategies were evaluated using human data at different count statistics and consequently at different noise levels. In the assessments, 14 [11C]-raclopride dynamic PET studies (test-retest studies of 7 healthy subjects) acquired from the HRRT PET scanner were used. Besides the visual comparisons of the methods, we performed regional quantitative evaluations over the cerebellum, caudate and putamen structures. We compared the regional time-activity curves (TACs), areas under the TACs and binding potential (BP_ND) values. The results showed that the new gap-filling method preserves the linearity of the 3DRP method. Results with the 3DRP after gap-filling method exhibited hardly any dependency on the count statistics (noise levels) in the sinograms, while we observed changes in the quantitative results with the EM-based methods for different noise contamination in the data. With this study, we showed that 3DRP with the transradial bicubic gap-filling method is feasible for the reconstruction of high-resolution PET data with
International Nuclear Information System (INIS)
Shafii, M. Ali; Su'ud, Zaki; Waris, Abdul; Kurniasih, Neny; Ariani, Menik; Yulianti, Yanti
2010-01-01
Nuclear reactor design and analysis of next-generation reactors require comprehensive computations that are best executed on high-performance computing systems. The flat flux (FF) approach is a common approach to solving the integral transport equation with the collision probability (CP) method. In fact, the neutron flux distribution is not flat, even when the neutron cross section is assumed equal in all regions and the neutron source is uniform throughout the nuclear fuel cell. In the non-flat flux (NFF) approach, the distribution of neutrons in each region differs depending on the chosen interpolation model. In this study, linear interpolation using the Finite Element Method (FEM) has been carried out to treat the neutron distribution. The CP method is well suited to solving the neutron transport equation for cylindrical geometry, because the angular integration can be done analytically. The distribution of neutrons in each region can be described by the NFF approach with FEM, and the calculation results are in good agreement with results from the SRAC code. In this study, the effects of the mesh on k_eff and other parameters are investigated.
Restoration of the analytically reconstructed OpenPET images by the method of convex projections
Energy Technology Data Exchange (ETDEWEB)
Tashima, Hideaki; Murayama, Hideo; Yamaya, Taiga [National Institute of Radiological Sciences, Chiba (Japan); Katsunuma, Takayuki; Suga, Mikio [Chiba Univ. (Japan). Graduate School of Engineering; Kinouchi, Shoko [National Institute of Radiological Sciences, Chiba (Japan); Chiba Univ. (Japan). Graduate School of Engineering; Obi, Takashi [Tokyo Institute of Technology (Japan). Interdisciplinary Graduate School of Science and Engineering; Kudo, Hiroyuki [Tsukuba Univ. (Japan). Graduate School of Systems and Information Engineering
2011-07-01
We have proposed the OpenPET geometry, which has gaps between detector rings and a physically open field-of-view. Image reconstruction for OpenPET is classified as an incomplete problem because it does not satisfy Orlov's condition. Even so, simulation and experimental studies have shown that iterative methods such as the maximum likelihood expectation maximization (ML-EM) algorithm successfully reconstruct images in the gap area. However, the imaging process of the iterative methods in OpenPET imaging is not clear. Therefore, the aim of this study is to analyze OpenPET imaging analytically and to estimate the implicit constraints involved in the iterative methods. To apply explicit constraints in OpenPET imaging, we used the method of convex projections to restore images reconstructed analytically, in which low-frequency components are lost. Numerical simulations showed that similar restoration effects are involved in both the ML-EM algorithm and the method of convex projections. Therefore, the iterative methods have the advantageous effect of restoring lost frequency components in OpenPET imaging. (orig.)
Quartet-net: a quartet-based method to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng
2013-05-01
Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events and has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used to construct phylogenetic networks ranging from simple evolutionary histories involving a few reticulate events to complex histories involving a large number of reticulate events. A software package called "Quartet-Net" has been implemented and is available at http://sysbio.cvm.msstate.edu/QuartetNet/.
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Wang, Qi; Wang, Huaxiang; Cui, Ziqiang; Yang, Chengyi
2012-11-01
Electrical impedance tomography (EIT) calculates the internal conductivity distribution within a body using electrical contact measurements. Image reconstruction for EIT is an inverse problem that is both non-linear and ill-posed. The traditional regularization method cannot avoid introducing negative values into the solution, and this negativity produces artifacts in reconstructed images in the presence of noise. A statistical method, namely the expectation maximization (EM) method, is used to solve the inverse problem for EIT in this paper. The mathematical model of EIT is transformed into a non-negatively constrained likelihood minimization problem. The solution is obtained by the gradient projection-reduced Newton (GPRN) iteration method. This paper also discusses strategies for choosing parameters. Simulation and experimental results indicate that reconstructed images of higher quality can be obtained by the EM method, compared with the traditional Tikhonov and conjugate gradient (CG) methods, even with non-negative processing. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.
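The non-negativity-preserving character of EM iterations can be sketched on a toy linearized problem. This is a generic multiplicative EM update, not the paper's gradient projection-reduced Newton solver; the sensitivity matrix and data are synthetic assumptions.

```python
import numpy as np

# Toy linearized inverse problem b = A x with positive A and x.
# The EM-style multiplicative update keeps every iterate non-negative
# by construction, which is the property the paper exploits.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(40, 20))     # assumed sensitivity matrix
x_true = np.abs(rng.normal(1.0, 0.3, size=20))
b = A @ x_true                               # noiseless synthetic data

x = np.ones(20)                              # positive starting image
for _ in range(2000):
    # x stays >= 0: it is multiplied by a ratio of positive quantities
    x *= (A.T @ (b / (A @ x))) / A.sum(axis=0)

rel_residual = np.linalg.norm(A @ x - b) / np.linalg.norm(b)
```

Unlike an unconstrained Tikhonov solution, no iterate can ever go negative here, so no post-hoc non-negative processing is needed.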
Directory of Open Access Journals (Sweden)
Tae Joon Choi
2016-01-01
Full Text Available Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic.
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
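The interior-point machinery described above can be illustrated on a toy one-dimensional problem. The problem itself (minimize (x-2)^2 subject to x <= 1, solution x = 1) is an assumption for illustration; the paper applies the same log-barrier idea to the full TV-regularized PET objective, with PCG solving the subproblems.

```python
# Log-barrier interior-point sketch: replace the inequality constraint
# x <= 1 by the barrier -log(1 - x) and solve a sequence of smooth
# subproblems  min_x  t*(x - 2)^2 - log(1 - x)  with increasing t.

def solve_barrier_subproblem(x, t, iters=50):
    """Newton's method on the barrier subproblem, kept strictly feasible."""
    for _ in range(iters):
        grad = 2.0 * t * (x - 2.0) + 1.0 / (1.0 - x)
        hess = 2.0 * t + 1.0 / (1.0 - x) ** 2
        step = -grad / hess
        while x + step >= 1.0:        # backtrack to stay inside x < 1
            step *= 0.5
        x += step
    return x

x = 0.0                                # strictly feasible starting point
t = 1.0
while t < 1e6:                         # follow the central path
    x = solve_barrier_subproblem(x, t)
    t *= 10.0
```

Each subproblem solution sits inside the feasible region and approaches the constrained optimum x = 1 as t grows, mirroring the sequence of subproblems with increasing positive parameter described in the abstract.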
Bailey, Geoffrey N; Reynolds, Sally C; King, Geoffrey C P
2011-03-01
This paper examines the relationship between complex and tectonically active landscapes and patterns of human evolution. We show how active tectonics can produce dynamic landscapes with geomorphological and topographic features that may be critical to long-term patterns of hominin land use, but which are not typically addressed in landscape reconstructions based on existing geological and paleoenvironmental principles. We describe methods of representing topography at a range of scales using measures of roughness based on digital elevation data, and combine the resulting maps with satellite imagery and ground observations to reconstruct features of the wider landscape as they existed at the time of hominin occupation and activity. We apply these methods to sites in South Africa, where relatively stable topography facilitates reconstruction. We demonstrate the presence of previously unrecognized tectonic effects and their implications for the interpretation of hominin habitats and land use. In parts of the East African Rift, reconstruction is more difficult because of dramatic changes since the time of hominin occupation, while fossils are often found in places where activity has now almost ceased. However, we show that original, dynamic landscape features can be assessed by analogy with parts of the Rift that are currently active and indicate how this approach can complement other sources of information to add new insights and pose new questions for future investigation of hominin land use and habitats. Copyright © 2010 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Zbijewski, Wojciech; Beekman, Freek J
2006-01-01
X-ray CT images obtained with iterative reconstruction (IR) can be hampered by the so-called edge and aliasing artefacts, which appear as interference patterns and severe overshoots in the areas of sharp intensity transitions. Previously, we have demonstrated that these artefacts are caused by discretization errors during the projection simulation step in IR. Although these errors are inherent to IR, they can be adequately suppressed by reconstruction on an image grid that is finer than that typically used for analytical methods such as filtered back-projection. Two other methods that may prevent edge artefacts are: (i) smoothing the projections prior to reconstruction or (ii) using an image representation different from voxels; spherically symmetric Kaiser-Bessel functions are a frequently employed example of such a representation. In this paper, we compare reconstruction on a fine grid with the two above-mentioned alternative strategies for edge artefact reduction. We show that the use of a fine grid results in a more adequate suppression of artefacts than the smoothing of projections or using the Kaiser-Bessel image representation
Development of Coincidence Method for Determination Thermal Neutron Flux on RSG-GAS
International Nuclear Information System (INIS)
Bakhri, Syaiful; Hamzah, Amir
2004-01-01
Research to develop a radiation detection system using the coincidence method has been carried out to determine the thermal neutron flux in the RS1 and RS2 irradiation facilities of RSG-GAS. In this research, a beta-gamma coincidence measurement system was set up, with measurement parameters chosen according to the Au-198 beta-gamma spectrum. Gold foils were irradiated for a period of time and counted, and the radiation activities were analyzed to obtain the neutron flux. The results indicate that the absolute activity measurement system based on the beta-gamma coincidence method functions well and can be applied to activity measurements of gold foils for irradiation facility characterization. The thermal neutron fluxes in RS1 and RS2 are 2.007E+12 n/cm²·s and 2.147E+12 n/cm²·s, respectively. To examine the system performance, the results were compared with activity measurements using a high-resolution HPGe detector; the discrepancies were about 1.26% and 6.70%. (author)
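The absolute-activity relation behind beta-gamma coincidence counting, and the activation equation that converts foil activity into a thermal flux, can be sketched as follows. All numerical values (count rates, foil mass, irradiation time) are illustrative assumptions, not data from the experiment.

```python
import math

N_A = 6.022e23          # Avogadro's number, 1/mol
SIGMA = 98.65e-24       # Au-197 thermal activation cross section, cm^2
HALF_LIFE_D = 2.695     # Au-198 half-life, days
MOLAR_MASS = 196.97     # g/mol for gold

def absolute_activity(n_beta, n_gamma, n_coinc):
    """Coincidence method: A = Nb*Ng/Nc, detector efficiencies cancel."""
    return n_beta * n_gamma / n_coinc

def thermal_flux(activity_bq, foil_mass_g, t_irr_d):
    """Invert A = phi * sigma * N * (1 - exp(-lambda * t_irr))."""
    lam = math.log(2.0) / HALF_LIFE_D
    n_atoms = foil_mass_g / MOLAR_MASS * N_A
    saturation = 1.0 - math.exp(-lam * t_irr_d)
    return activity_bq / (SIGMA * n_atoms * saturation)

a = absolute_activity(n_beta=500.0, n_gamma=300.0, n_coinc=30.0)   # Bq
phi = thermal_flux(a, foil_mass_g=0.010, t_irr_d=1.0)              # n/cm^2 s
```

The appeal of the coincidence method is visible in `absolute_activity`: the beta and gamma detection efficiencies divide out, so no efficiency calibration enters the activity.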
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
International Nuclear Information System (INIS)
Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-01-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as assessed by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
Energy Technology Data Exchange (ETDEWEB)
Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-08-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as assessed by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0
Directory of Open Access Journals (Sweden)
Sarah D. Lichenstein
2016-09-01
Full Text Available Purpose: Diffusion MRI provides a non-invasive way of estimating structural connectivity in the brain. Many studies have used diffusion phantoms as benchmarks to assess the performance of different tractography reconstruction algorithms and assumed that the results can be applied to in vivo studies. Here we examined whether quality metrics derived from a common, publicly available diffusion phantom can reliably predict tractography performance in human white matter tissue. Material and Methods: We compared estimates of fiber length and fiber crossing among a simple tensor model (diffusion tensor imaging), a more complicated model (ball-and-sticks), and model-free (diffusion spectrum imaging, generalized q-sampling imaging) reconstruction methods using a capillary phantom and in vivo human data (N=14). Results: Our analysis showed that evaluation outcomes differ depending on whether they were obtained from phantom or human data. Specifically, the diffusion phantom favored a more complicated model over a simple tensor model or model-free methods for resolving crossing fibers. On the other hand, the human studies showed the opposite pattern of results, with the model-free methods being more advantageous than model-based methods or simple tensor models. This performance difference was consistent across several metrics, including estimating fiber length and resolving fiber crossings in established white matter pathways. Conclusions: These findings indicate that the construction of current capillary diffusion phantoms tends to favor complicated reconstruction models over a simple tensor model or model-free methods, whereas the in vivo data tend to produce the opposite results. This brings into question previous phantom-based evaluation approaches and suggests that a more realistic phantom or simulation is necessary to accurately predict the relative performance of different tractography reconstruction methods. Acronyms: BSM: ball-and-sticks model; d
Energy Technology Data Exchange (ETDEWEB)
Cameron, R D; White, R G; Luick, J R
1976-06-01
The accuracy of the tritiated water dilution method in estimating water flux was evaluated in reindeer under various conditions of temperature and diet. Two non-pregnant female reindeer were restrained in metabolism stalls, within controlled-environment chambers, at temperatures of +10, -5, and -20°C; varying amounts of a commercial pelleted ration (crude protein, 13 percent) or mixed lichens (crude protein, 3 percent) were offered, and water was provided ad libitum either as snow or in liquid form. Total body water volume and water turnover were estimated using tritiated water, and the daily outputs of feces and urine were measured for each of 12 different combinations of diet and temperature. Statistical analysis of the data showed that the tritiated water dilution technique gives accurate determinations of total body water flux over a wide range of environmental and nutritional conditions.
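The dilution calculation underlying the technique can be sketched in two steps: total body water from the isotope dilution principle, and water flux from the first-order turnover of specific activity. The numbers below are illustrative assumptions, not data from the reindeer study.

```python
import math

def body_water_volume(injected_activity, c0):
    """Dilution principle: V = injected activity / equilibrium concentration."""
    return injected_activity / c0

def turnover_rate(c0, c_t, days):
    """First-order washout of the label: C(t) = C0 * exp(-k t)."""
    return math.log(c0 / c_t) / days

injected = 3.7e7        # Bq of tritiated water injected (assumed)
c0 = 6.0e5              # Bq/L at equilibration (assumed)
c_14d = 3.6e5           # Bq/L after 14 days (assumed)

v = body_water_volume(injected, c0)      # L of total body water
k = turnover_rate(c0, c_14d, 14.0)       # 1/day
flux = k * v                             # L/day of water flux
```

The method's appeal is that both quantities come from a handful of blood samples; the study's contribution was checking this estimate against measured intake and output across diets and temperatures.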
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis, in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole-body model of the deceased. In addition to the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he claimed. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Flux weighted method for solution of stiff neutron dynamic equations and its application
International Nuclear Information System (INIS)
Li Huiyun; Jiao Huixian
1987-12-01
To analyze reactivity events in nuclear power plants, it is necessary to solve the neutron dynamic equations, which form a typical group of stiff ordinary differential equations. Only very small time steps can be adopted when this group of equations is solved by common methods; however, much larger time steps can be selected if the Flux Weighted Method introduced in this paper is used. Generally, the weighting factor θ_i1 is set as a constant. Naturally, such a treatment trades some accuracy of the calculation for greater stability in solving the equations. An accurate theoretical formula for the 4 x 4 matrix of θ_i1 is rigorously derived, so that the accuracy of the calculation is ensured while the stability of the solution is increased. This method has advantages over the classical Runge-Kutta method and other methods: the time step can be increased by 1 to 3 orders of magnitude, saving a great deal of computing time. A program for solving the neutron dynamic equations, prepared using the Flux Weighted Method, can be used for real-time simulation in training simulators, as well as for the analysis and computation of reactivity events (including rod ejection events)
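The stability gain from a flux-weighted (theta-weighted) scheme can be sketched on a one-delayed-group point-kinetics model, a classic stiff neutron dynamics system. The kinetics parameters and reactivity step are assumptions; the paper derives a 4 x 4 matrix of weighting factors, while here a single scalar theta is used for illustration.

```python
import numpy as np

# One-delayed-group point kinetics dx/dt = A x for x = (n, C), with an
# assumed positive reactivity step RHO < BETA (delayed supercritical).
RHO, BETA, GEN_TIME, LAM = 0.001, 0.0065, 1.0e-5, 0.08
A = np.array([[(RHO - BETA) / GEN_TIME, LAM],
              [BETA / GEN_TIME, -LAM]])
I2 = np.eye(2)

def theta_step(x, dt, theta):
    """One step of (I - theta*dt*A) x_new = (I + (1-theta)*dt*A) x."""
    return np.linalg.solve(I2 - theta * dt * A,
                           (I2 + (1.0 - theta) * dt * A) @ x)

dt, steps = 0.01, 100                  # dt far beyond the explicit limit
x0 = np.array([1.0, BETA / (GEN_TIME * LAM)])   # pre-step equilibrium

x_imp, x_exp = x0.copy(), x0.copy()
for _ in range(steps):
    x_imp = theta_step(x_imp, dt, theta=1.0)    # fully implicit: stable
    x_exp = theta_step(x_exp, dt, theta=0.0)    # explicit Euler: diverges
```

With theta = 0 the scheme reduces to explicit Euler and blows up at this step size, while theta = 1 tracks the slow transient, which is the large-time-step advantage the abstract claims.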
DEFF Research Database (Denmark)
Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen
2011-01-01
This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....
A multipoint flux mixed finite element method on distorted quadrilaterals and hexahedra
Wheeler, Mary
2011-11-06
In this paper, we develop a new mixed finite element method for elliptic problems on general quadrilateral and hexahedral grids that reduces to a cell-centered finite difference scheme. A special non-symmetric quadrature rule is employed that yields a positive definite cell-centered system for the pressure by eliminating local velocities. The method is shown to be accurate on highly distorted rough quadrilateral and hexahedral grids, including hexahedra with non-planar faces. Theoretical and numerical results indicate first-order convergence for the pressure and face fluxes. © 2011 Springer-Verlag.
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead
Wu, Jianbo; Fang, Hui; Li, Long; Wang, Jie; Huang, Xiaoming; Kang, Yihua; Sun, Yanhua; Tang, Chaoqing
2017-01-01
To meet the great need for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatabilit...
Neural network CT image reconstruction method for small amount of projection data
Ma, X F; Takeda, T
2000-01-01
This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. While the conventionally used objective function of such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available in comparison with the well-developed medical applications.
Neural network CT image reconstruction method for small amount of projection data
International Nuclear Information System (INIS)
Ma, X.F.; Fukuhara, M.; Takeda, T.
2000-01-01
This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventional objective function of such a network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. The method is especially useful for analyses of laboratory experiments or field observations where only a small amount of projection data is available, in contrast with the well-developed medical applications.
A semi-automatic method for positioning a femoral bone reconstruction for strict view generation.
Milano, Federico; Ritacco, Lucas; Gomez, Adrian; Gonzalez Bernaldo de Quiros, Fernan; Risk, Marcelo
2010-01-01
In this paper we present a semi-automatic method for positioning a femoral bone after 3D image reconstruction from Computed Tomography images. This serves as the basis for defining strict axial, longitudinal and anterior-posterior views, overcoming the problem of patient-positioning biases in 2D femoral bone measuring methods. After the bone reconstruction is aligned to a standard reference frame, new tomographic slices can be generated on which unbiased measures may be taken. This could allow not only accurate inter-patient comparisons but also intra-patient comparisons, i.e., comparisons of images of the same patient taken at different times. This method could enable medical doctors to diagnose and follow up several bone deformities more easily.
Negara, Ardiansyah
2013-01-01
Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature, established as a consequence of the different geologic processes these formations undergo over geologic time scales. In petroleum reservoirs, anisotropy in many cases plays a significant role in dictating the direction of flow, which no longer depends only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving multiphase flow in which gravity and capillarity play an important role, anisotropy can also have important influences. There has therefore been a great deal of motivation to account for anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full-tensor permeability fields. Lately, however, it has become possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method the approximation stencil is more involved: it requires a 9-point stencil for the 2-D model and a 27-point stencil for the 3-D model, which is challenging and cumbersome when assembling the global system of equations. In this work, we apply the equation-type approach (the experimenting pressure field approach), which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost during the simulation. We have applied this technique to a variety of anisotropy scenarios of 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation
International Nuclear Information System (INIS)
Laraufie, Romain; Deck, Sébastien
2013-01-01
Highlights: • Present various Reynolds stress reconstruction methods from a RANS-SA flow field. • Quantify the accuracy of the reconstruction methods for a wide range of Reynolds numbers. • Evaluate the capabilities of the overall process (reconstruction + SEM). • Provide practical guidelines to realize a streamwise RANS/LES (or WMLES) transition. -- Abstract: Hybrid or zonal RANS/LES approaches are recognized as the most promising way to accurately simulate complex unsteady flows under current computational limitations. One still-open issue concerns the transition from a RANS to a LES or WMLES resolution in the streamwise direction when near-wall turbulence is involved. Turbulence content then has to be prescribed at the transition to prevent turbulence decay leading to possible flow relaminarization. The present paper proposes an efficient way to generate this switch within the flow, based on a synthetic turbulence inflow condition named the Synthetic Eddy Method (SEM). As knowledge of the full Reynolds stresses is often missing, the scope of this paper is focused on generating the quantities required at the SEM inlet from a RANS calculation, namely the first- and second-order statistics of the aerodynamic field. Three different methods based on two different approaches are presented, and their capability to accurately generate the needed aerodynamic values is investigated. Then, the ability of the SEM + reconstruction combination to manufacture well-behaved turbulence is demonstrated through spatially developing flat-plate turbulent boundary layers. In the meantime, important intrinsic features of the Synthetic Eddy Method are pointed out. The necessity of introducing accurate data into the SEM with regard to the outer part of the boundary layer is illustrated. Finally, user guidelines are given depending on the Reynolds number based on the momentum thickness, since one method is suitable for low Reynolds number while the
A singular-value method for reconstruction of nonradial and lossy objects.
Jiang, Wei; Astheimer, Jeffrey; Waag, Robert
2012-03-01
Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
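The reduced-rank idea in the abstract above, keeping only the dominant singular triplets of the scattering operator, can be illustrated on a synthetic matrix. The operator below is a made-up stand-in with rapidly decaying singular values, not a physical scattering operator; by the Eckart-Young theorem the truncation error in the spectral norm equals the first discarded singular value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "scattering operator": random orthonormal bases with a
# prescribed, fast-decaying singular spectrum (sizes are illustrative).
m = 40
U, _ = np.linalg.qr(rng.standard_normal((m, m)))
V, _ = np.linalg.qr(rng.standard_normal((m, m)))
s = 10.0 ** -np.arange(m)            # singular values 1, 0.1, 0.01, ...
S_full = U @ np.diag(s) @ V.T

# Reduced-rank representation: keep the k largest singular triplets.
k = 6
u, sv, vt = np.linalg.svd(S_full)
S_k = u[:, :k] @ np.diag(sv[:k]) @ vt[:k, :]

rel_err = np.linalg.norm(S_full - S_k, 2) / np.linalg.norm(S_full, 2)
print(rel_err)   # equals sigma_{k+1} / sigma_1 = 1e-6
```

In the paper's setting the domain and range spaces of such a truncation are built from far-field patterns focusing on a local region; here the point is only the rank-reduction mechanics.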
Zhu, Ming; Liu, Tingting; Zhang, Xiangqun; Li, Caiyun
2018-01-01
Recently, a decomposition method of acoustic relaxation absorption spectra was used to capture the entire molecular multimode relaxation process of gas. In this method, the acoustic attenuation and phase velocity were measured jointly based on the relaxation absorption spectra. However, fast and accurate measurements of the acoustic attenuation remain challenging. In this paper, we present a method of capturing the molecular relaxation process by only measuring acoustic velocity, without the necessity of obtaining acoustic absorption. The method is based on the fact that the frequency-dependent velocity dispersion of a multi-relaxation process in a gas is the serial connection of the dispersions of interior single-relaxation processes. Thus, one can capture the relaxation times and relaxation strengths of N decomposed single-relaxation dispersions to reconstruct the entire multi-relaxation dispersion using the measurements of acoustic velocity at 2N + 1 frequencies. The reconstructed dispersion spectra are in good agreement with experimental data for various gases and mixtures. The simulations also demonstrate the robustness of our reconstructive method.
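The serial-connection idea described above, the total velocity dispersion being the sum of the dispersions of the interior single-relaxation processes, can be sketched with a standard single-relaxation dispersion form. The zero-frequency speed, relaxation strengths, and relaxation times below are illustrative values, not data from the paper.

```python
import numpy as np

def single_dispersion(f, strength, tau):
    """Dispersion increment of one relaxation process (standard form)."""
    w = 2.0 * np.pi * f
    return strength * (w * tau) ** 2 / (1.0 + (w * tau) ** 2)

f = np.logspace(2, 7, 200)              # frequency sweep, Hz
c0 = 340.0                              # hypothetical zero-frequency speed, m/s
processes = [(4.0, 1e-4), (2.5, 1e-6)]  # (strength m/s, relaxation time s)

# Serial connection: the multi-relaxation dispersion is the sum of the
# single-relaxation contributions.
c = c0 + sum(single_dispersion(f, s_i, tau_i) for s_i, tau_i in processes)

print(c[0], c[-1])   # low-frequency limit c0; high-frequency limit c0 + strengths
```

With N such processes, the 2N + 1 unknowns (N strengths, N times, plus c0) match the 2N + 1 velocity measurements the abstract says suffice for reconstruction.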
Evaluation of two methods for using MR information in PET reconstruction
International Nuclear Information System (INIS)
Caldeira, L.; Scheins, J.; Almeida, P.; Herzog, H.
2013-01-01
Using magnetic resonance (MR) information in maximum a posteriori (MAP) algorithms for positron emission tomography (PET) image reconstruction has been investigated in recent years. Recently, three methods of introducing this information were evaluated, and the Bowsher prior was considered the best. Its main advantage is that it does not require image segmentation. Another method that has been widely used for incorporating MR information is using boundaries obtained by segmentation. This method has also shown improvements in image quality. In this paper, two methods for incorporating MR information in PET reconstruction are compared. After a Bayes parameter optimization, the reconstructed images were compared using the mean squared error (MSE) and the coefficient of variation (CV). MSE values are 3% lower with Bowsher than with boundaries; CV values are 10% lower with Bowsher than with boundaries. Both methods performed better in terms of MSE and CV than using no prior, that is, maximum likelihood expectation maximization (MLEM) or MAP without anatomic information. In conclusion, incorporating MR information using the Bowsher prior gives better results in terms of MSE and CV than using boundaries. MAP algorithms again proved effective in noise reduction and convergence, especially when MR information is incorporated. The robustness of the priors with respect to noise and inhomogeneities in the MR image has, however, still to be evaluated
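The two figures of merit used in the comparison above are simple to state: mean squared error against a reference image, and coefficient of variation within an image. A minimal sketch on synthetic stand-in images (the images and noise levels are illustrative, not PET data):

```python
import numpy as np

def mse(img, ref):
    """Mean squared error against a reference image."""
    return float(np.mean((img - ref) ** 2))

def cv(img):
    """Coefficient of variation: std / mean (image assumed positive)."""
    return float(np.std(img) / np.mean(img))

rng = np.random.default_rng(2)
reference = np.full((32, 32), 100.0)
noisy = reference + rng.normal(0.0, 10.0, reference.shape)       # e.g. MLEM-like
denoised = reference + rng.normal(0.0, 5.0, reference.shape)     # e.g. MAP-like

print(mse(noisy, reference), mse(denoised, reference))
print(cv(noisy), cv(denoised))
```

Lower values of both metrics correspond to the improvements the abstract reports for the prior-based reconstructions.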
International Nuclear Information System (INIS)
Koskinas, M.F.
1979-01-01
Experimental and theoretical details of the foil activation method applied to neutron flux measurements at the IEA-R1 reactor are presented. The thermal and epithermal neutron fluxes were determined from activation measurements of gold, cobalt and manganese foils; for the fast neutron flux determination, aluminum, iron and nickel foils were used. The measurements of the activity induced in the metal foils were performed using a Ge(Li) gamma spectrometry system. In each energy range of the reactor neutron spectrum, the agreement among the experimental flux values obtained using the three kinds of materials indicates the consistency of the theoretical approach and of the nuclear parameters selected. (Author)
Nelson, A. J.; Koloutsou-Vakakis, S.; Rood, M. J.; Lichiheb, N.; Heuer, M.; Myles, L.
2017-12-01
Ammonia (NH3) is a precursor to fine particulate matter (PM) in the ambient atmosphere. Agricultural activities represent over 80% of anthropogenic emissions of NH3 in the United States, and the use of nitrogen-based fertilizers contributes > 50% of total NH3 emissions in central Illinois. The U.S. EPA Science Advisory Board has called for improved methods to measure, model, and report atmospheric NH3 concentrations and emissions from agriculture. High uncertainties in the temporal and spatial distribution of NH3 emissions contribute to the poor performance of air quality models in predicting ambient PM concentrations. This study reports and compares NH3 flux measurements of differing temporal resolution obtained with two methods: relaxed eddy accumulation (REA) and flux-gradient (FG). The REA and FG systems were operated concurrently above a corn canopy at the University of Illinois at Urbana-Champaign (UIUC) Energy Biosciences Institute (EBI) Energy Farm during the 2014 corn-growing season. The REA system operated during daytime, providing average fluxes over four-hour sampling intervals, where the time resolution was limited by the detection limit of the denuders. The FG system employed a cavity ring-down spectrometer and was operated continuously, reporting 30 min flux averages. A flux-footprint evaluation was used for quality control, resulting in 1,178 qualified FG measurements, 82 of which were coincident with REA measurements. Similar emission trends were observed with both systems, with peak NH3 emission observed one week after fertilization. For all coincident samples, the mean NH3 flux was 205 ± 300 ng N m⁻² s⁻¹ and 110 ± 256 ng N m⁻² s⁻¹ as measured with REA and FG, respectively, where positive flux indicates emission. This is the first reported inter-comparison of the REA and FG methods as used for quantifying NH3 fluxes from cropland. Preliminary analysis indicates the improved temporal resolution and continuous sampling enabled by FG allow for the identification of emission pulses
Comparison of methods for measuring flux gradients in type II superconductors
International Nuclear Information System (INIS)
Kroeger, D.M.; Koch, C.C.; Charlesworth, J.P.
1975-01-01
A comparison has been made of four methods of measuring the critical current density J_c in hysteretic type II superconductors having a wide range of κ and J_c values, in magnetic fields up to 70 kOe. Two of the methods, (a) resistive measurements and (b) magnetization measurements, were carried out in static magnetic fields. The other two methods involved analysis of the response of the sample to a small alternating field superimposed on the static field; the response was analyzed either (c) by measuring the third-harmonic content or (d) by integration of the waveform to obtain a measure of flux penetration. The results are discussed with reference to the agreement between the different techniques and the consistency of the critical-state hypothesis on which all these techniques are based. It is concluded that flux-penetration measurements by method (d) provide the most detailed information about J_c, but that one must be wary of minor failures of the critical-state hypothesis. Best results are likely to be obtained by using more than one method. (U.S.)
Calculations of Neutron Flux Distributions by Means of Integral Transport Methods
Energy Technology Data Exchange (ETDEWEB)
Carlvik, I
1967-05-15
Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries (planar, spherical and annular), and further a square cell with a circular fuel rod and a rod-cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly, to demonstrate the versatility and efficacy of integral transport methods, and secondly, to serve as a guide for anybody who wants to use the methods.
Hirahara, Noriyuki; Monma, Hiroyuki; Shimojo, Yoshihide; Matsubara, Takeshi; Hyakudomi, Ryoji; Yano, Seiji; Tanaka, Tsuneo
2011-01-01
Here we report a method of anastomosis based on the double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEA™ OrVil™) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double-tract reconstruction, and end-to-side anastomosis was used for the cut-off...
Does thorax EIT image analysis depend on the image reconstruction method?
Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut
2013-04-01
Different methods have been proposed to analyze the resulting images of electrical impedance tomography (EIT) measurements during ventilation. The aim of our study was to examine whether analysis methods based on back-projection deliver the same results when applied to images based on other reconstruction algorithms. Seven mechanically ventilated patients with ARDS were examined by EIT. The thorax contours were determined from the routine CT images. EIT raw data were reconstructed offline with (1) filtered back-projection with a circular forward model (BPC); (2) the GREIT reconstruction method with a circular forward model (GREITC); and (3) GREIT with individual thorax geometry (GREITT). Three parameters were calculated on the resulting images: linearity, global ventilation distribution and regional ventilation distribution. The results of the linearity test are 5.03±2.45, 4.66±2.25 and 5.32±2.30 for BPC, GREITC and GREITT, respectively (median ± interquartile range). The differences among the three methods are not significant (p = 0.93, Kruskal-Wallis test). The proportions of ventilation in the right lung are 0.58±0.17, 0.59±0.20 and 0.59±0.25 for BPC, GREITC and GREITT, respectively (p = 0.98). The differences in the GI index based on the different reconstruction methods (0.53±0.16, 0.51±0.25 and 0.54±0.16 for BPC, GREITC and GREITT, respectively) are also not significant (p = 0.93). We conclude that the parameters developed for images generated with GREITT are comparable with those for filtered back-projection and GREITC.
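The global inhomogeneity (GI) index reported above is, in its commonly used form, the summed absolute deviation of each lung pixel's tidal impedance change from the median, normalised by the total change. A minimal sketch on a synthetic tidal image (mask geometry and pixel values are illustrative, not patient data):

```python
import numpy as np

def gi_index(tidal_image, lung_mask):
    """GI index: sum(|pixel - median|) / sum(pixel) over lung pixels."""
    vals = tidal_image[lung_mask]
    return float(np.sum(np.abs(vals - np.median(vals))) / np.sum(vals))

rng = np.random.default_rng(3)
img = np.zeros((32, 32))
mask = np.zeros((32, 32), dtype=bool)
mask[8:24, 4:28] = True                          # hypothetical lung region
img[mask] = rng.uniform(0.5, 1.5, mask.sum())    # fairly homogeneous ventilation

print(gi_index(img, mask))   # lower values indicate more homogeneous ventilation
```

Because the index depends only on pixel values inside the lung region, it can be computed identically on BPC, GREITC, or GREITT reconstructions, which is what makes the comparison in the study possible.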
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvement of simulation accuracy by data-assimilation techniques is now used in many application fields. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Yet, despite the regular increase in computing capacities, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations, while assuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. the standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages for WATSFAR, i) robustness: even on fine textured soil or high water and solute
A comparison of reconstruction methods for undersampled atomic force microscopy images
International Nuclear Information System (INIS)
Luo, Yufan; Andersson, Sean B
2015-01-01
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip–sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images. (paper)
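The row-subsampling pattern mentioned above, followed by an interpolation-based recovery, can be sketched in a few lines. The linear interpolation below is a minimal stand-in for the inpainting branch of the comparison; the paper's actual algorithms (exemplar inpainting and basis pursuit) are not reproduced here, and the smooth test image is synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 64)
image = np.sin(2 * np.pi * np.add.outer(x, x))   # smooth synthetic "sample"

keep = np.arange(0, 64, 4)                       # keep every 4th scan row
sub = image[keep, :]

# Fill the missing rows by linear interpolation along the slow-scan axis.
recon = np.empty_like(image)
for col in range(image.shape[1]):
    recon[:, col] = np.interp(np.arange(64), keep, sub[:, col])

rmse = float(np.sqrt(np.mean((recon - image) ** 2)))
print(rmse)
```

For this low-frequency image simple interpolation recovers the data well, consistent with the paper's finding that inpainting-style methods win on predominantly low-frequency content, whereas sparse mixed-frequency content favors basis pursuit.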
A feasible method for clinical delivery verification and dose reconstruction in tomotherapy
International Nuclear Information System (INIS)
Kapatoes, J.M.; Olivera, G.H.; Ruchala, K.J.; Smilowitz, J.B.; Reckwerdt, P.J.; Mackie, T.R.
2001-01-01
Delivery verification is the process in which the energy fluence delivered during a treatment is verified. This verified energy fluence can be used in conjunction with an image in the treatment position to reconstruct the full three-dimensional dose deposited. A method for delivery verification that utilizes a measured database of detector signal is described in this work. This database is a function of two parameters, radiological path-length and detector-to-phantom distance, both of which are computed from a CT image taken at the time of delivery. Such a database was generated and used to perform delivery verification and dose reconstruction. Two experiments were conducted: a simulated prostate delivery on an inhomogeneous abdominal phantom, and a nasopharyngeal delivery on a dog cadaver. For both cases, it was found that the verified fluence and dose results using the database approach agreed very well with those using previously developed and proven techniques. Delivery verification with a measured database and CT image at the time of treatment is an accurate procedure for tomotherapy. The database eliminates the need for any patient-specific, pre- or post-treatment measurements. Moreover, such an approach creates an opportunity for accurate, real-time delivery verification and dose reconstruction given fast image reconstruction and dose computation tools
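The measured database described above is a two-parameter table, detector signal as a function of radiological path-length and detector-to-phantom distance, queried for each ray at verification time. A minimal sketch of such a table with bilinear interpolation; the tabulated signal model and parameter ranges below are purely illustrative, not the paper's measured data.

```python
import numpy as np

path_len = np.linspace(0.0, 30.0, 31)        # cm, radiological path-length grid
det_dist = np.linspace(10.0, 60.0, 26)       # cm, detector-to-phantom distance grid

# Hypothetical signal: exponential attenuation with inverse-square falloff.
P, D = np.meshgrid(path_len, det_dist, indexing="ij")
database = np.exp(-0.05 * P) / D ** 2

def lookup(p, d):
    """Bilinear interpolation into the (path-length, distance) table."""
    i = int(np.clip(np.searchsorted(path_len, p) - 1, 0, len(path_len) - 2))
    j = int(np.clip(np.searchsorted(det_dist, d) - 1, 0, len(det_dist) - 2))
    tp = (p - path_len[i]) / (path_len[i + 1] - path_len[i])
    td = (d - det_dist[j]) / (det_dist[j + 1] - det_dist[j])
    return ((1 - tp) * (1 - td) * database[i, j]
            + tp * (1 - td) * database[i + 1, j]
            + (1 - tp) * td * database[i, j + 1]
            + tp * td * database[i + 1, j + 1])

print(lookup(12.5, 25.0))   # close to exp(-0.05 * 12.5) / 25**2
```

Both query parameters are computed from the CT image taken at treatment time, which is why the approach needs no patient-specific pre- or post-treatment measurements.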
An eigenfunction method for reconstruction of large-scale and high-contrast objects.
Waag, Robert C; Lin, Feng; Varslot, Trond K; Astheimer, Jeffrey P
2007-07-01
A multiple-frequency inverse scattering method that uses eigenfunctions of a scattering operator is extended to image large-scale and high-contrast objects. The extension uses an estimate of the scattering object to form the difference between the scattering by the object and the scattering by the estimate of the object. The scattering potential defined by this difference is expanded in a basis of products of acoustic fields. These fields are defined by eigenfunctions of the scattering operator associated with the estimate. In the case of scattering objects for which the estimate is radial, symmetries in the expressions used to reconstruct the scattering potential greatly reduce the amount of computation. The range of parameters over which the reconstruction method works well is illustrated using calculated scattering by different objects. The method is applied to experimental data from a 48-mm diameter scattering object with tissue-like properties. The image reconstructed from measurements has, relative to a conventional B-scan formed using a low f-number at the same center frequency, significantly higher resolution and less speckle, implying that small, high-contrast structures can be demonstrated clearly using the extended method.
RECONSTRUCTING THE INITIAL DENSITY FIELD OF THE LOCAL UNIVERSE: METHODS AND TESTS WITH MOCK CATALOGS
International Nuclear Information System (INIS)
Wang Huiyuan; Mo, H. J.; Yang Xiaohu; Van den Bosch, Frank C.
2013-01-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum, and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS down to scales much smaller than the translinear scale, which corresponds to a wavenumber of ∼0.15 h Mpc⁻¹
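The posterior structure described above, a Gaussian prior on the field plus a likelihood tied to the observed density, sampled with Hamiltonian Monte Carlo, can be sketched in one dimension where the exact posterior is known. The 1-D Gaussian prior/likelihood, step size, and trajectory length below are illustrative; the paper works with the same structure over a full density field in many dimensions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Prior x ~ N(0, 1); likelihood y | x ~ N(x, 0.5^2); observed y = 1.0.
# Exact Gaussian posterior: variance 1/(1 + 1/0.25), mean = var * y / 0.25.
y, sy2 = 1.0, 0.25
post_var = 1.0 / (1.0 + 1.0 / sy2)
post_mean = post_var * y / sy2          # = 0.8

def neg_log_post(x):
    return 0.5 * x ** 2 + 0.5 * (x - y) ** 2 / sy2

def grad_neg_log_post(x):
    return x + (x - y) / sy2

samples = []
x = 0.0
eps, n_leap = 0.1, 20
for _ in range(5000):
    p = rng.standard_normal()           # resample momentum
    x_new, p_new = x, p
    # Leapfrog integration of the Hamiltonian dynamics.
    p_new -= 0.5 * eps * grad_neg_log_post(x_new)
    for _ in range(n_leap - 1):
        x_new += eps * p_new
        p_new -= eps * grad_neg_log_post(x_new)
    x_new += eps * p_new
    p_new -= 0.5 * eps * grad_neg_log_post(x_new)
    # Metropolis accept/reject on the total energy change.
    dH = (neg_log_post(x_new) + 0.5 * p_new ** 2) - (neg_log_post(x) + 0.5 * p ** 2)
    if np.log(rng.uniform()) < -dH:
        x = x_new
    samples.append(x)

print(np.mean(samples[1000:]))   # approaches the exact posterior mean 0.8
```

The gradient-guided leapfrog trajectories are what make HMC practical in the high-dimensional field-reconstruction setting, where random-walk Metropolis would mix far too slowly.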
International Nuclear Information System (INIS)
Ghassoun, Jillali; Jehoauni, Abdellatif
2000-01-01
In practice, estimating the flux from the Fredholm integral equation requires truncating the Neumann series. The truncation order N must be large in order to get a good estimate, but a large N induces a very long computation time, so the conditional Monte Carlo method is used to reduce the time without degrading the quality of the estimate. In previous works, only weakly diffusing media were considered in order to obtain rapid convergence of the calculations, which permitted truncating the Neumann series after about 20 terms. In most practical shields, however, such as water, graphite and beryllium, the scattering probability is high, and truncating the series at 20 terms gives a poor estimate of the flux, so it becomes necessary to use higher orders to obtain a good estimate. We suggest two simple techniques based on conditional Monte Carlo: a simple density for sampling the steps of the random walk, and a modified stretching-factor density depending on a biasing parameter, which stretches or shrinks the original random walk so that the chain ends at a given point of interest. We also obtained a simple empirical formula that gives the neutron flux for a medium characterized only by its scattering probabilities. The results are compared with the exact analytic solution; we obtain good agreement together with a good acceleration of the convergence of the calculations. (author)
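The truncation problem described above can be illustrated with a toy collision-number series: if a particle survives a collision (scatters) with probability p, the expected number of collisions per history is the Neumann-like series sum of p^n, equal to 1/(1 - p). Truncating the series, or equivalently cutting off the random walk, is harmless for weak scatterers but badly underestimates the result when p is close to 1. The numbers below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(6)

def exact_collisions(p_scatter):
    """Closed form of the full series: 1 / (1 - p)."""
    return 1.0 / (1.0 - p_scatter)

def truncated_series(p_scatter, n_terms):
    """Analytic truncation of the series after n_terms terms."""
    return sum(p_scatter ** n for n in range(n_terms))

def mc_mean_collisions(p_scatter, n_hist, max_coll):
    """Monte Carlo random walk, cut off after max_coll collisions."""
    total = 0
    for _ in range(n_hist):
        k = 1                      # the source particle always collides once
        while k < max_coll and rng.uniform() < p_scatter:
            k += 1
        total += k
    return total / n_hist

for p in (0.5, 0.9, 0.99):
    print(p, exact_collisions(p), truncated_series(p, 20),
          mc_mean_collisions(p, 20000, 20))
```

For p = 0.5 the 20-term truncation is essentially exact, while for p = 0.99 it recovers only a fraction of the true value, which is why strongly scattering shields force either high truncation orders or the variance-reduction tricks the paper proposes.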
TREEDE, Point Fluxes and Currents Based on Track Rotation Estimator by Monte-Carlo Method
International Nuclear Information System (INIS)
Dubi, A.
1985-01-01
1 - Description of problem or function: TREEDE is a Monte Carlo transport code based on the Track Rotation estimator, used, in general, to calculate fluxes and currents at a point. This code served as a test code in the development of the concept of the Track Rotation estimator, and therefore analogue Monte Carlo is used (i.e. no importance biasing). 2 - Method of solution: The basic idea is to follow the particle's track in the medium and then rotate it such that it passes through the detector point. That is, rotational symmetry considerations (even in non-spherically symmetric configurations) are applied to every history, so that a very large fraction of the track histories can be rotated and made to pass through the point of interest; in this manner the 1/r² singularity in the uncollided flux estimator (next-event estimator) is avoided. TREEDE, being a test code, is used to estimate leakage or in-medium fluxes at given points in a 3-dimensional finite box, where the source is an isotropic point source at the centre of the z = 0 surface. However, many of the constraints of geometry and source can be easily removed. The medium is assumed homogeneous with isotropic scattering, and only one energy group is considered. 3 - Restrictions on the complexity of the problem: One energy group, a homogeneous medium, isotropic scattering
An Optimal Estimation Method to Obtain Surface Layer Turbulent Fluxes from Profile Measurements
Kang, D.
2015-12-01
In the absence of direct turbulence measurements, the turbulence characteristics of the atmospheric surface layer are often derived from measurements of the surface layer mean properties based on Monin-Obukhov Similarity Theory (MOST). This approach requires two levels of the ensemble-mean wind, temperature, and water vapor, from which the fluxes of momentum, sensible heat, and water vapor can be obtained. When only one measurement level is available, the roughness heights and the assumed properties of the corresponding variables at the respective roughness heights are used. In practice, the temporal mean over a large number of samples is used in place of the ensemble mean. However, in many situations samples are taken at multiple levels, and it is thus desirable to derive the boundary layer flux properties using all of the measurements. In this study, we used an optimal estimation approach to derive surface layer properties based on all available measurements. This approach assumes that the samples are taken from a population whose ensemble mean profile follows MOST. An optimized estimate is obtained when the results yield a minimum of a cost function defined as a weighted summation of the error variance at each sample altitude. The weights are based on the sample data variance and the altitude of the measurements. This method was applied to measurements in the marine atmospheric surface layer made from a small boat using a radiosonde on a tethered balloon, where temperature and relative humidity profiles in the lowest 50 m were measured repeatedly in about 30 minutes. We will present the resultant fluxes and the derived MOST mean profiles using different sets of measurements. The advantage of this method over 'traditional' methods will be illustrated, and some limitations of this optimization method will also be discussed. Its application to quantifying the effects of the marine surface layer environment on radar and communication signal propagation will be shown as well.
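The fitting idea above can be sketched in the neutral-stability limit of MOST, where the mean wind profile is U(z) = (u*/kappa) ln(z/z0) and the weighted cost over all sample altitudes is minimised by weighted linear least squares in (ln z, 1). All numbers below (heights, noise level, true parameters) are illustrative, not the campaign's data.

```python
import numpy as np

rng = np.random.default_rng(7)
kappa = 0.4
u_star_true, z0_true = 0.3, 1e-3          # friction velocity (m/s), roughness (m)

z = np.array([2.0, 4.0, 8.0, 16.0, 32.0])             # measurement heights, m
u = u_star_true / kappa * np.log(z / z0_true)          # MOST mean profile
u_obs = u + rng.normal(0.0, 0.1, z.size)               # noisy multi-level samples

w = 1.0 / np.full(z.size, 0.1) ** 2       # weights = inverse error variances

# Weighted least squares on U = a * ln(z) + b, the neutral log-law.
X = np.column_stack([np.log(z), np.ones_like(z)])
W = np.diag(w)
a, b = np.linalg.solve(X.T @ W @ X, X.T @ W @ u_obs)

u_star = kappa * a                        # slope gives the friction velocity
z0 = np.exp(-b / a)                       # intercept gives the roughness length
print(u_star, z0)                         # roughly recovers 0.3 m/s and 1e-3 m
```

Using every available level in one weighted fit, rather than a pair of levels, is exactly the advantage the abstract claims for the optimal estimation approach; non-neutral stratification would add the usual stability correction functions to the model profile.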
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
International Nuclear Information System (INIS)
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y
2016-01-01
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiable feature of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image, in large part because the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently on the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks; it is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
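The piecewise-constant initialisation described above can be sketched as: segment a first-pass (noisy) reconstruction and replace each class by its mean value. The two-class global threshold and the synthetic image below are illustrative stand-ins for the paper's segmentation step and first-pass CBCT image.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic first-pass image: two "tissue" regions plus noise.
truth = np.zeros((64, 64))
truth[16:48, 16:48] = 1000.0
first_pass = truth + rng.normal(0.0, 100.0, truth.shape)

# Simple two-class segmentation by a global threshold (illustrative).
labels = (first_pass > 500.0).astype(int)

# Piecewise-constant template: each class replaced by its mean CT number.
template = np.empty_like(first_pass)
for lab in (0, 1):
    template[labels == lab] = first_pass[labels == lab].mean()

err_first = float(np.mean((first_pass - truth) ** 2))
err_templ = float(np.mean((template - truth) ** 2))
print(err_first, err_templ)
```

The template both lies closer to the truth and has far lower total variation than the first-pass image, so a TV-regularised solver started from it has much less work to do, which is the mechanism behind the reported reduction in iterations.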
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
Energy Technology Data Exchange (ETDEWEB)
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T [Sir Run Run Shaw Hospital, Zhejiang University School of Medicine, Institute of Translational Medicine, Zhejiang University, Hangzhou, Zhejiang (China); Sheng, K [Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, CA (United States); Xie, Y [Institute of Biomedical and Health Engineering, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong (China)
2016-06-15
Purpose: Total Variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, because most CT images are sparsifiable under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently through the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that a single tissue component in a CT image has a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then used as the initial value in the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
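The segmentation-guided initialization described above can be sketched as follows. The paper's segmentation step is more sophisticated; here a crude two-class threshold (function name and toy data are illustrative, not from the paper) stands in for it, producing the piecewise-constant template that would seed the TV solver instead of a zero or FDK start:

```python
import numpy as np

def piecewise_constant_template(img):
    """Build a piecewise-constant template from a noisy first-pass image:
    a crude two-class segmentation (threshold at mid-range) with each
    class replaced by its mean. The paper's segmentation step is more
    elaborate; this function is only an illustrative stand-in."""
    thr = 0.5 * (img.min() + img.max())
    hi, lo = img > thr, img <= thr
    return np.where(hi, img[hi].mean(), img[lo].mean())

# toy first-pass image: one uniform insert on a uniform background + noise
rng = np.random.default_rng(0)
truth = np.zeros((32, 32))
truth[8:24, 8:24] = 1.0
first_pass = truth + 0.05 * rng.standard_normal(truth.shape)

# the template would seed the TV-regularized solver instead of zeros/FDK
x0 = piecewise_constant_template(first_pass)
```

Because the template already satisfies the piecewise-constant expectation, the regularization term starts near its target and fewer iterations are spent smoothing noise.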
Bouma, Henri; van der Mark, Wannes; Eendebak, Pieter T.; Landsmeer, Sander H.; van Eekeren, Adam W. M.; ter Haar, Frank B.; Wieringa, F. Pieter; van Basten, Jean-Paul
2012-06-01
Compared to open surgery, minimal invasive surgery offers reduced trauma and faster recovery. However, lack of direct view limits space perception. Stereo-endoscopy improves depth perception, but is still restricted to the direct endoscopic field-of-view. We describe a novel technology that reconstructs 3D-panoramas from endoscopic video streams providing a much wider cumulative overview. The method is compatible with any endoscope. We demonstrate that it is possible to generate photorealistic 3D-environments from mono- and stereoscopic endoscopy. The resulting 3D-reconstructions can be directly applied in simulators and e-learning. Extended to real-time processing, the method looks promising for telesurgery or other remote vision-guided tasks.
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
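The ER iteration the brief builds on alternates between a Fourier-magnitude constraint and a spatial known-pixel constraint. A minimal sketch follows; note that the magnitude here is taken from the ground truth as an oracle, whereas estimating it from similar known patches is the brief's actual contribution:

```python
import numpy as np

def er_inpaint(patch, known_mask, magnitude, n_iter=200):
    """Error-reduction (ER) iteration: impose the given Fourier magnitude,
    keep the retrieved phase, then re-impose the known pixels."""
    x = patch * known_mask                         # missing pixels start at zero
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitude * np.exp(1j * np.angle(X))   # magnitude constraint
        x = np.real(np.fft.ifft2(X))
        x[known_mask] = patch[known_mask]          # spatial constraint
    return x

rng = np.random.default_rng(1)
true_patch = rng.random((16, 16))
mask = np.ones((16, 16), dtype=bool)
mask[6:10, 6:10] = False                   # a missing 4x4 block
mag = np.abs(np.fft.fft2(true_patch))      # oracle magnitude for this sketch
rec = er_inpaint(true_patch, mask, mag)
```

Each pass projects onto the set of images with the given spectrum magnitude and then onto the set agreeing with the known pixels, so the mismatch on the missing block shrinks monotonically.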
International Nuclear Information System (INIS)
Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.
1982-09-01
The aim of this study is to evaluate the potential of the RIM technique when used in brain studies. The analytical Regularizing Iterative Method (RIM) is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the FBP (Filtered Back Projection) technique. Preliminary results obtained in brain studies using AMPI-123 (isopropyl-amphetamine I-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated in our institution by comparing quantitative data in heart or liver studies where control values can be obtained
International Nuclear Information System (INIS)
Soussaline, F.; LeCoq, C.; Raynaud, C.; Kellershohn, C.
1982-01-01
The potential of the Regularizing Iterative Method (RIM), when used in brain studies, is evaluated. RIM is designed to provide fast and accurate reconstruction of tomographic images when non-uniform attenuation is to be accounted for. As indicated by phantom studies, this method improves the contrast and the signal-to-noise ratio compared with those obtained with the Filtered Back Projection (FBP) technique. Preliminary results obtained in brain studies using isopropyl-amphetamine I-123 (AMPI-123) are very encouraging in terms of quantitative regional cellular activity. However, the clinical usefulness of this mathematically accurate reconstruction procedure is going to be demonstrated by comparing quantitative data in heart or liver studies where control values can be obtained
International Nuclear Information System (INIS)
Gao, H
2016-01-01
Purpose: This work is to develop a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate the analytical reconstruction (AR) method into the iterative reconstruction (IR) method, for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, and then reconstructed by a certain AR to a residual image, which is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT, with the AR being FDK and total-variation sparsity regularization, and has improved image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The author was partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
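The two-step PFBS scheme can be illustrated on a generic sparsity-regularized least-squares problem: a gradient step on the data term followed by a proximal step on the regularizer. Here an l1 soft-threshold prox stands in for the paper's filtered data fidelity and TV terms, so this is the classic ISTA form of forward-backward splitting, not FIR itself:

```python
import numpy as np

def pfbs(A, b, lam, n_iter=2000):
    """Proximal forward-backward splitting: gradient step on the data
    term 0.5*||Ax - b||^2, then a proximal step on lam*||x||_1 (soft
    threshold). The l1 prox is a stand-in for the paper's TV prox."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L for the gradient term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - step * (A.T @ (A @ x - b))      # forward (data fidelity) step
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # backward (prox)
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[1], x_true[5] = 2.0, -3.0        # sparse ground truth
b = A @ x_true
x_hat = pfbs(A, b, lam=0.05)
```

The decoupling the abstract describes is visible in the two lines of the loop: the data-fidelity update never sees the regularizer, and the prox never sees the measurements.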
International Nuclear Information System (INIS)
Mieville, Frederic A.; Gudinchet, Francois; Rizzo, Elena; Ou, Phalla; Brunelle, Francis; Bochud, Francois O.; Verdun, Francis R.
2011-01-01
Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI vol 4.8-7.9 mGy, DLP 37.1-178.9 mGy.cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)
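The ASIR "percentage" settings evaluated above are commonly described as a weighted blend between the filtered back-projection image and a fully iterative reconstruction; the vendor algorithm itself is proprietary, so the following is only an illustrative assumption:

```python
import numpy as np

def asir_blend(fbp_img, ir_img, percent):
    """Blend an FBP image with a fully iterative reconstruction; the ASIR
    'percentage' is commonly described as such a weighting, but the vendor
    algorithm is proprietary, so treat this as an assumption."""
    w = percent / 100.0
    return (1.0 - w) * fbp_img + w * ir_img

fbp = np.array([10.0, 12.0])      # hypothetical pixel values
ir = np.array([8.0, 10.0])
img40 = asir_blend(fbp, ir, 40)   # the 20-40% range favoured in the study
```

Under this reading, 0% reproduces FBP, 100% the fully iterative image, and the study's finding is that intermediate weights trade noise reduction against the over-smoothed "noise-free appearance" seen at 100%.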
Grant, K.; Rohling, E. J.; Amies, J.
2017-12-01
Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying `natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (~20,000 years ago, 20 ky) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (~150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the `marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off - strongly depleted in δ18O - reached the Mediterranean. Here, we present improvements to the `marginal basin' sea-level reconstruction method. These include a new `Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.
Energy Technology Data Exchange (ETDEWEB)
Mieville, Frederic A. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland); University Hospital Center and University of Lausanne, Institute of Radiation Physics - Medical Radiology, Lausanne (Switzerland); Gudinchet, Francois; Rizzo, Elena [University Hospital Center and University of Lausanne, Department of Radiology, Lausanne (Switzerland); Ou, Phalla; Brunelle, Francis [Necker Children's Hospital, Department of Radiology, Paris (France); Bochud, Francois O.; Verdun, Francis R. [University Hospital Center and University of Lausanne, Institute of Radiation Physics, Lausanne (Switzerland)
2011-09-15
Radiation dose exposure is of particular concern in children due to the possible harmful effects of ionizing radiation. The adaptive statistical iterative reconstruction (ASIR) method is a promising new technique that reduces image noise and produces better overall image quality compared with routine-dose contrast-enhanced methods. To assess the benefits of ASIR on the diagnostic image quality in paediatric cardiac CT examinations. Four paediatric radiologists based at two major hospitals evaluated ten low-dose paediatric cardiac examinations (80 kVp, CTDI vol 4.8-7.9 mGy, DLP 37.1-178.9 mGy.cm). The average age of the cohort studied was 2.6 years (range 1 day to 7 years). Acquisitions were performed on a 64-MDCT scanner. All images were reconstructed at various ASIR percentages (0-100%). For each examination, radiologists scored 19 anatomical structures using the relative visual grading analysis method. To estimate the potential for dose reduction, acquisitions were also performed on a Catphan phantom and a paediatric phantom. The best image quality for all clinical images was obtained with 20% and 40% ASIR (p < 0.001) whereas with ASIR above 50%, image quality significantly decreased (p < 0.001). With 100% ASIR, a strong noise-free appearance of the structures reduced image conspicuity. A potential for dose reduction of about 36% is predicted for a 2- to 3-year-old child when using 40% ASIR rather than the standard filtered back-projection method. Reconstruction including 20% to 40% ASIR slightly improved the conspicuity of various paediatric cardiac structures in newborns and children with respect to conventional reconstruction (filtered back-projection) alone. (orig.)
Directory of Open Access Journals (Sweden)
Yeo Beom Yoon
2014-04-01
Full Text Available Windows are the primary aperture for introducing solar radiation into the interior space of a building. This experiment explores the use of EnergyPlus software for analyzing the illuminance level on the floor of a room with reference to its distance from the window. For this experiment, a double clear glass window has been used. The preliminary modelling in EnergyPlus showed results consistent with the experimentally monitored data in real time. EnergyPlus has two commonly used daylighting algorithms: the DElight method, employing the radiosity technique, and the Detailed method, employing the split-flux technique. Further analysis of illuminance using the DElight and Detailed methods showed a significant difference in the results. Finally, we compared the algorithms of the two analysis methods in EnergyPlus.
Four-dimensional cone beam CT reconstruction and enhancement using a temporal nonlocal means method
Energy Technology Data Exchange (ETDEWEB)
Jia Xun; Tian Zhen; Lou Yifei; Sonke, Jan-Jakob; Jiang, Steve B. [Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States); School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, Georgia 30318 (United States); Department of Radiation Oncology, Netherlands Cancer Institute-Antoni van Leeuwenhoek Hospital, Plesmanlaan 121, 1066 CX Amsterdam (Netherlands); Center for Advanced Radiotherapy Technologies and Department of Radiation Medicine and Applied Sciences, University of California San Diego, La Jolla, California 92037 (United States)
2012-09-15
Purpose: Four-dimensional cone beam computed tomography (4D-CBCT) has been developed to provide respiratory phase-resolved volumetric imaging in image guided radiation therapy. Conventionally, it is reconstructed by first sorting the x-ray projections into multiple respiratory phase bins according to a breathing signal extracted either from the projection images or from external surrogates, and then reconstructing a 3D CBCT image in each phase bin independently using the FDK algorithm. This method requires an adequate number of projections for each phase, which can be achieved using a slow gantry rotation or multiple gantry rotations. An inadequate number of projections in each phase bin results in low quality 4D-CBCT images with obvious streaking artifacts. 4D-CBCT images at different breathing phases share a lot of redundant information, because they represent the same anatomy captured at slightly different temporal points. Taking this redundancy along the temporal dimension into account can in principle facilitate the reconstruction when the number of projection images is inadequate. In this work, the authors propose two novel 4D-CBCT algorithms: an iterative reconstruction algorithm and an enhancement algorithm, utilizing a temporal nonlocal means (TNLM) method. Methods: The authors define a TNLM energy term for a given set of 4D-CBCT images. Minimization of this term favors those 4D-CBCT images such that any anatomical feature at one spatial point at one phase can be found at a nearby spatial point at neighboring phases. 4D-CBCT reconstruction is achieved by minimizing a total energy containing a data fidelity term and the TNLM energy term. As for the image enhancement, 4D-CBCT images generated by the FDK algorithm are enhanced by minimizing the TNLM function while keeping the enhanced images close to the FDK results. A forward-backward splitting algorithm and a Gauss-Jacobi iteration method are employed to solve the problems. The algorithms implementation on
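The TNLM energy term rewards 4D-CBCT phase sets in which each anatomical feature reappears at a nearby location in neighboring phases. A drastically simplified scalar sketch of that idea follows, using a hard best-match search in a small window instead of the smooth patch-weighted sum the paper actually minimizes:

```python
import numpy as np

def tnlm_energy(img_a, img_b, search=1):
    """Toy temporal-nonlocal score between two phase images: each pixel of
    img_a is matched to the best pixel inside a small search window of the
    neighbouring-phase image img_b, and the squared mismatches are summed.
    (The paper uses a smooth patch-weighted sum; a hard best match is used
    here only to keep the sketch short.)"""
    n, m = img_a.shape
    e = 0.0
    for i in range(n):
        for j in range(m):
            best = np.inf
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < m:
                        best = min(best, (img_a[i, j] - img_b[ii, jj]) ** 2)
            e += best
    return e

# anatomy that shifts by one pixel between phases scores far better than
# an unrelated image
base = np.arange(64, dtype=float).reshape(8, 8)
shifted = np.roll(base, 1, axis=0)
```

A low score for the shifted pair captures the redundancy argument in the abstract: neighboring phases contain the same anatomy at slightly displaced positions, so penalizing this energy couples the phase bins during reconstruction.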
Directory of Open Access Journals (Sweden)
Kravtsenyuk Olga V
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.
Directory of Open Access Journals (Sweden)
Vladimir V. Lyubimov
2007-01-01
Full Text Available The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT method is substantiated. The PAT method recently presented by us is based on a concept of an average statistical trajectory for transfer of light energy, the photon average trajectory (PAT. The inverse problem of diffuse optical tomography is reduced to a solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs the images blurred due to averaging over spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for the different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for least squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a 27% gain in spatial resolution can be obtained.
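The deblurring step described above solves a linear least-squares system. One of the two solvers mentioned, conjugate gradients for the least-squares problem (CGLS), can be sketched on a toy 1D blur; the 3-tap blur matrix here is illustrative, not a simulated PAT point spread function:

```python
import numpy as np

def cgls(A, b, n_iter=100):
    """Conjugate gradients for least squares, min_x ||Ax - b||, one of the
    two iterative solvers mentioned for the deblurring step."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        denom = q @ q
        if denom == 0.0 or gamma == 0.0:
            break                       # converged exactly
        alpha = gamma / denom
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# toy spatially invariant blur: a 3-tap kernel written as a banded matrix
n = 20
A = np.zeros((n, n))
for i in range(n):
    for k, w in ((-1, 0.25), (0, 0.5), (1, 0.25)):
        if 0 <= i + k < n:
            A[i, i + k] = w

x_true = np.zeros(n)
x_true[7], x_true[13] = 1.0, -0.5       # two point sources
b = A @ x_true                          # blurred measurement
x_rec = cgls(A, b)
```

The spatially variant blur model in the abstract would replace this single banded matrix with interpolated local point spread functions, but the solver itself is unchanged.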
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
A new method for three-dimensional laparoscopic ultrasound model reconstruction
DEFF Research Database (Denmark)
Fristrup, C W; Pless, T; Durup, J
2004-01-01
BACKGROUND: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study...... accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetic tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...
Stephen, Joanna M; Kittl, Christoph; Williams, Andy; Zaffagnini, Stefano; Marcheggiani Muccioli, Giulio Maria; Fink, Christian; Amis, Andrew A
2016-05-01
There remains a lack of evidence regarding the optimal method when reconstructing the medial patellofemoral ligament (MPFL) and whether some graft constructs can be more forgiving to surgical errors, such as overtensioning or tunnel malpositioning, than others. The null hypothesis was that there would not be a significant difference between reconstruction methods (eg, graft type and fixation) in the adverse biomechanical effects (eg, patellar maltracking or elevated articular contact pressure) resulting from surgical errors such as tunnel malpositioning or graft overtensioning. Controlled laboratory study. Nine fresh-frozen cadaveric knees were placed on a customized testing rig, where the femur was fixed but the tibia could be moved freely from 0° to 90° of flexion. Individual quadriceps heads and the iliotibial tract were separated and loaded to 205 N of tension using a weighted pulley system. Patellofemoral contact pressures and patellar tracking were measured at 0°, 10°, 20°, 30°, 60°, and 90° of flexion using pressure-sensitive film inserted between the patella and trochlea, in conjunction with an optical tracking system. The MPFL was transected and then reconstructed in a randomized order using a (1) double-strand gracilis tendon, (2) quadriceps tendon, and (3) tensor fasciae latae allograft. Pressure maps and tracking measurements were recorded for each reconstruction method in 2 N and 10 N of tension and with the graft positioned in the anatomic, proximal, and distal femoral tunnel positions. Statistical analysis was undertaken using repeated-measures analyses of variance, Bonferroni post hoc analyses, and paired t tests. Anatomically placed grafts during MPFL reconstruction tensioned to 2 N resulted in the restoration of intact medial joint contact pressures and patellar tracking for all 3 graft types investigated (P > .050). However, femoral tunnels positioned proximal or distal to the anatomic origin resulted in significant increases in the mean
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong; Sun, Shuyu; Xie, Xiaoping
2015-01-01
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
THE APPLICATION OF QRQC METHOD TO SOLVE PROBLEMS AND TO IMPROVE THE PRODUCTION FLUX (1)
Directory of Open Access Journals (Sweden)
Ancuţa BĂLTEANU
2015-05-01
Full Text Available QRQC is a quality management system which aims at customer satisfaction through immediate action. The proposed subject will be developed over two parts. The first paper, namely this one, presents the initial situation within a production flow, observed in the appearance of defects and nonconformities in obtaining the final products. In the second paper we will show the use of this method in a situation that requires the elimination of a technological problem that appeared in the production flux, and we will highlight its positive consequences.
Standard Test Method for Measuring Heat Flux Using a Water-Cooled Calorimeter
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the measurement of a steady heat flux to a given water-cooled surface by means of a system energy balance. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
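The system energy balance in 1.1 reduces, in its simplest form, to the statement that all heat absorbed by the gauged surface is carried away by the coolant. A hedged sketch of that balance follows; the standard's exact symbols and correction terms are not reproduced here, and the numbers are hypothetical:

```python
def heat_flux(m_dot, delta_t, area, cp=4186.0):
    """Steady heat flux from a water-cooled calorimeter energy balance:
    q'' = m_dot * cp * dT / A. This is the generic balance only; the
    standard's exact symbols and corrections are not reproduced here.

    m_dot   coolant mass flow rate, kg/s
    delta_t coolant temperature rise, K
    area    exposed (gauged) surface area, m^2
    cp      specific heat of water, J/(kg K)
    """
    return m_dot * cp * delta_t / area

# hypothetical reading: 20 g/s flow, 5 K rise, 100 cm^2 surface
q = heat_flux(m_dot=0.02, delta_t=5.0, area=0.01)   # W/m^2
```

The measurement therefore needs only the coolant flow rate, the inlet-outlet temperature rise, and the gauged area, which is why the method is described as a system energy balance rather than a local sensor reading.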
Ibrahim, Anis; Haniff Harun, Mohd; Yusup, Yusri
2017-04-01
A study presents measurements of carbon dioxide, latent heat, and sensible heat fluxes above a mature oil palm plantation on mineral soil in Keratong, Pahang, Peninsular Malaysia. The sampling campaign was conducted over a 25-month period, from September 2013 to February 2015 and May 2016 to November 2016, using the eddy covariance method. The main aim of this work is to assess carbon dioxide and energy fluxes over this plantation at different time scales, seasonal and diurnal, and to determine the effects of season and relevant meteorological parameters on these fluxes. Energy balance closure analyses gave a slope of 0.69 between the sum of latent and sensible heat fluxes and total incoming energy, with an R2 value of 0.86 and an energy balance ratio of 0.80. The averaged net radiation was 108 W m-2. The results show that at the diurnal scale, carbon dioxide, latent, and sensible heat fluxes exhibited a clear diurnal trend, where the carbon dioxide flux reached its minimum of -3.59 μmol m-2 s-1 in the mid-afternoon and its maximum in the morning, while the latent and sensible heat fluxes behaved conversely to the carbon dioxide flux. The average carbon dioxide flux was -0.37 μmol m-2 s-1. At the seasonal timescale, carbon dioxide fluxes did not show any apparent trend except during the Northeast Monsoon, where the highest variability of the monthly means of carbon dioxide occurred.
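The closure numbers quoted above (slope, R2, energy balance ratio) come from comparing turbulent fluxes with available energy. A minimal version of those two closure diagnostics, run on synthetic data built to close at exactly 70%:

```python
import numpy as np

def closure_stats(le, h, rn, g=0.0):
    """Energy-balance closure diagnostics for eddy-covariance data:
    the zero-intercept regression slope of turbulent fluxes (LE + H)
    against available energy (Rn - G), and the energy balance ratio
    EBR = sum(LE + H) / sum(Rn - G)."""
    turb = np.asarray(le) + np.asarray(h)
    avail = np.asarray(rn) - np.asarray(g)
    slope = (turb @ avail) / (avail @ avail)   # least squares through origin
    ebr = turb.sum() / avail.sum()
    return slope, ebr

# synthetic half-hourly fluxes (W m-2) with 70% closure by construction
rn = np.array([100.0, 200.0, 300.0, 400.0])
le, h = 0.4 * rn, 0.3 * rn
slope, ebr = closure_stats(le, h, rn)
```

On real data the slope and EBR differ (0.69 vs 0.80 in the study) because scatter, intercepts, and storage terms affect the two statistics differently.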
Tomiwa, K. G.
2017-09-01
The search for new physics in the H → γγ + MET channel relies on how well the missing transverse energy (MET) is reconstructed. The MET algorithm used by the ATLAS experiment in turn uses input variables, such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of di-photon vertex reconstruction algorithms (the hardest vertex method and the Neural Network method). Comparing the performance of these algorithms on the nominal Standard Model sample and the Beyond Standard Model sample, we see that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.
International Nuclear Information System (INIS)
Hayward, Robert M.; Rahnema, Farzad; Zhang, Dingkang
2013-01-01
Highlights: ► A new hybrid stochastic–deterministic transport theory method to couple with diffusion theory. ► The method is implemented in 2D hexagonal geometry. ► The new method produces excellent results when compared with Monte Carlo reference solutions. ► The method is fast, solving all test cases in less than 12 s. - Abstract: A new hybrid stochastic–deterministic transport theory method, which is designed to couple with diffusion theory, is presented. The new method is an extension of the incident flux response expansion method, and it combines the speed of diffusion theory with the accuracy of transport theory. With ease of use in mind, the new method is derived in such a way that it can be implemented with only minimal modifications to an existing diffusion theory method. A new angular expansion, which is necessary for the diffusion theory coupling, is developed in 2D and 3D. The method is implemented in 2D hexagonal geometry, and an HTTR benchmark problem is used to test its accuracy in a standalone configuration. It is found that the new method produces excellent results (with average relative error in partial current less than 0.033%) when compared with Monte Carlo reference solutions. Furthermore, the method is fast, solving all test cases in less than 12 s
A successive over-relaxation for slab geometry Simplified SN method with interface flux iteration
International Nuclear Information System (INIS)
Yavuz, M.
1995-01-01
A Successive Over-Relaxation scheme is proposed for speeding up the solution of one-group slab geometry transport problems using a Simplified SN method. The solution of the Simplified SN method that is completely free from all spatial truncation errors is based on the expansion of the angular flux in spherical-harmonics solutions. One way to obtain the (numerical) solution of the Simplified SN method is to use Interface Flux Iteration, which can be considered as the Gauss-Seidel relaxation scheme; the new information is immediately used in the calculations. To accelerate the convergence, an over-relaxation parameter is employed in the solution algorithm. The over-relaxation parameters for a number of cases depending on scattering ratios and mesh sizes are determined by Fourier analyzing infinite-medium Simplified S2 equations. Using such over-relaxation parameters in the iterative scheme, a significant increase in the convergence rate of transport problems can be achieved for coarse spatial cells whose spatial widths are greater than one mean free path
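The Gauss-Seidel-with-over-relaxation idea can be demonstrated on a generic tridiagonal system standing in for the discretized slab problem; the actual Simplified SN interface flux equations are not reproduced here, and the omega value below is illustrative rather than one of the Fourier-derived parameters:

```python
import numpy as np

def sor_solve(A, b, omega, tol=1e-10, max_iter=10000):
    """Successive over-relaxation: a Gauss-Seidel sweep (new values used
    immediately, as in interface flux iteration) relaxed by omega.
    Returns the solution and the number of sweeps used."""
    n = len(b)
    x = np.zeros(n)
    for it in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x_gs = (b[i] - sigma) / A[i, i]              # Gauss-Seidel value
            x[i] = x_old[i] + omega * (x_gs - x_old[i])  # over-relax
        if np.linalg.norm(x - x_old) < tol:
            return x, it + 1
    return x, max_iter

# generic tridiagonal system standing in for the discretized slab problem
n = 30
A = (np.diag(np.full(n, 2.0))
     + np.diag(np.full(n - 1, -1.0), 1)
     + np.diag(np.full(n - 1, -1.0), -1))
b = np.ones(n)

x_gs, it_gs = sor_solve(A, b, omega=1.0)    # plain Gauss-Seidel
x_sor, it_sor = sor_solve(A, b, omega=1.8)  # over-relaxed: far fewer sweeps
```

Setting omega = 1 recovers plain Gauss-Seidel; a well-chosen omega > 1 cuts the sweep count by an order of magnitude on this system, which is the effect the abstract reports for coarse spatial cells.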
Directory of Open Access Journals (Sweden)
Meng Lu
2013-01-01
Full Text Available Thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional measurement of TCF thickness is the single/double wire method, which has several problems, such as risks to personal safety, dependence on the operator, and poor repeatability. To solve these problems, we designed and built an instrument and present a novel method to measure TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. The measurement method is based on computer vision algorithms, including image denoising, monocular range measurement, the scale-invariant feature transform (SIFT), and image gray-gradient detection. Using the present instrument and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that the instrument and method work well on site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of traditional measurement methods, or even replace them.
Benazzi, S; Stansfield, E; Milani, C; Gruppioni, G
2009-07-01
The process of forensic identification of missing individuals frequently relies on the superimposition of cranial remains onto an individual's picture and/or facial reconstruction. In the latter, the integrity of the skull or cranium is an important factor in successful identification. Here, we recommend the use of computerized virtual reconstruction and geometric morphometrics for individual reconstruction and identification in forensics. We apply these methods to reconstruct a complete cranium from facial remains that allegedly belong to the famous Italian humanist of the fifteenth century, Angelo Poliziano (1454-1494). Raw data were obtained by computed tomography scans of the Poliziano face and a complete reference skull of a 37-year-old Italian male. Given that the amount of distortion of the facial remains is unknown, two reconstructions are proposed: the first calculates the average shape between the original and its reflection, and the second discards the less preserved left side of the cranium under the assumption that there is no deformation on the right. Both reconstructions perform well in superimposition with the original preserved facial surface in a virtual environment. The reconstruction by averaging the original and its reflection yielded better results in superimposition with portraits of Poliziano. We argue that the combination of computerized virtual reconstruction and geometric morphometric methods offers a number of advantages over traditional plastic reconstruction, among which are speed, reproducibility, ease of manipulation when superimposing with pictures in a virtual environment, and control over assumptions.
Developing a framework for evaluating tallgrass prairie reconstruction methods and management
Larson, Diane L.; Ahlering, Marissa; Drobney, Pauline; Esser, Rebecca; Larson, Jennifer L.; Viste-Sparkman, Karen
2018-01-01
The thousands of hectares of prairie reconstructed each year in the tallgrass prairie biome can provide a valuable resource for evaluation of seed mixes, planting methods, and post-planting management if methods used and resulting characteristics of the prairies are recorded and compiled in a publicly accessible database. The objective of this study was to evaluate the use of such data to understand the outcomes of reconstructions over a 10-year period at two U.S. Fish and Wildlife Service refuges. Variables included number of species planted, seed source (combine-harvest or combine-harvest plus hand-collected), fire history, and planting method and season. In 2015 we surveyed vegetation on 81 reconstructions and calculated proportion of planted species observed; introduced species richness; native species richness, evenness and diversity; and mean coefficient of conservatism. We conducted exploratory analyses to learn how implied communities based on seed mix compared with observed vegetation; which seeding or management variables were influential in the outcome of the reconstructions; and consistency of responses between the two refuges. Insights from this analysis include: 1) proportion of planted species observed in 2015 declined as planted richness increased, but lack of data on seeding rate per species limited conclusions about value of added species; 2) differing responses to seeding and management between the two refuges suggest the importance of geographic variability that could be addressed using a public database; and 3) variables such as fire history are difficult to quantify consistently and should be carefully evaluated in the context of a public data repository.
International Nuclear Information System (INIS)
Shinn, J.H.
1976-01-01
Two methods of dust-flux measurements are discussed which have been utilized to estimate aerosol plutonium deposition and resuspension. In previous studies the methods were found to be sufficiently detailed to permit parameterization of dust-flux to the erodibility of the soil, and a seventh-power dependency of dust-flux (or plutonium flux) to wind speed was observed in worst case conditions. The eddy-correlation method is technically more difficult, requires high-speed data acquisition, and requires an instrument response time better than one second, but the eddy-correlation method has been shown feasible with new fast-response sensors, and it is more useful in limited areas because it can be used as a probe. The flux-gradient method is limited by critical assumptions and is more bulky, but the method is more commonly used and accepted. The best approach is to use both methods simultaneously. It is suggested that several questions should be investigated by the methods, such as saltation stimulation of dust-flux, simultaneous suspension and deposition, foliar deposition and trapping, erodibility of crusted surfaces, and horizontally heterogeneous erodibility
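The eddy-correlation method mentioned above computes the vertical flux as the covariance of fast-response vertical wind and concentration measurements. A minimal sketch with synthetic data (all numbers and the correlation structure are illustrative, not measurements from the cited work):

```python
import random
import statistics

def eddy_covariance_flux(w, c):
    """Turbulent vertical flux F = mean(w' c'), where primes denote
    deviations from the record means of vertical wind speed w and
    dust (or plutonium aerosol) concentration c."""
    w_bar = statistics.fmean(w)
    c_bar = statistics.fmean(c)
    return statistics.fmean((wi - w_bar) * (ci - c_bar) for wi, ci in zip(w, c))

# Synthetic fast-response record: concentration fluctuations partly
# correlated with updrafts, so a net upward (resuspension) flux results.
random.seed(0)
w = [random.gauss(0.0, 0.3) for _ in range(20000)]            # m/s
c = [10.0 + 5.0 * wi + random.gauss(0.0, 0.5) for wi in w]    # arbitrary units
flux = eddy_covariance_flux(w, c)
```

This is why the method needs sub-second instrument response, as the abstract notes: the covariance lives in the fast fluctuations, which slow sensors average away.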
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead.
Wu, Jianbo; Fang, Hui; Li, Long; Wang, Jie; Huang, Xiaoming; Kang, Yihua; Sun, Yanhua; Tang, Chaoqing
2017-01-21
To meet the great needs for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatability MFL probing system is designed and manufactured, which was embedded with the developed sensors. It can track the swing movement of drill pipes and allow the pipe ends to pass smoothly. Finally, the developed system is employed in a drilling field for drill pipe inspection. Test results show that the proposed method can fulfill the requirements for drill pipe inspection at wellheads, which is of great importance in drill pipe safety.
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
Guan, Huifeng
In the past decade many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, one or more specific problems prevent each of them from being effectively or efficiently employed. In this dissertation, four novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed, or optimized reconstruction methods are proposed, for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency content of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the
International Nuclear Information System (INIS)
Pill-Hoon Choung
1999-01-01
Although there are various applications of allogenic bone grafts, a new technique of prevascularized lyophilized allogenic bone grafting for maxillo-mandibular reconstruction is presented. Allogenic bone was prepared by the author's protocol for jaw defects in powder, chip, or block form. The author used lyophilized allogenic block-bone grafts for discontinuity defects. In those cases, neovascularization and resorption of the allogenic bone were important factors for the success of grafting. To overcome these problems, the author designed a technique of prefabricated vascularization of allogenic bone (lyophilized cranium), with or without application of bovine BMP. Lyophilized cranial bone was shaped for the defect and placed into the scalp. After confirming a hot spot via scintigram several months later, the vascularized allogenic bone was harvested pedicled on the parietotemporal fascia based on the superficial temporal artery and vein. The vascularized allogenic cranial bone was rotated into the defect and fixed rigidly. Postoperatively, there was no severe resorption or functional disturbance of the mandible. In this technique, BMP seems to play an important role in osteogenesis and neovascularization. Eight patients underwent prefabricated vascularization of allogenic bone grafts; among them, four cases of reconstruction of mandibular discontinuity defects and one case of reconstruction of a maxillectomy defect underwent this method, which is presented with good results. This method may be an alternative to microvascular free bone grafting
International Nuclear Information System (INIS)
Soussaline, F.; Bidaut, L.; Raynaud, C.; Le Coq, G.
1983-06-01
An analytical solution to the SPECT reconstruction problem, in which the actual attenuation effect can be included, was developed using a regularizing iterative method (RIM). The potential of this approach in quantitative brain studies using a tracer for cerebrovascular disorders is now under evaluation. Mathematical simulations of a distributed activity in the brain surrounded by the skull, and physical phantom studies, were performed using a rotating-camera-based SPECT system, allowing calibration of the system and evaluation of the adapted method. In the simulation studies, the contrast obtained along a profile was less than 5%, the standard deviation 8%, and the quantitative accuracy 13%, for a uniform emission distribution of mean = 100 per pixel and two attenuation coefficients of μ = 0.115 cm⁻¹ and 0.5 cm⁻¹. Clinical data obtained after injection of 123I (AMPI) were reconstructed using the RIM, without and with cerebrovascular diseases or lesion defects. Contour-finding techniques were used for delineation of the brain and the skull, and measured attenuation coefficients were assumed within these two regions. Using volumes of interest selected on homogeneous regions of a hemisphere and mirrored symmetrically, the statistical uncertainty for 300 K events in the tomogram was found to be 12%, and the index of symmetry was 4% for a normal distribution. These results suggest that quantitative SPECT reconstruction of brain distributions is feasible and that, combined with an adapted tracer and an adequate model, physiopathological parameters could be extracted
International Nuclear Information System (INIS)
Manrique, John Peter O.; Costa, Alessandro M.
2016-01-01
The spectral distribution of megavoltage X-rays used in radiotherapy departments is a fundamental quantity from which, in principle, all relevant information required for radiotherapy treatments can be determined. To calculate the dose delivered to patients undergoing radiation therapy, treatment planning systems (TPS) are used, which make use of convolution and superposition algorithms and require prior knowledge of the photon fluence spectrum to perform three-dimensional dose calculations, ensuring better accuracy in tumor control probabilities while keeping normal tissue complication probabilities low. In this work we obtained the photon fluence spectrum of the 6 MV X-ray beam of a SIEMENS ONCOR linear accelerator, using an inverse method to reconstruct the photon spectra from transmission curves measured for different thicknesses of aluminum; the method used for reconstruction of the spectra is a stochastic technique known as generalized simulated annealing (GSA), based on the quasi-equilibrium statistics of Tsallis. To validate the reconstructed spectra, we calculated the percentage depth dose (PDD) curve for the 6 MV beam using Monte Carlo simulation with the Penelope code, and from the PDD we then calculated the beam quality index TPR20/10. (author)
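As a rough illustration of recovering a spectrum from transmission measurements by stochastic search, the sketch below fits the weights of a two-bin toy spectrum to a transmission curve with plain Metropolis-style simulated annealing (not the Tsallis GSA of the abstract); the attenuation coefficients, thicknesses, true weights, and cooling schedule are all made up for illustration:

```python
import math
import random

# Toy two-bin spectrum: illustrative attenuation coefficients (per cm of Al)
MU = [0.5, 0.15]
TRUE_W = [0.3, 0.7]                       # "unknown" spectral weights
THICKNESS = [0.0, 1.0, 2.0, 4.0, 8.0, 16.0]

def transmission(w, t):
    """Fraction transmitted through thickness t for spectral weights w."""
    return sum(wi * math.exp(-mu * t) for wi, mu in zip(w, MU))

measured = [transmission(TRUE_W, t) for t in THICKNESS]

def misfit(w):
    """Sum-of-squares mismatch against the measured transmission curve."""
    return sum((transmission(w, t) - m) ** 2 for t, m in zip(THICKNESS, measured))

def anneal(steps=20000, t0=0.05):
    """Metropolis annealing over the one free weight (weights sum to 1)."""
    random.seed(1)
    w, cost = [0.5, 0.5], misfit([0.5, 0.5])
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-12    # linear cooling schedule
        w1 = min(1.0, max(0.0, w[0] + random.uniform(-0.05, 0.05)))
        cand = [w1, 1.0 - w1]
        c = misfit(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if c < cost or random.random() < math.exp((cost - c) / temp):
            w, cost = cand, c
    return w

w_hat = anneal()
```

A real spectrum reconstruction has many energy bins and noisy data, which is what makes the generalized (Tsallis) visiting distribution attractive; this two-bin sketch only shows the accept/reject skeleton.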
International Nuclear Information System (INIS)
Knob, P.J.
1982-07-01
This work is concerned with the detection of flux disturbances in pebble bed high temperature reactors by means of flux measurements in the side reflector. Included among the disturbances studied are xenon oscillations, rod group insertions, and individual rod insertions. Using the three-dimensional diffusion code CITATION, core calculations for both a very small reactor (KAHTER) and a large reactor (PNP-3000) were carried out to determine the neutron fluxes at the detector positions. These flux values were then used in flux mapping codes for reconstructing the flux distribution in the core. As an extension of the already existing two-dimensional MOFA code, which maps azimuthal disturbances, a new three-dimensional flux mapping code ZELT was developed for handling axial disturbances as well. It was found that both flux mapping programs give satisfactory results for small and large pebble bed reactors alike. (orig.) [de]
Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method
DEFF Research Database (Denmark)
Jeong, Cheol-Ho; Ih, Jeong-Guon
2009-01-01
First, surfaces of an extended source are divided into reasonably small segments. From each source segment, one beam is projected into the field and all emitted beams are traced. Radiated beams from the source reach array sensors after traveling various paths including the wall reflections. Collecting all the pressure histories at the field points, source-observer relations can be constructed in a matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...
Brief review of image reconstruction methods for imaging in nuclear medicine
International Nuclear Information System (INIS)
Murayama, Hideo
1999-01-01
Emission computed tomography (ECT) has as its major emphasis the quantitative determination of moment-to-moment changes in the chemistry and flow physiology of injected or inhaled compounds labeled with radioactive atoms in a human body. The major difference lies in the fact that ECT seeks to describe the location and intensity of sources of emitted photons in an attenuating medium, whereas transmission X-ray computed tomography (TCT) seeks to determine the distribution of the attenuating medium. A second important difference between ECT and TCT is that of available statistics. ECT statistics are low because each photon, emitted in an uncontrolled direction, must be detected and analyzed individually, unlike in TCT. The following sections review the historical development of image reconstruction methods for imaging in nuclear medicine, relevant intrinsic concepts for image reconstruction in ECT, and the current status of volume imaging, as well as a unique approach to iterative techniques for ECT. (author). 130 refs
High-order noise analysis for low dose iterative image reconstruction methods: ASIR, IRIS, and MBAI
Do, Synho; Singh, Sarabjeet; Kalra, Mannudeep K.; Karl, W. Clem; Brady, Thomas J.; Pien, Homer
2011-03-01
Iterative reconstruction techniques (IRTs) have been shown to suppress noise significantly in low-dose CT imaging. However, medical doctors hesitate to accept this new technology because the visual impression of IRT images differs from that of full-dose filtered back-projection (FBP) images. Common noise measurements, such as the mean and standard deviation of a homogeneous region in the image, do not provide sufficient characterization of noise statistics when the probability density function becomes non-Gaussian. In this study, we measure L-moments of intensity values of images acquired at 10% of normal dose and reconstructed by the IRT methods of two state-of-the-art clinical scanners (i.e., GE HDCT and Siemens DSCT Flash), keeping the dose level identical for both. The high- and low-dose scans (i.e., 10% of the high dose) were acquired from each scanner, and L-moments of noise patches were calculated for comparison.
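Sample L-moments, as used above, are linear combinations of ordered data and are far less sensitive to heavy non-Gaussian tails than ordinary moments. A minimal sketch via the standard probability-weighted-moment estimators:

```python
def l_moments(sample):
    """First four sample L-moments (l1..l4) computed from the
    probability-weighted moments b_r of the sorted sample."""
    x = sorted(sample)
    n = len(x)
    b = []
    for r in range(4):
        total = 0.0
        for i in range(r + 1, n + 1):            # 1-based rank i
            weight = 1.0
            for k in range(1, r + 1):
                weight *= (i - k) / (n - k)      # (i-1)...(i-r) / (n-1)...(n-r)
            total += weight * x[i - 1]
        b.append(total / n)
    l1 = b[0]
    l2 = 2 * b[1] - b[0]
    l3 = 6 * b[2] - 6 * b[1] + b[0]
    l4 = 20 * b[3] - 30 * b[2] + 12 * b[1] - b[0]
    return l1, l2, l3, l4

l1, l2, l3, l4 = l_moments([1, 2, 3, 4, 5])
```

Here l1 is the mean, l2 a robust scale measure (half of Gini's mean difference), and l3 and l4 capture skewness- and kurtosis-like shape; for the exactly symmetric sample above, l3 vanishes.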
International Nuclear Information System (INIS)
Osorio, A.M.; Hadler, J.C.; Iunes, P.J.; Paulo, S.R. de
1993-06-01
In order to use the fission track dating method, the flux gradient within the sample holder was verified in some irradiation positions of the IEA-R1 reactor at IPEN/CNEN, Sao Paulo. The fission track dating method considers only thermal neutron fission tracks; to subtract the other contributions, sample irradiations with a cadmium cover were performed. The influence of the cadmium cover on the neutron flux was studied. (author)
Directory of Open Access Journals (Sweden)
P.L. Israelevich
Full Text Available In this study we test a stream function method suggested by Israelevich and Ershkovich for instantaneous reconstruction of global, high-latitude ionospheric convection patterns from a limited set of experimental observations, namely, from the electric field or ion drift velocity vector measurements taken along two polar satellite orbits only. These two satellite passes subdivide the polar cap into several adjacent areas. Measured electric fields or ion drifts can be considered as boundary conditions (together with the zero electric potential condition at the low-latitude boundary) for those areas, and the entire ionospheric convection pattern can be reconstructed as a solution of the boundary value problem for the stream function without any preliminary information on ionospheric conductivities. In order to validate the stream function method, we utilized the IZMIRAN electrodynamic model (IZMEM), recently calibrated by the DMSP ionospheric electrostatic potential observations. For the sake of simplicity, we took the modelled electric fields along the noon-midnight and dawn-dusk meridians as the boundary conditions. Then, the solutions of the boundary value problem (i.e., the reconstructed potential distribution over the entire polar region) are compared with the original IZMEM/DMSP electric potential distributions, as well as with various cross-cuts of the polar cap. It is found that the reconstructed convection patterns are in good agreement with the original modelled patterns in both the northern and southern polar caps. The analysis is carried out for winter and summer conditions, as well as for a number of configurations of the interplanetary magnetic field.
Key words: Ionosphere (electric fields and currents; plasma convection; modelling and forecasting
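The reconstruction above reduces to a boundary value problem: given potential values along the satellite passes and the outer boundary, solve for the stream function in the interior. A toy Cartesian analogue, solving Laplace's equation with Dirichlet data by Gauss-Seidel relaxation (the grid size and sweep count are arbitrary choices, and the real problem is on a polar cap, not a square):

```python
def solve_laplace(boundary, n=20, sweeps=3000):
    """Gauss-Seidel solution of Laplace's equation on an n x n grid with
    Dirichlet values supplied by boundary(i, j) on the grid edge."""
    g = [[boundary(i, j) if i in (0, n - 1) or j in (0, n - 1) else 0.0
          for j in range(n)] for i in range(n)]
    for _ in range(sweeps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # five-point stencil average relaxes toward the harmonic solution
                g[i][j] = 0.25 * (g[i - 1][j] + g[i + 1][j]
                                  + g[i][j - 1] + g[i][j + 1])
    return g

# Linear boundary data: the unique harmonic interpolant is the same
# linear function, so every interior value must relax to i + 2*j.
phi = solve_laplace(lambda i, j: i + 2 * j, n=20)
```

The key property the abstract exploits is visible here: the interior is determined entirely by boundary data, with no interior (conductivity) information required.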
Directory of Open Access Journals (Sweden)
Jacob J Setterbo
Full Text Available Racetrack surface is a risk factor for racehorse injuries and fatalities. Current research indicates that race surface mechanical properties may be influenced by material composition, moisture content, temperature, and maintenance. Race surface mechanical testing in a controlled laboratory setting would allow for objective evaluation of dynamic properties of surface and factors that affect surface behavior. To develop a method for reconstruction of race surfaces in the laboratory and validate the method by comparison with racetrack measurements of dynamic surface properties. Track-testing device (TTD) impact tests were conducted to simulate equine hoof impact on dirt and synthetic race surfaces; tests were performed both in situ (racetrack) and using laboratory reconstructions of harvested surface materials. Clegg Hammer in situ measurements were used to guide surface reconstruction in the laboratory. Dynamic surface properties were compared between in situ and laboratory settings. Relationships between racetrack TTD and Clegg Hammer measurements were analyzed using stepwise multiple linear regression. Most dynamic surface property setting differences (racetrack vs. laboratory) were small relative to surface material type differences (dirt vs. synthetic). Clegg Hammer measurements were more strongly correlated with TTD measurements on the synthetic surface than the dirt surface. On the dirt surface, Clegg Hammer decelerations were negatively correlated with TTD forces. Laboratory reconstruction of racetrack surfaces guided by Clegg Hammer measurements yielded TTD impact measurements similar to in situ values. The negative correlation between TTD and Clegg Hammer measurements confirms the importance of instrument mass when drawing conclusions from testing results. Lighter impact devices may be less appropriate for assessing dynamic surface properties compared to testing equipment designed to simulate hoof impact (TTD). Dynamic impact properties of race surfaces
Pulsed magnetic flux leakage method for hairline crack detection and characterization
Okolo, Chukwunonso K.; Meydan, Turgut
2018-04-01
The magnetic flux leakage (MFL) method is a well-established branch of electromagnetic non-destructive testing (NDT), extensively used for evaluating defects both on the surface and far surface of pipeline structures. However, the conventional techniques are not capable of estimating a defect's approximate size, location, and orientation, so an additional transducer is required to provide the extra information needed. This research is aimed at solving the problem of granular bond separation, which occurs during manufacturing and leaves pipeline structures with miniature cracks. It reports on a quantitative approach based on the pulsed magnetic flux leakage (PMFL) method for the detection and characterization of the signals produced by tangentially oriented rectangular surface and far-surface hairline cracks, achieved through visualization and 3D imaging of the leakage field. The investigation compared finite element numerical simulation with experimental data. Experiments were carried out using a 10 mm thick low carbon steel plate containing artificial hairline cracks of various depths, and different features were extracted from the transient signal. The influence of sensor lift-off and pulse width variation on the magnetic field distribution, which affects the detection capability for hairline cracks located at different depths in the specimen, is explored. The findings show that the proposed technique can classify both surface and far-surface hairline cracks and can form the basis for enhanced hairline crack detection and characterization in pipeline health monitoring.
International Nuclear Information System (INIS)
Kheymits, M D; Leonov, A A; Zverev, V G; Galper, A M; Arkhangelskaya, I V; Arkhangelskiy, A I; Yurkin, Yu T; Bakaldin, A V; Suchkov, S I; Topchiev, N P; Dalkarov, O D
2016-01-01
The GAMMA-400 gamma-ray space-based telescope has as its main goals to measure cosmic γ-ray fluxes and the electron-positron cosmic-ray component produced, theoretically, in dark-matter-particle decay or annihilation processes; to search for discrete γ-ray sources and study them in detail; to examine the energy spectra of diffuse γ-rays, both galactic and extragalactic; and to study gamma-ray bursts (GRBs) and γ-rays from the active Sun. The scientific goals of the GAMMA-400 telescope require fine angular resolution. The telescope is of a pair-production type: in the converter-tracker, the incident gamma-ray photon converts into an electron-positron pair in the tungsten layer, and the tracks are then detected by silicon-strip position-sensitive detectors. Multiple scattering becomes a significant obstacle to reconstruction of the incident gamma direction for energies below several gigaelectronvolts. A method of utilising this very process to improve the resolution is proposed in the presented work. (paper)
Impact of reconstruction methods and pathological factors on survival after pancreaticoduodenectomy
Directory of Open Access Journals (Sweden)
Salah Binziad
2013-01-01
Full Text Available Background: Surgery remains the mainstay of therapy for pancreatic head (PH) and periampullary carcinoma (PC) and provides the only chance of cure. Improvements in surgical technique, increased surgical experience, and advances in anesthesia, intensive care, and parenteral nutrition have substantially decreased surgical complications and increased survival. We evaluate the effects of reconstruction type, complications, and pathological factors on survival and quality of life. Materials and Methods: This is a prospective study to evaluate the impact of various reconstruction methods of the pancreatic remnant after pancreaticoduodenectomy and the pathological characteristics of PC patients over 3.5 years. Patient characteristics and descriptive analyses of the three reconstruction methods, either with or without a stent, were compared with the chi-square test. Multivariate analysis was performed with logistic regression and multinomial logistic regression tests. Survival was analyzed with the Kaplan-Meier test. Results: Forty-one consecutive patients with PC were enrolled. There were 23 men (56.1%) and 18 women (43.9%), with a median age of 56 years (16 to 70 years). There were 24 cases of PH cancer, eight cases of PC, four cases of distal CBD cancer, and five cases of duodenal carcinoma. Nine patients underwent duct-to-mucosa pancreaticojejunostomy (PJ), 17 patients underwent telescoping PJ, and 15 patients pancreaticogastrostomy (PG). The pancreatic duct was stented in 30 patients, while in 11 patients the duct was not stented. Duct-to-mucosa PJ caused significantly less leakage, but longer operative and reconstruction times. Telescoping PJ was associated with the shortest hospital stay. There were 5 postoperative mortalities, while postoperative morbidities included pancreatic fistula (6 patients), delayed gastric emptying (11), GI fistula (3), wound infection (12), burst abdomen (6), and pulmonary infection (2). Factors
International Nuclear Information System (INIS)
Kosarev, E.L.
1980-01-01
A new method to reconstruct the spatial star distribution in globular clusters is presented. The method gives both an estimate of the unknown spatial distribution and the probable reconstruction error. This error has a statistical origin and depends only on the number of stars in a cluster. The method is applied to reconstruct the spatial density of 441 flare stars in the Pleiades. The spatial density has a maximum in the centre of the cluster of about 1.6-2.5 pc⁻³ and, with increasing distance from the centre, falls smoothly to zero, approximately following a Gaussian law with a scale parameter of 3.5 pc
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
International Nuclear Information System (INIS)
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-01-01
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that allows to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second one uses the Katsevich-type differentiation involving two neighboring projections followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. The image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared-deviations (RMSDs) to the voxelized phantom for different detector overlap settings and by investigating the noise resolution trade-off with a wire phantom in the full detector and off-center scenario. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full detector performance with the best resolution for the FDK based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on the Katsevich-type differentiation and subsequent redundancy weighting. For wider overlap of about 40-50 mm, these two algorithms produce similar results outperforming the other three methods. The clinical
Extension of the heat flux method to liquid (bio-)fuels
Energy Technology Data Exchange (ETDEWEB)
Meuwissen, R.
2009-01-15
The adiabatic burning velocity S{sub L} of a fuel/oxidizer mixture is a key parameter governing many properties of combustion, such as the shape and stabilization of the flame. It can be applied as an input parameter for many combustion models. Furthermore, kinetic schemes can be validated by the use of this parameter. A great deal of research has been performed on determining the adiabatic burning velocities of gaseous fuels. Liquid fuels, however, have been examined far less extensively. The available literature shows evident scatter amongst the data of independent groups and distinct techniques. The methods used for measuring burning velocities need certain corrections for flame properties, which cause additional uncertainties and make the scattering of data not completely unexpected. The heat flux burner used in this work, previously developed at the TU/e, creates a flat flame, so no corrections for stretch are necessary. Instead, the heat exchange with the burner is considered; by measuring the temperature distribution over the burner plate, the net heat flux of the flame to the burner can be determined. By tuning the unburnt gas velocity until there is no net heat flux, the adiabatic burning velocity is found by interpolation. An extension to the original design, using a vaporized fluid in a carrier gas flow, makes it possible to measure burning velocities of liquid fuels. In the present research, burning velocity measurements have been performed on vaporized ethanol/air flames in order to validate the setup. Similarities with the latest experimental research have been evaluated and good agreement has been found. Furthermore, temperature dependencies have been elucidated and compared to power-law correlations stated by this external research. Again, good resemblance can be claimed, although extending certain input parameters over a wider range of mixture compositions could give more solid confirmation. Subsequently, comparison with numerically performed calculations has been
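The final step described above, tuning the unburnt gas velocity until the net heat flux vanishes and interpolating, can be sketched as a zero-crossing search. The function name and the measurement values below are illustrative, not from the thesis:

```python
def adiabatic_burning_velocity(velocities, net_heat_fluxes):
    """Linearly interpolate the unburnt gas velocity at which the net
    heat flux to the burner plate crosses zero (sub-adiabatic flames
    deposit heat in the plate, super-adiabatic flames draw heat from it)."""
    pairs = sorted(zip(velocities, net_heat_fluxes))
    for (v0, q0), (v1, q1) in zip(pairs, pairs[1:]):
        if q0 == 0.0:
            return v0
        if q0 * q1 < 0.0:  # sign change: the zero lies between v0 and v1
            return v0 - q0 * (v1 - v0) / (q1 - q0)
    raise ValueError("no zero crossing in the measured velocity range")
```

With, say, fluxes of 2.0, 0.5 and -1.0 W at 0.30, 0.35 and 0.40 m/s, the interpolated burning velocity is about 0.367 m/s.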
Directory of Open Access Journals (Sweden)
Buyun Sheng
2018-01-01
The existing surface reconstruction algorithms currently reconstruct large amounts of mesh data. Consequently, many of these algorithms cannot meet the efficiency requirements of real-time data transmission in a web environment. This paper proposes a lightweight surface reconstruction method for online 3D scanned point cloud data oriented toward 3D printing. The proposed online lightweight surface reconstruction algorithm is composed of a point cloud update algorithm (PCU), a rapid iterative closest point algorithm (RICP), and an improved Poisson surface reconstruction algorithm (IPSR). The generated lightweight point cloud data are pretreated using an updating and rapid registration method. The Poisson surface reconstruction is also accomplished by a pretreatment to recompute the point cloud normal vectors; this approach is based on a least squares method, and the postprocessing of the PDE patch generation is based on biharmonic-like fourth-order PDEs, which effectively reduces the amount of reconstructed mesh data and improves the efficiency of the algorithm. This method was verified using an online personalized customization system that was developed with WebGL and oriented toward 3D printing. The experimental results indicate that this method can generate a lightweight 3D scanning mesh rapidly and efficiently in a web environment.
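One common way to "lightweight" a scanned point cloud before meshing is voxel-grid downsampling. The sketch below is an illustrative stand-in for the pretreatment stage described above, not the authors' PCU algorithm:

```python
from collections import defaultdict

def voxel_downsample(points, voxel):
    """Voxel-grid downsampling: bucket 3D points by the voxel they fall
    in and keep one averaged representative point per occupied voxel.
    Reduces point count (and hence mesh size) roughly uniformly."""
    cells = defaultdict(list)
    for p in points:
        key = tuple(int(c // voxel) for c in p)  # integer voxel index
        cells[key].append(p)
    # centroid of the points in each voxel
    return [tuple(sum(c) / len(pts) for c in zip(*pts))
            for pts in cells.values()]
```

Two nearby points in the same 1-unit voxel collapse to their centroid, while isolated points are kept unchanged.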
Prediction of critical heat flux in fuel assemblies using a CHF table method
Energy Technology Data Exchange (ETDEWEB)
Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of); Baek, Won Pil; Chang, Soon Heung [Korea Advance Institute of Science and Technology, Taejon (Korea, Republic of)
1997-12-31
A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)
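The core of a CHF table method, a round-tube table lookup followed by multiplicative bundle correction factors, can be sketched as below. The table values, the interpolation variable (local quality) and the factor names are illustrative assumptions, not the actual KAERI/KAIST tables:

```python
def chf_predict(quality, xs, ys, correction_factors):
    """Round-tube CHF table lookup by linear interpolation in local
    quality, then application of multiplicative bundle correction
    factors (e.g. spacer grid, cold wall). Values are illustrative."""
    if not xs[0] <= quality <= xs[-1]:
        raise ValueError("quality outside table range")
    for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
        if x0 <= quality <= x1:
            base = y0 + (y1 - y0) * (quality - x0) / (x1 - x0)
            break
    for k in correction_factors.values():
        base *= k  # each bundle effect enters as a separate factor
    return base
```

Real CHF tables interpolate in pressure, mass flux and quality simultaneously; one axis is used here to keep the structure visible.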
A Weighted Two-Level Bregman Method with Dictionary Updating for Nonconvex MR Image Reconstruction
Directory of Open Access Journals (Sweden)
Qiegen Liu
2014-01-01
Nonconvex optimization has been shown to need substantially fewer measurements than l1 minimization for exact recovery under a fixed transform/overcomplete dictionary. In this work, two efficient numerical algorithms, unified under the name weighted two-level Bregman method with dictionary updating (WTBMDU), are proposed for solving lp optimization under the dictionary learning model subject to fidelity with the partial measurements. By incorporating the iteratively reweighted norm into the two-level Bregman iteration method with dictionary updating scheme (TBMDU), the modified alternating direction method (ADM) solves the model of pursuing the approximated lp-norm penalty efficiently. Specifically, the algorithms converge after a relatively small number of iterations, under the formulation of iteratively reweighted l1 and l2 minimization. Experimental results on MR image simulations and real MR data, under a variety of sampling trajectories and acceleration factors, consistently demonstrate that the proposed method can efficiently reconstruct MR images from highly undersampled k-space data and presents advantages over current state-of-the-art reconstruction approaches, in terms of higher PSNR and lower HFEN values.
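The iteratively reweighted l1 idea at the heart of such lp solvers can be illustrated on a one-coordinate denoising toy problem. This is a generic sketch of reweighted soft thresholding, not the WTBMDU algorithm itself; all names and parameter values are assumptions:

```python
def soft(v, t):
    """Soft-thresholding operator, the proximal map of t*|x|."""
    if v > t:
        return v - t
    if v < -t:
        return v + t
    return 0.0

def irl1_denoise(y, lam, p=0.5, eps=1e-6, iters=20):
    """Approximate min_x 0.5*(x_i - y_i)^2 + lam*|x_i|^p coordinate-wise
    by iteratively reweighted l1: each pass thresholds with a weight
    lam * p * (|x_i| + eps)^(p-1), so small entries are penalized more
    heavily than under plain l1."""
    x = list(y)
    for _ in range(iters):
        x = [soft(yi, lam * p * (abs(xi) + eps) ** (p - 1))
             for yi, xi in zip(y, x)]
    return x
```

Large coefficients survive nearly unshrunk while small ones are driven exactly to zero, which is the qualitative advantage of lp (p < 1) over l1 penalties.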
A new method of three-dimensional computer assisted reconstruction of the developing biliary tract.
Prudhomme, M; Gaubert-Cristol, R; Jaeger, M; De Reffye, P; Godlewski, G
1999-01-01
A three-dimensional (3-D) computer assisted reconstruction of the biliary tract was performed in human and rat embryos at Carnegie stage 23 to describe and compare the biliary structures and to point out the anatomic relations between the structures of the hepatic pedicle. Light micrograph images from consecutive serial sagittal sections (diameter 7 mm) of one human and 16 rat embryos were directly digitized with a CCD camera. The serial views were aligned automatically by software. The data were analysed following segmentation and thresholding, allowing automatic reconstruction. The main bile ducts ascended in the mesoderm of the hepatoduodenal ligament. The extrahepatic bile ducts: common bile duct (CD), cystic duct and gallbladder in the human, formed a compound system which could not be shown so clearly in histologic sections. The hepato-pancreatic ampulla was studied as visualised through the duodenum. The course of the CD was like a chicane. The gallbladder diameter and length were similar to those of the CD. Computer-assisted reconstruction permitted easy acquisition of the data by direct examination of the sections through the microscope. This method showed the relationships between the different structures of the hepatic pedicle and allowed estimation of the volume of the bile duct. These findings were not obvious in two-dimensional (2-D) views from histologic sections. Each embryonic stage could be rebuilt in 3-D, which could introduce time as a fourth dimension, fundamental for the study of organogenesis.
Directory of Open Access Journals (Sweden)
Bakhtiari Jalal
2012-12-01
Background: Laparoscopic gastrectomy is a new and technically challenging surgical procedure with potential benefit. The objective of this study was to investigate clinical and para-clinical consequences following Roux-en-Y and jejunal loop interposition reconstructive techniques for subtotal gastrectomy using laparoscopic assisted surgery. Results: Following resection of the stomach attachments through a laparoscopic approach, the stomach was removed and reconstruction was performed with either the standard Roux-en-Y (n = 5) or jejunal loop interposition (n = 5) method. Weight changes were monitored on a daily basis and blood samples were collected on Days 0, 7 and 21 post surgery. A fecal sample was collected on Day 28 after surgery to evaluate fat content. One month post surgery, positive contrast radiography was conducted at 5, 10, 20, 40, 60 and 90 minutes after oral administration of barium sulfate, to evaluate the postoperative complications. There was a gradual decline in body weight in both experimental groups after surgery (P < 0.05). Fecal fat content increased in the Roux-en-Y compared to the jejunal loop interposition technique (P < 0.05). Conclusion: Roux-en-Y and jejunal loop interposition techniques might be considered as suitable approaches for reconstructing the gastro-intestinal tract following gastrectomy in dogs. The results of this study warrant further investigation with a larger number of animals.
Fast gradient-based methods for Bayesian reconstruction of transmission and emission PET images
International Nuclear Information System (INIS)
Mumcuglu, E.U.; Leahy, R.; Zhou, Z.; Cherry, S.R.
1994-01-01
The authors describe conjugate gradient algorithms for reconstruction of transmission and emission PET images. The reconstructions are based on a Bayesian formulation, where the data are modeled as a collection of independent Poisson random variables and the image is modeled using a Markov random field. A conjugate gradient algorithm is used to compute a maximum a posteriori (MAP) estimate of the image by maximizing over the posterior density. To ensure nonnegativity of the solution, a penalty function is used to convert the problem to one of unconstrained optimization. Preconditioners are used to enhance convergence rates. These methods generally achieve effective convergence in 15-25 iterations. Reconstructions are presented of an 18F-FDG whole body scan from data collected using a Siemens/CTI ECAT931 whole body system. These results indicate significant improvements in emission image quality using the Bayesian approach, in comparison to filtered backprojection, particularly when reprojections of the MAP transmission image are used in place of the standard attenuation correction factors.
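The structure of such a MAP estimate, Poisson likelihood plus a prior, can be shown on a one-parameter toy problem. This is a stand-in illustration (scalar rate, Gaussian prior, bisection instead of preconditioned conjugate gradient), not the authors' reconstruction algorithm:

```python
def map_poisson_rate(counts, prior_mean, prior_var):
    """MAP estimate of a Poisson rate lam under a Gaussian prior: the
    zero of d/dlam [ sum(y*log(lam) - lam) - (lam - m)^2 / (2v) ]
      = s/lam - n - (lam - m)/v,
    which is strictly decreasing for lam > 0, found by bisection."""
    s, n = sum(counts), len(counts)

    def g(lam):
        return s / lam - n - (lam - prior_mean) / prior_var

    lo = 1e-9
    hi = max(prior_mean, s / max(n, 1)) + 10.0 * prior_var ** 0.5 + 10.0
    while g(hi) > 0.0:        # ensure the root is bracketed
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With a very flat prior the estimate approaches the maximum-likelihood mean of the counts; with a very tight prior it is pulled to the prior mean, the same trade-off the MRF prior mediates pixel-wise in the paper.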
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
International Nuclear Information System (INIS)
Erdogan, H.; Fessler, J.A.
1996-01-01
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods
International Nuclear Information System (INIS)
Kollár, László E; Lucas, Gary P; Zhang, Zhichao
2014-01-01
An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT), and is implemented in Matlab. The method assumes the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable for single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile with the calculated potential distribution which is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
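Since the method is built on the discrete Fourier transform of the boundary potential distribution, the harmonic-extraction step can be sketched as follows. The mapping of harmonic number to polynomial velocity component is only indicated loosely here; function names and the test signal are illustrative:

```python
import math
import cmath

def boundary_harmonics(potentials, n_max):
    """DFT of equally spaced boundary potential samples. Returns, for
    each spatial harmonic k = 1..n_max, its (amplitude, phase); in the
    analytical method each harmonic is associated with a polynomial
    velocity component and its direction in the pipe section."""
    N = len(potentials)
    out = {}
    for k in range(1, n_max + 1):
        c = sum(p * cmath.exp(-2j * cmath.pi * k * j / N)
                for j, p in enumerate(potentials)) / N
        out[k] = (2.0 * abs(c), cmath.phase(c))  # real-signal amplitude
    return out
```

A pure cosine potential distribution yields unit amplitude in the first harmonic and nothing elsewhere, which is the behaviour a uniform flow in a uniform field would produce.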
Reconstructing the Initial Density Field of the Local Universe: Methods and Tests with Mock Catalogs
Wang, Huiyuan; Mo, H. J.; Yang, Xiaohu; van den Bosch, Frank C.
2013-07-01
Our research objective in this paper is to reconstruct an initial linear density field, which follows the multivariate Gaussian distribution with variances given by the linear power spectrum of the current cold dark matter model and evolves through gravitational instabilities to the present-day density field in the local universe. For this purpose, we develop a Hamiltonian Markov Chain Monte Carlo method to obtain the linear density field from a posterior probability function that consists of two components: a prior of a Gaussian density field with a given linear spectrum and a likelihood term that is given by the current density field. The present-day density field can be reconstructed from galaxy groups using the method developed in Wang et al. Using a realistic mock Sloan Digital Sky Survey DR7, obtained by populating dark matter halos in the Millennium simulation (MS) with galaxies, we show that our method can effectively and accurately recover both the amplitudes and phases of the initial, linear density field. To examine the accuracy of our method, we use N-body simulations to evolve these reconstructed initial conditions to the present day. The resimulated density field thus obtained accurately matches the original density field of the MS in the density range 0.3 ≲ ρ/ρ̄ ≲ 20 without any significant bias. In particular, the Fourier phases of the resimulated density fields are tightly correlated with those of the original simulation down to a scale corresponding to a wavenumber of ~1 h Mpc-1, much smaller than the translinear scale, which corresponds to a wavenumber of ~0.15 h Mpc-1.
Central Russia agroecosystem monitoring with CO2 fluxes analysis by eddy covariance method
Directory of Open Access Journals (Sweden)
Joulia Meshalkina
2015-07-01
The eddy covariance (EC) technique, a powerful statistics-based method of measuring and calculating the vertical turbulent fluxes of greenhouse gases within atmospheric boundary layers, provides continuous, long-term flux information integrated at the ecosystem scale. An attractive way to compare the influence of agricultural practices on GHG fluxes is to divide a crop area into subplots managed in different ways. The research has been carried out in the Precision Farming Experimental Field of the Russian Timiryazev State Agricultural University (RTSAU, Moscow) in 2013 under the support of RF Government grant # 11.G34.31.0079, EU grant # 603542 LUC4C (7FP) and RF Ministry of education and science grant # 14-120-14-4266-ScSh. Arable Umbric Albeluvisols have around 1% of SOC, 5.4 pH (KCl) and NPK medium-enhanced contents in sandy loam topsoil. The CO2 flux seasonal monitoring has been done by two eddy covariance stations located at a distance of 108 m. The LI-COR instrumental equipment was the same for both stations. The stations differ only by current crop version: barley, or vetch and oats. At both sites, diurnal patterns of NEE among different months were very similar in shape but varied slightly in amplitude. NEE values were about zero during spring time. CO2 fluxes intensified after crop emergence, from values of 3 to 7 µmol m−2 s−1 for emission, and from 5 to 20 µmol m−2 s−1 for sink. Stabilization of the fluxes came at achieving a plant height of 10-12 cm. Average NEE was negative only in June and July. Maximum uptake was observed in June with average values of about 8 µmol CO2 m−2 s−1. Although different kinds of crops were planted on fields A and B, GPP dynamics was quite similar for both sites: after reaching peak values in mid June, GPP decreased from 4 to 0.5 g C CO2 m-2 d-1 at the end of July. The difference in crop harvesting times, which was equal to two weeks, did not significantly influence the daily
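The statistical core of the EC technique is the covariance of vertical wind speed and scalar concentration fluctuations over an averaging period. A minimal sketch (function name and sample values are illustrative; real processing adds density corrections, despiking, coordinate rotation, etc.):

```python
def eddy_covariance_flux(w, c):
    """Turbulent flux F = mean(w' * c'), where primes denote deviations
    from the period mean: w is vertical wind speed, c the scalar
    (e.g. CO2) concentration, sampled at the same instants."""
    n = len(w)
    w_bar = sum(w) / n
    c_bar = sum(c) / n
    return sum((wi - w_bar) * (ci - c_bar) for wi, ci in zip(w, c)) / n
```

Positive covariance (updrafts carrying higher concentrations) corresponds to emission; negative covariance corresponds to uptake, i.e. negative NEE.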
International Nuclear Information System (INIS)
D'Orazio, A; Karimipour, A; Nezhad, A H; Shirani, E
2014-01-01
Laminar mixed convective heat transfer in a two-dimensional rectangular inclined driven cavity is studied numerically by means of a double-population thermal lattice Boltzmann method. The heat flux enters the cavity through the top moving lid and leaves the system through the bottom wall; the side walls are adiabatic. The counter-slip internal energy density boundary condition, able to simulate an imposed non-zero heat flux at the wall, is applied in order to demonstrate that it can be effectively used to simulate heat transfer phenomena also in the case of moving walls. Results are analyzed over a range of Richardson numbers and tilting angles of the enclosure, encompassing the dominating forced convection, mixed convection, and dominating natural convection flow regimes. As expected, the heat transfer rate increases as the inclination angle increases, but this effect is significant only for higher Richardson numbers, when buoyancy forces dominate the problem; for the horizontal cavity, the average Nusselt number decreases with increasing Richardson number because of the stratified field configuration.
A comparison of recent methods for modelling mercury fluxes at the air-water interface
Directory of Open Access Journals (Sweden)
Fantozzi L.
2013-04-01
The atmospheric pathway of the global mercury flux is known to be the primary source of mercury contamination for most threatened aquatic ecosystems. Nevertheless, the emission of mercury from surface water to the atmosphere is as much as 50% of total annual emissions of this metal into the atmosphere. In recent years, much effort has been devoted to theoretical and experimental research to quantify the total mass flux of mercury to the atmosphere. In this study, the most recent atmospheric modelling methods and the information obtained from them are presented and compared using experimental data collected during the Oceanographic Campaign Fenice 2011 (25 October - 8 November 2011), performed on board the Research Vessel (RV) Urania of the CNR in the framework of the ongoing MEDOCEANOR program. A strategy for future numerical model development is proposed which is intended to gain better knowledge of the long-term effects of meteo-climatic drivers on mercury evasion processes, and would provide key information on gaseous Hg exchange rates at the air-water interface.
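Air-water exchange models of the kind compared here typically build on a thin-film parameterisation. The sketch below shows that common form; the specific models evaluated in the study are not reproduced, and all symbols and values are illustrative:

```python
def hg_evasion_flux(c_water, c_air, henry, k_w):
    """Thin-film (two-film) gas-exchange estimate of the water-to-air
    flux: F = k_w * (C_w - C_a / H'), with k_w the water-side transfer
    velocity and H' the dimensionless Henry's law constant. Positive F
    means evasion from water to atmosphere."""
    return k_w * (c_water - c_air / henry)
```

When the dissolved gaseous mercury concentration exceeds the equilibrium value C_a / H', the surface water degasses; wind-speed dependent formulations of k_w are where the compared models chiefly differ.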
Energy Technology Data Exchange (ETDEWEB)
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
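Two of the four regularization steps named above, enforcing positivity and averaging along time outside the motion mask, can be sketched on a toy 1D + time volume. This is an illustrative reduction, not the authors' implementation, and omits the conjugate gradient and total variation steps:

```python
def rooster_regularize(volumes, motion_mask):
    """volumes: list of T frames, each a list of N voxel values.
    motion_mask[i] is True where the heart/vessels move. Applies
    (1) positivity, then (2) temporal averaging outside the motion
    mask, since static anatomy should not vary between cardiac phases."""
    T, N = len(volumes), len(volumes[0])
    out = [[max(v, 0.0) for v in frame] for frame in volumes]  # positivity
    for i in range(N):
        if not motion_mask[i]:
            avg = sum(out[t][i] for t in range(T)) / T
            for t in range(T):
                out[t][i] = avg  # static voxels share one value over time
    return out
```

In the full algorithm these proximal-style steps alternate with a conjugate gradient data-fit update and with 3D spatial and 1D temporal TV minimization.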
Gradient heat flux measurement as monitoring method for the diesel engine
Sapozhnikov, S. Z.; Mityakov, V. Yu; Mityakov, A. V.; Vintsarevich, A. V.; Pavlov, A. V.; Nalyotov, I. D.
2017-11-01
The use of gradient heat flux measurement for monitoring the heat flux on the combustion chamber surface and optimizing the diesel work process is proposed. Heterogeneous gradient heat flux sensors can be used at various regimes for an appreciable length of time. Fuel injection timing is set by the position of the maximum point on the angular heat flux diagram; the value of the heat flux itself need not be considered. The development of such an approach can be productive for remote monitoring of the work process in the cylinders of high-power marine engines.
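Since only the position of the maximum on the angular heat flux diagram matters, the monitoring signal reduces to an argmax over crank angle. A minimal sketch with illustrative sample values:

```python
def injection_timing_marker(crank_angles, heat_flux):
    """Return the crank angle at which the measured surface heat flux
    peaks; per the approach above, this position (not the flux value
    itself) is the feedback signal for setting fuel injection timing."""
    peak_index = max(range(len(heat_flux)), key=heat_flux.__getitem__)
    return crank_angles[peak_index]
```

A shift of the returned angle between cylinders or over time would flag a drifting injection event without requiring an absolute flux calibration.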
International Nuclear Information System (INIS)
Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Larin, Kirill V; Aglyamov, Salavat R; Twa, Michael D
2015-01-01
We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique, which allows assessment of the biomechanical properties of tissues with micrometer spatial resolution. However, in order to accurately extract biomechanical properties from OCE measurements, application of a proper mechanical model is required. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of the available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods.
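The simplest of the models mentioned, the shear wave equation approach, converts a measured wave speed directly into a modulus. A sketch of that standard relation (illustrative defaults; the abstract's own phantom parameters are not given here):

```python
def young_modulus_swe(shear_speed, density=1000.0, poisson=0.5):
    """Young's modulus from the group speed of a shear wave:
    E = 2 * rho * (1 + nu) * c_s**2.
    With nu ~ 0.5 for nearly incompressible soft tissue this reduces
    to the familiar E ~ 3 * rho * c_s**2 used in SWE-type methods."""
    return 2.0 * density * (1.0 + poisson) * shear_speed ** 2
```

For a 2 m/s wave in tissue-like material this gives 12 kPa; the paper's point is that such simplified relations can misestimate elasticity where boundary and guided-wave effects (captured by RLFE and FEM) matter.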
A Method of 3D Measurement and Reconstruction for Cultural Relics in Museums
Zheng, S.; Zhou, Y.; Huang, R.; Zhou, L.; Xu, X.; Wang, C.
2012-07-01
Three-dimensional measurement and reconstruction during the conservation and restoration of cultural relics have become an essential part of a modern museum's regular work. Although many kinds of methods, including laser scanning, computer vision and close-range photogrammetry, have been put forward, problems still exist, such as the trade-off between cost and quality of results, and between speed and fineness of detail. To address these problems, this paper proposes a structured-light based method for 3D measurement and reconstruction of cultural relics in museums. Firstly, based on the structured-light principle, digitization hardware has been built, with whose help a dense point cloud of a cultural relic's surface can easily be acquired. To produce an accurate 3D geometry model from the point cloud data, multiple processing algorithms have been developed and corresponding software implemented, whose functions include blunder detection and removal, point cloud alignment and merging, and 3D mesh construction and simplification. Finally, high-resolution images are captured and aligned with the 3D geometry model, and a realistic, accurate 3D model is constructed. Based on this method, a complete system including hardware and software has been built. Multiple kinds of cultural relics have been used to test this method, and the results demonstrate its high efficiency, high accuracy and ease of operation.
International Nuclear Information System (INIS)
Zhao, Weizhao; Ginsberg, M.; Young, T.Y.
1993-01-01
Quantitative autoradiography is a powerful radioisotopic imaging method for neuroscientists to study local cerebral blood flow and glucose metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computer and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-D) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented.
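The registration problem described above can be illustrated, in one dimension, as a search for the shift that best matches neighbouring sections. This correlation-style sketch is a simplified stand-in for the paper's disparity analysis, with made-up profiles:

```python
def best_shift(ref, sec, max_shift):
    """Align a section profile to its neighbour by exhaustive search
    over integer shifts, scoring the overlap with a sum-of-products
    (cross-correlation) measure; the best-scoring shift registers
    the pair."""
    def score(s):
        lo, hi = max(0, s), min(len(ref), len(sec) + s)
        return sum(ref[i] * sec[i - s] for i in range(lo, hi))
    return max(range(-max_shift, max_shift + 1), key=score)
```

Disparity analysis generalises this idea to local, per-region displacements, which is what lets it handle asymmetric, damaged, or tilted sections.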
Jha, Abhinav K; Song, Na; Caffo, Brian; Frey, Eric C
2015-04-13
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is by comparing their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, this gold standard is very rarely known in human studies. Thus, techniques to optimize and evaluate systems and algorithms in the absence of a gold standard are required. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Implementation of a fast running full core pin power reconstruction method in DYN3D
International Nuclear Information System (INIS)
Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Kliem, Sören; Gommlich, Andre
2014-01-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of assemblies or all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the sub-channel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FAs. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.
Implementation of a fast running full core pin power reconstruction method in DYN3D
Energy Technology Data Exchange (ETDEWEB)
Gomez-Torres, Armando Miguel [Instituto Nacional de Investigaciones Nucleares, Department of Nuclear Systems, Carretera Mexico – Toluca s/n, La Marquesa, 52750 Ocoyoacac (Mexico); Sanchez-Espinoza, Victor Hugo, E-mail: victor.sanchez@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-vom-Helmhotz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Kliem, Sören; Gommlich, Andre [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)
2014-07-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of assemblies or all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the sub-channel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FAs. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.
International Nuclear Information System (INIS)
1980-01-01
An apparatus is described which can be used in computerized tomographic systems for constructing a representation of an object and which uses a fan-shaped beam source, detectors and a convolution method of data reconstruction. (U.K.)
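The convolution method of reconstruction can be illustrated with a parallel-beam toy example (a sketch only; the apparatus described uses a fan-beam geometry, which requires additional weighting and rebinning). An analytic point sinogram is ramp-filtered, then backprojected, and the reconstruction peaks at the source location:

```python
import numpy as np

def fbp_point(x0, y0, n_det=129, n_ang=180):
    """Filtered backprojection of an analytic point-source sinogram;
    returns the grid location of the reconstruction maximum."""
    s_axis = np.linspace(-1.0, 1.0, n_det)
    thetas = np.linspace(0.0, np.pi, n_ang, endpoint=False)
    # Parallel-beam sinogram: the point projects to s = x0*cos(t) + y0*sin(t)
    sino = np.zeros((n_ang, n_det))
    for i, th in enumerate(thetas):
        s = x0 * np.cos(th) + y0 * np.sin(th)
        sino[i, np.argmin(np.abs(s_axis - s))] = 1.0
    # "Convolution" step: ramp filtering of each projection in Fourier space
    filt = np.abs(np.fft.fftfreq(n_det))
    sino_f = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * filt, axis=1))
    # Backproject onto a coarse grid
    grid = np.linspace(-1.0, 1.0, 65)
    X, Y = np.meshgrid(grid, grid)
    recon = np.zeros_like(X)
    for i, th in enumerate(thetas):
        s = X * np.cos(th) + Y * np.sin(th)
        idx = np.clip(np.round((s + 1.0) / 2.0 * (n_det - 1)).astype(int),
                      0, n_det - 1)
        recon += sino_f[i, idx]
    j, i = np.unravel_index(np.argmax(recon), recon.shape)
    return X[j, i], Y[j, i]
```

The negative lobes introduced by the ramp filter are what cancel the 1/r blur of plain backprojection.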
Method of Relative Magnitudes for Calculating Magnetic Fluxes in Electrical Machine
Directory of Open Access Journals (Sweden)
Oleg A.
2018-03-01
Full Text Available Introduction: The article presents the results of a study of an asynchronous electric motor model carried out by the author within the framework of the Priority Research Program “Research and development in the priority areas of development of Russia’s scientific and technical complex for 2014–2020”. Materials and Methods: A model of an idealized asynchronous machine (with sinusoidal distribution of magnetic induction in the air gap) is used in vector control systems. It is impossible to create windings for such a machine. The basis of the new calculation approach was the Conductivity of Teeth Contours Method, developed at the Electrical Machines Chair of the Moscow Power Engineering Institute (MPEI). Unlike that method, the author used not absolute values but relative magnitudes of magnetic fluxes. This solution fundamentally improved the method’s capabilities. The relative magnitudes of the magnetic fluxes of the teeth contours do not require additional consideration of the exact structure of the magnetic field of a tooth and the adjacent slots. These structures are identical for all the teeth of the machine and differ only in magnitude. The purpose of the calculations was not the traditional harmonic analysis of the magnetic induction distribution in the air gap of the machine, but a refinement of the equations of the electric machine model. Vector control researchers have used only the cos(θ) function as the value of the mutual magnetic coupling coefficient between the windings. Results: The author has developed a way to take into account the winding design of a real machine by using an imaginary measuring winding with the same winding design as a real phase winding. The imaginary winding can be placed in the position of any machine winding. The calculation of the relative magnetic fluxes of this winding helped to estimate the real values of the magnetic coupling coefficients between the windings, and to find the correction functions for the model of an idealized
Yoshikawa, K.; Ueyama, M.; Takagi, K.; Kominami, Y.
2015-12-01
Methane (CH4) budgets in forest ecosystems have not been accurately quantified owing to limited measurements and considerable spatiotemporal heterogeneity. In order to quantify CH4 fluxes in temperate forests at various spatiotemporal scales, we have continuously measured CH4 fluxes at two upland forests using the micrometeorological hyperbolic relaxed eddy accumulation (HREA) and automated dynamic closed chamber methods. The measurements have been conducted at Teshio experimental forest (TSE) since September 2013 and at Yamashiro forest meteorology research site (YMS) since November 2014. Three automated chambers were installed at each site. Our system can measure CH4 flux by the micrometeorological HREA method, the vertical concentration profile at four heights, and chamber fluxes with a laser-based gas analyzer (FGGA-24r-EP, Los Gatos Research Inc., USA). Seasonal variations of canopy-scale CH4 fluxes differed between the sites. At TSE, CH4 was consumed during the summer but emitted during the fall and winter; consequently, the site acted as a net annual CH4 source. At YMS, CH4 was steadily consumed during the winter, while CH4 fluxes fluctuated between uptake and emission during the spring and summer; YMS acted as a net annual CH4 sink. CH4 uptake at the canopy scale generally decreased with rising soil temperature and increased under drier conditions at both sites. CH4 fluxes measured by most of the chambers showed sensitivities to the environmental variables consistent with those found at the canopy scale. CH4 fluxes from a few chambers located in wet spots were independent of variations in soil temperature and moisture at both sites. The magnitude of soil CH4 uptake was higher than the canopy-scale CH4 uptake. Our results show that the canopy-scale CH4 fluxes differed substantially from the plot-scale CH4 fluxes measured by chambers, suggesting considerable spatial heterogeneity of CH4 flux in these temperate forests.
Energy Technology Data Exchange (ETDEWEB)
Murphy, Martin J; Todor, Dorin A [Department of Radiation Oncology, Virginia Commonwealth University, Richmond VA 23298 (United States)
2005-06-07
By monitoring brachytherapy seed placement and determining the actual configuration of the seeds in vivo, one can optimize the treatment plan during the process of implantation. Two or more radiographic images from different viewpoints can in principle allow one to reconstruct the configuration of implanted seeds uniquely. However, the reconstruction problem is complicated by several factors: (1) the seeds can overlap and cluster in the images; (2) the images can have distortion that varies with viewpoint when a C-arm fluoroscope is used; (3) there can be uncertainty in the imaging viewpoints; (4) the angular separation of the imaging viewpoints can be small owing to physical space constraints; (5) there can be inconsistency in the number of seeds detected in the images; and (6) the patient can move while being imaged. We propose and conceptually demonstrate a novel reconstruction method that handles all of these complications and uncertainties in a unified process. The method represents the three-dimensional seed and camera configurations as parametrized models that are adjusted iteratively to conform to the observed radiographic images. The morphed model seed configuration that best reproduces the appearance of the seeds in the radiographs is the best estimate of the actual seed configuration. All of the information needed to establish both the seed configuration and the camera model is derived from the seed images without resort to external calibration fixtures. Furthermore, by comparing overall image content rather than individual seed coordinates, the process avoids the need to establish correspondence between seed identities in the several images. The method has been shown to work robustly in simulation tests that simultaneously allow for unknown individual seed positions, uncertainties in the imaging viewpoints and variable image distortion.
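A drastically simplified sketch of the morphing idea: with orthographic cameras and known seed correspondence (two assumptions the paper's method explicitly avoids), a model seed configuration is adjusted iteratively until its two projections match the observed images. All names and the camera model are illustrative:

```python
import numpy as np

def reconstruct_seeds(obs_a, obs_b, ang_a, ang_b, n_iter=3000, lr=0.1):
    """Iteratively morph a 3D model configuration so that its two
    orthographic projections match the observations obs_a, obs_b."""
    def proj_matrix(ang):
        c, s = np.cos(ang), np.sin(ang)
        # detector coords: u = x*cos + y*sin (horizontal), v = z (vertical)
        return np.array([[c, s, 0.0], [0.0, 0.0, 1.0]])
    Ma, Mb = proj_matrix(ang_a), proj_matrix(ang_b)
    seeds = np.zeros((len(obs_a), 3))          # initial model guess
    for _ in range(n_iter):
        ra = seeds @ Ma.T - obs_a              # image-space residuals
        rb = seeds @ Mb.T - obs_b
        seeds -= lr * (ra @ Ma + rb @ Mb)      # gradient of 0.5*||r||^2
    return seeds
```

Two views at distinct angles determine x, y and z for each seed, so the gradient iteration converges to the true configuration in this noise-free setting.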
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction.
Lu, Hongyang; Wei, Jingbo; Liu, Qiegen; Wang, Yuhao; Deng, Xiaohua
2016-01-01
Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by the variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
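The variable splitting / ADMM machinery can be illustrated on a 1-D total-variation denoising toy problem (plain TV rather than TGV, and no dictionary term; `tv_denoise_admm` is an illustrative name):

```python
import numpy as np

def tv_denoise_admm(y, lam=1.0, rho=1.0, n_iter=200):
    """Variable splitting: min_x 0.5||x - y||^2 + lam*||z||_1 s.t. z = D x,
    solved by ADMM with a soft-threshold update for z."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)          # finite-difference operator
    x = y.copy()
    z = D @ x
    u = np.zeros_like(z)                    # scaled dual variable
    A = np.eye(n) + rho * D.T @ D           # x-update system matrix
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        w = D @ x + u
        z = np.sign(w) * np.maximum(np.abs(w) - lam / rho, 0.0)  # shrinkage
        u += D @ x - z
    return x
```

Splitting the difference operator out of the non-smooth term is what makes each subproblem closed-form; TGV and the dictionary term add further splittings of the same kind.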
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Directory of Open Access Journals (Sweden)
Hongyang Lu
2016-01-01
Full Text Available Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels, largely avoiding oil-painting artifacts. At the same time, the dictionary learning adaptively provides sparse representations of image features and effectively recovers image details. The proposed model is solved by the variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction
Directory of Open Access Journals (Sweden)
Li Lei
2015-04-01
Full Text Available Based on coherent accumulation matrix reconstruction, a novel Direction Of Arrival (DOA) estimation decorrelation method for coherent signals is proposed using a small sample. First, the Signal-to-Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array's observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose dimension equals the number of array elements, is constructed. The rank of this matrix is proved to be determined solely by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better, effectively avoiding aperture loss while offering high resolution and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.
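The rank-deficiency problem that decorrelation methods address, and the spatial smoothing baseline the proposed method is compared against, can be demonstrated in a few lines (assumed half-wavelength uniform linear array; this sketches the baseline, not the paper's accumulation-matrix construction):

```python
import numpy as np

def smoothed_covariance(R, sub_len):
    """Forward spatial smoothing: average the covariances of overlapping
    subarrays of length sub_len to restore rank lost to coherence."""
    m = R.shape[0]
    L = m - sub_len + 1
    return sum(R[i:i + sub_len, i:i + sub_len] for i in range(L)) / L

# Two fully coherent plane waves on an 8-element half-wavelength ULA.
m = 8
angles = np.deg2rad([10.0, 40.0])
A = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin(angles)))
s = A[:, 0] + A[:, 1]              # coherent sum: a single rank-1 direction
R = np.outer(s, s.conj())          # rank 1 -> subspace DOA methods fail
Rs = smoothed_covariance(R, 5)     # rank restored to the number of sources
```

Restoring the covariance rank to the number of sources is what any decorrelation preprocessing, smoothing or matrix reconstruction alike, must achieve before a subspace estimator such as MUSIC can resolve the DOAs.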
Energy Technology Data Exchange (ETDEWEB)
Kuhnert, Georg; Sterzer, Sergej; Kahraman, Deniz; Dietlein, Markus; Drzezga, Alexander; Kobe, Carsten [University Hospital of Cologne, Department of Nuclear Medicine, Cologne (Germany); Boellaard, Ronald [VU University Medical Centre, Department of Radiology and Nuclear Medicine, Amsterdam (Netherlands); Scheffler, Matthias; Wolf, Juergen [University Hospital of Cologne, Lung Cancer Group Cologne, Department I of Internal Medicine, Center for Integrated Oncology Cologne Bonn, Cologne (Germany)
2016-02-15
In oncological imaging using PET/CT, the standardized uptake value has become the most common parameter used to measure tracer accumulation. The aim of this analysis was to evaluate ultra high definition (UHD) and ordered subset expectation maximization (OSEM) PET/CT reconstructions for their potential impact on quantification. We analyzed 40 PET/CT scans of lung cancer patients. Standardized uptake values corrected for body weight (SUV) and lean body mass (SUL) were determined in the single hottest lesion in the lung and normalized to the liver for UHD and OSEM reconstructions. Quantitative uptake values and their normalized ratios for the two reconstruction settings were compared using the Wilcoxon test. The distribution of quantitative uptake values and their ratios in relation to the reconstruction method used was demonstrated in the form of frequency distribution curves, box plots and scatter plots. The agreement between OSEM and UHD reconstructions was assessed through Bland-Altman analysis. A significant difference between OSEM and UHD reconstruction was observed for all SUV and SUL data tested (p < 0.0005 in all cases). The mean values of the ratios after OSEM and UHD reconstruction showed equally significant differences (p < 0.0005 in all cases). Bland-Altman analysis showed that the SUV and SUL and their normalized values were, on average, up to 60 % higher after UHD reconstruction as compared to OSEM reconstruction. OSEM and UHD reconstruction yielded significantly different SUV and SUL values, and the differences remained constantly high after normalization to the liver, indicating that standardization of reconstruction and the use of comparable SUV measurements are crucial when using PET/CT. (orig.)
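The Bland-Altman analysis used here reduces to a bias and 95 % limits of agreement computed on paired differences. A minimal sketch (illustrative function name; absolute rather than percentage differences are assumed):

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement between two
    paired measurement series, e.g. SUVs from two reconstructions."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Plotting each pair's difference against its mean, with these three horizontal lines overlaid, gives the familiar Bland-Altman plot; for ratio-scale data such as SUV one often uses percentage differences instead.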
International Nuclear Information System (INIS)
Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-01
We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments where the time of neutron production is in the range of tens or hundreds of nanoseconds. The neutron signals were obtained by common fast plastic scintillation detectors sensitive to both hard X-rays and neutrons. The reconstruction is based on the Monte Carlo method, which has been improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although the reconstruction from detectors placed on two opposite sides is more difficult and somewhat less accurate (owing to several assumptions made when combining both detection sides), it has some advantages. The most important is the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
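For context, converting a time-of-flight signal to neutron energy rests on the non-relativistic relation E = m d² / (2 t²), which is adequate for few-MeV fusion neutrons. The flight path and timing below are generic illustrations, not the paper's geometry:

```python
M_N = 1.674927e-27      # neutron mass [kg]
MEV = 1.602177e-13      # joules per MeV

def neutron_energy_mev(flight_path_m, tof_s):
    """Neutron kinetic energy from flight path and time of flight,
    using the non-relativistic relation E = m d^2 / (2 t^2)."""
    v = flight_path_m / tof_s
    return 0.5 * M_N * v ** 2 / MEV

# A 2.45 MeV D-D fusion neutron travels ~21.6 mm/ns, so over an
# assumed 5 m flight path it arrives roughly 231 ns after emission.
```

The finite production time (tens to hundreds of ns) smears the arrival-time axis, which is exactly why a deconvolution such as the Monte Carlo reconstruction above is needed instead of a direct application of this formula.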
Methods of reconstruction of perineal wounds after abdominoperineal resection. Literature review
Directory of Open Access Journals (Sweden)
S. S. Gordeev
2017-01-01
Full Text Available The problem of wound closure after abdominoperineal resection for oncological disease remains unsolved. Formation of a primary suture in the perineal wound can lead to multiple postoperative complications: seroma, abscess, and wound disruption with subsequent perineal hernia. Chemoradiation therapy, the standard for locally advanced rectal or anal cancer, does not improve the results of perineal wound treatment and increases the duration of healing. Currently, surgeons have several reconstructive and plastic techniques at their disposal to improve both immediate and long-term functional treatment results. In this article, the most common methods of allo- and autotransplantation are considered, and the benefits and deficiencies of the various techniques are evaluated and analyzed.
CUDA based Level Set Method for 3D Reconstruction of Fishes from Large Acoustic Data
DEFF Research Database (Denmark)
Sharma, Ojaswa; Anton, François
2009-01-01
Acoustic images present views of underwater dynamics, even in high depths. With multi-beam echo sounders (SONARs), it is possible to capture series of 2D high resolution acoustic images. 3D reconstruction of the water column and subsequent estimation of fish abundance and fish species identificat...... of suppressing threshold and show its convergence as the evolution proceeds. We also present a GPU based streaming computation of the method using NVIDIA's CUDA framework to handle large volume data-sets. Our implementation is optimised for memory usage to handle large volumes....
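A toy 2-D sketch of the level set idea (explicit update with an intensity-minus-threshold speed term; the paper's CUDA implementation, with its suppressing threshold and large-volume streaming, is far more involved):

```python
import numpy as np

def level_set_step(phi, image, threshold, dt=0.25):
    """One explicit update of phi_t + V|grad phi| = 0 with speed
    V = image - threshold: the front (zero level set, interior phi < 0)
    expands wherever the acoustic intensity exceeds the threshold."""
    gy, gx = np.gradient(phi)
    return phi - dt * (image - threshold) * np.hypot(gx, gy)
```

Iterating this update grows the segmented region until the front settles on the boundary where the intensity crosses the threshold; a GPU version evaluates the same stencil for every voxel in parallel.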
A STUDY ON DYNAMIC LOAD HISTORY RECONSTRUCTION USING PSEUDO-INVERSE METHODS
Santos, Ariane Rebelato Silva dos; Marczak, Rogério José
2017-01-01
Considering that the vibratory forces generally cannot be measured directly at the interface of two bodies, an inverse method is studied in the present work to recover the load history in such cases. The proposed technique attempts to reconstruct the dynamic loads history by using a frequency domain analysis and Moore-Penrose pseudo-inverses of the frequency response function (FRF) of the system. The methodology consists in applying discrete dynamic loads on a finite element model in the time...
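At each frequency line, the method reduces to inverting X(ω) = H(ω) F(ω) with a Moore-Penrose pseudo-inverse of the FRF matrix. A minimal noise-free sketch with assumed dimensions (in the noise-free, overdetermined case the recovery is exact):

```python
import numpy as np

# Frequency-domain load reconstruction at one frequency line: measured
# responses X relate to the unknown forces F through the FRF matrix H,
# X = H F, so F is recovered with the Moore-Penrose pseudo-inverse.
rng = np.random.default_rng(0)
n_resp, n_force = 6, 2                 # more sensors than unknown loads
H = (rng.standard_normal((n_resp, n_force))
     + 1j * rng.standard_normal((n_resp, n_force)))
f_true = np.array([1.0 + 0.5j, -0.3j])
x = H @ f_true                          # simulated measured responses
f_rec = np.linalg.pinv(H) @ x           # least-squares load estimate
```

With measurement noise the pseudo-inverse amplifies errors near poorly conditioned frequency lines, which is why regularized variants of this inversion are commonly studied.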
International Nuclear Information System (INIS)
Dong, Xiangyuan; Guo, Shuqing
2008-01-01
In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on a combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that the method produces higher quality images than algorithms based on the parallel or series model alone for the cases tested in this paper. It provides a new algorithm for ECT applications.
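The regularized inversion step can be sketched as standard Tikhonov regularization; here a generic sensitivity matrix `S` stands in for the paper's adaptive combined-model sensitivity:

```python
import numpy as np

def tikhonov(S, b, lam):
    """Stabilized ECT-style inversion: minimize ||S g - b||^2 + lam*||g||^2,
    giving g = (S^T S + lam I)^{-1} S^T b."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)
```

The penalty `lam` trades fidelity to the capacitance measurements against sensitivity to noise in the ill-posed inverse problem; choosing it (or an adaptive model coefficient, as in the paper) is the crux of image quality.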
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-10-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses.
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Tao, Yinghua [Department of Medical Physics, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong [Department of Medical Physics and Department of Radiology, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States); Hacker, Timothy A.; Raval, Amish N. [Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53792 (United States); Van Lysel, Michael S.; Speidel, Michael A., E-mail: speidel@wisc.edu [Department of Medical Physics and Department of Medicine, University of Wisconsin-Madison, Madison, Wisconsin 53705 (United States)
2014-07-15
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
Low dose dynamic CT myocardial perfusion imaging using a statistical iterative reconstruction method
International Nuclear Information System (INIS)
Tao, Yinghua; Chen, Guang-Hong; Hacker, Timothy A.; Raval, Amish N.; Van Lysel, Michael S.; Speidel, Michael A.
2014-01-01
Purpose: Dynamic CT myocardial perfusion imaging has the potential to provide both functional and anatomical information regarding coronary artery stenosis. However, radiation dose can be potentially high due to repeated scanning of the same region. The purpose of this study is to investigate the use of statistical iterative reconstruction to improve parametric maps of myocardial perfusion derived from a low tube current dynamic CT acquisition. Methods: Four pigs underwent high (500 mA) and low (25 mA) dose dynamic CT myocardial perfusion scans with and without coronary occlusion. To delineate the affected myocardial territory, an N-13 ammonia PET perfusion scan was performed for each animal in each occlusion state. Filtered backprojection (FBP) reconstruction was first applied to all CT data sets. Then, a statistical iterative reconstruction (SIR) method was applied to data sets acquired at low dose. Image voxel noise was matched between the low dose SIR and high dose FBP reconstructions. CT perfusion maps were compared among the low dose FBP, low dose SIR and high dose FBP reconstructions. Numerical simulations of a dynamic CT scan at high and low dose (20:1 ratio) were performed to quantitatively evaluate SIR and FBP performance in terms of flow map accuracy, precision, dose efficiency, and spatial resolution. Results: For in vivo studies, the 500 mA FBP maps gave −88.4%, −96.0%, −76.7%, and −65.8% flow change in the occluded anterior region compared to the open-coronary scans (four animals). The percent changes in the 25 mA SIR maps were in good agreement, measuring −94.7%, −81.6%, −84.0%, and −72.2%. The 25 mA FBP maps gave unreliable flow measurements due to streaks caused by photon starvation (percent changes of +137.4%, +71.0%, −11.8%, and −3.5%). Agreement between 25 mA SIR and 500 mA FBP global flow was −9.7%, 8.8%, −3.1%, and 26.4%. The average variability of flow measurements in a nonoccluded region was 16.3%, 24.1%, and 937
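The statistical reconstruction idea can be sketched as penalized weighted least squares on a toy linear system; the actual SIR method for CT is far more elaborate, and all names and dimensions here are illustrative:

```python
import numpy as np

def pwls_reconstruct(A, y, var, beta):
    """Statistical (penalized weighted least-squares) reconstruction:
    minimize (y - A x)^T W (y - A x) + beta*||D x||^2, with W = diag(1/var)
    down-weighting noisy low-count rays and D a roughness penalty."""
    n = A.shape[1]
    W = np.diag(1.0 / np.asarray(var, dtype=float))
    D = np.diff(np.eye(n), axis=0)        # finite-difference roughness
    return np.linalg.solve(A.T @ W @ A + beta * D.T @ D, A.T @ W @ y)
```

Down-weighting photon-starved rays is the mechanism by which such reconstructions avoid the streaks that make low-dose FBP flow maps unreliable in the study above.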
International Nuclear Information System (INIS)
2000-01-01
Systems loaded with plutonium in the form of mixed-oxide (MOX) fuel show somewhat different neutronic characteristics compared with those using conventional uranium fuels. In order to maintain adequate safety standards, it is essential to accurately predict the characteristics of MOX-fuelled systems and to further validate both the nuclear data and the computation methods used. A computation benchmark on power distribution within fuel assemblies to compare different techniques used in production codes for fine flux prediction in systems partially loaded with MOX fuel was carried out at an international level. It addressed first the numerical schemes for pin power reconstruction, then investigated the global performance including cross-section data reduction methods. This report provides the detailed results of this second phase of the benchmark. The analysis of the results revealed that basic data still need to be improved, primarily for higher plutonium isotopes and minor actinides. (author)
Thermal Analysis on Radial Flux Permanent Magnet Generator (PMG) using Finite Element Method