International Nuclear Information System (INIS)
Dorning, J.J.
1991-01-01
A simultaneous pin-lattice-cell and fuel-bundle homogenization theory has been developed for use with nodal diffusion calculations of practical reactors. The theoretical development of the homogenization theory, which is based on multiple-scales asymptotic expansion methods carried out through fourth order in a small parameter, starts from the transport equation and systematically yields: a cell-homogenized bundle diffusion equation with self-consistent expressions for the cell-homogenized cross sections and diffusion tensor elements; and a bundle-homogenized global reactor diffusion equation with self-consistent expressions for the bundle-homogenized cross sections and diffusion tensor elements. The continuity of the angular flux at cell and bundle interfaces also systematically yields jump conditions for the scalar flux, or so-called flux discontinuity factors, on the cell and bundle interfaces in terms of the two adjacent cell or bundle eigenfunctions. The expressions required for the reconstruction of the angular flux, or the 'de-homogenization' theory, were obtained as an integral part of the development; hence the leading-order transport theory angular flux is easily reconstructed throughout the reactor, including the interiors of the fuel bundles or computational nodes and the interiors of the pin lattice cells. The theoretical development shows that the exact transport theory angular flux, obtained to first order from the whole-reactor nodal diffusion calculations done using the homogenized nuclear data and discontinuity factors, is a product of three computed quantities: a "cell shape function"; a "bundle shape function"; and a "global shape function". 10 refs
Methods for the reconstruction of large scale anisotropies of the cosmic ray flux
Energy Technology Data Exchange (ETDEWEB)
Over, Sven
2010-01-15
In cosmic ray experiments the arrival directions, among other properties, of cosmic ray particles from detected air shower events are reconstructed. The question of uniformity in the distribution of arrival directions is of great importance for models that try to explain cosmic radiation. In this thesis, methods for reconstructing the parameters of a dipole-like flux distribution of cosmic rays from a set of recorded air shower events are studied. Different methods are presented and examined by means of detailed Monte Carlo simulations. Particular focus is put on the implications of spurious experimental effects. Modifications of existing methods and new methods are proposed. The main goal of this thesis is the development of the horizontal Rayleigh analysis method. Unlike other methods, this method is based on the analysis of local viewing directions instead of global sidereal directions. As a result, the symmetries of the experimental setup can be better utilised. The calculation of the sky coverage (exposure function) is not necessary in this analysis. The performance of the method is tested by means of further Monte Carlo simulations. The new method performs similarly well as, or only marginally worse than, established methods under ideal measurement conditions. However, the simulation of certain experimental effects can cause substantial misestimation of the dipole parameters by the established methods, whereas the new method produces no systematic deviations. The invulnerability to certain effects offers additional advantages, as certain data selection cuts become dispensable. (orig.)
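A classical Rayleigh analysis extracts the first-harmonic (dipole) amplitude and phase from a set of arrival-direction angles. The sketch below is a minimal, generic NumPy illustration of that idea, not the thesis's horizontal method; the function name and the isotropic chance-probability formula are standard textbook choices, not taken from the thesis.

```python
import numpy as np

def rayleigh_first_harmonic(phases):
    """First-harmonic (dipole) amplitude and phase of a set of angles.

    phases : array of arrival-direction angles in radians (e.g. right
    ascension, or local azimuth in a horizontal analysis).
    Returns (amplitude r, phase, chance probability that isotropy
    yields an amplitude >= r, from the classical Rayleigh test).
    """
    phases = np.asarray(phases, dtype=float)
    n = phases.size
    a = 2.0 / n * np.cos(phases).sum()
    b = 2.0 / n * np.sin(phases).sum()
    r = np.hypot(a, b)
    phase = np.arctan2(b, a)
    p_iso = np.exp(-n * r * r / 4.0)  # probability of >= r from pure isotropy
    return r, phase, p_iso
```

For a sample drawn from a flux proportional to 1 + d·cos(φ − φ0), `r` estimates the dipole amplitude d and `phase` estimates φ0.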
Wu, Han; Wu, Tso-Ren; Lee, Chun-Juei; Tsai, Yu-Lin; Li, Pei-Yu
2017-04-01
The 1771 Ishigaki earthquake in Japan induced a large tsunami with a recorded runup height of 80 meters. Several reef boulders transported by the huge tsunami waves were found along the coast at elevations of about 30 meters. Considering the short distance between the Yaeyama Islands and Taiwan, this study aimed to understand the behavior of tsunami propagation and the potential hazard to Taiwan. Reconstructing the 1771 event and validating the result against the field survey is the first step. In order to analyze the hazard from potential tsunami sources around the event area, we adopted the Impact Intensity Analysis (IIA), which has been presented at EGU 2016 and many other international conferences. Going beyond the IIA method, we further developed a new method called the Volume Flux Method (VFM), which keeps the accuracy of the IIA method while improving its efficiency significantly. The analyzed results showed that the source of the 1771 Great Yaeyama Tsunami was most likely located offshore south of Ishigaki Island. The wave height and inundation area matched the survey map (Geospatial Information Authority of Japan, 1994). The tsunami threat to Taiwan was also simulated. The results indicate that the tsunami height would not exceed 1 meter on the east coast of Taiwan if the tsunami source were located nearshore around Ishigaki Island. However, it is noteworthy that the northeast coast of Taiwan would be under tsunami threat if the sources were located in the southern offshore area along the Ryukyu Trench. We will present the detailed results at EGU 2017.
Cox, Christopher; Liang, Chunlei; Plesniak, Michael
2015-11-01
This paper reports development of a high-order compact method for solving unsteady incompressible flow on unstructured grids with implicit time stepping. The method falls under the class of methods now referred to as flux reconstruction/correction procedure via reconstruction. The governing equations employ the classical artificial compressibility treatment, where dual time stepping is needed to solve unsteady flow problems. An implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time-stepping scheme. Three-dimensional results computed on many processing elements will be presented. The high-order method is very suitable for parallel computing and can easily be extended to moving and deforming grids. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation within the context of the high-order flux reconstruction method. Financial support provided under the GW Presidential Merit Fellowship.
International Nuclear Information System (INIS)
Jacqmin, R.P.
1991-01-01
The safety and optimal performance of large commercial light-water reactors require knowledge of the neutron-flux distribution in the core at all times. In principle, this information can be obtained by solving the time-dependent neutron diffusion equations. However, this approach is complicated and very expensive. Sufficiently accurate, real-time calculations (time scale of approximately one second) are not yet possible on desktop computers, even with fast-running nodal kinetics codes. A semi-experimental, nodal synthesis method which avoids the solution of the time-dependent neutron diffusion equations is described. The essential idea of this method is to approximate instantaneous nodal group-fluxes by a linear combination of K precomputed, three-dimensional, static expansion functions. The time-dependent coefficients of the combination are found from the requirement that the reconstructed flux distribution agree in a least-squares sense with the readings of J (≥K) fixed, prompt-responding neutron detectors. Possible numerical difficulties with the least-squares solution of the ill-conditioned J-by-K system of equations are brought under complete control by the use of a singular-value-decomposition technique. This procedure amounts to the rearrangement of the original linear combination of K expansion functions into an equivalent, more convenient linear combination of R (≤K) orthogonalized "modes" of decreasing magnitude. Exceedingly small modes are zeroed to eliminate any risk of roundoff-error amplification and to assure consistency with the limited accuracy of the data. Additional modes are zeroed when it is desirable to limit the sensitivity of the results to measurement noise.
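The truncated-SVD least-squares fit described above can be sketched as follows. This is a minimal NumPy illustration under assumed names: H is a hypothetical (J × K) matrix holding the responses of the J detectors to each of the K precomputed expansion functions, and the relative cutoff stands in for the mode-zeroing criteria discussed in the abstract.

```python
import numpy as np

def synthesis_coefficients(H, d, rel_cutoff=1e-3):
    """Least-squares fit of detector readings via truncated SVD.

    H : (J, K) matrix whose columns hold the responses of the J fixed
        detectors to each of the K precomputed expansion functions.
    d : (J,) vector of instantaneous detector readings (J >= K).
    Singular modes below rel_cutoff times the largest mode are zeroed,
    suppressing roundoff amplification when H is ill-conditioned.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    keep = s > rel_cutoff * s[0]            # retain R <= K significant modes
    inv_s = np.zeros_like(s)
    inv_s[keep] = 1.0 / s[keep]
    # coefficients of the K expansion functions (truncated pseudoinverse)
    return Vt.T @ (inv_s * (U.T @ d))
```

The reconstructed flux is then the linear combination of the K static expansion functions with these coefficients.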
Energy Technology Data Exchange (ETDEWEB)
Jacqmin, Robert P. [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
1991-12-10
The safety and optimal performance of large commercial light-water reactors require knowledge of the neutron-flux distribution in the core at all times. In principle, this information can be obtained by solving the time-dependent neutron diffusion equations. However, this approach is complicated and very expensive. Sufficiently accurate, real-time calculations (time scale of approximately one second) are not yet possible on desktop computers, even with fast-running nodal kinetics codes. A semi-experimental, nodal synthesis method which avoids the solution of the time-dependent neutron diffusion equations is described. The essential idea of this method is to approximate instantaneous nodal group-fluxes by a linear combination of K precomputed, three-dimensional, static expansion functions. The time-dependent coefficients of the combination are found from the requirement that the reconstructed flux distribution agree in a least-squares sense with the readings of J (≥K) fixed, prompt-responding neutron detectors. Possible numerical difficulties with the least-squares solution of the ill-conditioned J-by-K system of equations are brought under complete control by the use of a singular-value-decomposition technique. This procedure amounts to the rearrangement of the original linear combination of K expansion functions into an equivalent, more convenient linear combination of R (≤K) orthogonalized "modes" of decreasing magnitude. Exceedingly small modes are zeroed to eliminate any risk of roundoff-error amplification and to assure consistency with the limited accuracy of the data. Additional modes are zeroed when it is desirable to limit the sensitivity of the results to measurement noise.
Group-decoupled multi-group pin power reconstruction utilizing nodal solution 1D flux profiles
International Nuclear Information System (INIS)
Yu, Lulin; Lu, Dong; Zhang, Shaohong; Wang, Dezhong
2014-01-01
Highlights:
• A direct fitting multi-group pin power reconstruction method is developed.
• The 1D nodal solution flux profiles are used as the condition.
• The least-squares fit problem is solved analytically.
• A slowing down source improvement method is applied.
• The method shows good accuracy for even challenging problems.
- Abstract: A group-decoupled direct fitting method is developed for multi-group pin power reconstruction, which avoids both the complication of obtaining a 2D analytic multi-group flux solution and any group-coupled iteration. A unique feature of the method is that, in addition to nodal volume and surface average fluxes and corner fluxes, transversely-integrated 1D nodal solution flux profiles are also used as conditions to determine the 2D intra-nodal flux distribution. For each energy group, a two-dimensional expansion with a nine-term polynomial and eight hyperbolic functions is used to perform a constrained least-squares fit to the 1D intra-nodal flux solution profiles. The constraints enforce conservation of the nodal volume and surface average fluxes and the corner fluxes. Instead of solving the constrained least-squares fit problem numerically, we solve it analytically by fully utilizing the symmetry property of the expansion functions. Each of the 17 unknown expansion coefficients is expressed in terms of nodal volume and surface average fluxes, corner fluxes and transversely-integrated flux values. To determine the unknown corner fluxes, a set of linear algebraic equations involving corner fluxes is established using the current conservation condition on all corners. Moreover, an optional slowing down source improvement method is also developed to further enhance the accuracy of the reconstructed flux distribution if needed. Two test examples are shown with very good results. One is a four-group BWR mini-core problem with all control blades inserted and the other is the seven-group OECD NEA MOX benchmark, C5G7
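The paper solves its constrained fit analytically by exploiting symmetry. Purely as a generic illustration of the underlying problem type, an equality-constrained least-squares fit (minimize ||Ax − b||² subject to Cx = d, with the constraints here playing the role of the conservation conditions) can be solved numerically through the Lagrange-multiplier (KKT) system. All names and matrices below are assumptions for the sketch, not the paper's formulation.

```python
import numpy as np

def constrained_lstsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d via Lagrange multipliers.

    Solves the KKT system
        [ 2 A^T A  C^T ] [x ]   [ 2 A^T b ]
        [    C      0  ] [lm] = [    d    ]
    where lm are the multipliers enforcing the equality constraints.
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[2.0 * A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * A.T @ b, d])
    return np.linalg.solve(K, rhs)[:n]
```

An analytical solution, as in the paper, amounts to carrying out this elimination in closed form for the specific expansion basis.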
[Reconstructive methods after Fournier gangrene].
Wallner, C; Behr, B; Ring, A; Mikhail, B D; Lehnhardt, M; Daigeler, A
2016-04-01
Fournier's gangrene is a variant of necrotizing fasciitis restricted to the perineal and genital region. It presents as an acute, life-threatening disease and demands rapid surgical debridement, resulting in large soft tissue defects. Various reconstructive methods have to be applied to reconstitute functionality and aesthetics. The objective of this work is to identify different reconstructive methods in the literature and compare them to our current concepts for reconstructing defects caused by Fournier gangrene. We analyzed the current literature and our own reconstructive methods for Fournier gangrene. Fournier gangrene is an emergency requiring rapid, calculated antibiotic treatment and radical surgical debridement. After the acute phase of the disease, appropriate reconstructive methods are indicated. The planning of the defect reconstruction depends on many factors, especially functional and aesthetic demands. Scrotal reconstruction requires a higher aesthetic and functional reconstructive degree than perineal cutaneous wounds. In general, thorough wound hygiene, proper preoperative planning, and careful consideration of the patient's demands are essential for successful reconstruction. In the literature, various methods for reconstruction after Fournier gangrene are described. Reconstruction with a flap is required for a good functional result in complex regions such as the scrotum and penis, while cutaneous wounds can be managed with skin grafting. Patient compliance and tissue demand are crucial factors in the decision-making process.
A reconstruction of solar irradiance using a flux transport model
Dasi Espuig, Maria; Jiang, Jie; Krivova, Natalie; Solanki, Sami
2013-04-01
Reconstructions of solar irradiance into the past are of considerable interest for studies of solar influence on climate. Models based on the assumption that irradiance changes are caused by the evolution of the photospheric magnetic field have been the most successful in reproducing the measured irradiance variations. Our SATIRE-S model is one of these. It uses solar full-disc magnetograms as an input, and these are available for less than four decades. Thus, to reconstruct the irradiance back to times when no observed magnetograms are available, we combine the SATIRE-S model with synthetic magnetograms produced using a surface flux transport model. The model is fed with daily records of sunspot positions, areas, and tilt angles, either observed or statistically modelled. To describe the secular change in the irradiance, we use the concept of overlapping ephemeral region cycles. With this technique, TSI can be reconstructed back to 1610.
Garcillán-Barcia, M. Pilar; Mora, Azucena; Blanco, Jorge; Coque, Teresa M.; de la Cruz, Fernando
2014-01-01
Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open to plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages. PMID:25522143
Directory of Open Access Journals (Sweden)
Val F Lanza
2014-12-01
Full Text Available Bacterial whole genome sequence (WGS) methods are rapidly overtaking classical sequence analysis. Many bacterial sequencing projects focus on mobilome changes, since macroevolutionary events, such as the acquisition or loss of mobile genetic elements, mainly plasmids, play essential roles in adaptive evolution. Existing WGS analysis protocols do not assort contigs between plasmids and the main chromosome, thus hampering full analysis of plasmid sequences. We developed a method (called plasmid constellation networks or PLACNET) that identifies, visualizes and analyzes plasmids in WGS projects by creating a network of contig interactions, thus allowing comprehensive plasmid analysis within WGS datasets. The workflow of the method is based on three types of data: assembly information (including scaffold links and coverage), comparison to reference sequences and plasmid-diagnostic sequence features. The resulting network is pruned by expert analysis, to eliminate confounding data, and implemented in a Cytoscape-based graphic representation. To demonstrate PLACNET sensitivity and efficacy, the plasmidome of the Escherichia coli lineage ST131 was analyzed. ST131 is a globally spread clonal group of extraintestinal pathogenic E. coli (ExPEC), comprising different sublineages with ability to acquire and spread antibiotic resistance and virulence genes via plasmids. Results show that plasmids are in flux in the evolution of this lineage, which is wide open to plasmid exchange. MOBF12/IncF plasmids were pervasive, adding just by themselves more than 350 protein families to the ST131 pangenome. Nearly 50% of the most frequent γ-proteobacterial plasmid groups were found to be present in our limited sample of ten analyzed ST131 genomes, which represent the main ST131 sublineages.
Phylogenetic reconstruction methods: an overview.
De Bruyn, Alexandre; Martin, Darren P; Lefeuvre, Pierre
2014-01-01
Initially designed to infer evolutionary relationships based on morphological and physiological characters, phylogenetic reconstruction methods have greatly benefited from recent developments in molecular biology and sequencing technologies with a number of powerful methods having been developed specifically to infer phylogenies from macromolecular data. This chapter, while presenting an overview of basic concepts and methods used in phylogenetic reconstruction, is primarily intended as a simplified step-by-step guide to the construction of phylogenetic trees from nucleotide sequences using fairly up-to-date maximum likelihood methods implemented in freely available computer programs. While the analysis of chloroplast sequences from various Vanilla species is used as an illustrative example, the techniques covered here are relevant to the comparative analysis of homologous sequences datasets sampled from any group of organisms.
Reconstruction of flux coordinates from discretized magnetic field maps
Predebon, I.; Momo, B.; Suzuki, Y.; Auriemma, F.
2018-04-01
We provide a simple method to build a straight field-line coordinate system from discretized (Poincaré) magnetic field maps. The method is suitable for any plasma domain with nested flux surfaces, including magnetic islands. Illustrative examples are shown for tokamak, heliotron, and reversed-field-pinch plasmas with m = 1 islands.
Dual-spacecraft reconstruction of a three-dimensional magnetic flux rope at the Earth's magnetopause
Directory of Open Access Journals (Sweden)
H. Hasegawa
2015-02-01
Full Text Available We present the first results of a data analysis method, developed by Sonnerup and Hasegawa (2011), for reconstructing three-dimensional (3-D), magnetohydrostatic structures from data taken as two closely spaced satellites traverse the structures. The method is applied to a magnetic flux transfer event (FTE), which was encountered on 27 June 2007 by at least three (TH-C, TH-D, and TH-E) of the five THEMIS probes near the subsolar magnetopause. The FTE was sandwiched between two oppositely directed reconnection jets under a southward interplanetary magnetic field condition, consistent with its generation by multiple X-line reconnection. The recovered 3-D field indicates that a magnetic flux rope with a diameter of ~ 3000 km was embedded in the magnetopause. The FTE flux rope had a significant 3-D structure, because the 3-D field reconstructed from the data from TH-C and TH-D (separated by ~ 390 km) better predicts magnetic field variations actually measured along the TH-E path than does the 2-D Grad–Shafranov reconstruction using the data from TH-C (which was closer to TH-E than TH-D and was at ~ 1250 km from TH-E). Such a 3-D nature suggests that the field lines reconnected at the two X-lines on both sides of the flux rope are entangled in a complicated way through their interaction with each other. The generation process of the observed 3-D flux rope is discussed on the basis of the reconstruction results and the pitch-angle distribution of electrons observed in and around the FTE.
Crystal growth of emerald by flux method
International Nuclear Information System (INIS)
Inoue, Mikio; Narita, Eiichi; Okabe, Taijiro; Morishita, Toshihiko.
1979-01-01
Emerald crystals have been grown in two binary fluxes, Li2O-MoO3 and Li2O-V2O5, using the slow-cooling method and the temperature-gradient method under various conditions. In the Li2O-MoO3 flux, with molar ratios (MoO3/Li2O) in the range 2-5, emerald crystallized in the temperature range from 750 to 950 °C, and the most suitable crystallization conditions were found to be a molar ratio of 3-4 and a temperature of about 900 °C. In the Li2O-V2O5 flux, with molar ratios (V2O5/Li2O) in the range 1.7-5, emerald crystallized in the temperature range from 900 to 1150 °C. The best crystals were obtained at a molar ratio of 3 and temperatures of 1000-1100 °C. The crystallization temperature rose with increasing molar ratio in both fluxes. The emeralds grown in the two binary fluxes were transparent green, with a density of 2.68, a refractive index of 1.56, and two distinct bands in the visible spectrum at 430 and 600 nm. Emerald grown in the Li2O-V2O5 flux was more bluish green than that grown in the Li2O-MoO3 flux. The spontaneously nucleated emerald grown in the former flux was larger than that grown in the latter when crystallized by the slow-cooling method. As for the solubility of beryl in the two fluxes, Li2O-V2O5 was superior to Li2O-MoO3, whose small solubility for SiO2 posed an experimental problem for the temperature-gradient method. The suitability of the two fluxes for the crystal growth of emerald by the flux method is discussed in terms of the above-mentioned properties of the two fluxes. (author)
Methods for reconstruction of the density distribution of nuclear power
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2015-01-01
Highlights:
• Two methods for reconstruction of the pin power distribution are presented.
• The ARM method uses an analytical solution of the 2D diffusion equation.
• The PRM method uses a polynomial solution without boundary conditions.
• The maximum errors in pin power reconstruction occur in the peripheral water region.
• The errors are significantly smaller in the inner area of the core.
- Abstract: In the analytical reconstruction method (ARM), the two-dimensional (2D) neutron diffusion equation is analytically solved for two energy groups (2G) and homogeneous nodes with the dimensions of a fuel assembly (FA). The solution employs a 2D fourth-order expansion for the axial leakage term. The Nodal Expansion Method (NEM) provides the solution average values, namely the four average partial currents on the surfaces of the node, the average flux in the node and the multiplication factor of the problem. The expansion coefficients for the axial leakage are determined directly from the NEM method or can be determined in the reconstruction method. A new polynomial reconstruction method (PRM) is implemented based on the 2D expansion for the axial leakage term. The ARM method uses the four average currents on the surfaces of the node and the four average fluxes in the corners of the node as boundary conditions, and the average flux in the node as a consistency condition. To determine the average fluxes in the corners of the node, an analytical solution is employed. This analytical solution uses the average fluxes on the surfaces of the node as boundary conditions, and discontinuities at the corners are incorporated. The polynomial and analytical solutions of the PRM and ARM methods, respectively, represent the homogeneous flux distributions. The detailed distributions inside a FA are estimated as the product of the homogeneous distribution and a local heterogeneous form function; in addition, form functions of the power are used. The results show that the methods have good accuracy when compared with reference values and
Method for positron emission mammography image reconstruction
Smith, Mark Frederick
2004-10-12
An image reconstruction method comprising accepting coincidence data from either a data file or in real time from a pair of detector heads, culling event data that is outside a desired energy range, optionally saving the desired data for each detector position or for each pair of detector pixels on the two detector heads, and then reconstructing the image either by backprojection image reconstruction or by iterative image reconstruction. In the backprojection image reconstruction mode, rays are traced between centers of lines of response (LORs); counts are then either allocated by nearest-pixel interpolation or allocated by an overlap method, then corrected for geometric effects and attenuation, and the data file is updated. If the iterative image reconstruction option is selected, one implementation is to compute grid-based Siddon ray tracing, and to perform maximum-likelihood expectation maximization (MLEM) computed by either: a) tracing parallel rays between subpixels on opposite detector heads; or b) tracing rays between randomized endpoint locations on opposite detector heads.
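The MLEM update named above has a standard multiplicative form. The sketch below is a minimal, generic NumPy illustration of that update with a dense system matrix, not the patented implementation; the matrix A and all names are assumptions for the example.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-likelihood expectation maximization (MLEM) reconstruction.

    A : (n_lor, n_pix) system matrix; A[i, j] models the probability
        that an emission in pixel j is detected on line of response i.
    y : (n_lor,) measured coincidence counts.
    """
    x = np.ones(A.shape[1])                 # flat initial image
    sens = np.maximum(A.sum(axis=0), eps)   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = np.maximum(A @ x, eps)       # forward projection
        x = x / sens * (A.T @ (y / proj))   # multiplicative MLEM update
    return x
```

In practice the rows of A are not stored but generated on the fly by ray tracing (e.g. Siddon's algorithm) between detector-pixel pairs.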
Review of digital holography reconstruction methods
Dovhaliuk, Rostyslav Yu.
2018-01-01
The development of digital holography has opened new ways for the non-destructive study of both transparent and opaque objects. In this paper, the digital hologram reconstruction process is investigated. The advantages and limitations of common wave propagation methods are discussed. The details of a software implementation of digital hologram reconstruction methods are presented. Finally, the performance of each wave propagation method is evaluated, and recommendations about possible use cases for each of them are given.
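One of the common wave propagation methods referred to above is the angular spectrum method. The following is a minimal NumPy sketch of that method under assumed parameter names, not the paper's implementation; it propagates a sampled complex field by filtering its spatial-frequency spectrum.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a sampled complex wavefield a distance z using the
    angular spectrum method (negative z propagates backwards, e.g.
    from the hologram plane to the object plane).

    field      : 2-D complex array sampled with pixel pitch dx [m]
    wavelength : light wavelength [m]
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # evanescent waves suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

Because the transfer function H is unitary on the propagating band, propagating forward and then backward by the same distance recovers the original field.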
Reconstruction of local heat fluxes in pool boiling experiments using the entire heater geometry
Energy Technology Data Exchange (ETDEWEB)
Heng, Y.; Mhamdi, A.; Marquardt, W. [RWTH Aachen University, Aachen (Germany). AVT-Process Systems Engineering; Buchholz, M.; Auracher, H. [Berlin University of Technology (Germany). Inst. for Energy Engineering
2009-07-01
In this work, we consider the reconstruction of local boiling heat fluxes from high-resolution transient temperature measurements inside the heater, obtained during experiments performed at TU Berlin. In our previous work, a very small 3D domain surrounding the micro thermocouples at the center of a test heater was considered. The unknown lateral boundary conditions were set to zero, for lack of better knowledge. This geometry and the related assumptions were chosen due to computational limitations. In the present study, we address this problem for the first time over the entire test heater. This has been made possible by improving the computational efficiency and using a suitable non-uniform discretization strategy. The boundary conditions in this study are well defined at the boundaries where no boiling occurs. We formulate the heat flux estimation as a three-dimensional transient inverse heat conduction problem (IHCP). The solution of this ill-posed problem is obtained by applying an iterative regularization strategy, which combines the method of conjugate gradients for the normal equation with the discrepancy principle. The obtained results are similar to those of our previous work. The estimates here are, however, much better, since we not only recover the dynamics of the signal but also largely avoid the negative heat fluxes that we observed using the much smaller region. (author)
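The regularization strategy named above (conjugate gradients for the normal equations, stopped by the discrepancy principle) can be sketched generically for a discrete linear problem. The code below is an assumed, minimal illustration with a dense matrix, not the paper's 3D IHCP solver; in the actual problem the forward operator is a transient heat-conduction simulation.

```python
import numpy as np

def cgne_discrepancy(A, b, delta, tau=1.1, max_iter=500):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    stopped by the discrepancy principle: iterate only while
    ||A x - b|| > tau * delta, where delta is the noise level in b.
    Early stopping is what regularizes the ill-posed problem.
    """
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * delta or gamma == 0.0:
            break                            # discrepancy level reached
        q = A @ p
        alpha = gamma / (q @ q)
        x = x + alpha * p
        r = r - alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x
```

The safety factor tau > 1 guards against stopping on a residual that only accidentally dips below the noise level.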
Methods and applications in high flux neutron imaging
International Nuclear Information System (INIS)
Ballhausen, H.
2007-01-01
This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times due to the available high intensity. To make best use of the increased amount of information, new estimators are proposed which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and, in addition, severely affected by systematic errors such as the influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents any limitations in present detector design and, due to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines the absorption and scattering cross sections within the sample separately. For the first time, even coherent angle-dependent scattering could be visualized space-resolved. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data executes even in the case of missing information, such as in limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
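Algebraic reconstruction of tomography data, as mentioned above, is classically done with Kaczmarz-type sweeps (ART). The sketch below is a generic NumPy illustration, not the treatise's implementation; skipping empty rows is one simple way such an iteration keeps running when projections are missing, as in limited-angle tomography.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz sweeps).

    A : (n_rays, n_pix) projection matrix, b : measured line integrals.
    Zero-norm rows (rays absent from a limited-angle scan) are skipped,
    so the iteration also runs when information is missing.
    """
    x = np.zeros(A.shape[1])
    row_norm2 = np.einsum('ij,ij->i', A, A)  # squared norm of each row
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norm2[i] == 0.0:
                continue
            # project x onto the hyperplane A[i] . x = b[i]
            x += relax * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```

For consistent data the sweeps converge to a solution of A x = b; a relaxation factor below 1 damps noise amplification on inconsistent data.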
Finite difference applied to the reconstruction method of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2016-01-01
Highlights: • A method for reconstruction of the power density distribution is presented. • The method uses discretization by finite differences of the 2D neutron diffusion equation. • The discretization is performed on homogeneous meshes with the dimensions of a fuel cell. • The discretization is combined with flux distributions on the four node surfaces. • The maximum errors in the reconstruction occur in the peripheral water region. - Abstract: In this reconstruction method the two-dimensional (2D) neutron diffusion equation is discretized by finite differences, applied to two energy groups (2G) and to meshes with fuel-pin cell dimensions. The Nodal Expansion Method (NEM) makes use of surface discontinuity factors of the node and provides the reconstruction method with the effective multiplication factor of the problem and the four surface-average fluxes in homogeneous nodes with the size of a fuel assembly (FA). The reconstruction process combines the 2D diffusion equation discretized by finite differences with the flux distributions on the four surfaces of the nodes. These distributions are obtained for each surface from a fourth-order one-dimensional (1D) polynomial expansion with five coefficients to be determined. The conditions necessary for determining the coefficients are three average fluxes on consecutive surfaces of three nodes and the two fluxes at the corners between these three surface fluxes. The corner fluxes of the node are determined using a third-order 1D polynomial expansion with four coefficients. This reconstruction method uses heterogeneous nuclear parameters directly, providing the heterogeneous neutron flux distribution and the detailed nuclear power density distribution within the FAs. The results obtained with this method have good accuracy and efficiency when compared with reference values.
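As a sketch of the surface-flux expansion step, the five coefficients of a fourth-order 1D polynomial can be fixed by five conditions of the kind described above: three interval-average fluxes on consecutive surfaces plus two corner (point) fluxes. The interval geometry and normalization below are assumptions for illustration only.

```python
import numpy as np

def average_row(a, b):
    # row of integral averages of x^k over [a, b], for k = 0..4
    return np.array([(b**(k+1) - a**(k+1)) / ((k+1) * (b - a)) for k in range(5)])

def point_row(x):
    # row of point values x^k, for k = 0..4
    return np.array([x**k for k in range(5)])

# five conditions: averages over three unit-width segments, plus two corner values
M = np.vstack([
    average_row(-1.5, -0.5),
    average_row(-0.5, 0.5),
    average_row(0.5, 1.5),
    point_row(-0.5),
    point_row(0.5),
])

# consistency check: the coefficients of a known quartic are recovered exactly
c_true = np.array([1.0, -0.3, 0.5, 0.1, -0.05])
rhs = M @ c_true                 # the five "measured" flux conditions
c = np.linalg.solve(M, rhs)      # expansion coefficients from the conditions
```

The 5x5 system is nonsingular for this geometry, so the three surface averages and two corner fluxes determine the quartic profile uniquely.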
Comparison between Evapotranspiration Fluxes Assessment Methods
Casola, A.; Longobardi, A.; Villani, P.
2009-11-01
Knowledge of the hydrological processes acting in the water balance is determinant for a rational water resources management plan. Among these, water losses as vapour, in the form of evapotranspiration, play an important role in the water balance and in the heat transfers between the land surface and the atmosphere. Mass and energy interactions between soil, atmosphere and vegetation, in fact, influence all hydrological processes, modifying rainfall interception, infiltration, evapotranspiration, surface runoff and groundwater recharge. A number of methods have been developed in the scientific literature for modelling evapotranspiration. They can be divided into three main groups: i) traditional meteorological models, ii) energy flux balance models, considering the interaction between vegetation and the atmosphere, and iii) remote-sensing-based models. The present analysis preliminarily performs a study of flux directions and an evaluation of energy balance closure in a typical Mediterranean short vegetation area, using data series recorded by an eddy covariance station located in the Campania region, Southern Italy. The analysis was performed on different seasons of the year with the aim of assessing the impact of climatic forcing features on the flux balance, evaluating the smaller imbalance, and highlighting influencing factors and sampling errors affecting balance closure. The present study also concerns evapotranspiration flux assessment at the point scale. Evapotranspiration is evaluated both from empirical relationships (Penman-Monteith, FAO Penman, Priestley-Taylor) calibrated with measured energy fluxes at the mentioned experimental site, and from measured latent heat data scaled by the latent heat of vaporization. These results are compared with traditional and reliable well-known models at the plot scale (Coutagne, Turc, Thornthwaite).
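For concreteness, one of the point-scale empirical relationships mentioned (Priestley-Taylor) can be sketched as follows. The coefficient alpha = 1.26 is the commonly quoted default, and all input values are illustrative placeholders, not the calibrated values from the Campania site.

```python
# Priestley-Taylor estimate of latent heat flux from available energy.
# delta: slope of the saturation vapour pressure curve (kPa/degC)
# gamma: psychrometric constant (kPa/degC)
def priestley_taylor(rn, g, delta, gamma, alpha=1.26):
    """Latent heat flux LE (W/m^2) from net radiation rn and soil heat flux g."""
    return alpha * delta / (delta + gamma) * (rn - g)

LAMBDA = 2.45e6                       # latent heat of vaporization, J/kg
le = priestley_taylor(rn=400.0, g=50.0, delta=0.145, gamma=0.066)   # W/m^2
# convert the energy flux to an equivalent water depth rate (mm per hour)
et_mm_per_hour = le / LAMBDA * 3600.0
```

Dividing the latent heat flux by the latent heat of vaporization is exactly the scaling mentioned in the abstract for converting measured latent heat data into an evapotranspiration rate.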
Reconstructing Heat Fluxes Over Lake Erie During the Lake Effect Snow Event of November 2014
Fitzpatrick, L.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Spence, C.; Chen, J.; Shao, C.; Posselt, D. J.; Wright, D. M.; Lofgren, B. M.; Schwab, D. J.
2017-12-01
The extreme North American winter storm of November 2014 triggered a record lake effect snowfall (LES) event in southwest New York. This study examined the evaporation from Lake Erie during the record lake effect snowfall event of November 17th-20th, 2014, by reconstructing heat fluxes and evaporation rates over Lake Erie using the unstructured-grid Finite-Volume Community Ocean Model (FVCOM). Nine different model runs were conducted using combinations of three different flux algorithms: the Met Flux Algorithm (COARE), a method routinely used at NOAA's Great Lakes Environmental Research Laboratory (SOLAR), and the Los Alamos Sea Ice Model (CICE); and three different meteorological forcings: the Climate Forecast System version 2 Operational Analysis (CFSv2), interpolated observations (Interp), and the High Resolution Rapid Refresh (HRRR). A few non-FVCOM model outputs were also included in the evaporation analysis, from an atmospheric reanalysis (CFSv2) and the Large Lake Thermodynamic Model (LLTM). Model-simulated water temperature and meteorological forcing data (wind direction and air temperature) were validated against buoy data at three locations in Lake Erie. The simulated sensible and latent heat fluxes were validated against eddy covariance measurements at two offshore sites: Long Point Lighthouse in north central Lake Erie and the Toledo water crib intake in western Lake Erie. The evaluation showed a significant increase in heat fluxes over three days, with the peak on the 18th of November. Snow water equivalent data from the National Snow Analyses at the National Operational Hydrologic Remote Sensing Center showed a spike in water content on the 20th of November, two days after the peak heat fluxes. The ensemble runs presented variation in the spatial pattern of evaporation, the lake-wide average evaporation, and the resulting cooling of the lake. Overall, the evaporation tended to be larger in deep water than in shallow water near the shore. The lake-wide average evaporations...
Image reconstruction methods in positron tomography
International Nuclear Information System (INIS)
Townsend, D.W.; Defrise, M.
1993-01-01
In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes. This report reviews the historical and physical basis of medical imaging techniques using positron-emitting radioisotopes. Mathematical methods which enable three-dimensional distributions of radioisotopes to be reconstructed from projection data (sinograms) acquired by detectors suitably positioned around the patient are discussed. The extension of conventional two-dimensional tomographic reconstruction algorithms to fully three-dimensional reconstruction is described in detail. (orig.)
Hu, Q.
2016-12-01
We will present an extension of the Grad-Shafranov (GS) reconstruction technique for cylindrical flux-rope structures to the geometry of a torus. Benchmark test cases on analytic solutions to the GS equation in such a geometry are shown to illustrate the procedure. Applications to events with multi-spacecraft in-situ observations will be attempted, especially the two events of May and November 2007. In each event, a Magnetic Cloud (MC) was observed simultaneously by three spacecraft: Wind, STEREO-A (ST-A) and STEREO-B (ST-B). In the November event, ST-A and ST-B were separated from Wind by about 20 degrees on either side. We applied the toroidal GS reconstruction procedure to the Wind spacecraft data, which exhibit the strongest signatures of a flux-rope configuration. The toroidal GS reconstruction results showed that both the ST-A and ST-B spacecraft were glancing across the upper and lower edges, not the main body, of the reconstructed flux rope. Therefore, whether the flux-rope structure maintained a coherent toroidal configuration of significant lateral extent (>0.05 AU in minor radius) over an angular span of about 40 degrees in this event remains an open question. This study demonstrates a new way to examine ICME flux-rope structure transformation over a relatively large spatial extent by combining multi-spacecraft observations with the GS reconstruction technique, taking into account, at times, a more favorable toroidal geometry. We will also release the code to the community for wider usage and validation of this new tool.
New method for initial density reconstruction
Shi, Yanlong; Cautun, Marius; Li, Baojiu
2018-01-01
A theoretically interesting and practically important question in cosmology is the reconstruction of the initial density distribution given a late-time density field. This is a long-standing question with revived interest recently, especially in the context of optimally extracting the baryonic acoustic oscillation (BAO) signals from observed galaxy distributions. We present a new efficient method to carry out this reconstruction, based on numerical solutions to the nonlinear partial differential equation that governs the mapping between the initial Lagrangian and final Eulerian coordinates of particles in evolved density fields. The approach is motivated by numerical simulations of the quartic Galileon gravity model, which has similar equations that can be solved effectively by multigrid Gauss-Seidel relaxation. The method is based on mass conservation and does not assume any specific cosmological model. Our tests show that it has a performance comparable to that of state-of-the-art algorithms very recently put forward in the literature, with the reconstructed density field over ∼80% (50%) correlated with the initial condition at k ≲ 0.6 h/Mpc (1.0 h/Mpc). With an example, we demonstrate that this method can significantly improve the accuracy of BAO reconstruction.
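The relaxation solver at the heart of such methods can be illustrated in its simplest single-grid form. The sketch below applies plain Gauss-Seidel sweeps to a linear 1D Poisson problem; the actual method solves a nonlinear 3D equation and adds multigrid acceleration, so this shows only the basic building block.

```python
import numpy as np

# Gauss-Seidel relaxation for -u'' = f on [0, 1] with u(0) = u(1) = 0,
# discretized by the standard 3-point stencil on a uniform grid.
n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)      # chosen so the exact solution is sin(pi x)
u = np.zeros(n + 1)

for _ in range(5000):                 # plain (single-grid) sweeps; multigrid
    for i in range(1, n):             # would need far fewer iterations
        u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])

err = np.max(np.abs(u - np.sin(np.pi * x)))
```

Each sweep updates unknowns in place using the freshest neighboring values, which is what distinguishes Gauss-Seidel from Jacobi iteration; multigrid wraps such sweeps in a coarse-grid correction cycle to remove the smooth error components cheaply.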
Tomographic Reconstruction Methods for Decomposing Directional Components
DEFF Research Database (Denmark)
Kongskov, Rasmus Dalgas; Dong, Yiqiu
X-ray computed tomography has many different practical applications. In this paper, we propose two new reconstruction methods that decompose objects at the same time as reconstructing them. By incorporating direction information, the proposed methods can decompose objects into various directional components.... Furthermore, we propose a method to obtain the direction information in the objects directly from the measured sinogram data. We demonstrate the proposed methods on simulated and real samples to show their practical applicability. The numerical results show the differences between the two methods...
Profile reconstruction methods for pulse reflectometry
Energy Technology Data Exchange (ETDEWEB)
Bruskin, L.G.; Yamamoto, A.; Mase, A.; Ohashi, M.; Deguchi, T. [Kyushu Univ., Advanced Science and Technology Center for Cooperative Research, Kasuga, Fukuoka (Japan)
2001-05-01
We present an analysis of the existing time-delay methods of plasma profile reconstruction applied to Ultra-Short Pulse (USP) reflectometry. As the instantaneous frequencies become poorly localized in the time domain, even advanced time-frequency analysis fails to produce reliable values of the time delay for the corresponding modes. Based on the results of analytical modeling of USP propagation in plasma, the Signal Record Analysis method of profile reconstruction is proposed. The method has the advantage of relying on the raw signal record rather than on the delay time of each frequency mode, which makes it more robust and reliable for density profile measurements using USP reflectometry. (author)
Apparatus and method for reconstructing data
International Nuclear Information System (INIS)
Pavkovich, J.M.
1977-01-01
The apparatus and method for reconstructing data are described. A fan beam of radiation is passed through an object, the beam lying in the same quasi-plane as the object slice to be examined. Radiation not absorbed in the object slice is recorded on oppositely situated detectors aligned with the source of radiation. Relative rotation is provided between the source-detector configuration and the object. Reconstruction means are coupled to the detector means, and may comprise a general purpose computer, a special purpose computer, and control logic for interfacing between said computers and controlling the respective functioning thereof for performing a convolution and back projection based upon non-absorbed radiation detected by said detector means, whereby the reconstruction means converts values of the non-absorbed radiation into values of absorbed radiation at each of an arbitrarily large number of points selected within the object slice. Display means are coupled to the reconstruction means for providing a visual or other display or representation of the quantities of radiation absorbed at the points considered in the object. (Auth.)
A local expansion method applied to fast plasma boundary reconstruction for EAST
Guo, Yong; Xiao, Bingjia; Luo, Zhengping
2011-10-01
A fast plasma boundary reconstruction technique based on a local expansion method is designed for EAST. It represents the poloidal flux distribution in the vacuum region by a limited number of expansions. The plasma boundary reconstructed by the local expansion method is consistent with EFIT/RT-EFIT results for an arbitrary plasma configuration. On a Linux server with Intel (R) Xeon (TM) CPU 3.2 GHz, the method completes one plasma boundary reconstruction in about 150 µs. This technique is sufficiently reliable and fast for real-time shape control.
Frerichs, H.; Effenberg, F.; Feng, Y.; Schmitz, O.; Stephey, L.; Reiter, D.; Börner, P.; The W7-X Team
2017-12-01
The interpretation of spectroscopic measurements in the edge region of high-temperature plasmas can be guided by modeling with the EMC3-EIRENE code. A versatile synthetic diagnostic module, initially developed for the generation of synthetic camera images, has been extended for the evaluation of the inverse problem in which the observable photon flux is related back to the originating particle flux (recycling). An application of this synthetic diagnostic to the startup-phase (inboard) limiter in Wendelstein 7-X (W7-X) is presented, and the reconstruction of recycling fluxes from synthetic observations is demonstrated.
Analytical method for reconstruction pin to pin of the nuclear power density distribution
International Nuclear Information System (INIS)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S.
2013-01-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Both flux and power form functions are used. The results obtained with this method have good accuracy when compared with reference values. (author)
Analytical method for reconstruction pin to pin of the nuclear power density distribution
Energy Technology Data Exchange (ETDEWEB)
Pessoa, Paulo O.; Silva, Fernando C.; Martinez, Aquilino S., E-mail: ppessoa@con.ufrj.br, E-mail: fernando@con.ufrj.br, E-mail: aquilino@imp.ufrj.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/UFRJ), Rio de Janeiro, RJ (Brazil)
2013-07-01
An accurate and efficient method for pin-by-pin reconstruction of the nuclear power density distribution, involving the analytical solution of the two-dimensional, two-group neutron diffusion equation in homogeneous nodes, is presented. The boundary conditions used for the analytic solution are the four currents or fluxes on the surfaces of the node, obtained by the Nodal Expansion Method (NEM), and the four fluxes at the vertices of the node, calculated using the finite difference method. The analytical solution found is the homogeneous neutron flux distribution. Detailed pin-by-pin distributions inside a fuel assembly are estimated by the product of the homogeneous flux distribution and a local heterogeneous form function. Both flux and power form functions are used. The results obtained with this method have good accuracy when compared with reference values. (author)
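The final step described above, forming the detailed distribution as the product of a homogeneous (nodal) solution and a heterogeneous form function, can be sketched as follows. The lattice size, flux shape, and form-function values are invented placeholders, not data from the paper.

```python
import numpy as np

# Sketch of pin-power reconstruction: detailed pin-by-pin power is the
# product of a smooth homogeneous flux distribution over the assembly and
# a precomputed heterogeneous form function (normalized to mean 1).
nx = ny = 5                              # hypothetical 5x5 pin lattice

# smooth homogeneous flux (e.g. from the analytic nodal solution)
xs = np.linspace(-1.0, 1.0, nx)
phi_hom = np.cos(0.4 * xs)[:, None] * np.cos(0.4 * xs)[None, :]

# heterogeneous form function from lattice calculations (placeholder values)
rng = np.random.default_rng(1)
form = 1.0 + 0.1 * rng.standard_normal((nx, ny))
form /= form.mean()

pin_power = phi_hom * form               # pin-wise power shape
pin_power /= pin_power.mean()            # normalize to the assembly average
```

The form function carries the fine pin-cell heterogeneity that the homogenized nodal solution cannot resolve, so their product restores the detailed shape at negligible extra cost.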
An alternative method for the measurement of neutron flux
Indian Academy of Sciences (India)
A simple and easy method for measuring the neutron flux is presented. This paper deals with the experimental verification of the neutron dose rate-flux relationship for a non-dissipative medium. Though the neutron flux cannot be obtained from the dose rate in a dissipative medium, experimental results show that for ...
Surface renewal method for estimating sensible heat flux | Mengistu ...
African Journals Online (AJOL)
For short canopies, latent energy flux may be estimated using a shortened surface energy balance from measurements of sensible and soil heat flux and the net irradiance at the surface. The surface renewal (SR) method for estimating sensible heat, latent energy, and other scalar fluxes has the advantage over other ...
Research of ART method in CT image reconstruction
International Nuclear Information System (INIS)
Li Zhipeng; Cong Peng; Wu Haifeng
2005-01-01
This paper studies the Algebraic Reconstruction Technique (ART) in CT image reconstruction, discusses the influence of the number of rays on image quality, and shows that adopting a smoothing method yields high-quality CT images. (authors)
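ART in its basic Kaczmarz form can be sketched as below. The system here is a small random stand-in for a real ray-projection matrix, and no smoothing step of the kind studied in the paper is included.

```python
import numpy as np

def art(W, p, n_sweeps=200, relax=1.0):
    """Basic ART (cyclic Kaczmarz) for the linear CT model W x = p,
    where W holds ray weights, x the image, and p the projections."""
    x = np.zeros(W.shape[1])
    for _ in range(n_sweeps):
        for i in range(W.shape[0]):          # project onto one ray equation at a time
            wi = W[i]
            x += relax * (p[i] - wi @ x) / (wi @ wi) * wi
    return x

# tiny consistent random system standing in for a scanned phantom
rng = np.random.default_rng(2)
W = rng.standard_normal((40, 16))            # 40 "rays", 16 "pixels"
x_true = rng.random(16)
p = W @ x_true                                # noise-free projections
x_rec = art(W, p)
```

With more rays than pixels the iteration converges to the true image for consistent data, which is one way to see why the ray count matters for reconstruction quality.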
Method of reconstructing a moving pulse
Energy Technology Data Exchange (ETDEWEB)
Howard, S J; Horton, R D; Hwang, D Q; Evans, R W; Brockington, S J; Johnson, J [UC Davis Department of Applied Science, Livermore, CA, 94551 (United States)
2007-11-15
We present a method of analyzing a set of N time signals f_i(t) that consist of local measurements of the same physical observable taken at N sequential locations Z_i along the length of an experimental device. The result is an algorithm for reconstructing an approximation F(z,t) of the field f(z,t) in the inaccessible regions between the points of measurement. We also explore the conditions needed for this approximation to hold, and test the algorithm under a variety of conditions. We apply this method to analyze the magnetic field measurements taken on the Compact Toroid Injection eXperiment (CTIX) plasma accelerator, providing a direct means of visualizing experimental data, quantifying global properties, and benchmarking simulation.
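A hedged illustration of the underlying idea, for the simplest special case only: if the pulse moves at a constant known speed, the field everywhere is a time-shifted copy of one measured signal. The paper's algorithm handles the general case; the speed, pulse shape, and grid below are invented.

```python
import numpy as np

# If a pulse moves with constant speed v, a signal f(t) measured at z0
# determines the field elsewhere by a time shift: F(z, t) = f(t - (z - z0)/v).
v, z0 = 2.0, 0.0
t = np.linspace(0.0, 10.0, 2001)
f = np.exp(-0.5 * ((t - 3.0) / 0.4) ** 2)      # measured pulse at z0, peak at t=3

def F(z, tq):
    # evaluate the shifted signal by interpolation on the measured record
    return np.interp(tq - (z - z0) / v, t, f)

# the reconstructed pulse at z = 2 peaks one transit time later, at t = 4
peak_t = t[int(np.argmax([F(2.0, ti) for ti in t]))]
```

Real data require estimating the (generally non-constant) propagation between stations, which is exactly the harder problem the reconstruction algorithm addresses.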
Cadmium filtered neutron flux determination. Comparison of activation methods
International Nuclear Information System (INIS)
Ollui-Mboulou, Magloire.
1979-01-01
Neutron fluxes under cadmium filters are determined by the cadmium-ratio and sandwich activation methods. The thermal neutron flux levels obtained with seven detectors of different kinds (In, Au, Ag, W, Co, Mn, Zn) are compared. The cadmium-ratio method was used in locations for which the ratios of epithermal to thermal neutron flux are quite different. By irradiating materials under different thicknesses of cadmium it was possible to establish experimental curves from which the flux depression factors for intermediate neutrons may be determined whatever the thickness of the filter used. Whereas the cadmium-ratio method can only measure the mean flux above the cadmium cut-off energy, the sandwich method enables the flux value to be determined in a narrow band around the resonance energy of each detector used [fr]
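The bookkeeping behind the cadmium-ratio method can be sketched as follows: a bare detector responds to thermal plus epithermal neutrons, while the cadmium-covered one responds (ideally) only to neutrons above the Cd cut-off near 0.5 eV. The activity values below are invented for illustration.

```python
# Cadmium-ratio bookkeeping (idealized: perfect Cd filter, no flux depression).

def cadmium_ratio(activity_bare, activity_cd):
    """R_Cd = bare saturation activity / Cd-covered saturation activity."""
    return activity_bare / activity_cd

def thermal_fraction(r_cd):
    """Fraction of the bare activation induced by sub-cadmium (thermal) neutrons."""
    return 1.0 - 1.0 / r_cd

a_bare, a_cd = 1000.0, 125.0        # measured saturation activities (arbitrary units)
r = cadmium_ratio(a_bare, a_cd)
f_th = thermal_fraction(r)
```

Converting the thermal activation fraction into an absolute flux additionally requires the detector's activation cross section and atom number, and, as the abstract stresses, real filters need the experimentally determined flux depression corrections.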
Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics
International Nuclear Information System (INIS)
Luo, Hong; Xia, Yidong; Nourgaliev, Robert
2011-01-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods used in finite volume methods with the accuracy of DG methods to obtain a better numerical algorithm in computational fluid dynamics. The appeal of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, containing classical finite volume and standard DG methods as two special cases, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods, as well as a least-squares recovery method, are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, with the least-squares reconstructed DG method providing the best performance in terms of accuracy, efficiency, and robustness. (author)
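The least-squares reconstruction ingredient can be illustrated in its simplest finite-volume-style form: recovering a local gradient from differences of neighboring cell values. The 2D stencil below is a generic example, not the RDG formulation itself.

```python
import numpy as np

# Least-squares gradient reconstruction: given values of a field at a cell
# center and its neighbors, fit the gradient that best explains the
# neighbor-to-center differences. Cell positions here are arbitrary.
centers = np.array([[0.0, 0.0], [1.0, 0.1], [-0.9, 0.2], [0.1, 1.0], [0.2, -1.1]])
u = 2.0 * centers[:, 0] - 3.0 * centers[:, 1] + 1.0   # samples of a linear field

dX = centers[1:] - centers[0]          # displacements to each neighbor
du = u[1:] - u[0]                      # value differences
grad, *_ = np.linalg.lstsq(dX, du, rcond=None)   # least-squares gradient
```

For an exactly linear field the reconstructed gradient is exact, which is the consistency property such reconstructions are built on; RDG applies the same idea one polynomial degree higher, reconstructing quadratic terms on top of a linear DG solution.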
Image-reconstruction methods in positron tomography
Townsend, David W; CERN. Geneva
1993-01-01
Physics and mathematics for medical imaging. In the two decades since the introduction of the X-ray scanner into radiology, medical imaging techniques have become widely established as essential tools in the diagnosis of disease. As a consequence of recent technological and mathematical advances, the non-invasive, three-dimensional imaging of internal organs such as the brain and the heart is now possible, not only for anatomical investigations using X-rays but also for studies which explore the functional status of the body using positron-emitting radioisotopes and nuclear magnetic resonance. Mathematical methods which enable three-dimensional distributions to be reconstructed from projection data acquired by radiation detectors suitably positioned around the patient will be described in detail. The lectures will trace the development of medical imaging from simple radiographs to the present-day non-invasive measurement of in vivo biochemistry. Powerful techniques to correlate anatomy and function that are cur...
New weighting methods for phylogenetic tree reconstruction using multiple loci.
Misawa, Kazuharu; Tajima, Fumio
2012-08-01
Efficient determination of evolutionary distances is important for the correct reconstruction of phylogenetic trees. The performance of the pooled distance required for reconstructing a phylogenetic tree can be improved by applying large weights to appropriate distances and small weights to inappropriate ones. We developed two weighting methods, the modified Tajima-Takezaki method and the modified least-squares method, for reconstructing phylogenetic trees from multiple loci. By computer simulations, we found that both of the new methods were more efficient in reconstructing correct topologies than the no-weight method. We therefore reconstructed hominoid phylogenetic trees from mitochondrial DNA using our new methods, and found that the levels of bootstrap support were significantly increased by both the modified Tajima-Takezaki and the modified least-squares methods.
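The modified weighting schemes themselves are not reproduced here, but the underlying idea of down-weighting unreliable loci when pooling distances can be sketched with a generic inverse-variance weighting; all numbers are hypothetical.

```python
import numpy as np

def pooled_distance(distances, variances):
    """Pool per-locus distance estimates with inverse-variance weights,
    so noisier loci contribute less than in a plain ('no-weight') average."""
    w = 1.0 / np.asarray(variances)
    w /= w.sum()
    return float(w @ np.asarray(distances))

d_loci = [0.10, 0.12, 0.30]     # hypothetical per-locus distance estimates
v_loci = [0.01, 0.01, 0.25]     # the third locus is much noisier
d_pooled = pooled_distance(d_loci, v_loci)
d_plain = float(np.mean(d_loci))
```

Here the noisy third locus barely moves the pooled estimate, whereas the unweighted mean is pulled strongly toward it; this is the effect that improves topology recovery in the simulations.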
Spatial methods for event reconstruction in CLEAN
Energy Technology Data Exchange (ETDEWEB)
Coakley, K.J. E-mail: kevin.coakley@nist.gov; McKinsey, D.N. E-mail: daniel.mckinsey@yale.edu
2004-04-21
In CLEAN (Cryogenic Low Energy Astrophysics with Noble gases), a proposed neutrino and dark matter detector, background discrimination is possible if one can determine the location of an ionizing radiation event with high accuracy. Here, we develop spatial methods for event reconstruction and study their performance in computer simulation experiments. We simulate ionizing radiation events that produce multiple scintillation photons within a spherical detection volume filled with liquid neon. We estimate the radial location of a particular ionizing radiation event based on the observed count data corresponding to that event. The count data are collected by detectors mounted at the spherical boundary of the detection volume. We neglect absorption but account for Rayleigh scattering. To account for wavelength shifting of the scintillation light, we assume that photons are absorbed and re-emitted at the detectors. In our study, the detectors incompletely cover the surface area of the sphere. In the first method, we estimate the radial location of the event by maximizing the approximate Poisson likelihood of the observed count data. To correct for scattering and wavelength shifting, we adjust this estimate using a polynomial calibration model. In the second method, we predict the radial location of the event as a polynomial function of the magnitude of the centroid of the observed count data. The polynomial calibration models are constructed from calibration (training) data. In general, the Maximum Likelihood (ML) estimate is more accurate than the centroid estimate. We estimate the expected number of photons emitted by the event with an ML method and with a simple method based on the ratio of the number of detected photons to a detection probability factor.
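The ML step can be illustrated with a toy forward model: detectors with expected counts lam_j(r) for an event at radius r, and a grid search over the Poisson log-likelihood. The angular response model below is invented and much simpler than CLEAN's optics (no scattering or wavelength shifting).

```python
import numpy as np

rng = np.random.default_rng(3)
angles = np.linspace(0.0, np.pi, 12)      # polar angles of 12 idealized detectors

def expected_counts(r, total=500.0):
    """Invented forward model: detector response sharpens as the event
    radius r (in units of the sphere radius) approaches the boundary."""
    w = (1.0 - r**2) / (1.0 - 2.0 * r * np.cos(angles) + r**2) ** 1.5
    return total * w / w.sum()

# simulate Poisson counts for an event at r = 0.6, then grid-search the
# radius that maximizes the Poisson log-likelihood of those counts
r_true = 0.6
counts = rng.poisson(expected_counts(r_true))

grid = np.linspace(0.0, 0.9, 181)
loglik = [np.sum(counts * np.log(expected_counts(r)) - expected_counts(r))
          for r in grid]
r_hat = grid[int(np.argmax(loglik))]
```

In the paper the analogous estimate is additionally corrected with a polynomial calibration model trained on simulated events, precisely because the real forward model is not known in closed form.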
The higher order flux mapping method in large size PHWRs
International Nuclear Information System (INIS)
Kulkarni, A.K.; Balaraman, V.; Purandare, H.D.
1997-01-01
A new higher-order method is proposed for obtaining the flux map using a single set of expansion modes. In this procedure, one can make use of the difference between the predicted values of the detector readings and their actual values to determine the strength of the local fluxes around each detector site. The local fluxes arise due to constant perturbation changes (both extrinsic and intrinsic) taking place in the reactor. (author)
Energy Technology Data Exchange (ETDEWEB)
Hong Luo; Luqing Luo; Robert Nourgaliev; Vincent A. Mousseau
2010-01-01
A reconstruction-based discontinuous Galerkin (RDG) method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. The RDG method, originally developed for the compressible Euler equations, is extended to discretize the viscous and heat fluxes in the Navier-Stokes equations using a so-called inter-cell reconstruction, where a smooth solution is locally reconstructed using a least-squares method from the underlying discontinuous DG solution. Similar to the recovery-based DG (rDG) methods, this reconstructed DG method eliminates the introduction of ad hoc penalty or coupling terms commonly found in traditional DG methods. Unlike rDG methods, this RDG method does not need to judiciously choose a proper form of a recovered polynomial, and is thus simple, flexible, and robust, and can be used on arbitrary grids. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG method is able to deliver the same accuracy as the well-known Bassi-Rebay II scheme at half of its computing cost for the discretization of the viscous fluxes in the Navier-Stokes equations, clearly demonstrating its superior performance over existing DG methods for solving the compressible Navier-Stokes equations.
DEFF Research Database (Denmark)
Hellebust, Taran Paulsen; Tanderup, Kari; Bergstrand, Eva Stabell
2007-01-01
A phantom, containing a ring applicator set and six lead pellets representing dose points, was used. The phantom was CT scanned with the ring applicator at four different angles relative to the image plane. In each scan the applicator was reconstructed by three methods: (1) direct reconstruction in each image (DR), (2) reconstruction in multiplanar reconstructed images (MPR), and (3) library plans using pre-defined applicator geometry (LIB). The doses to the lead pellets were calculated. The relative standard deviation (SD) for all reconstruction methods was less than 3.7% in the dose points, and the relative SD for the LIB method was significantly lower. Using library plans for applicator reconstruction gives the most reproducible dose calculation. However, with restrictive guidelines for applicator...
A Parallel Reconstructed Discontinuous Galerkin Method for Compressible Flows on Arbitrary Grids
Energy Technology Data Exchange (ETDEWEB)
Hong Luo; Amjad Ali; Robert Nourgaliev; Vincent A. Mousseau
2010-01-01
A reconstruction-based discontinuous Galerkin method is presented for the solution of the compressible Navier-Stokes equations on arbitrary grids. In this method, an in-cell reconstruction is used to obtain a higher-order polynomial representation of the underlying discontinuous Galerkin polynomial solution, and an inter-cell reconstruction is used to obtain a continuous polynomial solution on the union of two neighboring, interface-sharing cells. The in-cell reconstruction is designed to enhance the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. The inter-cell reconstruction is devised to remove the interface discontinuity of the solution and its derivatives and thus to provide a simple, accurate, consistent, and robust approximation to the viscous and heat fluxes in the Navier-Stokes equations. A parallel strategy is also devised for the resulting reconstructed discontinuous Galerkin method, based on domain partitioning and the Single Program Multiple Data (SPMD) parallel programming model. The RDG method is used to compute a variety of compressible flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results demonstrate that this RDG method is third-order accurate at a cost only slightly higher than that of its underlying second-order DG method, while providing better performance than the third-order DG method in terms of both computing costs and storage requirements.
Total solar irradiance reconstruction since 1700 using a flux transport model
Dasi Espuig, Maria; Krivova, Natalie; Solanki, Sami K.; Jiang, Jie
Reconstructions of solar irradiance into the past are crucial for studies of solar influence on climate. Models based on the assumption that irradiance changes are caused by the evolution of the photospheric magnetic fields have been most successful in reproducing the measured irradiance variations. Daily magnetograms, such as those from MDI and HMI, provide the most detailed information on the changing distribution of the photospheric magnetic fields. Since such magnetograms are only available from 1974, we used a surface flux transport model to describe the evolution of the magnetic fields on the solar surface due to the effects of differential rotation, meridional circulation, and turbulent diffusivity, before 1974. In this model, the sources of magnetic flux are the active regions, which are introduced based on sunspot group areas, positions, and tilt angles. The RGO record is, however, only available since 1874. Here we present a model of solar irradiance since 1700, which is based on a semi-synthetic sunspot record. The semi-synthetic record was obtained using statistical relationships between sunspot group properties (areas, positions, tilt angles) derived from the RGO record on one hand, and the cycle strength and phase derived from the sunspot group number (Rg) on the other. These relationships were employed to produce daily records of sunspot group positions, areas, and tilt angles before 1874. The semi-synthetic records were fed into the surface flux transport model to simulate daily magnetograms since 1700. By combining the simulated magnetograms with a SATIRE-type model, we then reconstructed total solar irradiance since 1700.
Software Architecture Reconstruction Method, a Survey
Zainab Nayyar; Nazish Rafique
2014-01-01
Architecture reconstruction belongs to the reverse engineering process, in which we move from code to the architecture level in order to reconstruct the architecture. Software architectures are the blueprints of projects, which depict the external overview of the software system. Maintenance and testing often cause the software to deviate from its original architecture, because sometimes, to enhance the functionality of a system, the software deviates from its documented specifications; some new modules a...
Testing an inversion method for estimating electron energy fluxes from all-sky camera images
Directory of Open Access Journals (Sweden)
N. Partamies
2004-06-01
Full Text Available An inversion method for reconstructing the precipitating electron energy flux from a set of multi-wavelength digital all-sky camera (ASC) images has recently been developed. Preliminary tests suggested that the inversion is able to reconstruct the position and energy characteristics of the aurora with reasonable accuracy. This study carries out a thorough testing of the method and presents a few improvements to its emission physics equations.
We compared the precipitating electron energy fluxes as estimated by the inversion method to the energy flux data recorded by the Defense Meteorological Satellite Program (DMSP) satellites during four passes over auroral structures. When the aurorae appear very close to the local zenith, the fluxes inverted from the blue (427.8 nm) filtered ASC images, or from blue and green line (557.7 nm) images together, give the best agreement with the measured flux values. The fluxes inverted from green line images alone are clearly larger than the measured ones. Closer to the horizon, the quality of the inversion results from blue images deteriorates to the level of those from green images. In addition to the satellite data, the precipitating electron energy fluxes were estimated from the electron density measurements of the EISCAT Svalbard Radar (ESR). These energy flux values were compared to those of the inversion method applied to over 100 ASC images recorded at the nearby ASC station in Longyearbyen. The energy fluxes deduced from these two types of data are in general of the same order of magnitude. In 35% of the blue and green image inversions the relative errors were less than 50%, and in 90% they were less than 100%.
This kind of systematic testing of the inversion method is the first step toward using all-sky camera images in the way in which global UV images have recently been used to estimate the energy fluxes. The advantages of ASCs, compared to space-borne imagers, are...
Energy Technology Data Exchange (ETDEWEB)
Gao, Zhongming [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Russell, Eric S. [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Missik, Justine E. C. [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Huang, Maoyi [Pacific Northwest National Laboratory, Richland Washington USA; Chen, Xingyuan [Pacific Northwest National Laboratory, Richland Washington USA; Strickland, Chris E. [Pacific Northwest National Laboratory, Richland Washington USA; Clayton, Ray [Pacific Northwest National Laboratory, Richland Washington USA; Arntzen, Evan [Pacific Northwest National Laboratory, Richland Washington USA; Ma, Yulong [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA; Liu, Heping [Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University, Pullman Washington USA
2017-07-12
We evaluated nine methods of soil heat flux calculation using field observations. All nine methods underestimated the soil heat flux by at least 19%. This large underestimation is mainly caused by uncertainties in soil thermal properties.
Nitric oxide fluxes from an agricultural soil using a flux-gradient method
Taylor, N. M.; Wagner-Riddle, C.; Thurtell, G. W.; Beauchamp, E. G.
1999-05-01
Soil emission of nitric oxide may be a significant source of NOx in rural areas. Agricultural practices may enhance these emissions through the addition of nitrogen fertilizers. A system that enables continuous measurement of NO fluxes from agricultural surfaces using the flux-gradient method was developed. Hourly differences in NO concentrations in air sampled at two intake heights (0.6 and 1 m) were determined using a chemiluminescence analyzer. Eddy diffusivities were determined using wind profiles (cup anemometers), and stability corrections were calculated using a 5 cm path sonic anemometer. Fast switching of sampling between air intake heights (every 30 s) and determination of concentration values at a frequency of 2 Hz minimized the errors due to fluctuations in background concentration. Low travel times for air samples in the tubing (~8 s) were estimated to result in small errors in flux values due to chemical reactions. The overall resolution of the system was estimated as ~1 ng N m-2 s-1. NO fluxes from a bare soil were measured quasi-continuously from January to June 1995 at Elora, Canada, comprising a total of 1833 hourly values. Daily NO fluxes before nitrogen fertilization were small, increasing after nitrogen fertilizer was added (>10 ng N m-2 s-1). Monthly NO fluxes estimated were similar to those observed in previous studies. The designed system could easily be modified to measure NOx fluxes by using an additional chemiluminescence analyzer. The system could also be adapted to measure fluxes sequentially from various plots, enabling testing of the effects of agricultural practices on NO emissions.
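The flux-gradient computation described above can be sketched as follows, using a neutral-stability eddy diffusivity K = k·u*·z. This is a simplification: the system described here applies stability corrections from sonic anemometer data, which are omitted, and the numbers in the example are illustrative.

```python
import math

VON_KARMAN = 0.41  # von Karman constant

def flux_gradient(c_low, c_high, z_low, z_high, u_star):
    """Surface flux from a two-height concentration difference,
    F = -K * dC/dz, with a neutral-stability eddy diffusivity
    K = k * u_star * z evaluated at the geometric mean height.
    Positive F means emission (concentration decreasing with height)."""
    z_mean = math.sqrt(z_low * z_high)
    eddy_k = VON_KARMAN * u_star * z_mean
    return -eddy_k * (c_high - c_low) / (z_high - z_low)
```

With the paper's intake heights (0.6 and 1 m), a 2 ng m-3 concentration drop with height and u* = 0.3 m s-1 yields a small positive (upward) NO flux, consistent in sign with the fertilized-soil emissions reported.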
Magnetic flux density reconstruction using interleaved partial Fourier acquisitions in MREIT
Energy Technology Data Exchange (ETDEWEB)
Park, Hee Myung [Department of Veterinary Internal Medicine, College of Veterinary Medicine, Konkuk University (Korea, Republic of); Nam, Hyun Soo; Kwon, Oh In, E-mail: oikwon@konkuk.ac.kr [Department of Mathematics, Konkuk University (Korea, Republic of)
2011-04-07
Magnetic resonance electrical impedance tomography (MREIT) has been introduced as a non-invasive modality to visualize the internal conductivity and/or current density of an electrically conductive object through the injection of current. In order to measure a magnetic flux density signal in MREIT, the phase difference approach in an interleaved encoding scheme cancels the systematic artifacts accumulated in phase signals and also reduces the random noise effect. However, it is important to reduce scan duration while maintaining spatial resolution and sufficient contrast, in order to allow practical in vivo implementation of MREIT. The purpose of this paper is to develop a coupled partial Fourier strategy for the interleaved sampling in order to reduce the total imaging time of an MREIT acquisition, whilst maintaining an SNR of the measured magnetic flux density comparable to what is achieved with complete k-space data. The proposed method uses two key steps: one is to update the magnetic flux density by updating the complex densities using the partially interleaved k-space data, and the other is to fill in the missing k-space data iteratively using the updated background field inhomogeneity and magnetic flux density data. Results from numerical simulations and animal experiments demonstrate that the proposed method considerably reduces the scanning time and provides resolution of the recovered Bz comparable to what is obtained from complete k-space data.
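The simplest ingredient of partial Fourier reconstruction, filling missing k-space samples of a real-valued image via Hermitian symmetry, can be sketched as follows. This is a crude stand-in for the paper's iterative update, which additionally handles background field inhomogeneity and complex image phase.

```python
import numpy as np

def fill_conjugate_symmetry(kspace, measured_mask):
    """Fill unmeasured k-space samples of a real-valued image using the
    Hermitian symmetry K(-k) = conj(K(k)) (indices taken modulo the
    grid size). Assumes each missing sample's conjugate partner was
    measured."""
    filled = kspace.copy()
    n_rows, n_cols = kspace.shape
    for i in range(n_rows):
        for j in range(n_cols):
            if not measured_mask[i, j]:
                filled[i, j] = np.conj(kspace[(-i) % n_rows, (-j) % n_cols])
    return filled
```

For a strictly real image and a half-plane sampling mask this recovers the image exactly; real MR data have phase, which is why the paper's method iterates with an estimated background field.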
How to choose methods for lake greenhouse gas flux measurements?
Bastviken, David
2017-04-01
Lake greenhouse gas (GHG) fluxes are increasingly recognized as important for lake ecosystems as well as for large-scale carbon and GHG budgets. However, many of our flux estimates are uncertain, and it can be debated whether the presently available data are representative of the systems studied. Data are also very limited for some important flux pathways. Hence, many ongoing efforts try to better constrain fluxes and understand flux regulation. A fundamental challenge towards improved knowledge, and when starting new studies, is which methods to choose. A variety of approaches to measure aquatic GHG exchange is used, and data from different methods and methodological approaches have often been treated as equally valid to create large datasets for extrapolations and syntheses. However, data from different approaches may cover different flux pathways or spatio-temporal domains and are thus not always comparable. Method inter-comparisons and critical method evaluations addressing these issues are rare. Emerging efforts to organize systematic multi-lake monitoring networks for GHG fluxes lead to method choices that may set the foundation for decades of data generation and therefore require fundamental evaluation of different approaches. The method choices concern not only the equipment but also, for example, the overall measurement design and field approaches, the relevant spatial and temporal resolution for different flux components, and the accessory variables to measure. In addition, consideration of how to design monitoring approaches that are affordable, suitable for widespread (global) use, and comparable across regions is needed. Inspired by discussions with Prof. Dr. Christian Blodau during the EGU General Assembly 2016, this presentation aims to (1) illustrate fundamental pros and cons for a number of common methods, (2) show how common methodological approaches originally adapted for other environments can be improved for lake flux measurements, (3) suggest...
Predictive methods for estimating pesticide flux to air
Energy Technology Data Exchange (ETDEWEB)
Woodrow, J.E.; Seiber, J.N. [Univ. of Nevada, Reno, NV (United States)
1996-10-01
Published evaporative flux values for pesticides volatilizing from soil, plants, and water were correlated with compound vapor pressures (VP), modified by compound properties appropriate to the treated matrix (e.g., soil adsorption coefficient [Koc], water solubility [Sw]). These correlations were formulated as Ln-Ln plots with correlation coefficients (r2) in the range 0.93-0.99: (1) soil surface - Ln flux vs Ln (VP/[Koc x Sw]); (2) soil incorporation - Ln flux vs Ln [(VP x AR)/(Koc x Sw x d)] (AR = application rate, d = incorporation depth); (3) plants - Ln flux vs Ln VP; and (4) water - Ln (flux/water conc) vs Ln (VP/Sw). Using estimated flux values from the plant correlation as source terms in the EPA's SCREEN-2 dispersion model gave downwind concentrations that agreed to within 65-114% with measured concentrations. Further validation using other treated matrices is in progress. These predictive methods for estimating flux, when coupled with downwind dispersion modeling, provide tools for limiting downwind exposures.
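A fitted correlation of type (1) can be evaluated as below. The slope and intercept are placeholders for regression results, not values from the paper; with slope 1 and intercept 0 the function reduces to flux = VP/(Koc·Sw), which makes it easy to check.

```python
import math

def soil_surface_flux(vp, koc, sw, slope, intercept):
    """Evaluate a fitted soil-surface correlation of the form
    Ln(flux) = intercept + slope * Ln(VP / (Koc * Sw)).
    slope and intercept must come from a regression such as those in
    the paper; any values used here are illustrative only."""
    return math.exp(intercept + slope * math.log(vp / (koc * sw)))
```

The same one-liner structure applies to correlations (2)-(4) with the appropriate predictor inside the logarithm.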
Geometric reconstruction methods for electron tomography
International Nuclear Information System (INIS)
Alpers, Andreas; Gardner, Richard J.; König, Stefan; Pennington, Robert S.; Boothroyd, Chris B.; Houben, Lothar; Dunin-Borkowski, Rafal E.; Joost Batenburg, Kees
2013-01-01
Electron tomography is becoming an increasingly important tool in materials science for studying the three-dimensional morphologies and chemical compositions of nanostructures. The image quality obtained by many current algorithms is seriously affected by the problems of missing wedge artefacts and non-linear projection intensities due to diffraction effects. The former refers to the fact that data cannot be acquired over the full 180° tilt range; the latter implies that for some orientations, crystalline structures can show strong contrast changes. To overcome these problems we introduce and discuss several algorithms from the mathematical fields of geometric and discrete tomography. The algorithms incorporate geometric prior knowledge (mainly convexity and homogeneity), which in principle also considerably reduces the number of tilt angles required. Results are discussed for the reconstruction of an InAs nanowire. Highlights: four algorithms for electron tomography are introduced that utilize prior knowledge; objects are assumed to be homogeneous, and convexity and regularity are also discussed; slices of a nanowire are reconstructed from as few as four projections; algorithms should be selected based on the specific reconstruction task at hand.
Multicore Performance of Block Algebraic Iterative Reconstruction Methods
DEFF Research Database (Denmark)
Sørensen, Hans Henrik B.; Hansen, Per Christian
2014-01-01
Algebraic iterative methods are routinely used for solving the ill-posed sparse linear systems arising in tomographic image reconstruction. Here we consider the algebraic reconstruction technique (ART) and the simultaneous iterative reconstruction techniques (SIRT), both of which rely on semiconvergence. Block versions of these methods, based on a partitioning of the linear system, are able to combine the fast semiconvergence of ART with the better multicore properties of SIRT. These block methods separate into two classes: those that, in each iteration, access the blocks in a sequential manner...
Assessing the Accuracy of Ancestral Protein Reconstruction Methods
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-01-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations...
Magnetic flux concentration methods for magnetic energy harvesting module
Directory of Open Access Journals (Sweden)
Wakiwaka Hiroyuki
2013-01-01
Full Text Available This paper presents magnetic flux concentration methods for a magnetic energy harvesting module. The purpose of this study is to harvest 1 mW of energy with a Brooks coil 2 cm in diameter from an environmental magnetic field at 60 Hz. Because the harvested power is proportional to the square of the magnetic flux density, we consider the use of a magnetic flux concentration coil and a magnetic core. The magnetic flux concentration coil consists of an air-core Brooks coil and a resonant capacitor. When a uniform magnetic field crosses the coil, the magnetic flux distribution around the coil changes. It is found that the magnetic field in an area is concentrated to more than 20 times the uniform magnetic field. Compared with the air-core coil, our designed magnetic core makes the harvested energy tenfold. According to the ICNIRP 2010 guideline, the acceptable level of magnetic field is 0.2 mT in the frequency range between 25 Hz and 400 Hz. Without the two magnetic flux concentration methods, the corresponding energy is limited to 1 µW. In contrast, our experimental results successfully demonstrate energy harvesting of 1 mW from a magnetic field of 0.03 mT at 60 Hz.
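A rough order-of-magnitude sketch of the B-squared scaling that motivates flux concentration: the matched-load power of an idealized air-core pickup coil. Resonance, core effects and coil geometry details are ignored, and the parameter values in the example are illustrative, not the module's specifications.

```python
import math

def matched_load_power(b_rms, freq_hz, turns, area_m2, coil_resistance):
    """Upper-bound harvested power: RMS EMF of an air-core coil in a
    uniform sinusoidal field (emf = 2*pi*f*N*A*B), delivered into a
    resistive load matched to the coil resistance (P = emf^2 / 4R)."""
    emf_rms = 2.0 * math.pi * freq_hz * turns * area_m2 * b_rms
    return emf_rms ** 2 / (4.0 * coil_resistance)
```

Because power grows with the square of the flux density, concentrating the local field by a factor of 20, as reported above, raises the available power by a factor of 400 in this idealization.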
Determination of soil evaporation fluxes using distributed temperature sensing methods
Serna, J. L.; Cristi Matte, F.; Munoz, J. F.; Suarez, F. I.
2014-12-01
The dynamics of evaporation fluxes in arid soils is an unresolved, complex phenomenon that has a major impact on a basin's water availability. In arid zones, evaporation controls moisture contents near the soil surface and drives liquid water and water vapor fluxes through the vadose zone, playing a critical role in both the hydrological cycle and the energy balance. However, determining soil evaporation in arid zones is a difficult undertaking. Thus, it is important to develop new measuring techniques that can determine evaporation fluxes. In the last decade, distributed temperature sensing (DTS) methods have been successfully used to investigate a wide range of hydrologic applications. In particular, DTS methods have been used indirectly to monitor soil moisture. Two methods have been developed: the passive and the active method. In the active mode, the DTS system uses cables with metal elements, and a voltage difference is applied at the two ends of the cable to heat it for a defined time period. Then, the cumulative temperature increase along the cable is computed and soil moisture is determined by using an empirical relation. DTS technology has also been used to determine water fluxes in porous media, but so far no efforts have been made to determine evaporation fluxes. Here, we investigate the feasibility of using the active DTS method to determine soil evaporation fluxes. To achieve this objective, column experiments were designed to study evaporation from sandy soils with shallow water tables. The soil columns were instrumented with traditional temperature and time-domain-reflectometry probes, and an armored fiber-optic cable that allows using the active method to estimate the soil moisture profile. In the experiments, the water table can be fixed at different depths and soil evaporation can be estimated by measuring the water added to the constant-head reservoir that feeds the column, thus allowing the investigation of soil evaporation fluxes from DTS...
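The active-mode processing described above (cumulative temperature rise during the heat pulse, then an empirical moisture relation) might be sketched as follows. The logarithmic calibration form and its coefficients are assumptions for illustration, not taken from this work; any real deployment fits them per site.

```python
import math

def cumulative_temperature_rise(temps, baseline, dt):
    """Integrate the heating-induced temperature rise above the
    pre-pulse baseline over the heat pulse (rectangle rule)."""
    return sum(t - baseline for t in temps) * dt

def soil_moisture(t_cum, a, b):
    """Hypothetical empirical calibration theta = a - b*ln(T_cum).
    Wetter soil conducts heat away faster, so a smaller cumulative
    rise maps to a higher moisture content. a and b are site-specific
    fit parameters."""
    return a - b * math.log(t_cum)
```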
New reconstruction method for the advanced Compton camera
International Nuclear Information System (INIS)
Kurihara, Takashi; Ogawa, Koichi
2007-01-01
Conventional gamma cameras employ a mechanical collimator, which reduces the number of photons detected by such cameras. To address this issue, the Compton camera has been proposed to improve the efficiency of data acquisition by employing electronic collimation. Among Compton cameras, the advanced Compton camera (ACC), proposed by Tanimori et al., can restrict the source locations with the help of the recoil electrons that are emitted in the process of Compton scattering. However, the reconstruction methods employed in conventional Compton cameras are inefficient in reconstructing images from the data acquired with the ACC. In this paper, we propose a new reconstruction method that is designed specifically for the ACC. This method, which is an improved version of the source space tree algorithm (SSTA), permits the source distribution to be reconstructed accurately and efficiently. The SSTA is one of the reconstruction methods for conventional Compton cameras proposed by Rohe et al. Our proposed algorithm employs a set of lines defined at equiangular intervals in the reconstruction region, and specified voxels of interest that include search points located on the predefined lines at equally spaced intervals. The validity of our method is demonstrated by simulations involving the reconstruction of a point source and a disk source. (author)
Review of unfolding methods for neutron flux dosimetry
International Nuclear Information System (INIS)
Stallmann, F.W.; Kam, F.B.K.
1975-01-01
The primary method in reactor dosimetry is the foil activation technique. To translate the activation measurements into neutron fluxes, a special data processing technique called unfolding is needed. Some general observations about the problems and the reliability of this approach to reactor dosimetry are presented. Current unfolding methods are reviewed. 12 references. (auth)
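In matrix form, foil activation unfolding seeks group fluxes phi from measured activities a ≈ R·phi, where R holds the foils' group response coefficients. A least-squares sketch of this step (the response matrix values in the example are invented for illustration; practical unfolding codes add prior spectra and uncertainty weighting):

```python
import numpy as np

def unfold_group_fluxes(response, activities):
    """Least-squares unfolding of few-group neutron fluxes from foil
    activities: solve activities ~ response @ fluxes for an
    overdetermined system (more foils than flux groups)."""
    fluxes, *_ = np.linalg.lstsq(response, activities, rcond=None)
    return fluxes
```

With noise-free activities generated from a known flux vector the solve recovers it exactly, which is a useful sanity check before adding measurement uncertainties.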
A comparison of ancestral state reconstruction methods for quantitative characters.
Royer-Carenzi, Manuela; Didier, Gilles
2016-09-07
Choosing an ancestral state reconstruction method among the alternatives available for quantitative characters may be puzzling. We present here a comparison of seven of them, namely the maximum likelihood, restricted maximum likelihood, generalized least squares under Brownian, Brownian-with-trend and Ornstein-Uhlenbeck models, phylogenetic independent contrasts and squared parsimony methods. A review of the relations between these methods shows that the maximum likelihood, the restricted maximum likelihood and the generalized least squares under Brownian model infer the same ancestral states and can only be distinguished by the distributions accounting for the reconstruction uncertainty which they provide. The respective accuracy of the methods is assessed over character evolution simulated under a Brownian motion with (and without) directional or stabilizing selection. We give the general form of ancestral state distributions conditioned on leaf states under the simulation models. Ancestral distributions are used first, to give a theoretical lower bound of the expected reconstruction error, and second, to develop an original evaluation scheme which is more efficient than comparing the reconstructed and the simulated states. Our simulations show that: (i) the distributions of the reconstruction uncertainty provided by the methods generally make sense (some more than others); (ii) it is essential to detect the presence of an evolutionary trend and to choose a reconstruction method accordingly; (iii) all the methods perform well on characters under stabilizing selection; (iv) without trend or stabilizing selection, the maximum likelihood method is generally the most accurate. Copyright © 2016 Elsevier Ltd. All rights reserved.
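For the simplest tree shape, a star phylogeny, the maximum likelihood (equivalently GLS) root state under plain Brownian motion has a closed form: the inverse-branch-length weighted mean of the leaf states. A sketch under that simplifying assumption (general trees require the full variance structure):

```python
def ml_root_state_star_tree(leaf_states, branch_lengths):
    """ML root state under Brownian motion on a star phylogeny:
    leaf i is N(root, sigma^2 * t_i), so the ML estimate is the
    inverse-variance (1/t_i) weighted mean of the leaf states."""
    weights = [1.0 / t for t in branch_lengths]
    total = sum(weights)
    return sum(w * x for w, x in zip(weights, leaf_states)) / total
```

Leaves on short branches dominate the estimate, which is the intuition behind the equivalence of ML, REML and GLS point estimates noted in the abstract.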
Borrelli, C.; Gabitov, R. I.; Messenger, S. R.; Nguyen, A. N.; Torres, M. E.; Kessler, J. D.
2015-01-01
Methane (CH4) is an important greenhouse gas, with a global warming potential much higher than that of carbon dioxide (CO2) on a short time scale. Even if the residence time of CH4 in the atmosphere is relatively short (tens of years), one of the products of CH4 oxidation is CO2, a greenhouse gas with a much longer residence time in the atmosphere (tens to hundreds of years). CH4 has been proposed as one of the trigger mechanisms for rapid global climate change today and in the geological past. With regard to the geological past, numerous studies have proposed the benthic foraminiferal carbon isotope ratio (δ13C) as a tool to reconstruct the impact of marine CH4 on rapid climate changes; however, investigations of modern benthic foraminiferal δ13C have produced inconclusive results. CH4 has a distinctive hydrogen isotope (δD) and δ13C signature compared to seawater, and sulfate reduction, often coupled to anaerobic CH4 oxidation in sediments, changes the sulfur isotope signature (δ34S) of the remaining sulfate in porewater. Therefore, we hypothesize that the δD and δ34S signatures of infaunal benthic foraminiferal species can provide a complementary approach to δ13C for studying CH4 dynamics in sedimentary environments. Here, we present the preliminary results obtained by analyzing Uvigerina peregrina δD and δ34S from three different locations at Hydrate Ridge, offshore Oregon. Unfortunately, the lack of chemical data from the moment of foraminiferal calcification makes it difficult to build a robust relationship between the U. peregrina stable isotopes and the CH4 fluxes at the sampling sites. However, our results look very promising, as each site is characterized by a different δD and δ34S signature. We emphasize that this study represents the first step in the development of new proxies (δD and δ34S), which may complement the more traditional benthic foraminiferal δ13C values, to reconstruct marine CH4...
DEFF Research Database (Denmark)
Kandel, Tanka P; Lærke, Poul Erik; Elsgaard, Lars
2016-01-01
...pre-deployment fluxes by linear regression techniques. Thus, usually the cumulative flux curve becomes downward concave due to the decreased gas diffusion rate. Non-linear models based on biophysical theory usually fit such curvatures and may reduce the underestimation of fluxes. In this study, we examined the effect of increasing chamber enclosure time on SR flux rates calculated using a linear, an exponential and a revised Hutchinson and Mosier model (HMR). Soil respiration rates were measured with a closed chamber in combination with an infrared gas analyzer. During SR flux measurements the chamber ... to obtain a range of fluxes with different shapes of flux curves. The linear method provided more stable flux results during short enclosure times (a few min) but underestimated initial fluxes by 15-300% after 45 min deployment time. Non-linear models reduced the underestimation, as average underestimation...
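The linear-underestimation effect described above can be reproduced with a synthetic saturating chamber curve. The exponential form below mimics HMR-type curvature; its parameter values are made up for illustration, and the true initial flux is the curve's initial slope, k·(Cmax - C0).

```python
import numpy as np

def chamber_conc(t, c0, cmax, k):
    """Saturating chamber concentration C(t) = Cmax - (Cmax - C0)*exp(-k t),
    a simple model of the downward-concave curve caused by the
    decreasing diffusion gradient inside a closed chamber."""
    return cmax - (cmax - c0) * np.exp(-k * t)

def linear_slope(t, c):
    """Flux estimate from a straight-line fit over the whole enclosure."""
    return np.polyfit(t, c, 1)[0]
```

Fitting a straight line over a 45 min enclosure of such a curve recovers well under half of the true initial slope, illustrating why long deployments with linear regression bias fluxes low.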
AIR Tools - A MATLAB package of algebraic iterative reconstruction methods
DEFF Research Database (Denmark)
Hansen, Per Christian; Saxild-Hansen, Maria
2012-01-01
We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented...
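A minimal dense-matrix sketch of the classical ART (Kaczmarz) row-action iteration that packages like AIR Tools implement; the real package adds relaxation-parameter strategies, stopping rules exploiting semi-convergence, and block/SIRT variants.

```python
import numpy as np

def kaczmarz(A, b, n_iters=50, relax=1.0, x0=None):
    """ART/Kaczmarz: sweep the rows, projecting the iterate onto each
    hyperplane a_i . x = b_i in turn (relax=1 is the pure projection)."""
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_iters):
        for i in range(m):
            if row_norms[i] > 0:
                x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x
```

On a consistent system the iterates converge to a solution; on noisy tomographic data one stops early, which is the semi-convergence behaviour the abstract refers to.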
Bonechi, L.; D'Alessandro, R.; Mori, N.; Viliani, L.
2015-02-01
Muon absorption radiography is an imaging technique based on the analysis of the attenuation of the cosmic-ray muon flux after traversing an object under examination. While this technique is now reaching maturity in the field of volcanology for the imaging of the innermost parts of volcanic cones, its applicability to other fields of research has not yet been proved. In this paper we present a study concerning the application of the muon absorption radiography technique to the field of archaeology, and we propose a method for the search for underground cavities and structures hidden a few metres deep in the soil (patent [1]). An original geometric treatment of the reconstructed muon tracks, based on the comparison of the measured flux with a reference simulated flux, and the preliminary results of specific simulations are discussed in detail.
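One toy ingredient of the measured-versus-simulated flux comparison is a per-direction significance of a count excess; a cavity removes absorbing material, so more muons arrive than the reference predicts. This is a simplified stand-in for the paper's geometric track treatment:

```python
import math

def excess_significance(n_measured, n_expected):
    """Poisson z-score of the measured muon count in one viewing
    direction against the simulated reference flux; a sustained
    positive excess across neighbouring directions is the cavity
    signature, while an isolated fluctuation is not."""
    return (n_measured - n_expected) / math.sqrt(n_expected)
```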
Filter-based reconstruction methods for tomography
Pelt, D.M.
2016-01-01
In X-ray tomography, a three-dimensional image of the interior of an object is computed from multiple X-ray images, acquired over a range of angles. Two types of methods are commonly used to compute such an image: analytical methods and iterative methods. Analytical methods are computationally...
Comparison of Force Reconstruction Methods for a Lumped Mass Beam
Directory of Open Access Journals (Sweden)
Vesta I. Bateman
1997-01-01
Full Text Available Two extensions of the force reconstruction method, the sum of weighted accelerations technique (SWAT), are presented in this article. SWAT requires the use of the structure's elastic mode shapes for reconstruction of the applied force. Although based on the same theory, the two new techniques do not rely on mode shapes to reconstruct the applied force and may be applied to structures whose mode shapes are not available. One technique uses the measured force and acceleration responses with the rigid body mode shapes to calculate the scalar weighting vector, so the technique is called SWAT-CAL (SWAT using a calibrated force input). The second technique uses the free-decay time response of the structure with the rigid body mode shapes to calculate the scalar weighting vector and is called SWAT-TEEM (SWAT using time eliminated elastic modes). All three methods are used to reconstruct forces for a simple structure.
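For a lumped-mass structure in rigid-body motion, the SWAT sum reduces to weighting each measured acceleration history by its lumped mass, so the reconstructed net force is simply F(t) = sum_i m_i a_i(t). A sketch under that simplification (the general method computes the weights from mode shapes or calibration data):

```python
import numpy as np

def swat_force(accelerations, weights):
    """Reconstruct the net applied force as a weighted sum of measured
    accelerations. accelerations: (n_sensors, n_samples) array;
    weights: one scalar per sensor (the lumped masses in the
    rigid-body case). Returns the force time history."""
    return np.asarray(weights) @ np.asarray(accelerations)
```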
Choosing the best ancestral character state reconstruction method.
Royer-Carenzi, Manuela; Pontarotti, Pierre; Didier, Gilles
2013-03-01
Despite its intrinsic difficulty, ancestral character state reconstruction is an essential tool for testing evolutionary hypotheses. Two major classes of approaches to this question can be distinguished: parsimony- and likelihood-based approaches. We focus here on the second class of methods, more specifically on approaches based on continuous-time Markov modeling of character evolution. Among them, we consider the most-likely-ancestor reconstruction, the posterior-probability reconstruction, the likelihood-ratio method, and the Bayesian approach. We discuss and compare the above-mentioned methods over several phylogenetic trees, including the performance of the maximum-parsimony method in the comparison. Under the assumption that the character evolves according to a continuous-time Markov process, we compute and compare the expectations of success of each method for a broad range of model parameter values. Moreover, we show how knowledge of the evolution model parameters allows upper bounds on reconstruction performance to be computed, which are provided as references. The results of all these reconstruction methods are quite close to one another, and the expectations of success are not so far from their theoretical upper bounds. But the performance ranking heavily depends on the topology of the studied tree, on the ancestral node that is to be inferred, and on the parameter values. Consequently, we propose a protocol providing, for each parameter value, the best method in terms of expectation of success, with regard to the phylogenetic tree and the ancestral node to infer. Copyright © 2012 Elsevier Inc. All rights reserved.
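The posterior-probability reconstruction can be sketched for the smallest case: a two-leaf tree under a symmetric two-state continuous-time Markov model, evaluated by Felsenstein-style pruning. The model and the parameter values in the test are illustrative, not taken from the paper.

```python
import math

def stay_probability(rate, t):
    """Symmetric 2-state CTMC: P(same state after time t)
    = 1/2 + 1/2 * exp(-2 * rate * t)."""
    return 0.5 + 0.5 * math.exp(-2.0 * rate * t)

def root_posterior(leaf_states, branch_lengths, rate, prior=(0.5, 0.5)):
    """Posterior probability of each root state (0 or 1) for a
    two-leaf tree, by summing path likelihoods for each root state."""
    likes = []
    for s in (0, 1):
        like = prior[s]
        for x, t in zip(leaf_states, branch_lengths):
            p_stay = stay_probability(rate, t)
            like *= p_stay if x == s else 1.0 - p_stay
        likes.append(like)
    total = sum(likes)
    return [l / total for l in likes]
```

The most-likely-ancestor method would report only the argmax of this vector; the posterior-probability and Bayesian approaches keep the whole distribution.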
A multiscale mortar multipoint flux mixed finite element method
Wheeler, Mary Fanett
2012-02-03
In this paper, we develop a multiscale mortar multipoint flux mixed finite element method for second order elliptic problems. The equations in the coarse elements (or subdomains) are discretized on a fine grid scale by a multipoint flux mixed finite element method that reduces to cell-centered finite differences on irregular grids. The subdomain grids do not have to match across the interfaces. Continuity of flux between coarse elements is imposed via a mortar finite element space on a coarse grid scale. With an appropriate choice of polynomial degree of the mortar space, we derive optimal order convergence on the fine scale for both the multiscale pressure and velocity, as well as the coarse scale mortar pressure. Some superconvergence results are also derived. The algebraic system is reduced via a non-overlapping domain decomposition to a coarse scale mortar interface problem that is solved using a multiscale flux basis. Numerical experiments are presented to confirm the theory and illustrate the efficiency and flexibility of the method. © EDP Sciences, SMAI, 2012.
Assessing the accuracy of ancestral protein reconstruction methods.
Directory of Open Access Journals (Sweden)
Paul D Williams
2006-06-01
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
Assessing the accuracy of ancestral protein reconstruction methods.
Williams, Paul D; Pollock, David D; Blackburne, Benjamin P; Goldstein, Richard A
2006-06-23
The phylogenetic inference of ancestral protein sequences is a powerful technique for the study of molecular evolution, but any conclusions drawn from such studies are only as good as the accuracy of the reconstruction method. Every inference method leads to errors in the ancestral protein sequence, resulting in potentially misleading estimates of the ancestral protein's properties. To assess the accuracy of ancestral protein reconstruction methods, we performed computational population evolution simulations featuring near-neutral evolution under purifying selection, speciation, and divergence using an off-lattice protein model where fitness depends on the ability to be stable in a specified target structure. We were thus able to compare the thermodynamic properties of the true ancestral sequences with the properties of "ancestral sequences" inferred by maximum parsimony, maximum likelihood, and Bayesian methods. Surprisingly, we found that methods such as maximum parsimony and maximum likelihood that reconstruct a "best guess" amino acid at each position overestimate thermostability, while a Bayesian method that sometimes chooses less-probable residues from the posterior probability distribution does not. Maximum likelihood and maximum parsimony apparently tend to eliminate variants at a position that are slightly detrimental to structural stability simply because such detrimental variants are less frequent. Other properties of ancestral proteins might be similarly overestimated. This suggests that ancestral reconstruction studies require greater care to come to credible conclusions regarding functional evolution. Inferred functional patterns that mimic reconstruction bias should be reevaluated.
Directory of Open Access Journals (Sweden)
Feng Zhao
2014-10-01
A method for canopy Fluorescence Spectrum Reconstruction (FSR) is proposed in this study, which can be used to retrieve the solar-induced canopy fluorescence spectrum over the whole chlorophyll fluorescence emission region from 640-850 nm. Firstly, the radiance of the solar-induced chlorophyll fluorescence (Fs) at five absorption lines of the solar spectrum was retrieved by a Spectral Fitting Method (SFM). The Singular Value Decomposition (SVD) technique was then used to extract three basis spectra from a training dataset simulated by the model SCOPE (Soil Canopy Observation, Photochemistry and Energy fluxes). Finally, these basis spectra were linearly combined to reconstruct the Fs spectrum, with their coefficients determined by Weighted Linear Least Squares (WLLS) fitting to the five retrieved Fs values. Results for simulated datasets indicate that the FSR method can accurately reconstruct the Fs spectra from hyperspectral measurements acquired by instruments of high Spectral Resolution (SR) and Signal to Noise Ratio (SNR). The FSR method was also applied to an experimental dataset acquired in a diurnal experiment. The diurnal change of the reconstructed Fs spectra shows that the Fs radiance around noon was higher than that in the morning and afternoon, which is consistent with former studies. Finally, the potential and limitations of this method are discussed.
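The basis-extraction and fitting steps can be sketched with synthetic data: an SVD of a training matrix yields basis spectra, and a least-squares fit at five sampled points recovers the full spectrum. Here ordinary least squares stands in for the weighted fit, and the Gaussian "spectra" are stand-ins for SCOPE simulations; the sample indices are arbitrary:

```python
import numpy as np

wl = np.linspace(640, 850, 211)             # wavelength grid (nm)

# Hypothetical training set: two-peak "fluorescence" spectra with varying
# peak amplitudes (a stand-in for a SCOPE-simulated training dataset)
def spectrum(a1, a2):
    return a1 * np.exp(-0.5 * ((wl - 685) / 10.0) ** 2) + \
           a2 * np.exp(-0.5 * ((wl - 740) / 18.0) ** 2)

train = np.array([spectrum(a1, a2)
                  for a1 in (0.5, 1.0, 1.5) for a2 in (0.8, 1.2, 1.6)])

# Basis extraction: top-3 right singular vectors of the training matrix
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:3]                               # shape (3, n_wavelengths)

# "Retrieved Fs" at five absorption-line positions (here: truth sampled
# at five illustrative indices)
truth = spectrum(1.1, 1.3)
idx = [20, 45, 90, 130, 180]
coef, *_ = np.linalg.lstsq(basis[:, idx].T, truth[idx], rcond=None)

recon = coef @ basis                         # reconstructed full spectrum
rel_err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
```

Because the synthetic target lies in the span of the training spectra, the reconstruction is essentially exact; with real data the residual reflects how well the basis captures true Fs shapes.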
Flux-weakening control methods for hybrid excitation synchronous motor
Directory of Open Access Journals (Sweden)
Mingming Huang
2015-09-01
The hybrid excitation synchronous motor (HESM), which aims to combine the advantages of the permanent magnet motor and the wound excitation motor, has the characteristics of low-speed high-torque hill climbing and a wide speed range. Firstly, a new kind of HESM is presented in the paper, and its structure and mathematical model are illustrated. Then, based on space voltage vector control, a novel flux-weakening method for speed adjustment in the high speed region is presented. The unique feature of the proposed control method is that the HESM driving system keeps the q-axis back-EMF component invariable during the flux-weakening operation process. Moreover, a copper loss minimization algorithm is adopted to reduce the copper loss of the HESM in the high speed region. Lastly, the proposed method is validated by simulation and experimental results.
Image reconstruction in computerized tomography using the convolution method
International Nuclear Information System (INIS)
Oliveira Rebelo, A.M. de.
1984-03-01
In the present work an algorithm was derived, using the analytical convolution method (filtered back-projection), for two-dimensional or three-dimensional image reconstruction in computerized tomography applied to non-destructive testing and to medical use. This mathematical model is based on the analytical Fourier transform method for image reconstruction. The model consists of a discretized system formed by an NxN array of cells (pixels). The attenuation in the object under study of a collimated gamma ray beam has been determined for various positions and incidence angles (projections) in terms of the interaction of the beam with the intercepted pixels. The contribution of each pixel to beam attenuation was determined using the weight function W_ij, which was used for simulated tests. Simulated tests using standard objects with attenuation coefficients in the range of 0.2 to 0.7 cm^-1 were carried out using cell arrays of up to 25x25. One application was carried out in the medical area, simulating image reconstruction of an arm phantom with attenuation coefficients in the range of 0.2 to 0.5 cm^-1 using cell arrays of 41x41. The simulated results show that, in objects with a great number of interfaces and great variations of attenuation coefficients at these interfaces, a good reconstruction is obtained when the number of projections equals the reconstruction matrix dimension. Otherwise, a good reconstruction is obtained with fewer projections. (author)
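The filtered back-projection idea the abstract describes (ramp-filter each projection, then smear it back across the image) can be sketched in a few lines of numpy. The snippet uses an idealized point object, a Ram-Lak filter, and nearest-neighbour back-projection; grid size and angle count are illustrative choices, not the author's algorithm:

```python
import numpy as np

n = 65                                   # detector bins (odd, so the centre is a bin)
angles = np.linspace(0, np.pi, 90, endpoint=False)

# Sinogram of a point object at the origin: every projection is a spike
sino = np.zeros((len(angles), n))
sino[:, n // 2] = 1.0

# Ram-Lak (ramp) filter applied per projection in the Fourier domain
freqs = np.fft.fftfreq(n)
filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * np.abs(freqs), axis=1))

# Back-projection onto an n x n grid (nearest-neighbour interpolation)
xs = np.arange(n) - n // 2
X, Y = np.meshgrid(xs, xs)
img = np.zeros((n, n))
for a, p in zip(angles, filtered):
    s = np.round(X * np.cos(a) + Y * np.sin(a)).astype(int) + n // 2
    inside = (s >= 0) & (s < n)
    img[inside] += p[s[inside]]
img *= np.pi / len(angles)

# The reconstruction should peak exactly where the point object sits
peak = np.unravel_index(np.argmax(img), img.shape)
```

A full simulation in the spirit of the abstract would replace the spike sinogram with line integrals of attenuation coefficients weighted by the pixel-intersection function W_ij.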
Matrix-based image reconstruction methods for tomography
International Nuclear Information System (INIS)
Llacer, J.; Meng, J.D.
1984-10-01
Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use the system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
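The key point above, that maximum-likelihood estimation uses the system matrix only through forward and back projections and never inverts it, is captured by the classic MLEM update. The toy system matrix below is random and nonnegative, purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical nonnegative system matrix A (detector bins x image pixels)
A = rng.uniform(0.0, 1.0, size=(40, 16))
x_true = rng.uniform(0.5, 2.0, size=16)
y = A @ x_true                        # noiseless projection data

# MLEM: x <- x * A^T(y / Ax) / A^T 1 ; only forward/back projections,
# no matrix inversion required
x = np.ones(16)
sens = A.T @ np.ones(40)              # sensitivity image A^T 1
for _ in range(2000):
    x *= (A.T @ (y / (A @ x))) / sens

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

The multiplicative form also keeps the estimate nonnegative at every iteration, which is one reason MLEM suits emission tomography.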
Virtanen, I. O. I.; Virtanen, I. I.; Pevtsov, A. A.; Yeates, A.; Mursula, K.
2017-07-01
Aims: We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. Methods: We tested the model by running simulations with different values of meridional circulation and supergranular diffusion parameters, and studied how the flux distribution inside active regions and the initial magnetic field affected the simulation. We compared the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion, and input data. We also compared the simulated magnetic field with observations. Results: We find that there is generally good agreement between simulations and observations. Although the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model the effects of the uncertainties are somewhat minor or temporary, lasting typically one solar cycle.
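The model ingredients named here (meridional circulation, supergranular diffusion, and a decay term) can be sketched as a one-dimensional advection-diffusion-decay step. Everything below is dimensionless and illustrative, not the calibrated solar parameters:

```python
import numpy as np

# Dimensionless 1-D sketch of surface flux transport:
# dB/dt = -d(vB)/dx + D d2B/dx2 - B/tau   (x plays the role of latitude)
n = 181
x = np.linspace(-1.0, 1.0, n)
dx = x[1] - x[0]
v = 0.5 * np.sin(np.pi * x)              # meridional circulation profile (illustrative)
D, tau, dt = 5e-3, 5.0, 1e-3             # diffusion, decay time, time step (illustrative)

B = np.exp(-((x - 0.2) / 0.1) ** 2)      # remnant of a decayed active region, say
total0 = B.sum() * dx                    # net flux before evolution
for _ in range(2000):
    adv = np.gradient(v * B, dx)               # transport by meridional flow
    dif = np.gradient(np.gradient(B, dx), dx)  # supergranular diffusion
    B = B + dt * (-adv + D * dif - B / tau)    # explicit Euler step with decay
```

The decay term is what the abstract credits for damping the effect of uncertainties: here it shrinks the net flux toward zero regardless of the initial condition.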
Comparison of holographic and iterative methods for amplitude object reconstruction
Directory of Open Access Journals (Sweden)
I. A. Shevkunov
2015-01-01
Experimental comparison of four methods for wavefront reconstruction is presented. We considered two iterative and two holographic methods with different mathematical models and reconstruction algorithms. The first two of these methods do not use a reference wave in the recording scheme, which reduces the stability requirements of the setup. A major role in phase reconstruction by such methods is played by a set of spatial intensity distributions recorded as the recording matrix is moved along the optical axis. The obtained data are used successively for wavefront reconstruction in an iterative procedure. In the course of this procedure the wavefront is numerically propagated between the planes; phase information is thus retained in every plane, while the calculated amplitude distributions are replaced by the measured ones in those planes. In the first of the compared methods, a two-dimensional Fresnel transform and iterative calculation in the object plane are used as the mathematical model. In the second approach, an angular spectrum method is used for numerical wavefront propagation, and the iterative calculation is carried out only between closely located planes of data registration. Two digital holography methods, based on the use of a reference wave in the recording scheme and differing from each other in the numerical reconstruction algorithm for the digital holograms, are compared with the first two methods. The comparison showed that the iterative method based on the 2D Fresnel transform gives results comparable with those of the common holographic method with Fourier filtering. The holographic method is shown to be the best of those considered for reconstructing the complex amplitude of the object.
MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.
Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian
2016-01-01
An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared-error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform standard compressed sensing and ℓ1-regularized parallel imaging methods.
Analytic Method to Estimate Particle Acceleration in Flux Ropes
Guidoni, S. E.; Karpen, J. T.; DeVore, C. R.
2015-01-01
The mechanism that accelerates particles to the energies required to produce the observed high-energy emission in solar flares is not well understood. Drake et al. (2006) proposed a kinetic mechanism for accelerating electrons in contracting magnetic islands formed by reconnection. In this model, particles that gyrate around magnetic field lines transit from island to island, increasing their energy by Fermi acceleration in those islands that are contracting. Based on these ideas, we present an analytic model to estimate the energy gain of particles orbiting around field lines inside a flux rope (2.5D magnetic island). We calculate the change in the velocity of the particles as the flux rope evolves in time. The method assumes a simple profile for the magnetic field of the evolving island; it can be applied to any case where flux ropes are formed. In our case, the flux-rope evolution is obtained from our recent high-resolution, compressible 2.5D MHD simulations of breakout eruptive flares. The simulations allow us to resolve in detail the generation and evolution of large-scale flux ropes as a result of sporadic and patchy reconnection in the flare current sheet. Our results show that the initial energy of particles can be increased by 2-5 times in a typical contracting island, before the island reconnects with the underlying arcade. Therefore, particles need to transit only from 3-7 islands to increase their energies by two orders of magnitude. These macroscopic regions, filled with a large number of particles, may explain the large observed rates of energetic electron production in flares. We conclude that this mechanism is a promising candidate for electron acceleration in flares, but further research is needed to extend our results to 3D flare conditions.
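The island-count estimate quoted above follows from compounding the per-island gain multiplicatively; a one-line check reproduces the 3-7 range:

```python
import math

def islands_needed(gain_per_island, target_factor=100.0):
    """Number of contracting islands a particle must cross for a given
    total energy amplification, since Fermi gains compound multiplicatively."""
    return math.ceil(math.log(target_factor) / math.log(gain_per_island))

low = islands_needed(2.0)    # pessimistic per-island gain of 2x
high = islands_needed(5.0)   # optimistic per-island gain of 5x
```

With per-island gains of 2-5, two orders of magnitude in energy indeed require between `high` and `low` island transits, matching the 3-7 quoted in the abstract.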
Comparison of advanced iterative reconstruction methods for SPECT/CT
Energy Technology Data Exchange (ETDEWEB)
Knoll, Peter; Koechle, Gunnar; Mirzaei, Siroos [Wilhelminenspital, Vienna (Austria). Dept. of Nuclear Medicine and PET Center; Kotalova, Daniela; Samal, Martin [Charles Univ. Prague, Prague (Czech Republic); Kuzelka, Ivan; Zadrazil, Ladislav [Hospital Havlickuv Brod (Czech Republic); Minear, Greg [Landesklinikum St. Poelten (Austria). Dept. of Internal Medicine II; Bergmann, Helmar [Medical Univ. of Vienna (Austria). Center for Medical Physics and Biomedical Engineering
2012-07-01
Aim: Corrective image reconstruction methods which produce reconstructed images with improved spatial resolution and decreased noise level have recently become commercially available. In this work, we tested the performance of three new software packages with reconstruction schemes recommended by the manufacturers, using physical phantoms simulating realistic clinical settings. Methods: A specially designed resolution phantom containing three 99mTc line sources and the NEMA NU-2 image quality phantom were acquired on three different SPECT/CT systems (General Electric Infinia, Philips BrightView and Siemens Symbia T6). Measurement of both phantoms was done with the trunk filled with a 99mTc-water solution. The projection data were reconstructed using GE's Evolution for Bone®, Philips Astonish® and Siemens Flash3D® software. The reconstruction parameters employed (number of iterations and subsets, the choice of post-filtering) followed the recommendations of each vendor. These results were compared with reference reconstructions using the ordered subset expectation maximization (OSEM) reconstruction scheme. Results: The best results (smallest value for resolution, highest percent contrast values) for all three packages were found for the scatter-corrected data without applying any post-filtering. The advanced reconstruction methods improve the full width at half maximum (FWHM) of the line sources from 11.4 to 9.5 mm (GE), from 9.1 to 6.4 mm (Philips), and from 12.1 to 8.9 mm (Siemens) if no additional post-filter was applied. The total image quality control index measured for a concentration ratio of 8:1 improves from 147 to 189 for GE, from 179 to 325 for Philips and from 217 to 320 for Siemens using the reference method for comparison. The same trends can be observed for the 4:1 concentration ratio. The use of a post-filter reduces the background variability approximately by a factor of two, but
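The resolution figures above are FWHM values of line-source profiles. A minimal way to extract FWHM from a sampled profile, assuming linear interpolation at the half-maximum crossings (a common convention, not necessarily the one used in the study), is:

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a sampled line profile,
    with linear interpolation at the two half-maximum crossings."""
    p = np.asarray(profile, dtype=float)
    half = p.max() / 2.0
    above = np.where(p >= half)[0]
    i, j = above[0], above[-1]
    # interpolate each crossing between neighbouring samples
    left = i - (p[i] - half) / (p[i] - p[i - 1])
    right = j + (p[j] - half) / (p[j] - p[j + 1])
    return (right - left) * spacing

# Sanity check: a Gaussian with sigma = 4 px has FWHM = 2*sqrt(2 ln 2)*sigma
xs = np.arange(64)
g = np.exp(-0.5 * ((xs - 32) / 4.0) ** 2)
width = fwhm(g)   # should be close to 9.42 px
```

Applied to a reconstructed line-source profile with known pixel spacing, this returns the FWHM in millimetres.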
International Nuclear Information System (INIS)
Mirzaei, M.; Shahverdi, M.
2004-01-01
This paper compares the performance of different inviscid flux approximation methods in the solution of the two-dimensional Euler equations. The methods belong to two different groups of flux splitting methods: flux difference splitting (FDS) methods and the kinetic flux vector splitting (KFVS) method. Here the Roe method and the Osher method, belonging to the flux difference splitting (FDS) group, have been employed and their performance is compared with that of the kinetic flux vector splitting (KFVS) method. The Roe and Osher methods are based on approximate solution of the Riemann problem over computational cell surfaces, while KFVS has a quite different basis: in KFVS, inviscid fluxes are approximated based on kinetic theory and the correspondence between the Boltzmann equation and the Euler equations. To compare the performance of the above-mentioned methods, three different problems have been solved. The first problem is flow over a 10 degree compression-expansion ramp with a Mach number of 2.0, the second is transonic flow with a Mach number of 0.85 over a 4.2% circular bump in a duct, and the third is supersonic flow with a Mach number of 3.0 over a circular blunt slab. (author)
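The flux difference splitting idea is easiest to see in the scalar case. The sketch below applies a Roe-type FDS flux to the 1D Burgers equation rather than the Euler system studied in the paper, so it is an illustration of the principle, not of the paper's solvers:

```python
import numpy as np

def f(u):
    return 0.5 * u * u                   # Burgers flux

def roe_flux(uL, uR):
    """Roe-type FDS flux for scalar Burgers: central flux plus upwind
    dissipation scaled by the Roe-averaged wave speed a = (uL+uR)/2."""
    a = 0.5 * (uL + uR)                  # Roe average of f'(u) = u
    return 0.5 * (f(uL) + f(uR)) - 0.5 * np.abs(a) * (uR - uL)

# One first-order finite-volume update of a right-moving shock profile
n, dx, dt = 100, 1.0 / 100, 0.004        # CFL = dt/dx * max|u| = 0.4
u = np.where(np.arange(n) < n // 2, 1.0, 0.0)
F = roe_flux(u[:-1], u[1:])              # interface fluxes
u[1:-1] -= dt / dx * (F[1:] - F[:-1])    # conservative update
```

For the shock interface (uL=1, uR=0) the Roe flux reproduces the exact upwind value f(uL)=0.5, and the update leaves the profile monotone, which is the behaviour FDS schemes are prized for.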
Performance of different detrending methods in turbulent flux estimation
Donateo, Antonio; Cava, Daniela; Contini, Daniele
2015-04-01
Eddy covariance is the most direct, efficient and reliable method to measure the turbulent flux of a scalar (Baldocchi, 2003). Required conditions for high-quality eddy covariance measurements are, among others, stationarity of the measured data and fully developed turbulence. The simplest method for obtaining the fluctuating components for covariance calculation according to Reynolds averaging rules under ideal stationary conditions is the so-called mean removal method. However, steady state conditions rarely exist in the atmosphere because of the diurnal cycle, changes in meteorological conditions, or sensor drift. All these phenomena produce trends or low-frequency changes superimposed on the turbulent signal. Different methods for trend removal have been proposed in the literature; however, a general agreement on how to separate low-frequency perturbations from turbulence has not yet been reached. The most commonly applied methods are linear detrending (Gash and Culf, 1996) and the high-pass filter, namely the moving average (Moncrieff et al., 2004). Moreover, Vickers and Mahrt (2003) proposed a multiresolution decomposition method in order to select an appropriate time scale for mean removal as a function of atmospheric stability conditions. The present work investigates the performance of these different detrending methods in removing the low-frequency contribution to the turbulent flux calculation, including also a spectral filter based on a Fourier decomposition of the time series. The different methods have been applied to the calculation of the turbulent fluxes for different scalars (temperature, ultrafine particle number concentration, carbon dioxide and water vapour concentration). A comparison of the detrending methods will be performed also for different measurement sites, namely an urban site, a suburban area, and a remote area in Antarctica. Moreover, the performance of the moving average in detrending time series has been analyzed as a function of the
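The difference between mean removal and linear detrending can be demonstrated on synthetic data: a slow linear trend added to both series inflates the covariance under simple mean removal but is taken out by a linear fit. The series lengths, trend slopes and noise levels below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 18000                                  # e.g. a 30-min record at 10 Hz
t = np.linspace(0.0, 1.0, n)

# Synthetic "turbulence": correlated fluctuations of vertical wind w and
# temperature T, plus slow linear trends (sensor drift / diurnal change)
w_turb = rng.normal(0.0, 0.3, n)
T_turb = 0.5 * w_turb + rng.normal(0.0, 0.1, n)
w = w_turb + 0.4 * t
T = T_turb + 2.0 * t

def flux_mean_removal(a, b):
    # covariance after simple Reynolds mean removal
    return np.mean((a - a.mean()) * (b - b.mean()))

def flux_linear_detrend(a, b):
    # covariance after removing a least-squares linear trend from each series
    da = a - np.polyval(np.polyfit(t, a, 1), t)
    db = b - np.polyval(np.polyfit(t, b, 1), t)
    return np.mean(da * db)

true_flux = flux_mean_removal(w_turb, T_turb)   # flux of the trend-free series
biased = flux_mean_removal(w, T)                # trend leaks into the covariance
detrended = flux_linear_detrend(w, T)           # trend removed before covariance
```

Under nonstationary conditions the mean-removal estimate absorbs the covariance of the two trends, while linear detrending recovers the turbulent flux almost exactly, which is the effect the study quantifies for real scalars.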
The equivalent source method as a sparse signal reconstruction
DEFF Research Database (Denmark)
Fernandez Grande, Efren; Xenaki, Angeliki
2015-01-01
This study proposes an acoustic holography method for sound field reconstruction based on a point source model, which uses the Compressed Sensing (CS) framework to provide a sparse solution. Sparsity implies that the sound field can be represented by a minimal number of non-zero terms, point...
A Total Variation-Based Reconstruction Method for Dynamic MRI
Directory of Open Access Journals (Sweden)
Germana Landi
2008-01-01
In recent years, total variation (TV) regularization has become a popular and powerful tool for image restoration and enhancement. In this work, we apply TV minimization to improve the quality of dynamic magnetic resonance images. Dynamic magnetic resonance imaging is an increasingly popular clinical technique used to monitor spatio-temporal changes in tissue structure. Fast data acquisition is necessary in order to capture the dynamic process. Most commonly, the requirement of high temporal resolution is fulfilled by sacrificing spatial resolution. Therefore, the numerical methods have to address the issue of image reconstruction from limited Fourier data. One of the most successful techniques for dynamic imaging applications is the reduced-encoding imaging by generalized-series reconstruction method of Liang and Lauterbur. However, even though this method utilizes a priori data for optimal image reconstruction, the produced dynamic images are degraded by truncation artifacts, most notably Gibbs ringing, due to the low spatial resolution of the data. We use a TV regularization strategy in order to reduce these truncation artifacts in the dynamic images. The resulting TV minimization problem is solved by the fixed point iteration method of Vogel and Oman. The results of test problems with simulated and real data are presented to illustrate the effectiveness of the proposed approach in reducing the truncation artifacts of the reconstructed images.
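The fixed point (lagged diffusivity) iteration of Vogel and Oman can be sketched in one dimension for the denoising form of the TV problem; the smoothing parameter `beta`, the regularization weight and the test signal below are illustrative, and the paper's actual data term involves Fourier samples rather than a plain identity:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, beta=1e-4, iters=30):
    """Lagged-diffusivity fixed point iteration for
    min_u 0.5||u - f||^2 + lam * sum_i sqrt((Du)_i^2 + beta)."""
    n = len(f)
    u = f.copy()
    D = np.diff(np.eye(n), axis=0)                 # forward-difference matrix
    for _ in range(iters):
        w = 1.0 / np.sqrt((D @ u) ** 2 + beta)     # lagged diffusivity weights
        A = np.eye(n) + lam * D.T @ (w[:, None] * D)
        u = np.linalg.solve(A, f)                  # linear solve per iteration
    return u

# Noisy step edge: TV flattens the noise while (mostly) keeping the jump
rng = np.random.default_rng(3)
f = np.where(np.arange(100) < 50, 0.0, 1.0) + rng.normal(0.0, 0.1, 100)
u = tv_denoise_1d(f)
```

Each fixed point step freezes the nonlinear weights and solves a linear system, which is exactly the mechanism that lets TV suppress ringing-like oscillations without blurring edges.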
Reconstructing Program Theories: Methods Available and Problems To Be Solved.
Leeuw, Frans L.
2003-01-01
Discusses methods for reconstructing theories underlying programs and policies, focusing on three approaches: (1) an empirical approach that focuses on interviews, documents, and argumentational analysis; (2) an approach based on strategic assessment, group dynamics, and dialogue; and (3) an approach based on cognitive and organizational…
Directory of Open Access Journals (Sweden)
Akın Ata
2007-12-01
Background: It is a daunting task to identify all the metabolic pathways of brain energy metabolism and develop a dynamic simulation environment that will cover a time scale ranging from seconds to hours. To simplify this task and make it more practicable, we undertook stoichiometric modeling of brain energy metabolism with the major aim of including the main interacting pathways in and between astrocytes and neurons. Model: The constructed model includes central metabolism (glycolysis, pentose phosphate pathway, TCA cycle), lipid metabolism, reactive oxygen species (ROS) detoxification, amino acid metabolism (synthesis and catabolism), the well-known glutamate-glutamine cycle, other coupling reactions between astrocytes and neurons, and neurotransmitter metabolism. This is, to our knowledge, the most comprehensive attempt at stoichiometric modeling of brain metabolism to date in terms of its coverage of a wide range of metabolic pathways. We then attempted to model the basal physiological behaviour and hypoxic behaviour of the brain cells, where astrocytes and neurons are tightly coupled. Results: The reconstructed stoichiometric reaction model included 217 reactions (184 internal, 33 exchange) and 216 metabolites (183 internal, 33 external) distributed in and between astrocytes and neurons. Flux balance analysis (FBA) techniques were applied to the reconstructed model to elucidate the underlying cellular principles of neuron-astrocyte coupling. Simulation of resting conditions under the constraints of maximization of glutamate/glutamine/GABA cycle fluxes between the two cell types, with subsequent minimization of the Euclidean norm of fluxes, resulted in a flux distribution in accordance with literature-based findings. As a further validation of our model, the effect of oxygen deprivation (hypoxia) on fluxes was simulated using an FBA-derivative approach known as minimization of metabolic adjustment (MOMA). The results show the power of the
Directory of Open Access Journals (Sweden)
C. Möstl
2009-05-01
We analyze a magnetic signature associated with the leading edge of a bursty bulk flow observed by Cluster at −19 RE downtail on 22 August 2001. A distinct rotation of the magnetic field was seen by all four spacecraft. This event was previously examined by Slavin et al. (2003b) using both linear force-free modeling and a curlometer technique. Extending this work, we apply here single- and multi-spacecraft Grad-Shafranov (GS) reconstruction techniques to the Cluster observations and find good evidence that the structure encountered is indeed a magnetic flux rope and contains helical magnetic field lines. We find that the flux rope has a diameter of approximately 1 RE, an axial field of 26.4 nT, a velocity of ≈650 km/s, a total axial current of 0.16 MA and magnetic fluxes of order 10^5 Wb. The field line twist is estimated as half a turn per RE. The invariant axis is inclined at 40° to the ecliptic plane and 10° to the GSM equatorial plane. The flux rope has a force-free core and non-force-free boundaries. When we compare and contrast our results with those obtained from minimum variance, single-spacecraft force-free fitting and curlometer techniques, we find in general fair agreement, but also clear differences such as a higher inclination of the axis to the ecliptic. We further conclude that single-spacecraft methods have limitations which should be kept in mind when applied to THEMIS observations, and that non-force-free GS and curlometer techniques are to be preferred in their analysis. Some properties we derived for this earthward-moving structure are similar to those inferred by Lui et al. (2007), using a different approach, for a tailward-moving flux rope observed during the expansion phase of the same substorm.
Parallel MR image reconstruction using augmented Lagrangian methods.
Ramani, Sathish; Fessler, Jeffrey A
2011-03-01
Magnetic resonance image (MRI) reconstruction using SENSitivity Encoding (SENSE) requires regularization to suppress noise and aliasing effects. Edge-preserving and sparsity-based regularization criteria can improve image quality, but they demand computation-intensive nonlinear optimization. In this paper, we present novel methods for regularized MRI reconstruction from undersampled sensitivity encoded data--SENSE-reconstruction--using the augmented Lagrangian (AL) framework for solving large-scale constrained optimization problems. We first formulate regularized SENSE-reconstruction as an unconstrained optimization task and then convert it to a set of (equivalent) constrained problems using variable splitting. We then attack these constrained versions in an AL framework using an alternating minimization method, leading to algorithms that can be implemented easily. The proposed methods are applicable to a general class of regularizers that includes popular edge-preserving (e.g., total-variation) and sparsity-promoting (e.g., l(1)-norm of wavelet coefficients) criteria and combinations thereof. Numerical experiments with synthetic and in vivo human data illustrate that the proposed AL algorithms converge faster than both general-purpose optimization algorithms such as nonlinear conjugate gradient (NCG) and state-of-the-art MFISTA.
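The variable-splitting plus augmented-Lagrangian strategy described above can be illustrated on a generic ℓ1-regularized least-squares problem solved by alternating minimization (ADMM). This is a simplified stand-in with a random forward matrix, not the authors' SENSE formulation:

```python
import numpy as np

def admm_lasso(A, y, lam=0.05, rho=1.0, iters=200):
    """Variable splitting x = z with an augmented Lagrangian, minimised by
    alternating updates (ADMM) for 0.5||Ax - y||^2 + lam ||z||_1."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Q = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cached x-update operator
    for _ in range(iters):
        x = Q @ (A.T @ y + rho * (z - u))           # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage
        u += x - z                                  # multiplier (dual) update
    return z

rng = np.random.default_rng(4)
A = rng.normal(size=(60, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = (1.5, -2.0, 1.0)               # sparse ground truth
y = A @ x_true
x_hat = admm_lasso(A, y)
```

Each subproblem is easy (one linear solve, one soft-threshold), which is exactly the appeal of the AL framework the abstract describes for regularizers like total variation or wavelet ℓ1 norms.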
Energy Technology Data Exchange (ETDEWEB)
Fraysse, F., E-mail: francois.fraysse@rs2n.eu [RS2N, St. Zacharie (France); E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain); Redondo, C.; Rubio, G.; Valero, E. [E. T. S. de Ingeniería Aeronáutica y del Espacio, Universidad Politécnica de Madrid, Madrid (Spain)
2016-12-01
This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a benchmark of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.
International Nuclear Information System (INIS)
Fraysse, F.; Redondo, C.; Rubio, G.; Valero, E.
2016-01-01
This article is devoted to the numerical discretisation of the hyperbolic two-phase flow model of Baer and Nunziato. Special attention is paid to the discretisation of intercell flux functions in the framework of Finite Volume and Discontinuous Galerkin approaches, where care has to be taken to efficiently approximate the non-conservative products inherent to the model equations. Various upwind approximate Riemann solvers have been tested on a benchmark of discontinuous test cases. New discretisation schemes are proposed in a Discontinuous Galerkin framework following the criterion of Abgrall and the path-conservative formalism. A stabilisation technique based on artificial viscosity is applied to the high-order Discontinuous Galerkin method and compared against classical TVD-MUSCL Finite Volume flux reconstruction.
DEFF Research Database (Denmark)
Ravn, Ib
FLUX denotes a flowing or streaming, i.e. dynamics. Understanding life as process and development, rather than as things and mechanics, yields a different picture of the good life than the one suggested by the familiar Western mechanicism. Dynamically understood, the good life involves the best possible ... channelling of the flux or energy that streams through us and makes itself known in our daily activities. Should our thoughts, actions, work, social interactions and political life be organised according to tight, fixed sets of rules, with no deviation? Or should they, on the contrary, proceed entirely unhindered by rules and constraints ...
Quartet-based methods to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Xu, Yifei; Wan, Xiu-Feng
2014-02-20
Phylogenetic networks are employed to visualize evolutionary relationships among a group of nucleotide sequences, genes or species when reticulate events like hybridization, recombination, reassortment and horizontal gene transfer are believed to be involved. In comparison to traditional distance-based methods, quartet-based methods consider more information in the reconstruction process and thus have the potential to be more accurate. We introduce QuartetSuite, which includes a set of new quartet-based methods, namely QuartetS, QuartetA, and QuartetM, to reconstruct phylogenetic networks from nucleotide sequences. We tested their performance and compared them with other popular methods on two simulated nucleotide sequence data sets: one generated from a tree topology and the other from a complicated evolutionary history containing three reticulate events. We further validated these methods on two real data sets: a bacterial data set consisting of seven concatenated genes of 36 bacterial species and an influenza data set related to the recently emerging H7N9 low-pathogenic avian influenza viruses in China. QuartetS, QuartetA, and QuartetM have the potential to accurately reconstruct evolutionary scenarios from simple branching trees to complicated networks containing many reticulate events. These methods could provide insight into complicated biological evolutionary processes such as bacterial taxonomy and the reassortment of influenza viruses.
Sparse reconstruction methods in x-ray CT
Abascal, J. F. P. J.; Abella, M.; Mory, C.; Ducros, N.; de Molina, C.; Marinetto, E.; Peyrin, F.; Desco, M.
2017-10-01
Recent progress in X-ray CT is contributing to the advent of new clinical applications. A common challenge for these applications is the need for new image reconstruction methods that meet tight constraints in radiation dose and geometrical limitations in the acquisition. Recent developments in sparse reconstruction methods provide a framework that permits obtaining good-quality images from drastically reduced signal-to-noise ratio and limited-view data. In this work, we present our contributions in this field. For dynamic studies (3D+time), we explored the possibility of extending the exploitation of sparsity to the temporal dimension: a temporal operator based on modelling motion between consecutive temporal points in gated CT and based on experimental time curves in contrast-enhanced CT. In these cases, we also exploited sparsity by using a prior image estimated from the complete acquired dataset and assessed the effect on image quality of using different sparsity operators. For limited-view CT, we evaluated total-variation regularization in different simulated limited-data scenarios from a real small-animal acquisition with a cone-beam microCT scanner, considering different angular spans and numbers of projections. For other emerging imaging modalities, such as spectral CT, the image reconstruction problem is nonlinear, so we explored new efficient approaches to exploit sparsity for multi-energy CT data. In conclusion, we review our approaches to challenging CT data reconstruction problems and show results that support the feasibility of new clinical applications.
Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method
Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben
2010-05-01
Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high-quality, quantitative measurements of methane fluxes in these different environments have not been available, owing both to the lack of robust field-deployable instrumentation and to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well mixed). In this regime, the methane emission rate is given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
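The far-field arithmetic described in this abstract is a single ratio; a minimal sketch in Python (the function name and example numbers are illustrative, not from the presentation):

```python
def tracer_dilution_flux(c_methane, c_tracer, tracer_rate):
    """Estimate a methane emission rate with the tracer dilution method.

    c_methane, c_tracer: background-corrected far-field concentrations
    (same units); tracer_rate: known tracer release rate (e.g. kg/h).
    Valid only in the well-mixed far field, where both plumes appear
    to come from the same point source.
    """
    return (c_methane / c_tracer) * tracer_rate


# Illustrative numbers: methane reads 4x the tracer concentration and
# the tracer is released at 1.5 kg/h, giving roughly 6 kg/h of methane.
flux = tracer_dilution_flux(2.4, 0.6, 1.5)
```

The method's appeal is visible in the code: atmospheric dispersion never appears, because it cancels in the concentration ratio.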
A Comparison of Bulk Aerodynamic Methods for Calculating Air-Sea Flux
National Research Council Canada - National Science Library
Eleuterio, Daniel
1998-01-01
The Louis et al. (1982) bulk aerodynamic method for air-sea flux estimates is currently used in mesoscale models such as COAMPS, while the TOGA-COARE method is a state of the art flux parameterization involving recent...
Total variation superiorized conjugate gradient method for image reconstruction
Zibetti, Marcelo V. W.; Lin, Chuan; Herman, Gabor T.
2018-03-01
The conjugate gradient (CG) method is commonly used for the relatively rapid solution of least squares problems. In image reconstruction, the problem can be ill-posed and also contaminated by noise; due to this, approaches such as regularization should be utilized. Total variation (TV) is a useful regularization penalty, frequently utilized in image reconstruction for generating images with sharp edges. When a non-quadratic norm is selected for regularization, as is the case for TV, then it is no longer possible to use CG. Non-linear CG is an alternative, but it does not share the efficiency that CG shows with least squares, and methods such as fast iterative shrinkage-thresholding algorithms (FISTA) are preferred for problems with the TV norm. A different approach to including prior information is superiorization. In this paper it is shown that the conjugate gradient method can be superiorized. Five different CG variants are proposed, including preconditioned CG. The CG methods superiorized by the total variation norm are presented and their performance in image reconstruction is demonstrated. It is illustrated that some of the proposed variants of the superiorized CG method can produce reconstructions of superior quality to those produced by FISTA and in less computational time, due to the speed of the original CG for least squares problems. In the Appendix we examine the behavior of one of the superiorized CG methods (we call it S-CG); one of its input parameters is a positive number ε. It is proved that, for any given ε that is greater than the half-squared-residual for the least squares solution, S-CG terminates in a finite number of steps with an output for which the half-squared-residual is less than or equal to ε. Importantly, it is also the case that the output will have a lower value of TV than what would be provided by unsuperiorized CG for the same value ε of the half-squared residual.
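For context, the unsuperiorized CG iteration for least squares that the paper starts from can be sketched as follows, applied here to the normal equations A^T A x = A^T b (all names are illustrative, and the superiorization steps of the paper are omitted):

```python
import numpy as np


def cg_least_squares(A, b, iters=50):
    """Plain conjugate gradient on the normal equations A^T A x = A^T b."""
    x = np.zeros(A.shape[1])
    r = A.T @ b - A.T @ (A @ x)   # residual of the normal equations
    p = r.copy()                  # first search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)     # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < 1e-20:        # converged
            break
        p = r + (rs_new / rs) * p  # conjugate direction update
        rs = rs_new
    return x
```

Superiorization would interleave small TV-reducing perturbations of x between these iterations; the sketch shows only the underlying least squares engine whose speed the paper exploits.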
Barker, Brandon E; Sadagopan, Narayanan; Wang, Yiping; Smallbone, Kieran; Myers, Christopher R; Xi, Hongwei; Locasale, Jason W; Gu, Zhenglong
2015-12-01
A major theme in constraint-based modeling is unifying experimental data, such as biochemical information about the reactions that can occur in a system or the composition and localization of enzyme complexes, with high-throughput data including expression data, metabolomics, or DNA sequencing. The desired result is to increase predictive capability and improve our understanding of metabolism. The approach typically employed when only gene (or protein) intensities are available is the creation of tissue-specific models, which reduces the available reactions in an organism model, and does not provide an objective function for the estimation of fluxes. We develop a method, flux assignment with LAD (least absolute deviation) convex objectives and normalization (FALCON), that employs metabolic network reconstructions along with expression data to estimate fluxes. In order to use such a method, accurate measures of enzyme complex abundance are needed, so we first present an algorithm that addresses quantification of complex abundance. Our extensions to prior techniques include the capability to work with large models and significantly improved run-time performance even for smaller models, an improved analysis of enzyme complex formation, the ability to handle large enzyme complex rules that may incorporate multiple isoforms, and either maintained or significantly improved correlation with experimentally measured fluxes. FALCON has been implemented in MATLAB and ATS, and can be downloaded from: https://github.com/bbarker/FALCON. ATS is not required to compile the software, as intermediate C source code is available. FALCON requires use of the COBRA Toolbox, also implemented in MATLAB. Copyright © 2015 Elsevier Ltd. All rights reserved.
Monte Carlo methods for flux expansion solutions of transport problems
International Nuclear Information System (INIS)
Spanier, J.
1999-01-01
Adaptive Monte Carlo methods, based on the use of either correlated sampling or importance sampling, to obtain global solutions to certain transport problems have recently been described. The resulting learning algorithms are capable of achieving geometric convergence when applied to the estimation of a finite number of coefficients in a flux expansion representation of the global solution. However, because of the nonphysical nature of the random walk simulations needed to perform importance sampling, conventional transport estimators and source sampling techniques require modification to be used successfully in conjunction with such flux expansion methods. It is shown how these problems can be overcome. First, the traditional path length estimators in wide use in particle transport simulations are generalized to include rather general detector functions (which, in this application, are the individual basis functions chosen for the flux expansion). Second, it is shown how to sample from the signed probabilities that arise as source density functions in these applications, without destroying the zero variance property needed to ensure geometric convergence to zero error
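The core idea of estimating a finite set of expansion coefficients by scoring random samples against basis functions can be illustrated outside any transport setting; a hedged sketch using a Legendre basis on [-1, 1] (the basis choice and all names are mine, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)


def mc_legendre_coeffs(f, kmax, n_samples=200_000):
    """Monte Carlo estimate of Legendre expansion coefficients of f.

    a_k = (2k+1)/2 * integral_{-1}^{1} f(x) P_k(x) dx, estimated from
    uniform samples: the uniform density is 1/2, so the integral is
    2 * E[f(X) P_k(X)].
    """
    x = rng.uniform(-1.0, 1.0, n_samples)
    fx = f(x)
    coeffs = []
    for k in range(kmax + 1):
        pk = np.polynomial.legendre.Legendre.basis(k)(x)
        coeffs.append((2 * k + 1) * np.mean(fx * pk))
    return coeffs


# f(x) = x is exactly P_1, so a_1 should be near 1 and a_0, a_2 near 0.
a = mc_legendre_coeffs(lambda x: x, 2)
```

The paper's contribution lies in doing this scoring along nonphysical importance-sampled random walks with signed source densities, which this toy example does not attempt to reproduce.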
Image reconstruction methods for the PBX-M pinhole camera
International Nuclear Information System (INIS)
Holland, A.; Powell, E.T.; Fonck, R.J.
1990-03-01
This paper describes two methods which have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least squares fit to the data. This has the advantage of being fast and small, and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape which can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster, for an overdetermined system, than the usual Lagrange multiplier approach to finding the maximum entropy solution. 13 refs., 24 figs
Image reconstruction methods for the PBX-M pinhole camera
International Nuclear Information System (INIS)
Holland, A.; Powell, E.; Fonck, R.J.
1991-01-01
We describe two methods that have been used to reconstruct the soft x-ray emission profile of the PBX-M tokamak from the projected images recorded by the PBX-M pinhole camera [Proc. Soc. Photo-Opt. Instrum. Eng. 691, 111 (1986)]. Both methods must accurately represent the shape of the reconstructed profile while also providing a degree of immunity to noise in the data. The first method is a simple least-squares fit to the data. This has the advantage of being fast and small and thus easily implemented on the PDP-11 computer used to control the video digitizer for the pinhole camera. The second method involves the application of a maximum entropy algorithm to an overdetermined system. This has the advantage of allowing the use of a default profile. This profile contains additional knowledge about the plasma shape that can be obtained from equilibrium fits to the external magnetic measurements. Additionally the reconstruction is guaranteed positive, and the fit to the data can be relaxed by specifying both the amount and distribution of noise in the image. The algorithm described has the advantage of being considerably faster for an overdetermined system than the usual Lagrange multiplier approach to finding the maximum entropy solution [J. Opt. Soc. Am. 62, 511 (1972); Rev. Sci. Instrum. 57, 1557 (1986)].
Analysis of Interpolation Methods in the Image Reconstruction Tasks
Directory of Open Access Journals (Sweden)
V. T. Nguyen
2017-01-01
The article studies interpolation methods used for image reconstruction. These methods were also implemented and tested on several images to estimate their effectiveness. The considered interpolation methods are the nearest-neighbor method, the linear method, the cubic B-spline method, the cubic convolution method, and the Lanczos method. For each method, the interpolation kernel (interpolation function) and the frequency response (Fourier transform) are presented. From the experiments, the following conclusions were drawn: the nearest-neighbor algorithm is very simple and often used, but the reconstructed images contain artifacts (blurring and haloing); the linear method is quick and easy to perform and reduces some of the visual distortion caused by changing image size, but despite these advantages it produces a large amount of interpolation artifacts, such as blurring and haloing; the cubic B-spline method provides smoothness of reconstructed images and eliminates the apparent ramp phenomenon, but the interpolation acts as a low-pass filter that suppresses high-frequency components, leading to fuzzy edges and false artificial traces; the cubic convolution method introduces less interpolation distortion, but its algorithm is more complicated and requires more execution time than the nearest-neighbor and linear methods; the Lanczos method achieves a high-definition image, but despite this advantage it requires more execution time than the other interpolation methods. The results obtained not only compare the considered interpolation methods in various respects but also enable users to select an appropriate interpolation method for their applications. It is advisable to study further the existing methods and develop new ones using a number of methods
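Two of the kernels compared in the article are easy to write down explicitly; a sketch (function names are mine) of the linear tent kernel and the Lanczos windowed-sinc kernel:

```python
import numpy as np


def kernel_linear(x):
    """Tent kernel of linear interpolation: 1 - |x| on [-1, 1], else 0."""
    x = np.abs(np.asarray(x, dtype=float))
    return np.where(x < 1.0, 1.0 - x, 0.0)


def kernel_lanczos(x, a=3):
    """Lanczos kernel: sinc(x) * sinc(x/a) on [-a, a], else 0.

    np.sinc is the normalized sinc, sin(pi x) / (pi x), which is the
    convention used for this kernel.
    """
    x = np.asarray(x, dtype=float)
    out = np.sinc(x) * np.sinc(x / a)
    return np.where(np.abs(x) < a, out, 0.0)
```

Both kernels equal 1 at x = 0 and vanish at the other integer offsets, so interpolation with them reproduces the original samples exactly; the wider Lanczos support is what buys sharper results at the cost of more computation, as the article's comparison notes.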
Adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography.
Xie, Yao; Guo, Bin; Li, Jian; Ku, Geng; Wang, Lihong V
2008-12-01
In this paper, we present new adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography (TAT), and study their performance for breast cancer detection. TAT is an emerging medical imaging technique that combines the merits of high contrast due to electromagnetic or laser stimulation and high resolution offered by thermal acoustic imaging. The current image reconstruction methods used for TAT, such as the delay-and-sum (DAS) approach, are data-independent and suffer from low resolution, high sidelobe levels, and poor interference rejection capabilities. The data-adaptive ARMOR methods can have much better resolution and much better interference rejection capabilities than their data-independent counterparts. By allowing certain uncertainties, ARMOR can be used to mitigate the amplitude and phase distortion problems encountered in TAT. The excellent performance of ARMOR is demonstrated using both simulated and experimentally measured data.
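The data-independent DAS baseline that ARMOR is compared against simply sums each sensor trace at the acoustic time of flight from the image point to that sensor; a minimal, idealised sketch (array layout, names, and the sound-speed default are illustrative assumptions, not the paper's implementation):

```python
import numpy as np


def delay_and_sum(signals, fs, sensor_pos, pixels, c=1500.0):
    """Basic delay-and-sum reconstruction.

    signals: (n_sensors, n_samples) recorded pressure traces
    fs: sampling rate in Hz; c: assumed sound speed in m/s
    sensor_pos: (n_sensors, 2) and pixels: (n_pixels, 2), in metres
    """
    n_sensors, n_samples = signals.shape
    image = np.zeros(len(pixels))
    for i, p in enumerate(pixels):
        for s in range(n_sensors):
            # time of flight from image point to sensor, as a sample index
            delay = np.linalg.norm(p - sensor_pos[s]) / c
            idx = int(round(delay * fs))
            if idx < n_samples:
                image[i] += signals[s, idx]
    return image
```

Because the delays are fixed by geometry alone, DAS cannot adapt to the data; the paper's adaptive methods replace this fixed summation with data-dependent weighting.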
A two-way regularization method for MEG source reconstruction
Tian, Tian Siva
2012-09-01
The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
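The two penalties described in the abstract can be made concrete as a single penalized data-fit criterion; a hedged sketch of the objective only (the exact penalty forms and all names are illustrative assumptions, and the two-stage solver is omitted):

```python
import numpy as np


def twr_objective(Y, G, S, lam_sparse, lam_smooth):
    """Two-way regularized data-fit criterion for MEG source estimates.

    Y: (n_sensors, n_times) MEG measurements
    G: (n_sensors, n_sources) forward (lead field) matrix
    S: (n_sources, n_times) candidate source time courses
    """
    fit = np.sum((Y - G @ S) ** 2)
    # focality: l2 norm per source location, summed (group-lasso style),
    # drives whole rows of S to zero
    sparsity = np.sum(np.sqrt(np.sum(S ** 2, axis=1)))
    # smoothness: penalize second differences of each time course
    roughness = np.sum(np.diff(S, n=2, axis=1) ** 2)
    return fit + lam_sparse * sparsity + lam_smooth * roughness
```

Minimizing such a criterion over S trades off fidelity to the sensors against few active locations (the row-wise penalty) and smooth time courses (the roughness penalty), which is the structure the abstract describes.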
Optical Sensors and Methods for Underwater 3D Reconstruction
Massot-Campos, Miquel; Oliver-Codina, Gabriel
2015-01-01
This paper presents a survey on optical sensors and methods for 3D reconstruction in underwater environments. The techniques to obtain range data have been listed and explained, together with the different sensor hardware that makes them possible. The literature has been reviewed, and a classification has been proposed for the existing solutions. New developments, commercial solutions and previous reviews in this topic have also been gathered and considered. PMID:26694389
An alternative method for the measurement of neutron flux
Indian Academy of Sciences (India)
Here, the neutron flux inferred from the neutron count rate obtained with R-12 SDD shows excellent agreement with the flux inferred from the neutron dose rate in a non-dissipative medium. Keywords. Neutron dose; neutron flux; superheated droplet detector; bubble nucleation. PACS Nos 29.40.Rg; 29.40.–n; 29.25.Dz.
Xia, Yidong
The objective of this work is to develop a parallel, implicit reconstructed discontinuous Galerkin (RDG) method using a Taylor basis for the solution of the compressible Navier-Stokes equations on 3D hybrid grids. This third-order accurate RDG method is based on a hierarchical weighted essentially non-oscillatory reconstruction scheme, termed HWENO(P1P2) to indicate that a quadratic polynomial solution is obtained from the underlying linear polynomial DG solution via a hierarchical WENO reconstruction. The HWENO(P1P2) is designed not only to enhance the accuracy of the underlying DG(P1) method but also to ensure non-linear stability of the RDG method. In this reconstruction scheme, a quadratic polynomial (P2) solution is first reconstructed using a least-squares approach from the underlying linear (P1) discontinuous Galerkin solution. The final quadratic solution is then obtained using a Hermite WENO reconstruction, which is necessary to ensure the linear stability of the RDG method on 3D unstructured grids. The first derivatives of the quadratic polynomial solution are then reconstructed using a WENO reconstruction in order to eliminate spurious oscillations in the vicinity of strong discontinuities, thus ensuring the non-linear stability of the RDG method. The parallelization in the RDG method is based on a message passing interface (MPI) programming paradigm, where the METIS library is used for the partitioning of a mesh into subdomain meshes of approximately the same size. Both multi-stage explicit Runge-Kutta and simple implicit backward Euler methods are implemented for time advancement in the RDG method. In the implicit method, three approaches: analytical differentiation, divided differencing (DD), and automatic differentiation (AD) are developed and implemented to obtain the resulting flux Jacobian matrices. The automatic differentiation is a set of techniques based on the mechanical application of the chain rule to obtain derivatives of a function given as
Methods of total spectral radiant flux realization at VNIIOFI
Ivashin, Evgeniy; Lalek, Jan; Rybczyński, Andrzej; Ogarev, Sergey; Khlevnoy, Boris; Dobroserdov, Dmitry; Sapritsky, Victor
2018-02-01
VNIIOFI carries out work on independent methods for realization of the total spectral radiant flux (TSRF) of incoherent optical radiation sources - reference high-temperature blackbodies (BB), halogen lamps, and LEDs with quasi-Lambertian spatial distribution of radiance. The paper describes three schemes for measuring facilities using photometers, spectroradiometers and a computer-controlled high-class goniometer, and presents different approaches for TSRF realization at the VNIIOFI national radiometric standard on the basis of high-temperature BB and LED sources and a gonio-spectroradiometer. These approaches are planned to be compared, and the use of fixed-point cells (in particular, based on the high-temperature δ(MoC)-C metal-carbon eutectic with a phase-transition temperature of 2583 °C, corresponding to the metrological optical "source-A") as an option instead of the BB is considered in order to enhance calibration accuracy.
Track and vertex reconstruction: From classical to adaptive methods
International Nuclear Information System (INIS)
Strandlie, Are; Fruehwirth, Rudolf
2010-01-01
This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.
Reconstruction of Banknote Fragments Based on Keypoint Matching Method.
Gwo, Chih-Ying; Wei, Chia-Hung; Li, Yue; Chiu, Nan-Hsing
2015-07-01
Banknotes may be shredded by a scrap machine, ripped up by hand, or damaged in accidents. This study proposes an image registration method for the reconstruction of multiple sheets of banknotes. The proposed method first constructs different scale spaces to identify keypoints in the underlying banknote fragments. Next, the features of those keypoints are extracted to represent the local patterns around them. Then, similarity is computed to find the keypoint pairs between each fragment and the reference banknote, from which the fragment's coordinates are determined and its orientation amended. Finally, an assembly strategy is proposed to piece multiple sheets of banknote fragments together. Experimental results show that the proposed method yields, on average, a deviation of 0.12457 ± 0.12810° for each fragment, while the SIFT method deviates by 1.16893 ± 2.35254° on average. The proposed method not only reconstructs the banknotes but also decreases the computing cost. Furthermore, it can estimate the orientation of the banknote fragments to be assembled relatively precisely. © 2015 American Academy of Forensic Sciences.
Reconstructing Holocene geomagnetic field variation: new methods, models and implications
Nilsson, Andreas; Holme, Richard; Korte, Monika; Suttie, Neil; Hill, Mimi
2014-07-01
Reconstructions of the Holocene geomagnetic field and how it varies on millennial timescales are important for understanding processes in the core but may also be used to study long-term solar-terrestrial relationships and as relative dating tools for geological and archaeological archives. Here, we present a new family of spherical harmonic geomagnetic field models spanning the past 9000 yr based on magnetic field directions and intensities stored in archaeological artefacts, igneous rocks and sediment records. A new modelling strategy introduces alternative data treatments with a focus on extracting more information from sedimentary data. To reduce the influence of a few individual records, all sedimentary data are resampled in 50-yr bins, which also means that more weight is given to archaeomagnetic data during the inversion. The sedimentary declination data are treated as relative values and adjusted iteratively based on prior information. Finally, an alternative way of treating the sediment data chronologies has enabled us both to assess the likely range of age uncertainties, often up to and possibly exceeding 500 yr, and to adjust the timescale of each record based on comparisons with predictions from a preliminary model. As a result of the data adjustments, power has been shifted from the quadrupole and octupole to higher degrees compared with previous Holocene geomagnetic field models. We find evidence for dominantly westward drift of northern high-latitude, high-intensity flux patches at the core-mantle boundary for the last 4000 yr. The new models also show intermittent occurrence of reversed flux at the edge of or inside the inner-core tangent cylinder, possibly originating from the equator.
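The 50-yr resampling of sedimentary records described above amounts to fixed-width bin averaging of each record's time series; a minimal sketch (names and the bin convention are illustrative, not the paper's code):

```python
import numpy as np


def bin_record(ages, values, width=50.0):
    """Average a record in fixed-width age bins (e.g. 50 yr).

    Returns a list of (bin_center_age, mean_value) pairs for the
    non-empty bins, reducing the weight of densely sampled records.
    """
    ages = np.asarray(ages, dtype=float)
    values = np.asarray(values, dtype=float)
    start = np.floor(ages.min() / width) * width
    edges = np.arange(start, ages.max() + width, width)
    idx = np.digitize(ages, edges) - 1  # bin index for each sample
    return [(edges[k] + width / 2.0, values[idx == k].mean())
            for k in np.unique(idx)]
```

Averaging within bins is what shifts relative weight towards the (unbinned) archaeomagnetic data during the inversion, as the abstract notes.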
Reconstruction and analysis of hybrid composite shells using meshless methods
Bernardo, G. M. S.; Loja, M. A. R.
2017-06-01
Research into viable models for predicting the behaviour of structures, which may in some cases possess complex geometries, is of growing importance in different scientific areas, ranging from civil and mechanical engineering to architecture and biomedical devices. In these cases, the research effort to find an efficient approach to fit laser-scanning point clouds to the desired surface has been increasing, enabling the modelling of as-built/as-is structures and components' features. However, combining the task of surface reconstruction with the implementation of a structural analysis model is not trivial. Although there are works addressing those phases separately, there is still an effective need for approaches able to interconnect them efficiently. Achieving a representative geometric model that can subsequently be submitted to a structural analysis on a similarly based platform is therefore a fundamental step towards an effective, expeditious processing workflow. In the present work, we present an integrated methodology based on the use of meshless approaches to reconstruct shells described by point clouds and to subsequently predict their static behaviour. These methods are highly appropriate for dealing with unstructured point clouds, as they have no specific spatial or geometric requirements, depending only on the distance between the points. Details of the formulation, and a set of illustrative examples focusing on the reconstruction of cylindrical and double-curvature shells and their further analysis, are presented.
Extension of the Heat Flux Method to Elevated Pressures
Energy Technology Data Exchange (ETDEWEB)
Slikker, W.J.
2008-12-15
Laminar premixed flames are used in many residential and industrial applications such as surface and Bunsen burners in boilers and central heating systems. A key parameter of a premixed flame is the laminar burning velocity because, practically, it determines the rate at which a combustible mixture is consumed and, fundamentally, it contains the basic information regarding the diffusivity and reactivity of the flame. The laminar burning velocity can also be used to estimate the turbulent burning velocity and is therefore an important parameter in designing combustion systems that work under high temperatures and pressures. Much research has been done to determine the laminar burning velocities of premixed hydrocarbon-air flames at both atmospheric and elevated pressures. At atmospheric pressure the burning velocities reported from various measurement methods agree very well, but at high pressures the results show considerable scatter. The methods used for measuring the burning velocity at higher pressures need stretch corrections, so it is of interest to use a method that requires no stretch correction and to compare the results. The heat flux method makes use of a flat flame and therefore needs no stretch corrections. This method has successfully been used at (sub)atmospheric pressure and in this work it is extended to elevated pressure for the first time. An experimental setup for pressures up to 3 bar was used for measurements of premixed methane-air flames with equivalence ratios ranging from 0.8 to 1.4 at both 2 and 3 bar. The measured burning velocities are higher than most reported data and numerical calculations based on kinetic mechanisms, but very good agreement with the most recent (2007) experimental data is obtained. Using experimental data from low-pressure experiments obtained with the same setup, a correlation between burning velocity and pressure for stoichiometric methane-air flames is found for pressures ranging
Reverse optimization reconstruction method in non-null aspheric interferometry
Zhang, Lei; Liu, Dong; Shi, Tu; Yang, Yongying; Chong, Shiyao; Shen, Yibing; Bai, Jian
2015-10-01
The aspheric non-null test achieves more flexible measurements than the null test. However, precise calibration of the retrace error has always been difficult. A reverse optimization reconstruction (ROR) method is proposed for the retrace error calibration as well as the aspheric figure error extraction, based on system modeling. An optimization function is set up with the system model, in which the wavefront data from the experiment are inserted as the optimization objective while the figure error under test in the model is the optimization variable. The optimization is executed by reverse ray tracing in the system model until the test wavefront in the model is consistent with the experimental one. At this point, the surface figure error in the model is considered to be consistent with the one in the experiment. With Zernike fitting, the aspheric surface figure error is then reconstructed in the form of Zernike polynomials. Numerical simulations verifying the high accuracy of the ROR method are presented with error considerations. A set of experiments is carried out to demonstrate the validity and repeatability of the ROR method. Compared with the results of a Zygo interferometer (null test), the measurement error of the ROR method achieves better than 1/10λ.
Digital module for neutron flux measurement by Campbell method
International Nuclear Information System (INIS)
Baratte, G.
1987-02-01
The study reported here concerns a wide-range measurement channel for reactor control instrumentation, but it may also be useful for specific measurements requiring the Campbell method. A wide-range measurement channel processes the signal issued from a single fission chamber, making it possible to ensure the control of nuclear reactors in three different operating modes: pulse processing, fluctuation and current. The study described in this note comprises three parts. The analog wide-range neutron measurement channel is presented in the first chapter; the fluctuation mode is studied in depth, and the test results and the inherent limitations of analog processing are summarized. A theoretical study of neutron flux measurement by numerical calculation of the variance of the fluctuation signal is given in the second chapter. The digital module is described in the third chapter and the experimental results are analysed. The validity of the digital method is proved by means of a practical realisation. The performance obtained with the digital fluctuation test model is comparable to that of the analog fluctuation channel, which can be used for the control of lower fission rates. The digital module may also be used for any fluctuation measurement where a very short response time and a broad spectral band of analysis are not strictly necessary [fr]
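The fluctuation (Campbelling) mode exploits the fact that, by Campbell's theorem, the variance of the chamber signal grows linearly with the fission rate. A toy illustration of that variance-rate proportionality, not the actual channel described in the note:

```python
import numpy as np

rng = np.random.default_rng(1)

def campbell_variance(rate, q=1.0, dt=1e-6, n=200_000):
    """Variance of a fission-chamber-like shot-noise signal.

    Each sample holds a Poisson number of pulses of charge q; the variance
    of the resulting signal is proportional to the event rate, which is what
    a fluctuation-mode (Campbelling) channel measures. Illustrative model
    only -- a real channel band-limits and squares the analog signal.
    """
    counts = rng.poisson(rate * dt, size=n)
    signal = q * counts
    return signal.var()

# The variance ratio tracks the ratio of fission rates.
v1 = campbell_variance(rate=2.0e6)
v2 = campbell_variance(rate=8.0e6)
print(f"variance ratio ~ {v2 / v1:.2f} (rate ratio = 4)")
```

Because the variance, unlike individual pulse counting, remains measurable at high rates, this mode bridges the gap between the pulse and current regimes.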
Several flux-calculation (FC) schemes are available for determining soil-to-atmosphere emissions of nitrous oxide (N2O) and other trace gases using data from non-steady-state flux chambers. Recently developed methods claim to provide more accuracy in estimating the true pre-deployment flux (f0) comp...
A refinement of the analytic function expansion nodal method with interface flux moments
International Nuclear Information System (INIS)
Woo, S. W.; Cho, N. Z.; Noh, J. M.
1999-01-01
The AFEN method has been refined by increasing the number of flux expansion terms, combining the original basis functions with transverse-direction linear functions in such a way that the added terms still satisfy the diffusion equation. The additional constraints required are provided by interface flux moments, defined as weighted-average fluxes on the interfaces. The refined AFEN method was tested against the OECD-L336 benchmark problem. The results show that the method improves the accuracy of the predicted flux distribution and that it can replace the corner-point fluxes with the interface moments without loss of accuracy. Excluding the corner-point flux increases the flexibility of implementing this method in existing codes that have no corner-point flux scheme, and may make it better suited to the non-linear scheme based on two-node problems.
Directory of Open Access Journals (Sweden)
C. Möstl
2009-05-01
We analyze a magnetic signature associated with the leading edge of a bursty bulk flow observed by Cluster at −19 R_{E} downtail on 22 August 2001. A distinct rotation of the magnetic field was seen by all four spacecraft. This event was previously examined by Slavin et al. (2003b) using both linear force-free modeling and a curlometer technique. Extending this work, we apply single- and multi-spacecraft Grad-Shafranov (GS) reconstruction techniques to the Cluster observations and find good evidence that the structure encountered is indeed a magnetic flux rope containing helical magnetic field lines. We find that the flux rope has a diameter of approximately 1 R_{E}, an axial field of 26.4 nT, a velocity of ≈650 km/s, a total axial current of 0.16 MA and magnetic fluxes of order 10^{5} Wb. The field line twist is estimated as half a turn per R_{E}. The invariant axis is inclined at 40° to the ecliptic plane and 10° to the GSM equatorial plane. The flux rope has a force-free core and non-force-free boundaries. When we compare and contrast our results with those obtained from minimum variance, single-spacecraft force-free fitting and curlometer techniques, we find in general fair agreement, but also clear differences such as a higher inclination of the axis to the ecliptic. We further conclude that single-spacecraft methods have limitations which should be kept in mind when applied to THEMIS observations, and that non-force-free GS and curlometer techniques are to be preferred in their analysis. Some properties we derived for this earthward-moving structure are similar to those inferred by Lui et al. (2007), using a different approach, for a tailward-moving flux rope observed during the expansion phase of the same substorm.
Extension of local front reconstruction method with controlled coalescence model
Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.
2018-02-01
The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulating such flows with standard fixed grid methods, owing to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collisions of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, LFRM predicts the droplet collisions better than other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method), especially at high velocity. When the film drainage model is invoked, the method shows a good qualitative match with experiments, but quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that LFRM coupled with film drainage models predicts the collision dynamics much better than the traditional methods.
Virtanen, Iiro; Virtanen, Ilpo; Pevtsov, Alexei; Yeates, Anthony; Mursula, Kalevi
2017-04-01
We aim to use the surface flux transport model to simulate the long-term evolution of the photospheric magnetic field from historical observations. In this work we study the accuracy of the model and its sensitivity to uncertainties in its main parameters and the input data. We test the model by running simulations with different values of the meridional circulation and supergranular diffusion parameters, and study how the flux distribution inside active regions and the initial magnetic field affect the simulation. We compare the results to assess how sensitive the simulation is to uncertainties in meridional circulation speed, supergranular diffusion and input data. We also compare the simulated magnetic field with observations. We find that there is generally good agreement between simulations and observations. While the model is not capable of replicating fine details of the magnetic field, the long-term evolution of the polar field is very similar in simulations and observations. Simulations typically yield a smoother evolution of polar fields than observations, which often include artificial variations due to observational limitations. We also find that the simulated field is fairly insensitive to uncertainties in model parameters or the input data. Due to the decay term included in the model, the effects of the uncertainties are rather minor or temporary, typically lasting one solar cycle.
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
Lee, Joon-Jae; Shin, Donghak; Yoo, Hoon
2013-09-01
In this paper, we propose a method for improving the image quality of partially occluded objects using two different computational integral imaging reconstruction (CIIR) methods. In the proposed method, we first remove the occlusion from the recorded elemental images using two different plane images generated by the two CIIR methods. We introduce a CIIR method based on a round-mapping model to be used in combination with the previous method. The difference between the two plane images reconstructed at a specific distance enables us to estimate the position of the occlusion in the elemental images. The occlusion-removed elemental images are then used to reconstruct improved 3D images. We carry out experiments and present the results to show the usefulness of the proposed method.
Li, ZhaoYu; Chen, Tao; Yan, GuangQing
2016-10-01
A new method for determining the central axial orientation of a two-dimensional coherent magnetic flux rope (MFR) via multipoint analysis of the magnetic-field structure is developed. The method is devised under the following geometrical assumptions: (1) on its cross section, the structure is left-right symmetric; (2) the projected structure velocity is perpendicular to the line of symmetry. The two conditions are naturally satisfied for cylindrical MFRs and are expected to hold for MFRs that are flattened within current sheets. The model test demonstrates that, for determining the axial orientation of such structures, the new method is more efficient and reliable than traditional techniques such as minimum-variance analysis of the magnetic field, Grad-Shafranov (GS) reconstruction, and the more recent method based on the cylindrically symmetric assumption. A total of five flux transfer events observed by Cluster are studied using the proposed approach, and the results indicate that the observed structures, regardless of their actual physical properties, fit the assumed geometrical model well. For these events, the inferred axial orientations are all in excellent agreement with those obtained using the multi-GS reconstruction technique.
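Minimum-variance analysis (MVA), one of the traditional techniques mentioned above, reduces to an eigen-decomposition of the magnetic covariance matrix. A textbook sketch with a synthetic field time series (not the symmetry-based method of the paper):

```python
import numpy as np

def minimum_variance_axes(B):
    """Minimum-variance analysis of an (N, 3) magnetic field time series.

    Eigenvectors of the magnetic covariance matrix give the maximum-,
    intermediate- and minimum-variance directions (eigenvalues ascending).
    """
    cov = np.cov(B.T)                       # 3x3 magnetic variance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # ascending eigenvalues
    return eigvals, eigvecs                 # columns: min, intermediate, max

# Synthetic crossing of a structure invariant along z: the field rotates in
# the x-y plane while Bz stays constant, so z is the minimum-variance
# direction.
t = np.linspace(-1, 1, 400)
B = np.column_stack([np.tanh(3 * t), 1 / np.cosh(3 * t), np.full_like(t, 0.5)])
eigvals, eigvecs = minimum_variance_axes(B)
n_min = eigvecs[:, 0]                       # minimum-variance direction
print("minimum-variance direction:", np.round(np.abs(n_min), 3))
```

For real MFR crossings the eigenvalue spectrum is rarely this clean, which is one reason the paper argues for exploiting the geometrical symmetries instead.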
A new method for depth profiling reconstruction in confocal microscopy
Esposito, Rosario; Scherillo, Giuseppe; Mensitieri, Giuseppe
2018-05-01
Confocal microscopy is commonly used to reconstruct depth profiles of chemical species in multicomponent systems and to image nuclear and cellular details in human tissues via image intensity measurements of optical sections. However, the performance of this technique is reduced by inherent effects related to wave diffraction phenomena, refractive index mismatch and finite beam spot size. All these effects distort the optical wave, so that the captured image corresponds to a small volume around the desired illuminated focal point within the specimen rather than to the focal point itself. The size of this small volume increases with depth, causing a further loss of resolution and distortion of the profile. Recently, we proposed a theoretical model that accounts for the above wave distortion and allows a correct reconstruction of the depth profiles for homogeneous samples. In this paper, this theoretical approach is adapted to describe the profiles measured from non-homogeneous distributions of emitters inside the investigated samples. The intensity image is built by summing the intensities collected from each of the emitter planes belonging to the illuminated volume, weighted by the emitter concentration. The true distribution of the emitter concentration is recovered by a new approach that implements this theoretical model in a numerical algorithm based on the Maximum Entropy Method. Comparisons with experimental data and numerical simulations show that this new approach is able to recover the real unknown concentration distribution from experimental profiles with an accuracy better than 3%.
Data processing and image reconstruction methods for pixel detectors
International Nuclear Information System (INIS)
Jakubek, Jan
2007-01-01
Semiconductor single-particle-counting pixel detectors offer many advantages for radiation imaging: high detection efficiency, energy discrimination, noiseless digital integration (counting), high frame rate and a virtually unlimited dynamic range. All these properties make it possible to achieve high-quality images. Examples of transmission images and of 3D tomographic reconstruction using X-rays and slow neutrons are presented, demonstrating effects that can degrade image quality. A number of obstacles can limit detector performance if they are not handled. The pixel detector is in fact an array of individual detectors (pixels), each of which has its own efficiency, energy calibration and noise. The common effort is to make all these parameters uniform across pixels; however, ideal uniformity can never be reached. Moreover, the signal in one pixel often affects neighboring pixels for various reasons (charge sharing, crosstalk, etc.). All such effects have to be taken into account during data processing to avoid false data interpretation. The main intention of this contribution is to summarize techniques of data processing and image correction that eliminate the residual drawbacks of pixel detectors. It is shown how to extend these methods to handle further physical effects such as beam hardening and edge enhancement by deflection. In addition, more advanced methods of data processing, such as tomographic 3D reconstruction, are discussed. All methods are demonstrated on real experiments from biology and material science, performed mostly with the Medipix2 pixel device. A brief look at the future of pixel detectors and their applications, including spectroscopy and particle tracking, is also given.
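The most basic of the per-pixel corrections discussed above is flat-field normalization: dividing a raw frame by an open-beam frame cancels the pixel-to-pixel efficiency variations. A minimal sketch with an invented gain map and object:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-pixel efficiencies (gains) of a 64x64 counting detector,
# and a simple object with 50% transmission in a central square.
efficiency = rng.uniform(0.7, 1.3, size=(64, 64))
scene = np.ones((64, 64))
scene[20:40, 20:40] = 0.5

flat = 1000.0 * efficiency          # open-beam (no object) frame
raw = 1000.0 * scene * efficiency   # frame with the object in the beam

# Flat-field correction: the unknown per-pixel gains cancel in the ratio,
# leaving the object transmission (noise-free in this idealized model).
transmission = raw / flat
print("estimated transmission in object region:",
      round(float(transmission[30, 30]), 3))
```

Real data add Poisson counting noise and dead or noisy pixels, so practical pipelines combine this ratio with masking and per-pixel calibration.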
Nagler, Pamela L.; Glenn, Edward P.; Morino, Kiyomi; Neale, Christopher M.U.; Cosh, Michael H.
2010-01-01
Riparian evapotranspiration (ET) was measured on a salt cedar (Tamarix spp.)-dominated river terrace on the Lower Colorado River from 2007 to 2009 using tissue-heat-balance sap flux sensors at six sites representing very dense, medium dense, and sparse stands of plants. Salt cedar ET varied markedly across sites, and sap flux sensors showed that plants were subject to various degrees of stress, detected as mid-day depression of transpiration and stomatal conductance. Sap flux results were scaled from the leaf level of measurement to the stand level by measuring plant-specific leaf area index and fractional ground cover at each site. Results were compared to Bowen ratio moisture tower data available for three of the sites. Sap flux sensors and flux tower results ranked the sites the same and had similar estimates of ET. A regression equation, relating measured ET of salt cedar and other riparian plants and crops on the Lower Colorado River to the Enhanced Vegetation Index from the MODIS sensor on the Terra satellite and reference crop ET measured at meteorological stations, was able to predict actual ET with an accuracy or uncertainty of about 20%, despite between-site differences for salt cedar. Peak summer salt cedar ET averaged about 6 mm d^{-1} across sites and methods of measurement.
Features of the method of large-scale paleolandscape reconstructions
Nizovtsev, Vyacheslav; Erman, Natalia; Graves, Irina
2017-04-01
The method of paleolandscape reconstruction was tested in a key area of the basin of the Central Dubna, located at the junction of the Taldom and Sergiev Posad districts of the Moscow region. A series of maps was created showing paleoreconstructions of the original (indigenous) living environment of the initial settlers during the main periods of the Holocene and the features of human interaction with landscapes at the early stages of economic development of the territory (in the early and middle Holocene). The sequence of these works is as follows. 1. Comprehensive analysis of topographic maps of different scales and of aerial and satellite images, archival materials of geological and hydrological surveys and peat deposit prospecting, archaeological evidence on ancient settlements, palynological and osteological analyses, and integrated landscape-archaeological studies. 2. Mapping of the factual material and analysis of the spatial distribution of archaeological sites. 3. Large-scale field landscape mapping (sample areas) and compilation of maps of the modern landscape structure; on this basis, the edaphic properties of the main types of natural boundaries were analyzed and their resource base was determined. 4. Reconstruction of the lake-river system during the main periods of the Holocene; the boundaries of the restored paleolakes were determined from the thickness and spatial extent of sapropel (decay ooze) deposits. 5. Paleolandscape reconstructions for the main periods of the Holocene, performed on the basis of the landscape-edaphic method; in reconstructing the original, indigenous flora we relied on palynological studies conducted in the study area or in similar landscape conditions. 6. The result was a retrospective analysis and periodization of the settlement process, economic development and the formation of the first anthropogenically transformed landscape complexes. The reconstruction of the dynamics of the
International Nuclear Information System (INIS)
Gardarein, J.L.; Corre, Y.; Reichle, R.; Rigollet, F.; Le Niliot, Ch.
2006-01-01
In this work, a deconvolution of the temperatures measured with thermocouples fitted inside the plasma-facing components of a controlled fusion machine is performed. A 2D pulse response is used which is obtained by the thermal quadrupole method. The shape and intensity of the plasma flux deposited at the surface of the component is calculated and some experimental results are presented. (J.S.)
Energy reconstruction methods in the IceCube neutrino telescope
Aartsen, M. G.; Abbasi, R.; Ackermann, M.; Adams, J.; Aguilar, J. A.; Ahlers, M.; Altmann, D.; Arguelles, C.; Auffenberg, J.; Bai, X.; Baker, M.; Barwick, S. W.; Baum, V.; Bay, R.; Beatty, J. J.; Becker Tjus, J.; Becker, K.-H.; BenZvi, S.; Berghaus, P.; Berley, D.; Bernardini, E.; Bernhard, A.; Besson, D. Z.; Binder, G.; Bindig, D.; Bissok, M.; Blaufuss, E.; Blumenthal, J.; Boersma, D. J.; Bohm, C.; Bose, D.; Böser, S.; Botner, O.; Brayeur, L.; Bretz, H.-P.; Brown, A. M.; Bruijn, R.; Casey, J.; Casier, M.; Chirkin, D.; Christov, A.; Christy, B.; Clark, K.; Classen, L.; Clevermann, F.; Coenders, S.; Cohen, S.; Cowen, D. F.; Cruz Silva, A. H.; Danninger, M.; Daughhetee, J.; Davis, J. C.; Day, M.; De Clercq, C.; De Ridder, S.; Desiati, P.; de Vries, K. D.; de With, M.; DeYoung, T.; Díaz-Vélez, J. C.; Dunkman, M.; Eagan, R.; Eberhardt, B.; Eichmann, B.; Eisch, J.; Euler, S.; Evenson, P. A.; Fadiran, O.; Fazely, A. R.; Fedynitch, A.; Feintzeig, J.; Feusels, T.; Filimonov, K.; Finley, C.; Fischer-Wasels, T.; Flis, S.; Franckowiak, A.; Frantzen, K.; Fuchs, T.; Gaisser, T. K.; Gallagher, J.; Gerhardt, L.; Gladstone, L.; Glüsenkamp, T.; Goldschmidt, A.; Golup, G.; Gonzalez, J. G.; Goodman, J. A.; Góra, D.; Grandmont, D. T.; Grant, D.; Gretskov, P.; Groh, J. C.; Groß, A.; Ha, C.; Haj Ismail, A.; Hallen, P.; Hallgren, A.; Halzen, F.; Hanson, K.; Hebecker, D.; Heereman, D.; Heinen, D.; Helbing, K.; Hellauer, R.; Hickford, S.; Hill, G. C.; Hoffman, K. D.; Hoffmann, R.; Homeier, A.; Hoshina, K.; Huang, F.; Huelsnitz, W.; Hulth, P. O.; Hultqvist, K.; Hussain, S.; Ishihara, A.; Jackson, S.; Jacobi, E.; Jacobsen, J.; Jagielski, K.; Japaridze, G. S.; Jero, K.; Jlelati, O.; Kaminsky, B.; Kappes, A.; Karg, T.; Karle, A.; Kauer, M.; Kelley, J. L.; Kiryluk, J.; Kläs, J.; Klein, S. R.; Köhne, J.-H.; Kohnen, G.; Kolanoski, H.; Köpke, L.; Kopper, C.; Kopper, S.; Koskinen, D. 
J.; Kowalski, M.; Krasberg, M.; Kriesten, A.; Krings, K.; Kroll, G.; Kunnen, J.; Kurahashi, N.; Kuwabara, T.; Labare, M.; Landsman, H.; Larson, M. J.; Lesiak-Bzdak, M.; Leuermann, M.; Leute, J.; Lünemann, J.; Macías, O.; Madsen, J.; Maggi, G.; Maruyama, R.; Mase, K.; Matis, H. S.; McNally, F.; Meagher, K.; Merck, M.; Meures, T.; Miarecki, S.; Middell, E.; Milke, N.; Miller, J.; Mohrmann, L.; Montaruli, T.; Morse, R.; Nahnhauer, R.; Naumann, U.; Niederhausen, H.; Nowicki, S. C.; Nygren, D. R.; Obertacke, A.; Odrowski, S.; Olivas, A.; Omairat, A.; O'Murchadha, A.; Paul, L.; Pepper, J. A.; Pérez de los Heros, C.; Pfendner, C.; Pieloth, D.; Pinat, E.; Posselt, J.; Price, P. B.; Przybylski, G. T.; Quinnan, M.; Rädel, L.; Rameez, M.; Rawlins, K.; Redl, P.; Reimann, R.; Resconi, E.; Rhode, W.; Ribordy, M.; Richman, M.; Riedel, B.; Robertson, S.; Rodrigues, J. P.; Rott, C.; Ruhe, T.; Ruzybayev, B.; Ryckbosch, D.; Saba, S. M.; Sander, H.-G.; Santander, M.; Sarkar, S.; Schatto, K.; Scheriau, F.; Schmidt, T.; Schmitz, M.; Schoenen, S.; Schöneberg, S.; Schönwald, A.; Schukraft, A.; Schulte, L.; Schulz, O.; Seckel, D.; Sestayo, Y.; Seunarine, S.; Shanidze, R.; Sheremata, C.; Smith, M. W. E.; Soldin, D.; Spiczak, G. M.; Spiering, C.; Stamatikos, M.; Stanev, T.; Stanisha, N. A.; Stasik, A.; Stezelberger, T.; Stokstad, R. G.; Stößl, A.; Strahler, E. A.; Ström, R.; Strotjohann, N. L.; Sullivan, G. W.; Taavola, H.; Taboada, I.; Tamburro, A.; Tepe, A.; Ter-Antonyan, S.; Te{š}ić, G.; Tilav, S.; Toale, P. A.; Tobin, M. N.; Toscano, S.; Tselengidou, M.; Unger, E.; Usner, M.; Vallecorsa, S.; van Eijndhoven, N.; Van Overloop, A.; van Santen, J.; Vehring, M.; Voge, M.; Vraeghe, M.; Walck, C.; Waldenmaier, T.; Wallraff, M.; Weaver, Ch; Wellons, M.; Wendt, C.; Westerhoff, S.; Whelan, B.; Whitehorn, N.; Wiebe, K.; Wiebusch, C. H.; Williams, D. R.; Wissing, H.; Wolf, M.; Wood, T. R.; Woschnagg, K.; Xu, D. L.; Xu, X. W.; Yanez, J. 
P.; Yodh, G.; Yoshida, S.; Zarzhitsky, P.; Ziemann, J.; Zierke, S.; Zoll, M.
2014-03-01
Accurate measurement of neutrino energies is essential to many of the scientific goals of large-volume neutrino telescopes. The fundamental observable in such detectors is the Cherenkov light produced by the transit through a medium of charged particles created in neutrino interactions. The amount of light emitted is proportional to the deposited energy, which is approximately equal to the neutrino energy for νe and νμ charged-current interactions and can be used to set a lower bound on neutrino energies and to measure neutrino spectra statistically in other channels. Here we describe methods and performance of reconstructing charged-particle energies and topologies from the observed Cherenkov light yield, including techniques to measure the energies of uncontained muon tracks, achieving average uncertainties in electromagnetic-equivalent deposited energy of ~ 15% above 10 TeV.
MO-DE-209-02: Tomosynthesis Reconstruction Methods
Energy Technology Data Exchange (ETDEWEB)
Mainprize, J. [Sunnybrook Health Sciences Centre, Toronto, ON (Canada)
2016-06-15
Digital Breast Tomosynthesis (DBT) is rapidly replacing mammography as the standard of care in breast cancer screening and diagnosis. DBT is a form of computed tomography, in which a limited set of projection images are acquired over a small angular range and reconstructed into tomographic data. The angular range varies from 15° to 50° and the number of projections varies between 9 and 25 projections, as determined by the equipment manufacturer. It is equally valid to treat DBT as the digital analog of classical tomography – that is, linear tomography. In fact, the name “tomosynthesis” stands for “synthetic tomography.” DBT shares many common features with classical tomography, including the radiographic appearance, dose, and image quality considerations. As such, both the science and practical physics of DBT systems is a hybrid between computed tomography and classical tomographic methods. In this lecture, we will explore the continuum from radiography to computed tomography to illustrate the characteristics of DBT. This lecture will consist of four presentations that will provide a complete overview of DBT, including a review of the fundamentals of DBT acquisition, a discussion of DBT reconstruction methods, an overview of dosimetry for DBT systems, and summary of the underlying image theory of DBT thereby relating image quality and dose. Learning Objectives: To understand the fundamental principles behind tomosynthesis image acquisition. To understand the fundamentals of tomosynthesis image reconstruction. To learn the determinants of image quality and dose in DBT, including measurement techniques. To learn the image theory underlying tomosynthesis, and the relationship between dose and image quality. ADM is a consultant to, and holds stock in, Real Time Tomography, LLC. ADM receives research support from Hologic Inc., Analogic Inc., and Barco NV.; ADM is a member of the Scientific Advisory Board for Gamma Medica Inc.; A. Maidment, Research Support
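The kinship of DBT with classical linear tomography can be illustrated by the simplest reconstruction, shift-and-add: each projection of a feature at height z is shifted in proportion to tan(θ), so shifting the projections back and averaging brings the plane at z into focus while blurring other planes. The geometry below (parallel rays, 1D detector, invented values) is a deliberate simplification of a real DBT system:

```python
import numpy as np

def shift_and_add(projections, angles_deg, z, pixel_pitch=1.0):
    """Reconstruct the plane at height z by back-shifting and averaging."""
    recon = np.zeros_like(projections[0], dtype=float)
    for proj, ang in zip(projections, angles_deg):
        shift = int(round(z * np.tan(np.radians(ang)) / pixel_pitch))
        recon += np.roll(proj, -shift)
    return recon / len(projections)

# A point object at detector position 40 and height z = 10: each projection
# sees it laterally displaced by z * tan(theta).
angles = [-20, -10, 0, 10, 20]
projections = []
for ang in angles:
    p = np.zeros(101)
    p[40 + int(round(10 * np.tan(np.radians(ang))))] = 1.0
    projections.append(p)

plane = shift_and_add(projections, angles, z=10)
print("in-focus peak position:", int(np.argmax(plane)))
```

Commercial DBT systems use filtered backprojection or iterative reconstruction instead, but the plane-selection principle is the same.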
Chevalier, M.; Cheddadi, R.; Chase, B. M.
2014-11-01
Several methods currently exist to quantitatively reconstruct palaeoclimatic variables from fossil botanical data. Of these, probability density function (PDF)-based methods have proven valuable, as they can be applied to a wide range of plant assemblages. Most commonly applied to fossil pollen data, their performance can, however, be limited by the taxonomic resolution of the pollen data, as many species may belong to a given pollen type. Consequently, the climate information associated with the different species cannot always be precisely identified, resulting in less accurate reconstructions. This becomes particularly problematic in regions of high biodiversity. In this paper, we propose a novel PDF-based method that takes into account the different climatic requirements of each species constituting the broader pollen type. PDFs are fitted in two successive steps: parametric PDFs are first fitted for each species, and these individual species PDFs are then combined into a single broader PDF representing the pollen type as a unit. A climate value for the pollen assemblage is estimated from the likelihood function obtained by multiplying the pollen-type PDFs, each weighted according to its pollen percentage. To test its performance, we have applied the method to southern Africa as a regional case study and reconstructed a suite of climatic variables (e.g. winter and summer temperature and precipitation, mean annual aridity, rainfall seasonality). The reconstructions are shown to be accurate for both temperature and precipitation, with predictable exceptions in areas experiencing conditions at the extremes of the regional climatic spectra. Importantly, the accuracy of the reconstructed values is independent of the vegetation type where the method is applied and of the number of species used. The method used in this study is publicly available in a software package entitled CREST (Climate REconstruction SofTware) and will provide the
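The two-step PDF scheme described above can be sketched as follows. This is a hedged illustration in the spirit of the method, not the CREST code itself; the species names, optima, tolerances and pollen percentages are all invented:

```python
import numpy as np
from scipy.stats import norm

temperature = np.linspace(0, 30, 601)   # candidate mean temperatures, deg C

# Step 1: a parametric (normal) PDF per species along the climate gradient.
species_pdfs = {
    "sp_a": norm.pdf(temperature, loc=12, scale=2),
    "sp_b": norm.pdf(temperature, loc=16, scale=3),
    "sp_c": norm.pdf(temperature, loc=22, scale=2),
}

# Step 2: one pollen type groups several species; its PDF is a combination
# (equally weighted here) of the member-species PDFs.
type_ab_pdf = np.mean([species_pdfs["sp_a"], species_pdfs["sp_b"]], axis=0)
type_c_pdf = species_pdfs["sp_c"]

# Assemblage likelihood: multiply the pollen-type PDFs, each weighted by its
# pollen percentage; the climate estimate is the likelihood maximum.
weights = {"type_ab": 0.7, "type_c": 0.3}
log_like = (weights["type_ab"] * np.log(type_ab_pdf + 1e-300)
            + weights["type_c"] * np.log(type_c_pdf + 1e-300))
estimate = temperature[np.argmax(log_like)]
print(f"reconstructed temperature ~ {estimate:.1f} deg C")
```

The estimate necessarily falls between the climatic optima of the contributing taxa, which is why splitting a pollen type into its member-species PDFs sharpens the reconstruction.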
Analysis of the neutron flux in an annular pulsed reactor by using finite volume method
Energy Technology Data Exchange (ETDEWEB)
Silva, Mário A.B. da; Narain, Rajendra; Bezerra, Jair de L., E-mail: mabs500@gmail.com, E-mail: narain@ufpe.br, E-mail: jairbezerra@gmail.com [Universidade Federal de Pernambuco (UFPE), Recife, PE (Brazil). Centro de Tecnologia e Geociências. Departamento de Energia Nuclear
2017-07-01
The production of very intense neutron sources is important for basic nuclear physics, material testing and isotope production. Nuclear reactors have been used as sources of intense neutron fluxes, although the achievable levels are limited by the inability to remove fission heat. Periodic pulsed reactors provide very intense fluxes by means of a modulator rotating near a subcritical core. A concept for the production of very intense neutron fluxes that combines features of periodic pulsed reactors and steady-state reactors was proposed by Narain (1997). This concept, known as the Very Intense Continuous High Flux Pulsed Reactor (VICHFPR), was previously analyzed using the diffusion equation with moving boundary conditions and the Finite Difference Method with the Crank-Nicolson formalism. This research analyzes the flux distribution in the VICHFPR using the Finite Volume Method and compares its results with those obtained by the previous computational method. (author)
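A finite-volume discretization of the one-group diffusion equation can be sketched in one dimension as follows; the slab geometry and parameter values are illustrative only and are not taken from the VICHFPR study:

```python
import numpy as np

# One-group, 1D neutron diffusion in a slab, finite-volume discretization of
# -D phi'' + Sig_a phi = S with zero-flux boundaries. Illustrative values.
D, sig_a, S = 1.0, 0.1, 1.0        # cm, 1/cm, n/(cm^3 s)
L_slab, n = 20.0, 200              # slab width (cm), number of cells
dx = L_slab / n

A = np.zeros((n, n))
b = np.full(n, S * dx)             # volume-integrated source per cell
for i in range(n):
    A[i, i] = sig_a * dx
    for j in (i - 1, i + 1):       # internal faces: -D dphi/dx flux balance
        if 0 <= j < n:
            A[i, i] += D / dx
            A[i, j] -= D / dx
# zero-flux boundaries: half-cell distance to the face gives conductance 2D/dx
A[0, 0] += 2 * D / dx
A[-1, -1] += 2 * D / dx

phi = np.linalg.solve(A, b)

# Analytic solution for comparison:
# phi = (S/Sig_a) * (1 - cosh(x/Ld) / cosh(a/Ld)), Ld = sqrt(D/Sig_a)
x = (np.arange(n) + 0.5) * dx - L_slab / 2
Ld = np.sqrt(D / sig_a)
phi_exact = S / sig_a * (1 - np.cosh(x / Ld) / np.cosh(L_slab / 2 / Ld))
rel_err = abs(phi[n // 2] - phi_exact[n // 2]) / phi_exact[n // 2]
print("relative error at midplane:", rel_err)
```

The finite-volume form conserves neutrons cell by cell by construction, which is the usual argument for preferring it over finite differences in reactor flux calculations.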
Quantitative comparison of in situ soil CO2 flux measurement methods
Jennifer D. Knoepp; James M. Vose
2002-01-01
Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...
Efficient parsimony-based methods for phylogenetic network reconstruction.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2007-01-15
Phylogenies--the evolutionary histories of groups of organisms--play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (rbcL gene in bacteria) and obtain very promising results.
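For context, the parsimony score on a fixed tree can be computed with Fitch's classic small-parsimony algorithm; the toy tree and character states below are invented for illustration. Scoring a network, as the abstract notes, is the NP-hard generalisation of this tree case:

```python
# Fitch's small-parsimony algorithm for one character on a fixed binary
# tree: the minimum number of state changes needed to explain the leaf
# states.  Tree topology and states are hypothetical toy data.
def fitch(tree, leaf_states):
    """tree: nested 2-tuples with string leaves; returns (state set, cost)."""
    if isinstance(tree, str):                    # leaf node
        return {leaf_states[tree]}, 0
    lset, lcost = fitch(tree[0], leaf_states)
    rset, rcost = fitch(tree[1], leaf_states)
    inter = lset & rset
    if inter:                                    # children agree: no new change
        return inter, lcost + rcost
    return lset | rset, lcost + rcost + 1        # union: one extra change

states = {"a": "A", "b": "A", "c": "G", "d": "G"}
_, score = fitch((("a", "b"), ("c", "d")), states)
print(score)  # prints 1
```

Summing this score over all sites gives the parsimony length of the tree; network scoring must minimise over the trees the network displays, which is where the hardness arises.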
Revisiting a model-independent dark energy reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)
2012-09-15
In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow one to regard them on the same quality basis as SNeIa. We find there is a considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the current redshift range for which one can make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest in the sense that it provides only a picture of the global trend and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions. (orig.)
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
AbdelKrim, M.S.
2000-01-01
A transport theory method for calculating flux distributions in a slab fuel cell is described. Two coupled integral equations for the flux in the fuel and the moderator are obtained, assuming partial reflection at the moderator external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and for the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and the average fluxes.
An analytical transport theory method for calculating flux distribution in slab cells
International Nuclear Information System (INIS)
Abdel Krim, M.S.
2001-01-01
A transport theory method for calculating flux distributions in a slab fuel cell is described. Two coupled integral equations for the flux in the fuel and the moderator are obtained, assuming partial reflection at the moderator external boundaries. The Galerkin technique is used to solve these equations. Numerical results for the average fluxes in fuel and moderator and the disadvantage factor are given. Comparison with exact numerical methods, that is, for totally reflecting moderator outer boundaries, shows that the Galerkin technique gives accurate results for the disadvantage factor and average fluxes. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Ballhausen, H.
2007-02-07
This treatise develops new methods for high flux neutron radiography and high flux neutron tomography and describes some of their applications in actual experiments. Instead of single images, time series can be acquired with short exposure times owing to the available high intensity. To make the best use of the increased amount of information, new estimators are proposed which extract accurate results from the recorded ensembles, even if the individual piece of data is very noisy and in addition severely affected by systematic errors such as the influence of gamma background radiation. The spatial resolution of neutron radiographies, usually limited by beam divergence and the inherent resolution of the scintillator, can be significantly increased by scanning the sample with a pinhole micro-collimator. This technique circumvents limitations in present detector design and, thanks to the available high intensity, could be successfully tested. Imaging with scattered neutrons, as opposed to conventional total-attenuation-based imaging, determines separately the absorption and scattering cross sections within the sample. For the first time, even coherent angle-dependent scattering could be visualized in a spatially resolved manner. New applications of high flux neutron imaging are presented, such as materials engineering experiments on innovative metal joints, time-resolved tomography on multilayer stacks of fuel cells under operation, and others. A new implementation of an algorithm for the algebraic reconstruction of tomography data succeeds even in cases of missing information, such as limited-angle tomography, and returns quantitative reconstructions. The setup of the world-leading high flux radiography and tomography facility at the Institut Laue-Langevin is presented. A comprehensive appendix covers the physical and technical foundations of neutron imaging. (orig.)
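The algebraic reconstruction idea mentioned above can be illustrated with a minimal Kaczmarz-style update, which projects the current image estimate onto each measured ray-sum constraint in turn; the 3x3 system is a toy stand-in, not the ILL implementation:

```python
import numpy as np

# Minimal Kaczmarz-style ART sketch on a tiny consistent system: each
# row of A is one ray sum, and the estimate is projected onto the
# hyperplane of each measurement in turn.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])   # ray-sum matrix (rows = projections)
x_true = np.array([1.0, 2.0, 3.0])
b = A @ x_true                    # simulated measurements

x = np.zeros(3)
for sweep in range(200):
    for i in range(len(b)):
        a_i = A[i]
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i
print(np.round(x, 3))  # prints [1. 2. 3.]
```

Because each update only needs one row at a time, the same iteration runs unchanged when rows (projection angles) are missing, which is the property exploited for limited-angle data.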
Energy Technology Data Exchange (ETDEWEB)
Nakos, James Thomas
2010-12-01
The purpose of this report is to describe the methods commonly used to measure heat flux in fire applications at Sandia National Laboratories in both hydrocarbon (JP-8 jet fuel, diesel fuel, etc.) and propellant fires. Because these environments are very severe, many commercially available heat flux gauges do not survive the test, so alternative methods had to be developed. Specially built sensors include 'calorimeters' that use a temperature measurement to infer heat flux by means of a model (a heat balance on the sensing surface) or by using an inverse heat conduction method. These specially built sensors are made rugged to survive the environment and are therefore not optimally designed for ease of use or accuracy. Other methods include radiometers, coaxial thermocouples, directional flame thermometers (DFTs), Sandia 'heat flux gauges', transpiration radiometers, and transverse Seebeck coefficient heat flux gauges. Typical applications are described, and the pros and cons of each method are listed.
Advanced Online Flux Mapping of CANDU PHWR by Least-Squares Method
International Nuclear Information System (INIS)
Hong, In Seob; Kim, Chang Hyo; Suk, Ho Chun
2005-01-01
A least-squares method that solves both the core neutronics design equations and the in-core detector response equations on the least-squares principle is presented as a new advanced online flux-mapping method for CANada Deuterium Uranium (CANDU) pressurized heavy water reactors (PHWRs). The effectiveness of the new flux-mapping method is examined in terms of online flux-mapping calculations with numerically simulated true flux distributions and detector signals, and with actual core-follow data for the Wolsong CANDU PHWRs in Korea. The effects of the core neutronics models, as well as of detector failures and uncertainties in the measured detector signals, on the effectiveness of the least-squares flux-mapping calculations are also examined. The following results are obtained. The least-squares method predicts the flux distribution in better agreement with the simulated true flux distribution than the standard core neutronics calculations by the finite difference method (FDM) computer code without using the detector signals. The adoption of the nonlinear nodal method based on the unified nodal method formulation, instead of the FDM, results in a significant improvement in the prediction accuracy of the flux-mapping calculations. The detector signals estimated from the least-squares flux-mapping calculations are much closer to the measured detector signals than those from the flux synthesis method (FSM), the current online flux-mapping method for CANDU reactors. The effect of detector failures is relatively small, so the plant can tolerate up to 25% detector failures without serious effects on plant operation. Detector signal uncertainties degrade the accuracy of the flux-mapping calculations, yet signal uncertainties of the order of 1% standard deviation can be tolerated without seriously degrading the prediction accuracy of the least-squares method. The least-squares method is disadvantageous because it requires longer CPU time than the
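The core of the least-squares idea is to stack the neutronics equations and the detector response equations into one overdetermined system and solve them simultaneously. A minimal sketch, with a toy tridiagonal "neutronics" operator and a random response matrix standing in for the CANDU models:

```python
import numpy as np

# Stack the (toy) core neutronics equations A*phi = s with detector
# response equations R*phi = m and solve both on the least-squares
# principle.  All matrices here are illustrative placeholders.
rng = np.random.default_rng(0)
n_nodes, n_det = 8, 3
A = np.eye(n_nodes) * 2.0 - np.eye(n_nodes, k=1) - np.eye(n_nodes, k=-1)
s = np.ones(n_nodes)
phi_true = np.linalg.solve(A, s)              # simulated "true" flux

R = rng.random((n_det, n_nodes))              # detector response matrix
m = R @ phi_true + rng.normal(0, 0.01, n_det) # noisy detector signals

w = 5.0                                       # weight on detector equations
M = np.vstack([A, w * R])
rhs = np.concatenate([s, w * m])
phi_map, *_ = np.linalg.lstsq(M, rhs, rcond=None)
```

Solving `M.T @ M @ phi = M.T @ rhs` directly would give the normal-equation form of the same estimate; dropping rows of `R` mimics the failed-detector case.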
Tissue expansion for breast reconstruction: Methods and techniques
Directory of Open Access Journals (Sweden)
Nicolò Bertozzi
2017-09-01
Conclusions: TE/implant-based reconstruction has proved to be a safe, cost-effective, and reliable technique that can be performed in women with various comorbidities. Short operative time, fast recovery, and absence of donor site morbidity are other advantages over autologous breast reconstruction.
Managing dense nonaqueous phase liquid (DNAPL) contaminated sites continues to be among the most pressing environmental problems currently faced. One approach that has recently been investigated for use in DNAPL site characterization and remediation is mass flux (mass per unit ar...
Benchmarking burnup reconstruction methods for dynamically operated research reactors
Energy Technology Data Exchange (ETDEWEB)
Sternat, Matthew R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Charlton, William S. [Univ. of Nebraska, Lincoln, NE (United States). National Strategic Research Institute; Nichols, Theodore F. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2016-03-01
The burnup of an HEU-fueled, dynamically operated research reactor, the Oak Ridge Research Reactor, was experimentally reconstructed using two different analytic methodologies and a suite of signature isotopes to evaluate techniques for estimating the burnup of research reactor fuel. The methods studied include using individual signature isotopes and the complete mass spectrometry spectrum to recover the sample's burnup. The individual isotopes, or sets of isotopes, include ^{148}Nd, ^{137}Cs+^{137}Ba, ^{139}La, and ^{145}Nd+^{146}Nd. The storage documentation from the analyzed fuel material provided two different measures of burnup: burnup percentage and the total power generated from the assembly in MWd. When normalized to conventional units, these two references differed by 7.8% (395.42 GWd/MTHM versus 426.27 GWd/MTHM) in the resulting burnup for the spent fuel element used in the benchmark. Among all methods evaluated, the results were within 11.3% of either reference burnup. The results were mixed in closeness to the two reference burnups; however, consistent results were achieved from all three experimental samples.
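The signature-isotope route can be sketched in its simplest form: atoms of a stable monitor such as ¹⁴⁸Nd, divided by its cumulative fission yield, count fissions, and fissions per initial heavy-metal atom (FIMA) convert to GWd/MTHM. The constants below are rough textbook values and the measured atom ratio is hypothetical; real evaluations correct for neutron capture, yields per fissioning nuclide, and flux history:

```python
# Simplified burnup-from-monitor sketch (illustrative numbers only).
Y_148ND = 0.0167              # approx. cumulative 148Nd yield per 235U fission
GWD_PER_PCT_FIMA = 9.6        # ~9.6 GWd/MTHM per 1 % FIMA (approximate)

n148_per_ihm = 0.007          # hypothetical 148Nd atoms per initial heavy-metal atom
fima_pct = 100.0 * n148_per_ihm / Y_148ND     # percent of heavy metal fissioned
burnup = fima_pct * GWD_PER_PCT_FIMA          # burnup in GWd/MTHM
print(round(fima_pct, 1), round(burnup))
```

With the assumed inputs this lands in the ~400 GWd/MTHM range typical of spent HEU research reactor fuel, the same order as the two reference burnups quoted above.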
International Nuclear Information System (INIS)
Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook
2013-01-01
In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages, such as robustness to motion, and radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to its incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the image reconstructed by these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, in which the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses the coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate-gradient-based reconstruction method. (paper)
Estimating and localizing the algebraic and total numerical errors using flux reconstructions
Czech Academy of Sciences Publication Activity Database
Papež, Jan; Strakoš, Z.; Vohralík, M.
2018-01-01
Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016
A two-step filtering-based iterative image reconstruction method for interior tomography.
Zhang, Hanming; Li, Lei; Yan, Bin; Wang, Linyuan; Cai, Ailong; Hu, Guoen
2016-10-06
The optimization-based method that utilizes an additional sparse prior on the region-of-interest (ROI) image, such as total variation, has been the subject of considerable research in interior tomography reconstruction. One challenge for optimization-based iterative ROI image reconstruction is to build the relationship between the ROI image and the truncated projection data. When the reconstruction support region is smaller than the original object, an unsuitable representation of data fidelity may lead to bright truncation artifacts in the boundary region of the field of view. In this work, we aim to develop an iterative reconstruction method that suppresses the truncation artifacts and improves the image quality for direct ROI image reconstruction. A novel reconstruction approach is proposed based on an optimization problem involving a two-step filtering-based data fidelity. Data filtering is achieved in two steps: the first takes the derivative of the projection data; in the second step, Hilbert filtering is applied to the differentiated data. Numerical simulations and real data reconstructions have been conducted to validate the new reconstruction method. Both qualitative and quantitative results indicate that, as theoretically expected, the proposed method delivers reasonable performance in suppressing truncation artifacts and preserving detailed features. The presented local reconstruction method based on the two-step filtering strategy provides a simple and efficient approach for iterative reconstruction from truncated projections.
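The two filtering steps themselves are easy to sketch on a single toy projection: a numerical derivative along the detector, then an ideal Hilbert filter applied in the Fourier domain. The test signal and grid below are illustrative, not the authors' geometry:

```python
import numpy as np

# Two-step data filtering sketch: (1) differentiate the projection along
# the detector coordinate, (2) apply a Hilbert filter via the FFT.
n = 512
t = np.linspace(-1, 1, n)
proj = np.where(np.abs(t) < 0.5, 1.0, 0.0)        # toy box-shaped projection

step1 = np.gradient(proj, t)                      # derivative of the data

freq = np.fft.fftfreq(n, d=t[1] - t[0])
hilbert_kernel = -1j * np.sign(freq)              # ideal Hilbert filter response
step2 = np.real(np.fft.ifft(np.fft.fft(step1) * hilbert_kernel))
```

The derivative concentrates the signal at the projection edges (here near t = ±0.5), which is what makes the subsequent Hilbert-domain treatment local and hence attractive for truncated data.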
Flux schemes based finite volume method for internal transonic flow with condensation
Czech Academy of Sciences Publication Activity Database
Halama, Jan; Benkhaldoun, F.; Fořt, J.
2011-01-01
Roč. 65, č. 8 (2011), s. 953-968 ISSN 0271-2091 Institutional research plan: CEZ:AV0Z20760514 Keywords : VFFC flux * SRNH flux * two-phase homogeneous flow * fractional step method * condensation Subject RIV: BK - Fluid Dynamics Impact factor: 1.176, year: 2011
Measurement of absolute neutron flux in LWSCR based on the nuclear track method
International Nuclear Information System (INIS)
Sadeghzadeh, J.; Nassiri Mofakham, N.; Khajehmiri, Z.
2012-01-01
Highlights: ► Up to now the spectral parameters of thermal neutrons have been measured with activation foils, which are not always reliable in low flux systems. ► We applied a solid state nuclear track detector to measure the absolute neutron flux in the light water sub-critical reactor (LWSCR). ► Experiments concerning fission track detection were performed and were investigated using the Monte Carlo code MCNP. ► The neutron fluxes obtained in the experiment are in fairly good agreement with the results obtained by MCNP. - Abstract: In the present paper, a solid state nuclear track detector is applied to measure the absolute neutron flux in the light water sub-critical reactor (LWSCR) at the Nuclear Science and Technology Research Institute (NSTRI). Up to now, the spectral parameters of thermal neutrons have been measured with activation foils, which are not always reliable in low flux systems. The method investigated here is the irradiation method. Experiments concerning fission track detection were performed. The experiment, including the neutron flux calculation method, has also been investigated using the Monte Carlo code MCNP. The analysis shows that the values of neutron flux obtained by experiment are in fairly good agreement with the results obtained by MCNP. Thus, this method may be able to predict the absolute value of the neutron flux in the LWSCR and other similar reactors.
Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography
DEFF Research Database (Denmark)
Hoffmann, Kristoffer; Knudsen, Kim
2014-01-01
For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...
Evaluation of a 3D point cloud tetrahedral tomographic reconstruction method
International Nuclear Information System (INIS)
Pereira, N F; Sitek, A
2010-01-01
Tomographic reconstruction on an irregular grid may be superior to reconstruction on a regular grid. This is achieved through an appropriate choice of the image space model, the selection of an optimal set of points and the use of any available prior information during the reconstruction process. Accordingly, a number of reconstruction-related parameters must be optimized for best performance. In this work, a 3D point cloud tetrahedral mesh reconstruction method is evaluated for quantitative tasks. A linear image model is employed to obtain the reconstruction system matrix, and five point generation strategies are studied. The evaluation is performed using the recovery coefficient, as well as voxel- and template-based estimates of bias and variance measures, computed over specific regions in the reconstructed image. A similar analysis is performed for regular grid reconstructions that use voxel basis functions. The maximum likelihood expectation maximization reconstruction algorithm is used. For the tetrahedral reconstructions, three of the five point generation methods evaluated use image priors. For evaluation purposes, an object consisting of overlapping spheres with varying activity is simulated. The exact parallel projection data of this object are obtained analytically using a parallel projector, and multiple Poisson noise realizations of these exact data are generated and reconstructed using the different point generation strategies. The unconstrained nature of point placement in some of the irregular mesh-based reconstruction strategies leads to superior activity recovery for small, low-contrast image regions. The results show that, with an appropriately generated set of mesh points, the irregular grid reconstruction methods can outperform reconstructions on a regular grid for mathematical phantoms, in terms of the performance measures evaluated.
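The maximum likelihood expectation maximization (MLEM) update used for both grid types is the same multiplicative iteration regardless of basis function; a minimal sketch on a tiny toy system matrix (not the evaluated point-cloud geometry):

```python
import numpy as np

# Minimal MLEM sketch: multiplicative update x <- x * (A^T (y / Ax)) / sens
# on a toy 3x3 system matrix with noise-free data.
A = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])      # system (projection) matrix
x_true = np.array([2.0, 5.0, 3.0])   # "activities"
y = A @ x_true                       # noise-free projection data

x = np.ones(3)                       # positive initial estimate
sens = A.sum(axis=0)                 # sensitivity image (column sums)
for _ in range(500):
    ratio = y / (A @ x)
    x *= (A.T @ ratio) / sens        # multiplicative EM update
```

The update preserves positivity by construction, which is why the same iteration applies unchanged whether the columns of `A` correspond to voxels or to tetrahedral basis functions.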
Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure
Directory of Open Access Journals (Sweden)
Hesheng Zhang
2016-01-01
Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic-data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. First, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Third, parameter identification is performed for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is performed. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition and data processing systems as a general error analysis method.
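The least-mean-square identification step can be sketched generically: adapt a weight vector so that its inner product with the input tracks a measured response. The "true" system and noise level below are hypothetical, not the paper's FBG model:

```python
import numpy as np

# LMS parameter identification sketch: w adapts so that w.x_k tracks a
# noisy desired output d_k.  System parameters here are hypothetical.
rng = np.random.default_rng(1)
w_true = np.array([0.5, -0.3, 0.8])          # unknown system to identify
w = np.zeros(3)
mu = 0.05                                    # LMS step size

for _ in range(5000):
    x = rng.normal(size=3)                   # input sample
    d = w_true @ x + rng.normal(0, 0.01)     # noisy desired output
    e = d - w @ x                            # instantaneous error
    w += mu * e * x                          # LMS weight update
```

The step size `mu` trades convergence speed against steady-state misadjustment, which is exactly the tuning question a dynamic error model must answer.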
Energy Technology Data Exchange (ETDEWEB)
Stefanicki, G.; Geissbuehler, P.; Siegwolf, R. [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1999-08-01
The eddy covariance technique allows the measurement of different components of turbulent air fluxes, including the flow of water vapour. Sap flux measurements directly determine the water flow in tree stems. We compared the water flux just above the crowns of trees in a forest, measured by the eddy covariance technique, with the water flux obtained by the xylem sap flux method. These two completely different approaches showed good qualitative correspondence, with a correlation coefficient of 0.8. With an estimate of the crown diameter of the measured tree, we also found very good quantitative agreement. (author) 3 figs., 5 refs.
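At its core, the eddy covariance flux is the covariance of vertical wind speed and the transported scalar over an averaging interval. A toy sketch with synthetic data (real processing adds coordinate rotation, detrending, and spectral corrections):

```python
import numpy as np

# Toy eddy-covariance sketch: water-vapour flux as the covariance <w'q'>
# of vertical wind w and vapour density q.  Data are synthetic.
rng = np.random.default_rng(2)
n = 20000
w = rng.normal(0.0, 0.3, n)                  # vertical wind fluctuations (m/s)
q = 8.0 + 0.5 * w + rng.normal(0.0, 0.2, n)  # vapour density (g/m^3), correlated

flux = np.mean((w - w.mean()) * (q - q.mean()))   # <w'q'>, g m^-2 s^-1
```

With the assumed coupling coefficient of 0.5 and wind variance 0.09, the expected covariance is about 0.045 g m⁻² s⁻¹.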
Least-squares fitting method for on-line flux mapping of CANDU-PHWR
International Nuclear Information System (INIS)
Hong, I.S.; Kim, C.H.; Suk, H.C.
2002-01-01
A least-squares fitting method is developed for advanced on-line flux mapping in the CANDU-PHWR system. The method solves both the core neutronics design equations and the detector response equations on the least-squares principle, which leads to normal equations. Fine-mesh finite-difference two-group diffusion theory calculations with the SCAN code for the Wolsong-3 unit are conducted to obtain the simulated real flux distribution and detector signals. The least-squares flux-monitoring calculations are compared with the flux distribution calculated by the SCAN code without detector signals. It is shown that the least-squares method produces a flux distribution in better agreement with the reference distribution than the coarse-mesh SCAN calculation without detector signals. Through 500 full-power-day burnup-history simulations of the Wolsong-4 unit for benchmarking, the mapped detector signals are compared with real detector signals. The maximum root-mean-square (RMS) difference between the mapped and real detector signals is shown to be about 0.04% with the least-squares method, while it is about 5.43% with the current flux-synthesis method. It is concluded that the least-squares fitting method is very promising as an advanced flux-mapping methodology for the CANDU-PHWR. (author)
A simulation of portable PET with a new geometric image reconstruction method
Energy Technology Data Exchange (ETDEWEB)
Kawatsu, Shoji [Department of Radiology, Kyoritu General Hospital, 4-33 Go-bancho, Atsuta-ku, Nagoya-shi, Aichi 456 8611 (Japan): Department of Brain Science and Molecular Imaging, National Institute for Longevity Sciences, National Center for Geriatrics and Gerontology, 36-3, Gengo Moriaka-cho, Obu-shi, Aichi 474 8522 (Japan)]. E-mail: b6rgw@fantasy.plala.or.jp; Ushiroya, Noboru [Department of General Education, Wakayama National College of Technology, 77 Noshima, Nada-cho, Gobo-shi, Wakayama 644 0023 (Japan)
2006-12-20
A new method is proposed for three-dimensional positron emission tomography image reconstruction. The method uses the elementary geometric property of lines of response whereby two lines of response originating from radioactive isotopes at the same position lie within a few millimeters of each other. The method differs from the filtered back-projection method and the iterative reconstruction method. It is applied to a simulation of portable positron emission tomography.
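The geometric property can be illustrated directly: for two lines of response (LORs) that pass near a common emission point, the midpoint of their common perpendicular estimates the source position. The points and directions below are arbitrary examples, not the simulated scanner geometry:

```python
import numpy as np

# Midpoint of the shortest segment between two LORs p + t*d as a source
# position estimate; example LORs intersect exactly at `src`.
def closest_point_between_lines(p1, d1, p2, d2):
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    n = np.cross(d1, d2)                   # direction of the common perpendicular
    # solve t1*d1 - t2*d2 + s*n = p2 - p1 for (t1, t2, s)
    t1, t2, _ = np.linalg.solve(np.array([d1, -d2, n]).T, p2 - p1)
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

src = np.array([1.0, 2.0, 3.0])            # hypothetical annihilation point
lor1 = (src + 5.0 * np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]))
lor2 = (src - 4.0 * np.array([0.0, 1.0, 1.0]) / np.sqrt(2),
        np.array([0.0, 1.0, 1.0]))
est = closest_point_between_lines(lor1[0], lor1[1], lor2[0], lor2[1])
```

Accumulating such midpoints over many LOR pairs, and rejecting pairs whose closest approach exceeds a few millimetres, yields a density estimate of the activity distribution.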
Measuring CO2 fluxes from contrasting soil management practices is important for understanding the role of agriculture in source-sink relationship with CO2 flux. There are several micrometeorological methods for measuring CO2 emissions, however all are expensive and thus do not easily lend themselve...
The calculation of neutron flux using Monte Carlo method
Günay, Mehtap; Bardakçı, Hilal
2017-09-01
In this study, a hybrid reactor system was designed using 99-95% Li20Sn80 + 1-5% RG-Pu, 99-95% Li20Sn80 + 1-5% RG-PuF4, and 99-95% Li20Sn80 + 1-5% RG-PuO2 fluids, the ENDF/B-VII.0 evaluated nuclear data library, and 9Cr2WVTa structural material. The fluids were used in the liquid first wall, liquid second wall (blanket) and shield zones of a fusion-fission hybrid reactor system. The neutron flux was calculated as a function of mixture composition, radial position, and energy spectrum in the designed hybrid reactor system for the selected fluids, library and structural material. Three-dimensional nucleonic calculations were performed using the most recent version of the Monte Carlo code, MCNPX-2.7.0.
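The Monte Carlo flux estimate itself can be illustrated with a track-length estimator in its simplest setting: a purely absorbing slab, where each particle's path length inside a tally cell scores to the cell-averaged flux. This is a pedagogical sketch, far simpler than the MCNPX model above:

```python
import numpy as np

# Track-length flux estimator for a mono-directional beam entering a
# purely absorbing slab; compared against the analytic attenuation.
rng = np.random.default_rng(3)
sigma_t = 0.5                      # total (absorbing) cross section, 1/cm
cell = (2.0, 3.0)                  # tally cell boundaries, cm
n_hist = 200_000
volume = cell[1] - cell[0]

x = -np.log(rng.random(n_hist)) / sigma_t            # distance to absorption
track = np.clip(np.minimum(x, cell[1]) - cell[0], 0.0, None)  # path in cell
flux = track.sum() / (n_hist * volume)               # flux per source particle

analytic = (np.exp(-sigma_t * cell[0])
            - np.exp(-sigma_t * cell[1])) / (sigma_t * volume)
```

The estimator converges to the exponential-attenuation result because the expected track length in the cell equals the integral of the survival probability over the cell.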
DEFF Research Database (Denmark)
Kongskov, Rasmus Dalgas; Jørgensen, Jakob Sauer; Poulsen, Henning Friis
2016-01-01
Classical reconstruction methods for phase-contrast tomography consist of two stages: phase retrieval and tomographic reconstruction. A novel algebraic method combining the two was suggested by Kostenko et al. [Opt. Express 21, 12185 (2013)], and preliminary results demonstrated improve...
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR
International Nuclear Information System (INIS)
Kurosawa, M.
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using this flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data. (authors)
TORT/MCNP coupling method for the calculation of neutron flux around a core of BWR.
Kurosawa, Masahiko
2005-01-01
For the analysis of BWR neutronics performance, accurate data are required for the neutron flux distribution over the in-reactor pressure vessel equipment, taking into account the detailed geometrical arrangement. The TORT code can calculate the neutron flux around a BWR core in a three-dimensional geometry model, but has difficulty with fine geometrical modelling and requires huge computer resources. On the other hand, the MCNP code enables the calculation of the neutron flux with a detailed geometry model, but requires a very long sampling time to accumulate a sufficient number of particles. Therefore, a TORT/MCNP coupling method has been developed to eliminate these two problems. In this method, the TORT code calculates the angular flux distribution on the core surface and the MCNP code calculates the neutron spectrum at the points of interest using this flux distribution. The coupling method will be used as the DOT-DOMINO-MORSE code system. The TORT/MCNP coupling method was applied to calculate the neutron flux at points where induced radioactivity data were measured for 54Mn and 60Co, and the radioactivity calculations based on the neutron flux obtained from the above method were compared with the measured data.
Energy Technology Data Exchange (ETDEWEB)
Diogenes, Alysson N.; Santos, Luis O.E. dos; Fernandes, Celso P. [Universidade Federal de Santa Catarina (UFSC), Florianopolis, SC (Brazil); Appoloni, Carlos R. [Universidade Estadual de Londrina (UEL), PR (Brazil)
2008-07-01
The physical properties of reservoir rocks are usually obtained in the laboratory through standard experiments, which are often very expensive and time-consuming. Digital image analysis techniques therefore offer a fast and low-cost methodology for predicting physical properties from geometrical parameters measured on thin sections of the rock microstructure. This research analyzes two methods for porous media reconstruction using the simulated annealing relaxation method. Using geometrical parameters measured from rock thin sections, it is possible to construct a three-dimensional (3D) model of the microstructure. We assume statistical homogeneity and isotropy, and the 3D model maintains the porosity spatial correlation, chord size distribution and d 3-4 distance transform distribution for a pixel-based reconstruction, and the spatial correlation for an object-based reconstruction. The 2D and 3D preliminary results are compared with microstructures reconstructed by truncated Gaussian methods. As this research is in its early stages, only the 2D results are presented. (author)
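The simulated-annealing idea can be sketched at toy scale: swap a solid and a pore pixel (which preserves the initial porosity by construction) and accept swaps that move a correlation statistic towards a reference value, occasionally accepting worse swaps with Boltzmann probability. Everything below (grid size, statistic, schedule) is a miniature stand-in for a real rock-microstructure reconstruction:

```python
import numpy as np

# Toy simulated-annealing reconstruction matching a one-pixel-lag
# two-point correlation along x; all parameters are illustrative.
rng = np.random.default_rng(4)

def corr_x(img):
    """One-pixel-lag two-point correlation along x."""
    return np.mean(img[:, :-1] * img[:, 1:])

ref = np.zeros((16, 16)); ref[4:12, 4:12] = 1          # reference microstructure
target, porosity = corr_x(ref), ref.mean()

img = (rng.random(ref.shape) < porosity).astype(float)  # random, ~same porosity
energy, T = abs(corr_x(img) - target), 0.01
for step in range(20000):
    i1 = tuple(rng.integers(0, 16, 2)); i2 = tuple(rng.integers(0, 16, 2))
    if img[i1] == img[i2]:
        continue                                        # swap must change something
    img[i1], img[i2] = img[i2], img[i1]
    new_e = abs(corr_x(img) - target)
    if new_e < energy or rng.random() < np.exp(-(new_e - energy) / T):
        energy = new_e                                  # accept the swap
    else:
        img[i1], img[i2] = img[i2], img[i1]             # undo the swap
    T *= 0.9995                                         # cooling schedule
```

A full reconstruction would add the other constraints (chord size distribution, distance transform distribution) as extra terms in the energy.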
Directory of Open Access Journals (Sweden)
M. Lockwood
2014-04-01
In the concluding paper of this tetralogy, we here use the different geomagnetic activity indices to reconstruct the near-Earth interplanetary magnetic field (IMF) and solar wind flow speed, as well as the open solar flux (OSF), from 1845 to the present day. The differences in how the various indices vary with near-Earth interplanetary parameters, which are here exploited to separate the effects of the IMF and solar wind speed, are shown to be statistically significant at the 93% level or above. Reconstructions are made using four combinations of different indices, compiled using different data and different algorithms, and the results are almost identical for all parameters. The correction required to the aa index is discussed by comparison with the Ap index from a more extensive network of mid-latitude stations. Data from the Helsinki magnetometer station are used to extend the aa index back to 1845, and the results are confirmed by comparison with the nearby St Petersburg observatory. The optimum variations, using all available long-term geomagnetic indices, of the near-Earth IMF and solar wind speed, and of the open solar flux, are presented, all with ±2σ uncertainties computed using the Monte Carlo technique outlined in the earlier papers. The open solar flux variation derived is shown to be very similar indeed to that obtained using the method of Lockwood et al. (1999).
Energy Technology Data Exchange (ETDEWEB)
Nogrette, F.; Chang, R.; Bouton, Q.; Westbrook, C. I.; Clément, D. [Laboratoire Charles Fabry, Institut d’Optique Graduate School, CNRS, Univ. Paris-Saclay, 91127 Palaiseau cedex (France); Heurteau, D.; Sellem, R. [Fédération de Recherche LUMAT (DTPI), CNRS, Univ. Paris-Sud, Institut d’Optique Graduate School, Univ. Paris-Saclay, F-91405 Orsay (France)
2015-11-15
We report on the development of a novel FPGA-based time-to-digital converter and its implementation in a detection chain that records the coordinates of single particles along three dimensions. The detector is composed of micro-channel plates mounted on top of a cross delay line and connected to fast electronics. We demonstrate continuous recording of the timing signals from the cross delay line at rates up to 4.1 × 10^6 s^-1 and three-dimensional reconstruction of the coordinates at up to 3.2 × 10^6 particles per second. From the imaging of a calibrated structure we measure the in-plane resolution of the detector to be 140(20) μm at a flux of 3 × 10^5 particles per second. In addition, we analyze a method to estimate the resolution without placing any structure under vacuum, a significant practical improvement. While we use UV photons here, the results of this work apply to the detection of other kinds of particles.
Dobramysl, U; Holcman, D
2018-02-15
Is it possible to recover the position of a source from the steady-state fluxes of Brownian particles to small absorbing windows located on the boundary of a domain? To address this question, we develop a numerical procedure to avoid tracking Brownian trajectories in the entire infinite space. Instead, we generate particles near the absorbing windows, computed from the analytical expression of the exit probability. When the Brownian particles are generated by a steady-state gradient at a single point, we compute asymptotically the fluxes to small absorbing holes distributed on the boundary of half-space and on a disk in two dimensions, which agree with stochastic simulations. We also derive an expression for the splitting probability between small windows using the matched asymptotic method. Finally, when there are more than two small absorbing windows, we show how to reconstruct the position of the source from the diffusion fluxes. The present approach provides a computational first principle for the mechanism of sensing a gradient of diffusing particles, a ubiquitous problem in cell biology.
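The idea of recovering a source position from window fluxes can be illustrated with a naive Monte Carlo estimate of the splitting probabilities, simulating Brownian trajectories directly rather than using the paper's device of generating particles near the absorbing windows. All names, window geometries and parameters below are illustrative assumptions:

```python
import math, random

def window_fluxes(source, windows, radius=1.0, dt=2e-3, n=400, seed=0):
    """Monte Carlo estimate of the splitting probabilities (relative fluxes)
    of Brownian particles from a point source to absorbing arcs `windows`
    (list of (theta_min, theta_max)) on an otherwise reflecting disk."""
    rng = random.Random(seed)
    sigma = math.sqrt(dt)
    hits = [0] * len(windows)
    for _ in range(n):
        x, y = source
        absorbed = False
        while not absorbed:
            x += rng.gauss(0.0, sigma)
            y += rng.gauss(0.0, sigma)
            r = math.hypot(x, y)
            if r >= radius:
                theta = math.atan2(y, x) % (2 * math.pi)
                for i, (a, b) in enumerate(windows):
                    if a <= theta <= b:
                        hits[i] += 1
                        absorbed = True
                        break
                if not absorbed:  # reflecting wall: project back inside
                    x *= 0.999 * radius / r
                    y *= 0.999 * radius / r
    total = sum(hits)
    return [h / total for h in hits]
```

A source placed closer to one window receives a visibly larger share of the flux, which is the asymmetry the reconstruction in the paper inverts.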
Energy Technology Data Exchange (ETDEWEB)
Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)
2015-01-15
Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.
Jiang, Hongzhen; Liu, Xu; Liu, Yong; Li, Dong; Chen, Zhu; Zheng, Fanglan; Yu, Deqiang
2017-10-01
An effective approach for reconstructing an on-axis lensless Fourier transform digital hologram using the screen division method is proposed. Firstly, the on-axis Fourier transform digital hologram is divided into sub-holograms. Then the reconstruction result of every sub-hologram is obtained, according to the position of the corresponding sub-hologram in the hologram reconstruction plane, with a Fourier transform operation. Finally, the reconstruction image of the on-axis Fourier transform digital hologram is acquired by superposition of the reconstruction results of the sub-holograms. Compared with the traditional reconstruction method based on phase shifting, in which multiple digital holograms must be recorded to obtain the reconstruction image, this method obtains the reconstruction image from only one digital hologram and therefore greatly simplifies the recording and reconstruction process of on-axis lensless Fourier transform digital holography. The effectiveness of the proposed method is demonstrated by experimental results, and it has promising applications in holographic measurement and display.
Dictionary-learning-based reconstruction method for electron tomography.
Liu, Baodong; Yu, Hengyong; Verbridge, Scott S; Sun, Lizhi; Wang, Ge
2014-01-01
Electron tomography usually suffers from so-called “missing wedge” artifacts caused by limited tilt angle range. An equally sloped tomography (EST) acquisition scheme (which should be called the linogram sampling scheme) was recently applied to achieve 2.4-angstrom resolution. On the other hand, a compressive sensing inspired reconstruction algorithm, known as adaptive dictionary based statistical iterative reconstruction (ADSIR), has been reported for X-ray computed tomography. In this paper, we evaluate the EST, ADSIR, and an ordered-subset simultaneous algebraic reconstruction technique (OS-SART), and compare the ES and equally angled (EA) data acquisition modes. Our results show that OS-SART is comparable to EST, and the ADSIR outperforms EST and OS-SART. Furthermore, the equally sloped projection data acquisition mode has no advantage over the conventional equally angled mode in this context.
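For readers unfamiliar with OS-SART, the structure of an ordered-subsets SART iteration can be sketched on a small dense nonnegative system. This toy version (function name, subset scheme and parameters are assumptions) mirrors only the shape of the update, not the implementation evaluated in the paper:

```python
def os_sart(A, b, n_subsets=2, iters=100, lam=1.0):
    """Minimal ordered-subsets SART for a consistent system Ax = b:
    x_j += lam * sum_i a_ij (b_i - (Ax)_i) / ||a_i||_1 / sum_i a_ij,
    with the sums over rows i restricted to the current subset."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    # interleaved subsets of row indices
    subsets = [list(range(s, m, n_subsets)) for s in range(n_subsets)]
    for _ in range(iters):
        for sub in subsets:
            col_sums = [sum(abs(A[i][j]) for i in sub) for j in range(n)]
            update = [0.0] * n
            for i in sub:
                residual = b[i] - sum(A[i][j] * x[j] for j in range(n))
                row_sum = sum(abs(A[i][j]) for j in range(n))
                if row_sum == 0:
                    continue
                for j in range(n):
                    update[j] += A[i][j] * residual / row_sum
            for j in range(n):
                if col_sums[j] > 0:
                    x[j] += lam * update[j] / col_sums[j]
    return x
```

For a consistent system the iteration converges for relaxation parameters 0 < lam < 2; in tomography the rows of A are projection rays and the subsets are groups of projection angles.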
Accelerated gradient methods for total-variation-based CT image reconstruction
DEFF Research Database (Denmark)
Jørgensen, Jakob Heide; Jensen, Tobias Lindstrøm; Hansen, Per Christian
2011-01-01
A TV reconstruction can in principle be found by any optimization method, but in practice the large scale of the systems arising in CT image reconstruction precludes the use of memory-demanding methods such as Newton's method. The simple gradient method has much lower memory requirements, but exhibits slow convergence. In the present work we address the question of how to reduce the number of gradient method iterations needed to achieve a high-accuracy TV reconstruction. We consider the use of two accelerated gradient-based methods, GPBB and UPN, to solve the 3D-TV minimization problem in CT image reconstruction. The former incorporates several heuristics from the optimization literature, such as Barzilai-Borwein (BB) step size selection and nonmonotone line search. The latter uses a cleverly chosen sequence of auxiliary points to achieve a better convergence rate. The methods are memory efficient and equipped with a stopping criterion.
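The Barzilai-Borwein heuristic mentioned above is simple to state: step with alpha_k = (s_k·s_k)/(s_k·y_k), where s_k = x_k − x_{k−1} and y_k = ∇f(x_k) − ∇f(x_{k−1}). A minimal sketch on a generic smooth objective follows (this is plain BB gradient descent, not the GPBB method itself, which adds nonmonotone line search; names and defaults are assumptions):

```python
def bb_gradient(grad, x0, alpha0=1e-2, iters=60):
    """Gradient descent with Barzilai-Borwein step sizes.
    grad: callable returning the gradient as a list of floats.
    The first step uses a fixed alpha0, after which alpha is
    recomputed from successive iterates and gradients."""
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    x = list(x0)
    g = grad(x)
    x_new = [xi - alpha0 * gi for xi, gi in zip(x, g)]
    for _ in range(iters):
        g_new = grad(x_new)
        s = [a - b for a, b in zip(x_new, x)]  # s_k = x_k - x_{k-1}
        y = [a - b for a, b in zip(g_new, g)]  # y_k = g_k - g_{k-1}
        sy = dot(s, y)
        alpha = dot(s, s) / sy if sy != 0 else alpha0
        x, g = x_new, g_new
        x_new = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x_new
```

On a strictly convex quadratic the BB step reproduces secant curvature information, which is why it converges far faster than a fixed-step gradient method at the same per-iteration cost.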
Reconstruction of electron beam distribution in phase space by using parallel maximum entropy method
International Nuclear Information System (INIS)
Hajima, R.; Hirotsu, T.; Kondo, S.
1997-01-01
Reconstruction of electron beam distribution in six-dimensional phase space by tomographic approach is presented. Maximum entropy method (MENT) is applied to the reconstruction and compared with filtered back-projection. Finally, MENT is adapted to parallel computing environment with PVM. (orig.)
Shu, Chi-Wang
1992-01-01
The present treatment of elliptic regions via hyperbolic flux-splitting and high order methods proposes a flux splitting in which the corresponding Jacobians have real and positive/negative eigenvalues. While resembling the flux splitting used in hyperbolic systems, the present generalization of such splitting to elliptic regions allows the handling of mixed-type systems in a unified and heuristically stable fashion. The van der Waals fluid-dynamics equation is used. Convergence with good resolution to weak solutions for various Riemann problems is observed.
Rezaeian, P.; Ataenia, V.; Shafiei, S.
2017-12-01
In this paper, the flux of photons inside the irradiation cell of the Gammacell-220 is calculated using an analytical method based on a multipole moment expansion. The flux of the photons inside the irradiation cell is expressed as a function of the monopole, dipole and quadrupole terms in the Cartesian coordinate system. For the source distribution of the Gammacell-220, the values of the multipole moments are obtained by direct integration. To validate the presented method, the flux distribution inside the irradiation cell was determined by MCNP simulations as well as experimental measurements. To measure the flux inside the irradiation cell, Amber dosimeters were employed. The calculated values of the flux were in agreement with the values obtained by simulations and measurements, especially in the central zones of the irradiation cell. To show that the present method is a good approximation for determining the flux in the irradiation cell, the values of the multipole moments were also obtained by fitting the simulation and experimental data using the Levenberg-Marquardt algorithm. The present method yields reasonable results for any source distribution, even one without any symmetry, which makes it a powerful tool for source load planning.
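The leading terms of such an expansion are easy to sketch. Assuming point sources and an uncollided flux Σ S_i/(4π|r−r_i|²), expanding 1/|r−r'|² about the origin gives a monopole term M/r² plus a dipole correction 2(r̂·D)/r³, with M = Σ S_i and D = Σ S_i r_i. The code below is an illustrative first-order truncation, not the paper's formulation, and compares it with the exact sum far from the source region:

```python
import math

def moments(sources):
    """Monopole (total source strength) and dipole moment about the
    origin for point sources given as [(S_i, (x, y, z)), ...]."""
    M = sum(s for s, _ in sources)
    D = [sum(s * p[k] for s, p in sources) for k in range(3)]
    return M, D

def flux_dipole(M, D, r):
    """Far-field flux from 1/|r - r'|^2 expanded to dipole order:
    phi ~ (M/r^2 + 2 (rhat . D)/r^3) / (4 pi)."""
    rn = math.sqrt(sum(c * c for c in r))
    rhat_dot_D = sum(c * d for c, d in zip(r, D)) / rn
    return (M / rn**2 + 2 * rhat_dot_D / rn**3) / (4 * math.pi)

def flux_exact(sources, r):
    """Direct sum of uncollided point-source fluxes."""
    return sum(s / (4 * math.pi * sum((a - b) ** 2 for a, b in zip(r, p)))
               for s, p in sources)
```

The truncation error is of order (r'/r)², so for field points several source radii away the two-term expansion is already accurate to well under a percent.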
Deep Learning Methods for Particle Reconstruction in the HGCal
Arzi, Ofir
2017-01-01
The High Granularity end-cap Calorimeter is part of the Phase-2 CMS upgrade (see Figure 1) [Contardo:2020886]. Its goal is to provide measurements of high resolution in time, space and energy. Given such measurements, the purpose of this work is to discuss the use of Deep Neural Networks for the tasks of particle and trajectory reconstruction, identification and energy estimation, during my participation in the CERN Summer Students Program.
Skin sparing mastectomy: Technique and suggested methods of reconstruction
Directory of Open Access Journals (Sweden)
Ahmed M. Farahat
2014-09-01
Conclusions: Skin sparing mastectomy through a circum-areolar incision has proven to be a safe and feasible option for the management of breast cancer in Egyptian women, offering them adequate oncologic control and an optimum cosmetic outcome through preservation of the skin envelope of the breast whenever indicated. Our patients can benefit from safe surgery and a good cosmetic outcome by applying different reconstructive techniques.
Witherden, F. D.; Farrington, A. M.; Vincent, P. E.
2014-11-01
High-order numerical methods for unstructured grids combine the superior accuracy of high-order spectral or finite difference methods with the geometric flexibility of low-order finite volume or finite element schemes. The Flux Reconstruction (FR) approach unifies various high-order schemes for unstructured grids within a single framework. Additionally, the FR approach exhibits a significant degree of element locality, and is thus able to run efficiently on modern streaming architectures, such as Graphical Processing Units (GPUs). The aforementioned properties of FR mean it offers a promising route to performing affordable, and hence industrially relevant, scale-resolving simulations of hitherto intractable unsteady flows within the vicinity of real-world engineering geometries. In this paper we present PyFR, an open-source Python based framework for solving advection-diffusion type problems on streaming architectures using the FR approach. The framework is designed to solve a range of governing systems on mixed unstructured grids containing various element types. It is also designed to target a range of hardware platforms via use of an in-built domain specific language based on the Mako templating engine. The current release of PyFR is able to solve the compressible Euler and Navier-Stokes equations on grids of quadrilateral and triangular elements in two dimensions, and hexahedral elements in three dimensions, targeting clusters of CPUs, and NVIDIA GPUs. Results are presented for various benchmark flow problems, single-node performance is discussed, and scalability of the code is demonstrated on up to 104 NVIDIA M2090 GPUs. The software is freely available under a 3-Clause New Style BSD license (see www.pyfr.org).
A fast 4D cone beam CT reconstruction method based on the OSC-TV algorithm.
Mascolo-Fortin, Julia; Matenine, Dmitri; Archambault, Louis; Després, Philippe
2018-01-01
Four-dimensional cone beam computed tomography allows for temporally resolved imaging with useful applications in radiotherapy, but raises particular challenges in terms of image quality and computation time. The purpose of this work is to develop a fast and accurate 4D algorithm by adapting a GPU-accelerated ordered subsets convex algorithm (OSC), combined with the total variation minimization regularization technique (TV). Different initialization schemes were studied to adapt the OSC-TV algorithm to 4D reconstruction: each respiratory phase was initialized either with a 3D reconstruction or a blank image. Reconstruction algorithms were tested on a dynamic numerical phantom and on a clinical dataset. 4D iterations were implemented for a cluster of 8 GPUs. All developed methods allowed for an adequate visualization of the respiratory movement and compared favorably to the McKinnon-Bates and adaptive steepest descent projection onto convex sets algorithms, while the 4D reconstructions initialized from a prior 3D reconstruction led to better overall image quality. The most suitable adaptation of OSC-TV to 4D CBCT was found to be a combination of a prior FDK reconstruction and a 4D OSC-TV reconstruction with a reconstruction time of 4.5 minutes. This relatively short reconstruction time could facilitate a clinical use.
Time-varying magnetotail magnetic flux calculation: a test of the method
Directory of Open Access Journals (Sweden)
M. A. Shukhtina
2009-04-01
We modified the Petrinec and Russell (1996) algorithm to allow the computation of time-varying magnetotail magnetic flux based on simultaneous spacecraft measurements in the magnetotail and the near-Earth solar wind. In view of the many assumptions made, we tested the algorithm against an MHD simulation of an artificial event, which provides the input from two artificial spacecraft to compute the magnetic flux F with our algorithm; the latter values are compared with flux values obtained by direct integration over the tail cross-section. The comparison shows similar time variations of predicted and simulated fluxes as well as their good correlation (cc > 0.9) for input taken from the tail lobe, which somewhat degrades if using the "measurements" from the central plasma sheet. The regression relationship between the predicted and computed flux values is rather stable, allowing one to correct the absolute value of the predicted magnetic flux.
We conclude that this method is a promising tool to monitor the tail magnetic flux, which is one of the main global magnetotail parameters.
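The two diagnostics used in this comparison, the correlation coefficient and the linear regression between predicted and directly integrated flux, amount to a few lines of standard statistics (a generic sketch, not the authors' code; the sample values in the usage are synthetic):

```python
def pearson_and_fit(x, y):
    """Pearson correlation coefficient and least-squares line y ~ a*x + b
    between two equally long series, e.g. predicted vs. simulated flux."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    cc = sxy / (sxx * syy) ** 0.5   # correlation coefficient
    a = sxy / sxx                   # regression slope
    b = my - a * mx                 # regression intercept
    return cc, a, b
```

A stable slope and intercept are exactly what makes it possible to correct the absolute value of the predicted flux, even when the prediction is biased.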
Xu, Guan; Yuan, Jing; Li, Xiaotao; Su, Jian
2018-01-24
An optimization method for reconstructing an object profile using a flexible laser plane and bi-planar references is presented. The bi-planar references serve as flexible benchmarks to realize the transforms among the two world coordinate systems on the bi-planar references, the camera coordinate system and the image coordinate system. The laser plane is determined by the intersection points between the bi-planar references and the laser plane. The 3D camera coordinates of the intersection points between the laser plane and a measured object are initially reconstructed from the image coordinates of the intersection points, the intrinsic parameter matrix and the laser plane. Meanwhile, an optimization function is constructed from the parameterized differences of the reconstruction distances, obtained with the help of a target with eight markers, and the parameterized reprojection errors of feature points on the bi-planar references. The reconstruction method with the bi-planar references is evaluated by comparing the reconstructed distances with standard distances. The mean reconstruction error of the initial method is 1.01 mm, while that of the optimization method is 0.93 mm. The optimization method with the bi-planar references therefore has good application prospects in profile reconstruction.
A shape-based quality evaluation and reconstruction method for electrical impedance tomography
International Nuclear Information System (INIS)
Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen; Malmivuo, Jaakko
2015-01-01
Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images. (paper)
Hommen, G.; de M. Baar,; Citrin, J.; de Blank, H. J.; Voorhoeve, R. J.; de Bock, M. F. M.; Steinbuch, M.
2013-01-01
The flux surfaces' layout and the magnetic winding number q are important quantities for the performance and stability of tokamak plasmas. Normally, these quantities are iteratively derived by solving the plasma equilibrium for the poloidal and toroidal flux. In this work, a fast, non-iterative
Least Square NUFFT Methods Applied to 2D and 3D Radially Encoded MR Image Reconstruction
Song, Jiayu; Liu, Qing H.; Gewalt, Sally L.; Cofer, Gary; Johnson, G. Allan
2009-01-01
Radially encoded MR imaging (MRI) has gained increasing attention in applications such as hyperpolarized gas imaging, contrast-enhanced MR angiography, and dynamic imaging, due to its motion insensitivity and improved artifact properties. However, since the technique collects k-space samples nonuniformly, multidimensional (especially 3D) radially sampled MRI image reconstruction is challenging. The balance between reconstruction accuracy and speed becomes critical when a large data set is processed. Kaiser-Bessel gridding reconstruction has been widely used for non-Cartesian reconstruction. The objective of this work is to provide an alternative reconstruction option in high dimensions with on-the-fly kernels calculation. The work develops general multi-dimensional least square nonuniform fast Fourier transform (LS-NUFFT) algorithms and incorporates them into a k-space simulation and image reconstruction framework. The method is then applied to reconstruct the radially encoded k-space, although the method addresses general nonuniformity and is applicable to any non-Cartesian patterns. Performance assessments are made by comparing the LS-NUFFT based method with the conventional Kaiser-Bessel gridding method for 2D and 3D radially encoded computer simulated phantoms and physically scanned phantoms. The results show that the LS-NUFFT reconstruction method has better accuracy-speed efficiency than the Kaiser-Bessel gridding method when the kernel weights are calculated on the fly. The accuracy of the LS-NUFFT method depends on the choice of scaling factor, and it is found that for a particular conventional kernel function, using its corresponding deapodization function as scaling factor and utilizing it into the LS-NUFFT framework has the potential to improve accuracy. When a cosine scaling factor is used, in particular, the LS-NUFFT method is faster than Kaiser-Bessel gridding method because of a quasi closed-form solution. The method is successfully applied to 2D and
Directory of Open Access Journals (Sweden)
Songjun Zeng
2010-01-01
A method for three-dimensional (3D) reconstruction of macromolecule assemblies, the octahedral symmetry adapted functions (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method are derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein Degp24 and the red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then different levels of noise (signal-to-noise ratio S/N = 0.1, 0.5, and 0.8) were added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and their relative errors with respect to the standard structures were very small even for high noise levels. These results show that the OSAF method is a feasible and efficient approach to reconstructing macromolecular structures and is able to suppress the influence of noise.
Virtual Flux Droop Method – A New Control Strategy of Inverters in Microgrids
DEFF Research Database (Denmark)
Hu, Jiefeng; Zhu, Jianguo; Dorrell, David
2014-01-01
The parallel operation of inverters in microgrids is mainly based on the droop method. The conventional voltage droop method consists of adjusting the output voltage frequency and amplitude to achieve autonomous power sharing without control wire interconnections. Nevertheless, the conventional voltage droop method shows several drawbacks, such as complicated inner multiloop feedback control and, most importantly, frequency and voltage deviations. This paper proposes a new control strategy for microgrid applications by drooping the virtual flux instead of the inverter output voltage. First, the relationship between the inverter virtual flux and the active and reactive powers is mathematically obtained. This is used to develop a new flux droop method. In addition, a small-signal model is developed in order to design the main control parameters and study the system dynamics and stability. Furthermore...
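The power-sharing mechanism common to voltage and flux droop can be illustrated with the steady-state algebra alone: all paralleled units settle at one common frequency, so each unit's active power is fixed by its droop gain. A minimal sketch in the f = f0 − m·P form (gain and load values are hypothetical, and the flux-droop inner loop is not modeled):

```python
def droop_sharing(p_load, droop_gains, f0=50.0):
    """Steady-state frequency and active-power sharing of parallel
    inverters under f = f0 - m_i * P_i droop.  At the common settled
    frequency f, P_i = (f0 - f)/m_i, so sum(P_i) = p_load fixes f."""
    inv_sum = sum(1.0 / m for m in droop_gains)
    f = f0 - p_load / inv_sum
    powers = [(f0 - f) / m for m in droop_gains]
    return f, powers
```

The units share the load in inverse proportion to their droop gains, and the common frequency deviates from f0 by p_load divided by the sum of inverse gains; this steady-state frequency deviation is precisely the drawback the abstract attributes to conventional droop.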
A Family of Multipoint Flux Mixed Finite Element Methods for Elliptic Problems on General Grids
Wheeler, Mary F.
2011-01-01
In this paper, we discuss a family of multipoint flux mixed finite element (MFMFE) methods on simplicial, quadrilateral, hexahedral, and triangular-prismatic grids. The MFMFE methods are locally conservative with continuous normal fluxes, since they are developed within a variational framework as mixed finite element methods with special approximating spaces and quadrature rules. The latter allows for local flux elimination giving a cell-centered system for the scalar variable. We study two versions of the method: with a symmetric quadrature rule on smooth grids and a non-symmetric quadrature rule on rough grids. Theoretical and numerical results demonstrate first order convergence for problems with full-tensor coefficients. Second order superconvergence is observed on smooth grids. © 2011 Published by Elsevier Ltd.
Green, C. T.; Liao, L.; Nolan, B. T.; Juckem, P. F.; Ransom, K.; Harter, T.
2017-12-01
Process-based modeling of regional NO3- fluxes to groundwater is critical for understanding and managing water quality. Measurements of atmospheric tracers of groundwater age and dissolved-gas indicators of denitrification progress have potential to improve estimates of NO3- reactive transport processes. This presentation introduces a regionalized version of a vertical flux method (VFM) that uses simple mathematical estimates of advective-dispersive reactive transport with regularization procedures to calibrate estimated tracer concentrations to observed equivalents. The calibrated VFM provides estimates of chemical, hydrologic and reaction parameters (source concentration time series, recharge, effective porosity, dispersivity, reaction rate coefficients) and derived values (e.g. mean unsaturated zone travel time, eventual depth of the NO3- front) for individual wells. Statistical learning methods are used to extrapolate parameters and predictions from wells to continuous areas. The regional VFM was applied to 473 well samples in central-eastern Wisconsin. Chemical measurements included O2, NO3-, N2 from denitrification, and atmospheric tracers of groundwater age including carbon-14, chlorofluorocarbons, tritium, and tritiogenic helium. VFM results were consistent with observed chemistry, and calibrated parameters were in line with independent estimates. Results indicated that (1) unsaturated zone travel times were a substantial portion of the transit time to wells and streams, (2) fractions of N leached to groundwater have changed over time, with increasing fractions from manure and decreasing fractions from fertilizer, and (3) under current practices and conditions, 60% of the shallow aquifer will eventually be affected by NO3- contamination. Based on GIS coverages of variables related to soils, land use and hydrology, the VFM results at individual wells were extrapolated regionally using boosted regression trees, a statistical learning approach, that related
Gas-Kinetic Theory Based Flux Splitting Method for Ideal Magnetohydrodynamics
Xu, Kun
1998-01-01
A gas-kinetic solver is developed for the ideal magnetohydrodynamics (MHD) equations. The new scheme is based on the direct splitting of the flux function of the MHD equations with the inclusion of "particle" collisions in the transport process. Consequently, the artificial dissipation in the new scheme is much reduced in comparison with the MHD Flux Vector Splitting Scheme. At the same time, the new scheme is compared with the well-developed Roe-type MHD solver. It is concluded that the kinetic MHD scheme is more robust and efficient than the Roe-type method, and the accuracy is competitive. In this paper the general principle of splitting the macroscopic flux function based on the gas-kinetic theory is presented. The flux construction strategy may shed some light on the possible modification of AUSM- and CUSP-type schemes for the compressible Euler equations, as well as the development of new schemes for a non-strictly hyperbolic system.
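Flux splitting in the sense used above separates the flux into parts with nonnegative and nonpositive characteristic speeds, each of which can then be upwinded independently. A minimal scalar example follows (a Lax-Friedrichs-type splitting for linear advection on a periodic grid; this illustrates the splitting idea only, not the gas-kinetic MHD scheme):

```python
def flux_split_step(u, c, dx, dt):
    """One first-order upwind step for u_t + (c*u)_x = 0 using the
    flux splitting f±(u) = (c*u ± a*u)/2 with a = |c|, so f+ carries
    only right-going and f- only left-going information (periodic grid)."""
    a = abs(c)
    n = len(u)
    fp = [0.5 * (c + a) * v for v in u]  # right-going part of the flux
    fm = [0.5 * (c - a) * v for v in u]  # left-going part of the flux
    out = []
    for j in range(n):
        flux_right = fp[j] + fm[(j + 1) % n]  # interface flux F_{j+1/2}
        flux_left = fp[j - 1] + fm[j]         # interface flux F_{j-1/2}
        out.append(u[j] - dt / dx * (flux_right - flux_left))
    return out
```

For c > 0 the scheme reduces to the classical first-order upwind method, and at CFL number c·dt/dx = 1 it propagates the profile exactly one cell per step while conserving the total of u.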
Energy Technology Data Exchange (ETDEWEB)
Schaaf, S.; Daemmgen, U.; Burkart, S. [Federal Agricultural Research Centre, Inst. of Agroecology, Braunschweig (Germany); Gruenhage, L. [Justus-Liebig-Univ., Inst. for Plant Ecology, Giessen (Germany)
2005-04-01
Vertical fluxes of water vapour and carbon dioxide obtained from gradient, eddy covariance (closed and open path systems) and chamber measurements above arable crops were compared with the directly measured energy balance and the harvested net biomass carbon. The gradient and chamber measurements were of the correct order of magnitude, whereas the closed path eddy covariance system showed unacceptably small fluxes. Correction methods based on power spectra analysis yielded increased fluxes; however, the energy balance could not be closed satisfactorily. The application of the open path system proved successful. The SVAT model PLATIN, which had been adapted to various arable crops, was able to depict the components of the energy balance adequately. Net carbon fluxes determined with the corrected closed path data sets, the chambers, and the SVAT model matched the harvested carbon. (orig.)
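For context, the eddy covariance estimate used as the reference in such comparisons computes the sensible heat flux as H = ρ·cp·cov(w′, T′), the covariance of vertical wind and temperature fluctuations. A minimal sketch with synthetic, hypothetical time series (the constants are typical assumed values):

```python
import math

RHO = 1.2     # air density, kg m^-3 (assumed)
CP = 1005.0   # specific heat of air, J kg^-1 K^-1 (assumed)

def sensible_heat_flux(w, t):
    """H = rho * cp * cov(w', T') from vertical wind w (m/s) and air
    temperature t (K) sampled at high frequency over an averaging period."""
    n = len(w)
    w_mean = sum(w) / n
    t_mean = sum(t) / n
    cov = sum((wi - w_mean) * (ti - t_mean) for wi, ti in zip(w, t)) / n
    return RHO * CP * cov

# Synthetic correlated fluctuations: warm air moving up, cool air down.
w = [0.5 * math.sin(0.1 * i) for i in range(1000)]
t = [300.0 + 0.2 * math.sin(0.1 * i) for i in range(1000)]
H = sensible_heat_flux(w, t)   # positive: upward heat transport
```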
Evaluation of surface renewal and flux-variance methods above agricultural and forest surfaces
Fischer, M.; Katul, G. G.; Noormets, A.; Poznikova, G.; Domec, J. C.; Trnka, M.; King, J. S.
2016-12-01
Measurements of turbulent surface energy fluxes are of high interest in agricultural and forest research. During the last decades, eddy covariance (EC) has been adopted as the most commonly used micrometeorological method for measuring fluxes of greenhouse gases, energy and other scalars at the surface-atmosphere interface. Despite its robustness and accuracy, the cost of EC hinders its deployment in some research experiments and in practice, e.g. for irrigation scheduling. Therefore, testing and development of other cost-effective methods is of high interest. In our study, we tested the performance of the surface renewal (SR) and flux variance (FV) methods for estimating sensible heat flux density. The surface renewal method is based on the concept of non-random transport of scalars via so-called coherent structures which, if accurately identified, can be used to compute the associated flux. The flux variance method predicts the flux from the scalar variance following surface-layer similarity theory. We tested SR and FV against EC in three types of ecosystems with very distinct aerodynamic properties. The first site was an agricultural wheat field in the Czech Republic. The second site was a 20-m tall mixed deciduous wetland forest on the coast of North Carolina, USA. The third site was a pine-switchgrass intercropping agro-forestry system located in the coastal plain of North Carolina, USA. Apart from resolving the coherent structures in the SR framework from structure functions (the most common approach), we applied a ramp wavelet detection scheme to test the hypothesis that the durations and amplitudes of the coherent structures are normally distributed within each 30-minute interval, so that estimates of their averages suffice for accurate flux determination. Further, we tested whether orthonormal wavelet thresholding can be used for isolating the coherent structure scales which are associated with
Hasegawa, H.; Nakamura, R.; Fujimoto, M.; Sergeev, V. A.; Lucek, E. A.; Rème, H.; Khotyaintsev, Y.
2007-11-01
Southward-then-northward magnetic perturbations are often seen in the tail plasma sheet, along with earthward jets, but the generation mechanism of such bipolar Bz (a magnetic flux rope created through multiple X-line reconnection, transient reconnection, or something else) has been controversial. At ˜2313 UT on 13 August 2002, Cluster encountered a bipolar Bz at the leading edge of an earthward jet, with one of the four spacecraft in the middle of the current sheet. Application of Grad-Shafranov (GS) reconstruction, a technique for recovering two-dimensional (2D) magnetohydrostatic structures, to this bipolar signature suggests that a flux rope with a diameter of ˜2 RE was embedded in the jet. To investigate the validity of the GS results, the technique is applied to synthetic data from a three-dimensional (3D) MHD simulation, in which a bipolar Bz can be produced through localized (3D) reconnection in the presence of a guide field By (Shirataka et al., 2006) without invoking multiple X-lines. A flux rope-type structure, which does not in fact exist in the simulation, is reconstructed, but with a shape elongated in the jet direction. Unambiguous identification of the mechanism that leads to an observed bipolar Bz thus seems difficult based on the topological properties of the GS maps. We nevertheless infer that a flux rope was responsible for the bipolar pulse in this particular Cluster event, because the recovered magnetic structure is roughly circular, suggesting a relaxed, minimum-energy state. Our results also indicate that one must be cautious in interpreting some (e.g., force-free or magnetohydrostatic) model-based results.
Multiobjective flux balancing using the NISE method for metabolic network analysis.
Oh, Young-Gyun; Lee, Dong-Yup; Lee, Sang Yup; Park, Sunwon
2009-01-01
Flux balance analysis (FBA) is well acknowledged as an analysis tool for metabolic networks in the framework of metabolic engineering. However, FBA is limited in solving multiobjective optimization problems that consider multiple conflicting objectives. In this study, we propose a novel multiobjective flux balance analysis method, which adapts the noninferior set estimation (NISE) method (Solanki et al., 1993) for multiobjective linear programming (MOLP) problems. The NISE method can generate an approximation of the Pareto curve for conflicting objectives without redundant iterations of single-objective optimization. Furthermore, the flux distribution at each Pareto optimal solution can be obtained for understanding the internal flux changes in the metabolic network. The functionality of this approach is shown by applying it to a genome-scale in silico model of E. coli. Multiple objectives for poly(3-hydroxybutyrate) [P(3HB)] production are considered simultaneously, and relationships among them are identified. The Pareto curve for maximizing succinic acid production vs. maximizing biomass production is used for the in silico analysis of various combinatorial knockout strains. The proposed method accelerates strain improvement in metabolic engineering by reducing the computation time for obtaining the Pareto curve and the analysis time for the flux distribution at each Pareto optimal solution. (c) 2009 American Institute of Chemical Engineers Biotechnol. Prog., 2009.
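The NISE idea can be sketched on a hypothetical two-flux toy LP (not the genome-scale E. coli model): starting from the two single-objective optima, each segment between known Pareto points is probed with a weight vector perpendicular to it, and a new noninferior point is inserted whenever the probe improves on the segment. A brute-force vertex enumeration stands in for a real LP solver here.

```python
from itertools import combinations

# Toy two-flux LP standing in for a metabolic network (hypothetical numbers):
# objectives f1 = v1, f2 = v2; each row (a, b, c) encodes a*v1 + b*v2 <= c.
A = [(-1, 0, 0),   # v1 >= 0
     (0, -1, 0),   # v2 >= 0
     (1, 1, 10),   # shared precursor limit
     (2, 1, 14)]   # cofactor limit

def vertices(cons, tol=1e-9):
    """All feasible vertices of the 2-D polytope (pairwise intersections)."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < tol:
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-7 for a, b, c in cons):
            pts.append((x, y))
    return pts

def maximize(w1, w2):
    """Stand-in LP solver: maximize w1*v1 + w2*v2 over the vertices."""
    return max(vertices(A), key=lambda p: w1 * p[0] + w2 * p[1])

def nise(p_left, p_right, out, tol=1e-6):
    """Refine the Pareto curve between two known noninferior points."""
    # Weights perpendicular to the segment, pointing toward improvement.
    w1 = p_right[1] - p_left[1]
    w2 = p_left[0] - p_right[0]
    p_new = maximize(w1, w2)
    gain = w1 * (p_new[0] - p_left[0]) + w2 * (p_new[1] - p_left[1])
    if gain > tol:  # a genuinely new noninferior point was found
        out.insert(out.index(p_right), p_new)
        nise(p_left, p_new, out, tol)
        nise(p_new, p_right, out, tol)

anchor1 = maximize(1, 0)            # best f1 alone
anchor2 = maximize(0, 1)            # best f2 alone
pareto = [anchor1, anchor2]
nise(anchor1, anchor2, pareto)      # fills in the curve between the anchors
```

The recursion terminates on its own when no probe improves on a segment, which is exactly the "no redundant single-objective iterations" property the abstract highlights.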
Sensible heat flux of oil palm plantation: Comparing Aerodynamic and Penman-Monteith Methods
Amri Komarudin, Nurul; June, Tania; Meijide, Ana
2017-01-01
Oil palm (Elaeis guineensis Jacq.) has unique morphological characteristics, in particular a uniform canopy. As the plant becomes older, its canopy completely covers the surface and influences the characteristics of its microclimate. Sensible heat flux estimation for an oil palm plantation can be used to identify the contribution of oil palm to reducing or increasing heat in its surrounding environment. Sensible heat flux from the oil palm plantation was determined using two methods, Aerodynamic and Penman-Monteith. The results show that the two methods have a similar diurnal pattern. The sensible heat flux peaks in the afternoon for both the two- and twelve-year-old plantations. Sensible heat flux of the young plantation depends on atmospheric stability (stable, unstable and neutral) and is higher than that of the older plantation, with mean values of 0.52 W/m2 (stable), 43.53 W/m2 (unstable) and 0.63 W/m2 (neutral), with standard deviations of 0.50, 28.75 and 0.46, respectively. Sensible heat flux estimated by the Penman-Monteith method in both the young and older plantations was higher than that determined by the Aerodynamic method, with respective values of 0.77 W/m2 (stable), 45.13 W/m2 (unstable) and 0.63 W/m2 (neutral) versus 0.34 W/m2 (stable), 35.82 W/m2 (unstable) and 0.71 W/m2 (neutral).
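A hedged sketch of the aerodynamic (flux-gradient) method in its neutral-stability two-level form; the stability corrections the study applies for stable and unstable conditions are omitted, and all numbers are hypothetical.

```python
import math

RHO, CP, KARMAN = 1.2, 1005.0, 0.4   # assumed air density, cp, von Karman

def sensible_heat_aerodynamic(u1, u2, t1, t2, z1, z2):
    """Neutral-stability aerodynamic sensible heat flux from a two-level
    profile: H = rho * cp * k^2 * (u2 - u1) * (t1 - t2) / ln(z2/z1)^2.

    u1, u2: wind speed (m/s) and t1, t2: air temperature (K) at heights
    z1 < z2 (m). Stability corrections are deliberately omitted here.
    """
    denom = math.log(z2 / z1) ** 2
    return RHO * CP * KARMAN ** 2 * (u2 - u1) * (t1 - t2) / denom

# Warmer air near the canopy than aloft -> upward (positive) heat flux.
H = sensible_heat_aerodynamic(u1=1.0, u2=2.5, t1=303.0, t2=301.5,
                              z1=2.0, z2=10.0)
```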
International Nuclear Information System (INIS)
Wasastjerna, F.; Lux, I.
1980-03-01
A transmission probability method implemented in the program TPHEX is described. This program was developed for the calculation of neutron flux distributions in hexagonal light water reactor fuel assemblies. The accuracy appears to be superior to diffusion theory, and the computation time is shorter than that of the collision probability method. (author)
Zhu, Dianwen; Zhang, Wei; Zhao, Yue; Li, Changqing
2016-03-01
Dynamic fluorescence molecular tomography (FMT) has the potential to quantify physiological or biochemical information, known as pharmacokinetic parameters, which are important for cancer detection and for drug development and delivery. To image these parameters, there are indirect methods, which are easier to implement but tend to provide images with low signal-to-noise ratio, and direct methods, which model all the measurement noise together and are statistically more efficient. Direct reconstruction methods in dynamic FMT have attracted much attention recently. However, the coupling of tomographic image reconstruction with the nonlinearity of kinetic parameter estimation, which arises from the compartment modeling, imposes a huge computational burden on the direct reconstruction of the kinetic parameters. In this paper, we propose to take advantage of both the direct and indirect reconstruction ideas through a variable splitting strategy under the augmented Lagrangian framework. Each iteration of the direct reconstruction is split into two steps: the dynamic FMT image reconstruction and the node-wise nonlinear least squares fitting of the pharmacokinetic parameter images. Through numerical simulation studies, we have found that the proposed algorithm can achieve good reconstruction results within a small amount of time. This will be the first step toward combined dynamic PET and FMT imaging in the future.
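The node-wise fitting step can be illustrated with a deliberately simplified one-compartment model y = A·exp(−k·t); the paper's compartment models are richer and are fitted with nonlinear least squares, but the per-node structure is the same. On noiseless data the log-linearized fit below recovers the parameters exactly.

```python
import math

def fit_monoexponential(t, y):
    """Fit y = A * exp(-k * t) by linear least squares on log(y).

    Stand-in for the node-wise kinetic fitting step: log-linearization
    turns the model into ln(y) = ln(A) - k*t, solvable in closed form.
    """
    n = len(t)
    ly = [math.log(v) for v in y]
    tm = sum(t) / n
    lm = sum(ly) / n
    slope = (sum((ti - tm) * (li - lm) for ti, li in zip(t, ly))
             / sum((ti - tm) ** 2 for ti in t))
    intercept = lm - slope * tm
    return math.exp(intercept), -slope   # estimated A, k

# Noiseless synthetic time-activity curve for one image node.
t = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [2.0 * math.exp(-0.5 * ti) for ti in t]
A, k = fit_monoexponential(t, y)
```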
Performance Evaluation of Super-Resolution Reconstruction Methods on Real-World Data
Directory of Open Access Journals (Sweden)
L. J. van Vliet
2007-01-01
Full Text Available The performance of a super-resolution (SR) reconstruction method on real-world data is not easy to measure, especially as a ground truth (GT) is often not available. In this paper, a quantitative performance measure is used, based on triangle orientation discrimination (TOD). The TOD measure, simulating a real-observer task, is capable of determining the performance of a specific SR reconstruction method under varying conditions of the input data. It is shown that the performance of an SR reconstruction method on real-world data can be predicted accurately by measuring its performance on simulated data. This prediction of the performance on real-world data enables the optimization of the complete chain of a vision system, from camera setup and SR reconstruction up to image detection/recognition/identification. Furthermore, different SR reconstruction methods are compared to show that the TOD method is a useful tool to select a specific SR reconstruction method according to the imaging conditions (camera fill-factor, optical point-spread function (PSF), signal-to-noise ratio (SNR)).
Cox, Christopher
Low-order numerical methods are widespread in academic solvers and ubiquitous in industrial solvers due to their robustness and usability. High-order methods are less robust and more complicated to implement; however, they exhibit low numerical dissipation and have the potential to improve the accuracy of flow simulations at a lower computational cost when compared to low-order methods. This motivates our development of a high-order compact method using Huynh's flux reconstruction scheme for solving unsteady incompressible flow on unstructured grids. We use Chorin's classic artificial compressibility formulation with dual time stepping to solve unsteady flow problems. In 2D, an implicit non-linear lower-upper symmetric Gauss-Seidel scheme with backward Euler discretization is used to efficiently march the solution in pseudo time, while a second-order backward Euler discretization is used to march in physical time. We verify and validate implementation of the high-order method coupled with our implicit time stepping scheme using both steady and unsteady incompressible flow problems. The current implicit time stepping scheme is proven effective in satisfying the divergence-free constraint on the velocity field in the artificial compressibility formulation. The high-order solver is extended to 3D and parallelized using MPI. Due to its simplicity, time marching for 3D problems is done explicitly. The feasibility of using the current implicit time stepping scheme for large scale three-dimensional problems with high-order polynomial basis still remains to be seen. We directly use the aforementioned numerical solver to simulate pulsatile flow of a Newtonian blood-analog fluid through a rigid 180-degree curved artery model. One of the most physiologically relevant forces within the cardiovascular system is the wall shear stress. This force is important because atherosclerotic regions are strongly correlated with curvature and branching in the human vasculature, where the
Funamizu, Hideki; Onodera, Yusei; Aizu, Yoshihisa
2018-05-01
In this study, we report color quality improvement of reconstructed images in color digital holography using the speckle method and spectral estimation. In this technique, an object is illuminated by a speckle field, producing an object wave, while a plane wave is used as the reference wave. For three wavelengths, the interference patterns of the two coherent waves are recorded as digital holograms on an image sensor. The speckle fields are changed by moving a ground glass plate in the in-plane direction, and a number of holograms are acquired to average the reconstructed images. After the averaging process over images reconstructed from multiple holograms, we use the Wiener estimation method to obtain spectral transmittance curves in the reconstructed images. The color reproducibility of this method is demonstrated and evaluated using a Macbeth color chart film and stained onion cells.
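In its noise-free form, Wiener estimation recovers a spectrum from a few channel values as t̂ = C Aᵀ (A C Aᵀ)⁻¹ v, where C is a prior correlation matrix over wavelengths and A holds the channel sensitivities. The sensitivities and smoothness prior below are hypothetical stand-ins for the actual measured quantities.

```python
import numpy as np

rng = np.random.default_rng(0)

n_wl = 31  # number of wavelength samples (assumed)
# Smoothness prior on transmittance spectra: first-order Markov correlation.
rho = 0.98
C = rho ** np.abs(np.subtract.outer(np.arange(n_wl), np.arange(n_wl)))
# Hypothetical 3-channel sensitivity curves (rows), standing in for the
# combined camera/illumination response at the three wavelengths.
A = rng.random((3, n_wl))

# Noise-free Wiener estimation matrix: W = C A^T (A C A^T)^-1.
W = C @ A.T @ np.linalg.inv(A @ C @ A.T)

t_true = 0.5 + 0.4 * np.sin(np.linspace(0, np.pi, n_wl))  # test spectrum
v = A @ t_true        # the three recorded channel values
t_hat = W @ v         # estimated transmittance spectrum
```

By construction the estimate reproduces the recorded channel values exactly (A·t̂ = v); how close t̂ comes to the full spectrum depends on how well the prior matches real transmittances.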
International Nuclear Information System (INIS)
Chen, Zhenmao; Aoto, Kazumi; Kato, Syoichi
1999-07-01
In this report, reconstruction of magnetic charges induced by mechanical damage in a test piece of SUS304 stainless steel is performed as part of an effort to establish a passive nondestructive testing method based on inspection of the leakage magnetic field. The approach selected for solving this typically ill-posed inverse problem falls in the least-squares category. To cope with the ill-posedness of the system of equations, an iterative algorithm is adopted in which the choice of initial profile, the weight coefficients and the total number of iterations serve as means of regularization. Examples using simulated input data verify that the approach gives good reconstruction results for signals with a relatively high S/N ratio. To improve the robustness of the proposed method, a Galerkin procedure with basis functions chosen as Daubechies wavelets is also introduced for discretizing the governing equation. Comparison of the reconstruction results of the least-squares method with those using the wavelet discretization shows that the wavelet-based approach is more suitable for the inversion of noise-polluted signals. Reconstructions of 1-D and 2-D magnetic charges with the least-squares strategy, and of a 1-D problem with the wavelet-based method, are carried out from both simulated and measured magnetic field signals as validation of the proposed inversion strategy. (author)
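The regularized least-squares strategy can be sketched in its simplest Tikhonov form, minimizing ||Ax − b||² + λ||x||² via the normal equations. The smooth kernel below is an illustrative stand-in; the report's actual forward operator comes from the leakage-field physics.

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Regularized least squares: minimize ||A x - b||^2 + lam * ||x||^2,
    solved via the normal equations (A^T A + lam I) x = A^T b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Toy ill-posed forward model: a smooth kernel mapping a 1-D "magnetic
# charge" profile to field measurements (hypothetical, for illustration).
x_src = np.linspace(0, 1, 40)
x_obs = np.linspace(0, 1, 40)
A = 1.0 / (1.0 + 25.0 * np.subtract.outer(x_obs, x_src) ** 2)
charge = np.exp(-((x_src - 0.5) / 0.1) ** 2)          # true profile
b = A @ charge + 1e-3 * np.random.default_rng(1).standard_normal(40)

sol = tikhonov_solve(A, b, lam=1e-3)
```

The regularization weight λ plays the same stabilizing role that the initial profile, weight coefficients and iteration count play in the report's iterative scheme.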
Energy Technology Data Exchange (ETDEWEB)
Chen, Zhenmao; Aoto, Kazumi; Kato, Syoichi [Structure Safety Engineering Group, Oarai Engineering Center, Japan Nuclear Cycle Development Inst., Oarai, Ibaraki (Japan)
1999-07-01
Directory of Open Access Journals (Sweden)
D. J. Bolinius
2016-04-01
Full Text Available Semi-volatile persistent organic pollutants (POPs) cycle between the atmosphere and terrestrial surfaces; however, measuring fluxes of POPs between the atmosphere and other media is challenging. Sampling times of hours to days are required to accurately measure trace concentrations of POPs in the atmosphere, which rules out the use of the eddy covariance techniques that are used to measure gas fluxes of major air pollutants. An alternative, the modified Bowen ratio (MBR) method, has been used instead. In this study we used data from FLUXNET for CO2 and water vapor (H2O) to compare fluxes measured by eddy covariance to fluxes measured with the MBR method, using vertical concentration gradients in air derived from averaged data that simulate the long sampling times typically required to measure POPs. When concentration gradients are strong and fluxes are unidirectional, the MBR method and the eddy covariance method agree within a factor of 3 for CO2, and within a factor of 10 for H2O. To remain within the range of applicability of the MBR method, field studies should be carried out under conditions such that the direction of the net flux does not change during the sampling period. If that condition is met, the performance of the MBR method is strongly affected neither by the length of the sampling period nor by the use of a fixed value for the transfer coefficient.
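The MBR calculation itself is a one-line ratio; the sketch below uses hypothetical numbers, and the key assumption is that the trace gas and the reference gas share the same turbulent transfer coefficient between the two measurement heights.

```python
# Modified Bowen ratio (MBR) sketch: the flux of a trace gas x is inferred
# from a reference gas whose flux is measured directly (e.g. CO2 by eddy
# covariance), scaled by the ratio of their vertical concentration
# differences over the same height pair.

def mbr_flux(flux_ref, dc_ref, dc_x):
    """Flux of gas x given the reference-gas flux and the upper-minus-lower
    concentration differences of both gases at the two heights."""
    if dc_ref == 0:
        raise ValueError("reference gradient vanishes; MBR inapplicable")
    return flux_ref * (dc_x / dc_ref)

# Hypothetical numbers: CO2 flux of -8 (uptake, arbitrary flux units) with
# a +2.0 concentration difference aloft; the POP shows a -0.005 difference.
f_pop = mbr_flux(flux_ref=-8.0, dc_ref=2.0, dc_x=-0.005)
```

The sign of `f_pop` follows directly from the two gradients, which is why the method breaks down when the net flux direction flips mid-sample.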
Inter-comparison of different direct and indirect methods to determine radon flux from soil
International Nuclear Information System (INIS)
Grossi, C.; Vargas, A.; Camacho, A.; Lopez-Coto, I.; Bolivar, J.P.; Xia Yu; Conen, F.
2011-01-01
The physical and chemical characteristics of radon gas make it a good tracer for use in the application of atmospheric transport models. For this purpose the radon source needs to be known on a global scale and this is difficult to achieve by only direct experimental methods. However, indirect methods can provide radon flux maps on larger scales, but their reliability has to be carefully checked. It is the aim of this work to compare radon flux values obtained by direct and indirect methods in a measurement campaign performed in the summer of 2008. Different systems to directly measure radon flux from the soil surface and to measure the related parameters terrestrial γ dose and 226Ra activity in soil, for indirect estimation of radon flux, were tested. Four eastern Spanish sites with different geological and soil characteristics were selected: Teruel, Los Pedrones, Quintanar de la Orden and Madrid. The study shows the usefulness of both direct and indirect methods for obtaining radon flux data. Direct radon flux measurements by continuous and integrated monitors showed a coefficient of variation between 10% and 23%. At the same time, indirect methods based on correlations between 222Rn and terrestrial γ dose rate, or 226Ra activity in soil, provided results similar to the direct measurements, when these proxies were directly measured at the site. Larger discrepancies were found when proxy values were extracted from existing data bases. The participating members involved in the campaign study were the Institute of Energy Technology (INTE) of the Technical University of Catalonia (UPC), Huelva University (UHU), and Basel University (BASEL).
Inter-comparison of different direct and indirect methods to determine radon flux from soil
Energy Technology Data Exchange (ETDEWEB)
Grossi, C., E-mail: claudia.grossi@upc.ed [Institute of Energy (INTE), Technical University of Catalonia (UPC) (Spain); Vargas, A.; Camacho, A. [Institute of Energy (INTE), Technical University of Catalonia (UPC) (Spain); Lopez-Coto, I.; Bolivar, J.P. [University of Huelva (Spain); Xia Yu; Conen, F. [University of Basel (Switzerland)
2011-01-15
Combining two complementary micrometeorological methods to measure CH4 and N2O fluxes over pasture
Laubach, Johannes; Barthel, Matti; Fraser, Anitra; Hunt, John E.; Griffith, David W. T.
2016-03-01
New Zealand's largest industrial sector is pastoral agriculture, giving rise to a large fraction of the country's emissions of methane (CH4) and nitrous oxide (N2O). We designed a system to continuously measure CH4 and N2O fluxes at the field scale on two adjacent pastures that differed with respect to management. At the core of this system was a closed-cell Fourier transform infrared (FTIR) spectrometer, which measured the mole fractions of CH4, N2O and carbon dioxide (CO2) at two heights at each site. In parallel, CO2 fluxes were measured using eddy-covariance instrumentation. We applied two different micrometeorological ratio methods to infer the CH4 and N2O fluxes from their respective mole fractions and the CO2 fluxes. The first is a variant of the flux-gradient method, where it is assumed that the turbulent diffusivities of CH4 and N2O equal that of CO2. This method was reliable when the CO2 mole-fraction difference between heights was at least 4 times greater than the FTIR's resolution of differences. For the second method, the temporal increases of mole fractions in the stable nocturnal boundary layer, which are correlated for concurrently emitted gases, are used to infer the unknown fluxes of CH4 and N2O from the known flux of CO2. This method was sensitive to "contamination" from trace gas sources other than the pasture of interest and therefore required careful filtering. With both methods combined, estimates of mean daily CH4 and N2O fluxes were obtained for 56 % of days at one site and 73 % at the other. Both methods indicated both sites as net sources of CH4 and N2O. Mean emission rates for 1 year at the unfertilised, winter-grazed site were 8.9 (±0.79) nmol CH4 m-2 s-1 and 0.38 (±0.018) nmol N2O m-2 s-1. During the same year, mean emission rates at the irrigated, fertilised and rotationally grazed site were 8.9 (±0.79) nmol CH4 m-2 s-1 and 0.58 (±0.020) nmol N2O m-2 s-1. At this site, the N2O emissions amounted to 1.21 (±0.15) % of the nitrogen
Energy Technology Data Exchange (ETDEWEB)
Seiz, Julie Burger [Union College, Schenectady, NY (United States)
1997-04-01
This paper presents a review of the Direct Stator Flux Field Orientation control method. This method can be used to control an induction motor's torque and flux directly, and is the application of interest for this thesis. The control method is implemented without the traditional feedback loops and associated hardware. The stator voltage vector is predicted by mathematical calculation twice per switching period; the switching period is fixed throughout the analysis. The three-phase inverter duty cycle necessary to control the torque and flux of the induction machine is determined by the voltage space vector Pulse Width Modulation (PWM) technique. Transient performance of either the flux or the torque requires an alternate modulation scheme, which is also addressed in this thesis. A block diagram of this closed-loop system is provided. 22 figs., 7 tabs.
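The space vector PWM duty-cycle calculation for a reference vector inside one 60-degree sector can be sketched as follows. The normalization m = √3·V_ref/V_dc used here is one common convention; other formulations differ by constant factors, and the thesis's exact implementation details are not assumed.

```python
import math

def svpwm_duties(v_ref, theta, v_dc):
    """Duty cycles of the two adjacent active vectors and the zero vector
    for a reference voltage vector inside one sector.

    v_ref: reference vector magnitude (V), theta: angle within the sector
    (0..pi/3 rad), v_dc: DC-link voltage (V).
    """
    m = math.sqrt(3.0) * v_ref / v_dc          # modulation index (assumed form)
    d1 = m * math.sin(math.pi / 3.0 - theta)   # first active vector share
    d2 = m * math.sin(theta)                   # second active vector share
    d0 = 1.0 - d1 - d2                         # zero-vector (freewheel) share
    return d1, d2, d0

# Mid-sector example at half the linear-range limit of a 400 V DC link.
d1, d2, d0 = svpwm_duties(v_ref=0.5 * 400 / math.sqrt(3.0),
                          theta=math.pi / 6.0, v_dc=400.0)
```

The three shares always sum to one switching period; d0 shrinking to zero marks the edge of the linear modulation range.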
Langston, T.; Fonstad, M. A.
2014-12-01
The Willamette is a gravel-bed river that drains ~28,800 km^2 between the Coast Range and Cascade Range in northwestern Oregon before entering the Columbia River near Portland. In the last 150 years, natural and anthropogenic drivers have altered the sediment transport regime, drastically reducing the geomorphic complexity of the river. Previously dynamic multi-threaded reaches have transformed into stable single channels to the detriment of ecosystem diversity and productivity. Flow regulation by flood-control dams, bank revetments, and conversion of riparian forests to agriculture have been key drivers of channel change. To date, little has been done to quantitatively describe temporal and spatial trends of sediment transport in the Willamette. This knowledge is critical for understanding how modern processes shape landforms and habitats. The goal of this study is to describe large-scale temporal and spatial trends in the sediment budget by reconstructing historical topography and bathymetry from aerial imagery. The area of interest for this project is a reach of the Willamette stretching from the confluence of the McKenzie River to the town of Peoria. While this reach remains one of the most dynamic sections of the river, it has exhibited a great loss of geomorphic complexity. Aerial imagery for this section of the river is available from USDA and USACE projects dating back to the 1930s. Above-water surface elevations are extracted using the Imagine Photogrammetry package in ERDAS. Bathymetry is estimated using a method known as Hydraulic Assisted Bathymetry, in which hydraulic parameters are used to develop a regression between water depth and pixel values; from this, pixel values are converted to depth below the water surface. Merged together, the topography and bathymetry produce a spatially continuous digital elevation model of the geomorphic floodplain. Volumetric changes in sediment stored along the study reach are then estimated for different historic periods.
A Survey on Methods for Reconstructing Surfaces from Unorganized Point Sets
Directory of Open Access Journals (Sweden)
Vilius Matiukas
2011-08-01
Full Text Available This paper addresses the issue of reconstructing and visualizing surfaces from unorganized point sets. These can be acquired using different techniques, such as 3D laser scanning, computerized tomography, magnetic resonance imaging and multi-camera imaging. The problem of reconstructing surfaces from unorganized point sets is common to many diverse areas, including computer graphics, computer vision, computational geometry and reverse engineering. The paper presents three alternative methods that all use variations in complementary cones to triangulate and reconstruct the tested 3D surfaces. The article evaluates and contrasts the three alternatives. Article in English
Analysis on the reconstruction accuracy of the Fitch method for inferring ancestral states
Directory of Open Access Journals (Sweden)
Grünewald Stefan
2011-01-01
Full Text Available Abstract. Background: As one of the most widely used parsimony methods for ancestral reconstruction, the Fitch method minimizes the total number of hypothetical substitutions along all branches of a tree to explain the evolution of a character. Due to the extensive usage of this method, studying the reconstruction accuracy of the Fitch method has become a scientific endeavor in recent years. However, most studies are restricted to 2-state evolutionary models, and a study of higher-state models is needed, since DNA sequences take the form of 4-state series and protein sequences even have 20 states. Results: In this paper, the ambiguous and unambiguous reconstruction accuracies of the Fitch method are studied for N-state evolutionary models. Given an arbitrary phylogenetic tree, a recurrence system is first presented to calculate the two accuracies iteratively. As the complete binary tree and the comb-shaped tree are the two extremal evolutionary tree topologies with respect to balance, we focus on the reconstruction accuracies on these two topologies and analyze their asymptotic properties. Then, 1000 Yule trees with 1024 leaves are generated and analyzed to simulate real evolutionary scenarios. It is known that more taxa do not necessarily increase the reconstruction accuracies under 2-state models; this result is also tested under N-state models. Conclusions: In a large tree with many leaves, the reconstruction accuracies of using all taxa are sometimes less than those of using a leaf subset under N-state models. For complete binary trees, there always exists an equilibrium interval [a, b] of the conservation probability in which the limiting ambiguous reconstruction accuracy equals the probability of randomly picking a state. The value b decreases as the number of states increases, and it seems to converge. When the conservation probability is greater than b, the reconstruction accuracies of the Fitch method increase rapidly. The reconstruction
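The bottom-up pass of the Fitch method is compact enough to sketch directly: child state sets are intersected where they agree, and unioned (counting one substitution) where they do not. The tree and states below are a hypothetical four-leaf example.

```python
def fitch(tree, leaf_states):
    """Bottom-up pass of the Fitch small-parsimony algorithm on a rooted
    binary tree.

    tree: a leaf name (str) or a (left_subtree, right_subtree) tuple.
    leaf_states: dict mapping leaf name -> observed character state.
    Returns (candidate state set at this node, substitution count below it).
    """
    if isinstance(tree, str):
        return {leaf_states[tree]}, 0
    left, right = tree
    s1, c1 = fitch(left, leaf_states)
    s2, c2 = fitch(right, leaf_states)
    inter = s1 & s2
    if inter:                        # children agree: intersect, no change
        return inter, c1 + c2
    return s1 | s2, c1 + c2 + 1      # disagreement: union, one substitution

# Hypothetical tree ((A,B),(C,D)) with three leaves in state 'x', one in 'y'.
tree = (("A", "B"), ("C", "D"))
states = {"A": "x", "B": "x", "C": "x", "D": "y"}
root_set, score = fitch(tree, states)
```

A subsequent top-down pass resolves ambiguous sets to concrete ancestral states; the ambiguity in that resolution is exactly what separates the paper's "ambiguous" and "unambiguous" accuracies.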
Evaluation of image reconstruction methods for 123I-MIBG-SPECT. A rank-order study
International Nuclear Information System (INIS)
Soederberg, Marcus; Mattsson, Soeren; Oddstig, Jenny; Uusijaervi-Lizana, Helena; Leide-Svegborn, Sigrid; Valind, Sven; Thorsson, Ola; Garpered, Sabine; Prautzsch, Tilmann; Tischenko, Oleg
2012-01-01
Background: There is an opportunity to improve the image quality and lesion detectability in single photon emission computed tomography (SPECT) by choosing an appropriate reconstruction method and optimal parameters for the reconstruction. Purpose: To optimize the use of the Flash 3D reconstruction algorithm in terms of the equivalent iteration (EI) number (the number of subsets times the number of iterations) and to compare it with two recently developed reconstruction algorithms, ReSPECT and orthogonal polynomial expansion on disc (OPED), for application to 123I-metaiodobenzylguanidine (MIBG) SPECT. Material and Methods: Eleven adult patients underwent SPECT 4 h and 14 patients 24 h after injection of approximately 200 MBq 123I-MIBG using a Siemens Symbia T6 SPECT/CT. Images were reconstructed from raw data using the Flash 3D algorithm at eight different EI numbers. The images were ranked by three experienced nuclear medicine physicians according to their overall impression of the image quality. The resulting optimal images were then compared in one further visual comparison with images reconstructed using the ReSPECT and OPED algorithms. Results: The optimal EI number for Flash 3D was determined to be 32 for acquisition 4 h and 24 h after injection. The average rank order (best first) for the different reconstructions for acquisition after 4 h was: Flash 3D 32 > ReSPECT > Flash 3D 64 > OPED, and after 24 h: Flash 3D 16 > ReSPECT > Flash 3D 32 > OPED. A fair level of inter-observer agreement concerning the optimal EI number and reconstruction algorithm was obtained, which may be explained by different individual preferences as to what constitutes appropriate image quality. Conclusion: Using the Siemens Symbia T6 SPECT/CT and the specified acquisition parameters, Flash 3D 32 (4 h) and Flash 3D 16 (24 h), followed by ReSPECT, were assessed to be the preferable reconstruction algorithms in visual assessment of 123I-MIBG images
Comparing and improving reconstruction methods for proxies based on compositional data
Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.
2017-12-01
Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500-year-long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their resulting means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data
Evaluation of time-efficient reconstruction methods in digital breast tomosynthesis
International Nuclear Information System (INIS)
Svahn, T.M.; Houssami, N.
2015-01-01
Three reconstruction algorithms for digital breast tomosynthesis were compared in this article: filtered back-projection (FBP), iterative adapted FBP and maximum likelihood-convex iterative algorithms. Quality metrics such as signal-difference-to-noise ratio, normalised line-profiles and artefact-spread function were used for evaluation of reconstructed tomosynthesis images. The iterative-based methods offered increased image quality in terms of higher detectability and reduced artefacts, which will be further examined in clinical images. (authors)
Energy Technology Data Exchange (ETDEWEB)
Hong Luo; Luquing Luo; Robert Nourgaliev; Vincent Mousseau
2009-06-01
A reconstruction-based discontinuous Galerkin (DG) method is presented for the solution of the compressible Euler equations on arbitrary grids. By taking advantage of readily available and yet valuable information, namely the derivatives, in the context of the discontinuous Galerkin methods, a solution polynomial of one degree higher is reconstructed using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The resulting DG method can be regarded as an improvement of a recovery-based DG method in the sense that it shares the same desirable features as the recovery-based DG method, such as high accuracy and efficiency, and yet overcomes some of its shortcomings, such as a lack of flexibility, compactness, and robustness. The developed DG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate the accuracy and efficiency of the method. The numerical results indicate that this reconstructed DG method is able to obtain a third-order accurate solution at a slightly higher cost than the underlying second-order DG method and provides an increase in performance over the third-order DG method in terms of computing time and storage requirement.
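The least-squares reconstruction step can be illustrated with a hedged one-dimensional sketch (the paper works on arbitrary multidimensional grids; the uniform cell width, the mean-preserving quadratic basis, and the function name below are illustrative assumptions, not the authors' implementation). Given the P1 DG data (cell average and slope) of a cell and its two face neighbors, the single quadratic coefficient is over-determined and fitted by least squares:

```python
import numpy as np

def reconstruct_p2(u_i, s_i, neighbors):
    """Least-squares P1 -> P2 reconstruction for one 1D cell.

    u_i, s_i  : cell average and slope of the target cell (its P1 DG data)
    neighbors : list of (d, u_j, s_j) for the face neighbors, d = x_j - x_i
    Fits the curvature c of the mean-preserving quadratic
        u(x) = u_i + s_i*(x - x_i) + (c/2)*((x - x_i)**2 - h**2/12)
    so that the neighbors' cell averages and slopes are matched in the
    least-squares sense (4 equations, 1 unknown on a 1D stencil).
    """
    rows, rhs = [], []
    for d, u_j, s_j in neighbors:
        rows.append(0.5 * d ** 2)          # neighbor cell-average condition
        rhs.append(u_j - u_i - s_i * d)
        rows.append(d)                     # neighbor slope condition
        rhs.append(s_j - s_i)
    A = np.array(rows).reshape(-1, 1)
    c, *_ = np.linalg.lstsq(A, np.array(rhs), rcond=None)
    return c[0]
```

For data sampled from u(x) = x^2 the fit recovers the exact curvature c = 2, since a quadratic is exactly representable by the reconstructed polynomial.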
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
AIR Tools II: algebraic iterative reconstruction methods, improved implementation
DEFF Research Database (Denmark)
Hansen, Per Christian; Jørgensen, Jakob Sauer
2017-01-01
with algebraic iterative methods and their convergence properties. The present software is a much expanded and improved version of the package AIR Tools from 2012, based on a new modular design. In addition to improved performance and memory use, we provide more flexible iterative methods, a column-action method...
Application of the pseudo-harmonic method for calculating perturbed flux
International Nuclear Information System (INIS)
Silva, F.C. da; Rotenberg, S.; Thome Filho, Z.D.
1985-01-01
A semi-analytical test is performed to verify the potential of the pseudo-harmonic method for calculating the perturbed neutron flux and eigenvalues. The case in which the pseudo-harmonics are Bessel functions was chosen for the test to facilitate the analysis. (M.C.K.) [pt
A multigrid Newton-Krylov method for flux-limited radiation diffusion
International Nuclear Information System (INIS)
Rider, W.J.; Knoll, D.A.; Olson, G.L.
1998-01-01
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques
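A minimal sketch of the nonlinear solve on a 1D model problem: SciPy's `newton_krylov` plays the role of the Newton-Krylov iteration, with `D(u) = u` as a stand-in for a flux-limited diffusion coefficient; the multigrid Picard preconditioner studied in the paper is omitted here, so this is an illustration of the outer iteration only.

```python
import numpy as np
from scipy.optimize import newton_krylov

n = 64
h = 1.0 / (n + 1)
q = np.ones(n)  # uniform source term

def residual(u):
    """Residual of -d/dx(D(u) du/dx) = q with u = 1 on both boundaries.
    D(u) = u is a stand-in for a flux-limited diffusion coefficient."""
    ug = np.concatenate(([1.0], u, [1.0]))  # attach Dirichlet boundary values
    D = 0.5 * (ug[1:] + ug[:-1])            # face-centred diffusion coefficient
    flux = D * np.diff(ug) / h              # D(u) du/dx at the cell faces
    return -np.diff(flux) / h - q

# Jacobian-free Newton-Krylov solve (inner Krylov solver: LGMRES)
u = newton_krylov(residual, np.ones(n), method='lgmres', f_tol=1e-9)
```

In a production radiation-diffusion code the inner Krylov solver would be preconditioned, e.g. with a multigrid Picard linearization as the authors describe.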
HNO3 fluxes to a deciduous forest derived using gradient and REA methods
DEFF Research Database (Denmark)
Pryor, S.C.; Barthelmie, R.J.; Jensen, B.
2002-01-01
Summertime nitric acid concentrations over a deciduous forest in the midwestern United States are reported, which range between 0.36 and 3.3 mug m(-3). Fluxes to the forest are computed using the relaxed eddy accumulation technique and gradient methods. In accord with previous studies, the result...
On-line reconstruction of in-core power distribution by harmonics expansion method
International Nuclear Information System (INIS)
Wang Changhui; Wu Hongchun; Cao Liangzhi; Yang Ping
2011-01-01
Highlights: → A harmonics expansion method for the on-line in-core power reconstruction is proposed. → A harmonics data library is pre-generated off-line and a code named COMS is developed. → Numerical results show that the maximum relative error of the reconstruction is less than 5.5%. → This method has a high computational speed compared to traditional methods. - Abstract: Fixed in-core detectors are the most suitable for real-time monitoring of in-core power distributions in pressurized water reactors (PWRs). In this paper, a harmonics expansion method is used to reconstruct the in-core power distribution of a PWR on-line. In this method, the in-core power distribution is expanded by the harmonics of one reference case. The expansion coefficients are calculated using signals provided by fixed in-core detectors. To conserve computing time and improve reconstruction precision, a harmonics data library containing the harmonics of different reference cases is constructed. When the in-core power distribution is reconstructed on-line, the two closest reference cases are retrieved from the harmonics data library and the expanded harmonics are produced by interpolation. The Unit 1 reactor of DayaBay Nuclear Power Plant (DayaBay NPP) in China is considered for verification. The maximum relative error between the measurement and reconstruction results is less than 5.5%, and the computing time is about 0.53 s for a single reconstruction, indicating that this method is suitable for the on-line monitoring of PWRs.
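The reconstruction step itself — expanding the power distribution in precomputed harmonics and fitting the expansion coefficients to the fixed in-core detector signals — can be sketched as follows (the random harmonics library and the block-averaging detector response below are placeholder assumptions, not DayaBay data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_harm, n_det = 200, 5, 40

Phi = rng.standard_normal((n_nodes, n_harm))   # pre-generated harmonics library
R = np.zeros((n_det, n_nodes))                 # detector response: local averages
for i in range(n_det):
    R[i, 5 * i : 5 * i + 5] = 0.2

a_true = np.array([1.0, 0.3, -0.2, 0.1, 0.05]) # "true" expansion coefficients
y = R @ (Phi @ a_true)                         # fixed in-core detector signals

# Least-squares fit of the expansion coefficients from the detector signals,
# then reconstruction of the full in-core power distribution.
a_hat, *_ = np.linalg.lstsq(R @ Phi, y, rcond=None)
power = Phi @ a_hat
```

With more detectors than harmonics the small least-squares system is cheap to solve, which is what makes the on-line (sub-second) reconstruction feasible.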
Energy Technology Data Exchange (ETDEWEB)
Ridel, M
2002-04-01
The DØ experiment is located at the Fermi National Accelerator Laboratory on the Tevatron proton-antiproton collider. Run II started in March 2001 after 5 years of shutdown and will allow DØ to extend its reach in searches for squarks and gluinos, particles predicted by supersymmetry. In this work, I focused on their decays that lead to signatures with jets and missing transverse energy. Before the data taking started, I studied both software and hardware ways to improve the energy measurement, which is crucial for jets and for missing transverse energy. Energy deposits in the calorimeter have been clustered with cellNN at the cell level instead of the tower level. Efforts have been made to take advantage of the calorimeter granularity to aim at the reconstruction of individual particle showers. CellNN starts from the third layer, which has four times the granularity of the other layers. The longitudinal information has been used to detect overlaps of electromagnetic and hadronic showers. Then, clusters and reconstructed tracks from the central detectors are combined and their energies compared; the better measurement is kept. This procedure improves the reconstruction of the energy flow of each event. The efficiency of the current calorimeter triggers has been determined. They have been used to perform a Monte Carlo search analysis of squarks and gluinos in the mSUGRA framework. The lower bound that DØ will be able to put on squark and gluino masses with a 100 pb{sup -1} integrated luminosity has been predicted. The use of the energy flow instead of standard reconstruction tools will improve this lower limit. (author)
A Method for the neutron flux determination during the activation process
International Nuclear Information System (INIS)
Maayouf, R.M.A.; Khalil, M.I.
2000-01-01
The present work deals with an accurate method for determining the neutron flux emitted by a neutron source during experimental measurements. Accordingly, a suitable detector, followed by a preamplifier and amplifier, is connected to a data acquisition system designed specifically for this purpose, and the number of neutrons detected during every sampling period is stored in the PC. The historical file can be used to compute the average or the integral flux during any time period, taking into account the detector efficiency, the geometrical arrangement and the amplification gain
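A hedged sketch of the flux computation from the stored count history (the function name and the simple count-to-flux conversion are illustrative; the actual conversion depends on the detector and geometry details of the setup):

```python
def flux_history(counts, dt, efficiency, geometry_factor):
    """Convert a stored neutron count history into a flux history.

    counts          : neutrons detected in each sampling period
    dt              : sampling period length (s)
    efficiency      : detector efficiency (counts per neutron crossing)
    geometry_factor : geometrical acceptance of the arrangement
    Returns (per-period flux, time-averaged flux, time-integrated flux).
    """
    flux = [c / (dt * efficiency * geometry_factor) for c in counts]
    integral = sum(flux) * dt
    average = integral / (len(counts) * dt)
    return flux, average, integral
```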
A Method to Assess Flux Hazards at CSP Plants to Reduce Avian Mortality
Energy Technology Data Exchange (ETDEWEB)
Ho, Clifford K.; Wendelin, Timothy; Horstman, Luke; Yellowhair, Julius
2017-06-27
A method to evaluate avian flux hazards at concentrating solar power plants (CSP) has been developed. A heat-transfer model has been coupled to simulations of the irradiance in the airspace above a CSP plant to determine the feather temperature along prescribed bird flight paths. Probabilistic modeling results show that the irradiance and assumed feather properties (thickness, absorptance, heat capacity) have the most significant impact on the simulated feather temperature, which can increase rapidly (hundreds of degrees Celsius in seconds) depending on the parameter values. The avian flux hazard model is being combined with a plant performance model to identify alternative heliostat standby aiming strategies that minimize both avian flux hazards and negative impacts on plant performance.
Schneider, Harold
1959-01-01
This method is investigated for semi-infinite multiple-slab configurations of arbitrary width, composition, and source distribution. Isotropic scattering in the laboratory system is assumed. Isotropic scattering implies that the fraction of neutrons scattered in the i(sup th) volume element or subregion that will make their next collision in the j(sup th) volume element or subregion is the same for all collisions. These so-called "transfer probabilities" between subregions are calculated and used to obtain successive-collision densities from which the flux and transmission probabilities directly follow. For a thick slab with little or no absorption, a successive-collisions technique proves impractical because an unreasonably large number of collisions must be followed in order to obtain the flux. Here the appropriate integral equation is converted into a set of linear simultaneous algebraic equations that are solved for the average total flux in each subregion. When ordinary diffusion theory applies with satisfactory precision in a portion of the multiple-slab configuration, the problem is solved by ordinary diffusion theory, but the flux is plotted only in the region of validity. The angular distribution of neutrons entering the remaining portion is determined from the known diffusion flux and the remaining region is solved by higher order theory. Several procedures for applying the numerical method are presented and discussed. To illustrate the calculational procedure, a symmetrical slab in a vacuum is solved by the numerical, Monte Carlo, and P(sub 3) spherical harmonics methods. In addition, an unsymmetrical double-slab problem is solved by the numerical and Monte Carlo methods. The numerical approach proved faster and more accurate in these examples. Adaptation of the method to anisotropic scattering in slabs is indicated, although no example is included in this paper.
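The linear-system variant described above — replacing the successive-collision sweep by a direct solve — can be sketched for a hypothetical three-subregion slab (the transfer probabilities and scattering ratio below are made-up illustrative numbers, not a real configuration):

```python
import numpy as np

# Hypothetical three-subregion slab. T[i, j] is the "transfer probability"
# that a neutron scattered in subregion j makes its next collision in
# subregion i; c is the mean number of secondaries per collision.
T = np.array([[0.50, 0.20, 0.05],
              [0.20, 0.50, 0.20],
              [0.05, 0.20, 0.50]])
c = 0.9
S = np.array([1.0, 0.0, 0.0])  # first-collision source density

# The total collision density F satisfies F = S + c * T @ F. Rather than
# following successive collisions (impractically slow for thick, weakly
# absorbing slabs), solve the equivalent linear system (I - c T) F = S.
F = np.linalg.solve(np.eye(3) - c * T, S)
```

The average flux in each subregion then follows from the collision densities and the subregion cross sections, as in the paper.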
Sheu, Yae-Lin; Wang, Weichung; Hung, Yukai; Li, Pai-Chi
2010-02-01
Photoacoustic reconstruction for linear scanning geometry includes the delay-and-sum method, the spectral-domain method and the time-domain based method. In practice, data collection using the planar detection geometry is not full-view, causing the details of the reconstructed object to be blurred and distorted. In addition to the exact formulation, we adopt a heuristic reconstruction method. In this paper, we demonstrate photoacoustic reconstruction for linear scanning geometry by formulating the image reconstruction as an optimization problem, and solve the problem with the particle swarm optimization (PSO) method. In this method, first we guess the initial optical energy distribution. According to the photoacoustic model, described by the Helmholtz equation, the generated photoacoustic wave can be collected with planar detection geometry. The spherical Radon transform is adopted for the simulation of the arbitrarily guessed optical energy distribution. Next we compare the collected signals generated from the guessed optical energy distribution with the measured signals by the sum of squared differences. By minimizing the error sum among various guesses, the initial optical energy distribution is obtained. In this formulation, no limited-view problem is encountered. Guessing the initial distribution efficiently such that the sum of the squared differences is minimized is an optimization problem whose dimension is the size of the initial optical energy distribution. PSO is a derivative-free, population-based stochastic method that has been used to solve various optimization problems due to its simplicity and efficiency. The high computational cost arising from the large number of particles required can be alleviated with the use of graphics processing units (GPUs). The proposed reconstruction method based on the PSO algorithm along with the spherical Radon transform is implemented on an NVIDIA Tesla C1060 GPU.
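A minimal global-best PSO of the kind described, applied to a toy error-sum objective in place of the full spherical-Radon forward model (all parameter values are conventional PSO defaults, not those of the paper, and the 3-dimensional "distribution" is a stand-in for the full-size image):

```python
import numpy as np

def pso(objective, dim, n_particles=40, iters=200, seed=0):
    """Minimal global-best particle swarm optimisation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # candidate solutions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()          # global best
    w, c1, c2 = 0.7, 1.5, 1.5                       # inertia, cognitive, social
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, pbest_f.min()

# Toy stand-in for the photoacoustic error sum: squared differences between
# "measured" signals and the signals predicted from a guessed distribution.
target = np.array([0.3, -0.5, 0.8])
best, err = pso(lambda p: float(np.sum((p - target) ** 2)), dim=3)
```

In the paper the per-particle objective evaluation (the spherical Radon transform of each guessed distribution) dominates the cost, which is why it is offloaded to the GPU.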
International Nuclear Information System (INIS)
Klein, Philipp; Herold, Frank
2016-01-01
Computed tomography (CT) is one of the main imaging techniques in the field of non-destructive testing. Recently, industrial robots have been used to manipulate the object during the whole CT scan, instead of just placing the object on a standard turntable as was previously usual in industrial CT. Using industrial robots for object manipulation in CT systems provides an increase in spatial freedom and therefore more flexibility for various applications. For example, complete CT trajectories satisfying the Tuy-Smith theorem are realised more easily than with conventional manipulators. These advantages are accompanied by a loss of positioning precision, caused by mechanical limitations of the robotic systems. In this article we will present a comparison of established reconstruction methods for CT with industrial robots using a so-called Automatic Object Position Recognition (AOPR). AOPR is a new automatic method which improves the position accuracy online by using a priori information about fixed markers in space. The markers are used to reconstruct the position of the object during each image acquisition. These more precise positions lead to a higher quality of the reconstructed volume after the image reconstruction. We will study the image quality of several different reconstruction techniques. For example, we will reconstruct real robot-CT datasets by filtered back-projection (FBP), the simultaneous algebraic reconstruction technique (SART) or Siemens's theoretically exact reconstruction (TXR). Each time, we will evaluate the datasets with and without AOPR and will present the resulting image quality. Moreover, we will measure the computation time of AOPR to prove that we still fulfil the real-time conditions.
A low error reconstruction method for confocal holography to determine 3-dimensional properties
International Nuclear Information System (INIS)
Jacquemin, P.B.; Herring, R.A.
2012-01-01
A confocal holography microscope developed at the University of Victoria uniquely combines holography with a scanning confocal microscope to non-intrusively measure fluid temperatures in three dimensions (Herring, 1997), (Abe and Iwasaki, 1999), (Jacquemin et al., 2005). The Confocal Scanning Laser Holography (CSLH) microscope was built and tested to verify the concept of 3D temperature reconstruction from scanned holograms. The CSLH microscope used a focused laser to non-intrusively probe a heated fluid specimen. The focused beam probed the specimen instead of a collimated beam in order to obtain different phase-shift data for each scan position; a collimated beam would produce the same information when scanning along the optical propagation z-axis. No rotational scanning mechanisms were used in the CSLH microscope, which restricted the scan angle to the cone angle of the probe beam. Limited viewing-angle scanning from a single viewpoint window posed a challenge for tomographic 3D reconstruction. The reconstruction matrices were either singular or ill-conditioned, making reconstruction impossible or subject to significant error. Establishing boundary conditions with a particular scanning geometry resulted in a method of reconstruction with low error referred to as “wily”. The wily reconstruction method can be applied to microscopy situations requiring 3D imaging where there is a single viewpoint window, a probe beam with high numerical aperture, and specified boundary conditions for the specimen. The issues and progress of the wily algorithm for the CSLH microscope are reported herein. -- Highlights: ► Evaluation of an optical confocal holography device to measure 3D temperature of a heated fluid. ► Processing of multiple holograms containing the cumulative refractive index through the fluid. ► Reconstruction issues due to restricting angular scanning to the numerical aperture of the beam. ► Minimizing tomographic reconstruction error by defining boundary
A new tracer technique for monitoring groundwater fluxes: the Finite Volume Point Dilution Method.
Brouyère, Serge; Batlle-Aguilar, Jordi; Goderniaux, Pascal; Dassargues, Alain
2008-01-28
Quantification of pollutant mass fluxes is essential for assessing the impact of contaminated sites on their surrounding environment, particularly on adjacent surface water bodies. In this context, it is essential to quantify but also to be able to monitor the variations with time of Darcy fluxes in relation with changes in hydrogeological conditions and groundwater - surface water interactions. A new tracer technique is proposed that generalizes the single-well point dilution method to the case of finite volumes of tracer fluid and water flush. It is called the Finite Volume Point Dilution Method (FVPDM). It is based on an analytical solution derived from a mathematical model proposed recently to accurately model tracer injection into a well. Using a non-dimensional formulation of the analytical solution, a sensitivity analysis is performed on the concentration evolution in the injection well, according to tracer injection conditions and well-aquifer interactions. Based on this analysis, optimised field techniques and interpretation methods are proposed. The new tracer technique is easier to implement in the field than the classical point dilution method while it further allows monitoring temporal changes of the magnitude of estimated Darcy fluxes, which is not the case for the former technique. The new technique was applied to two experimental sites with contrasting objectives, geological and hydrogeological conditions, and field equipment facilities. In both cases, field tracer concentrations monitored in the injection wells were used to fit the calculated modelled concentrations by adjusting the apparent Darcy flux crossing the well screens. Modelling results are very satisfactory and indicate that the methodology is efficient and accurate, with a wide range of potential applications in different environments and experimental conditions, including the monitoring with time of changes in Darcy fluxes.
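For contrast with the FVPDM, the classical single-well point dilution estimate that it generalizes can be sketched as follows (the well-mixed exponential dilution model and the flow-distortion factor `alpha` are standard assumptions of that classical method, not the FVPDM formulation from the paper):

```python
import math

def darcy_flux_pdm(c0, c_t, t, volume, area, alpha=2.0):
    """Classical single-well point-dilution estimate of the apparent Darcy flux.

    Assumes the well-mixed exponential dilution model
        c(t) = c0 * exp(-alpha * q * area * t / volume)
    c0, c_t : tracer concentration in the well at time 0 and at time t
    volume  : mixing volume of the screened section (m^3)
    area    : vertical cross-section of the screen normal to flow (m^2)
    alpha   : borehole flow-distortion factor (~2 for a simple open screen)
    Returns the apparent Darcy flux q (m/s).
    """
    return volume * math.log(c0 / c_t) / (alpha * area * t)
```

The FVPDM replaces this instantaneous-injection picture by finite volumes of tracer fluid and water flush, which is what allows continuous monitoring of temporal changes in the Darcy flux.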
Xing, Pei; Chen, Xin; Luo, Yong; Nie, Suping; Zhao, Zongci; Huang, Jianbin; Wang, Shaowu
2016-01-01
Large-scale climate history of the past millennium reconstructed solely from tree-ring data is prone to underestimate the amplitude of low-frequency variability. In this paper, we aimed at solving this problem by utilizing a novel method termed "MDVM", which is a combination of the ensemble empirical mode decomposition (EEMD) and variance matching techniques. We compiled a set of 211 tree-ring records from the extratropical Northern Hemisphere (30-90°N) in an effort to develop a new reconstruction of the annual mean temperature by the MDVM method. From this dataset, 126 records were screened out to reconstruct temperature variability longer than decadal scale for the period 850-2000 AD. The MDVM reconstruction depicted significant low-frequency variability in the past millennium, with an evident Medieval Warm Period (MWP) over the interval 950-1150 AD and a pronounced Little Ice Age (LIA) culminating in 1450-1850 AD. In the context of the 1150-year reconstruction, the accelerated warming in the 20th century was likely unprecedented, and the coldest decades appeared in the 1640s, 1600s and 1580s, whereas the warmest decades occurred in the 1990s, 1940s and 1930s. Additionally, the MDVM reconstruction covaried broadly with changes in natural radiative forcing, and in particular showed distinct footprints of multiple volcanic eruptions in the last millennium. Comparisons of our results with previous reconstructions and model simulations showed the efficiency of the MDVM method in capturing low-frequency variability, particularly much colder signals of the LIA relative to the reference period. Our results demonstrated that the MDVM method has advantages in studying large-scale and low-frequency climate signals using pure tree-ring data.
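The variance-matching half of the MDVM combination can be sketched in a few lines (the EEMD decomposition step is omitted; this simply rescales a proxy component to the mean and variance of a target series, e.g. an instrumental record over the calibration interval):

```python
import numpy as np

def variance_match(proxy, target):
    """Rescale a proxy series so that its mean and variance match a target
    series (e.g. an instrumental record over the calibration interval)."""
    standardized = (proxy - proxy.mean()) / proxy.std()
    return standardized * target.std() + target.mean()
```

In the MDVM scheme the matching would be applied to the low-frequency EEMD components, which is how the amplitude of centennial-scale variability is restored.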
Energy Technology Data Exchange (ETDEWEB)
Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi' an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)
2014-05-14
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in a related area. The ill-posedness of inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l{sub 1/2} regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l{sub 1/2} regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l{sub 1} regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
Lower Lip Reconstruction after Tumor Resection; a Single Author's Experience with Various Methods
International Nuclear Information System (INIS)
Rifaat, M.A.
2006-01-01
Background: Squamous cell carcinoma is the most frequently seen malignant tumor of the lower lip. The more tissue is lost from the lip after tumor resection, the more challenging is the reconstruction. Many methods have been described, but each has its own advantages and disadvantages. The author presents, through his own clinical experience with lower lip reconstruction at the NCI, an evaluation of the commonly practiced techniques. Patients and Methods: Over a 3-year period from May 2002 to May 2005, 17 cases presented at the National Cancer Institute, Cairo University, with lower lip squamous cell carcinoma. The lesions involved various regions of the lower lip excluding the commissures. Following resection, the resulting defects ranged from 1/3 of the lip to total lip loss. The age of the patients ranged from 28 to 67 years, and they were 13 males and 4 females. With regard to the reconstructive procedures used, the Karapandzic technique (orbicularis oris myocutaneous flaps) was used in 7 patients, 3 of whom underwent secondary lower lip augmentation with upper lip switch flaps. Primary Abbe (lip switch) flap reconstruction was used in two patients, while 2 other patients were reconstructed with bilateral fan flaps, with vermilion reconstruction by mucosal advancement in one case and a tongue flap in the other. The radial forearm free flap was used in only 2 cases, and direct wound closure was achieved in three cases. All patients were evaluated for early postoperative results, with emphasis on flap viability and wound problems, and for late results, with emphasis on oral continence, microstomia, and aesthetic outcome, in addition to the usual oncological follow-up. Results: All flaps used in this study survived completely, including the 2 free flaps. In the early postoperative period, minor wound breakdown occurred in all three cases reconstructed by utilizing adjacent cheek skin flaps, but all wounds healed spontaneously. The latter three cases involved defects greater than 2
Towards a d-bar reconstruction method for three-dimensional EIT
DEFF Research Database (Denmark)
Cornean, Horia Decebal; Knudsen, Kim
here. It is shown that exponentially growing solutions exist for low complex frequencies without imposing any regularity assumption on the conductivity. Further, a reconstruction method for conductivities close to a constant is given. In this method the complex frequency is taken to zero instead...
DEFF Research Database (Denmark)
Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.
2017-01-01
effects could be exploited, traditional compressive-sensing methods cannot be directly applied as the system matrix in FMT is highly coherent. To overcome these issues, we propose and assess a three-step reconstruction method. First, truncated singular value decomposition is applied on the data to reduce...... considerable promise and will be tested using more realistic simulations and experimental setups....
Robust method for stator current reconstruction from DC link in a ...
African Journals Online (AJOL)
Using the switching signals and dc link current, this paper presents a new algorithm for the reconstruction of stator currents of an inverter-fed, three-phase induction motor drive. Unlike the classical and improved methods available in literature, the proposed method is neither based on pulse width modulation pattern ...
Gauss-Newton method for image reconstruction in diffuse optical tomography
International Nuclear Information System (INIS)
Schweiger, Martin; Arridge, Simon R; Nissilae, Ilkka
2005-01-01
We present a regularized Gauss-Newton method for solving the inverse problem of parameter reconstruction from boundary data in frequency-domain diffuse optical tomography. To avoid the explicit formation and inversion of the Hessian which is often prohibitively expensive in terms of memory resources and runtime for large-scale problems, we propose to solve the normal equation at each Newton step by means of an iterative Krylov method, which accesses the Hessian only in the form of matrix-vector products. This allows us to represent the Hessian implicitly by the Jacobian and regularization term. Further we introduce transformation strategies for data and parameter space to improve the reconstruction performance. We present simultaneous reconstructions of absorption and scattering distributions using this method for a simulated test case and experimental phantom data
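The matrix-free normal-equation solve can be sketched as follows, assuming the Jacobian and adjoint products are available as callables (here CG stands in for the Krylov method and a Tikhonov term lam*I for the regularization; the function and argument names are illustrative, not the authors' code):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def gauss_newton_step(jac_mv, jac_rmv, res, n, lam=1e-3):
    """One regularised Gauss-Newton step without forming the Hessian.

    jac_mv(v)  -- Jacobian-vector product J @ v
    jac_rmv(w) -- adjoint product J.T @ w
    res        -- current data residual (measured minus modelled)
    Solves the normal equations (J.T J + lam I) dx = J.T res with CG,
    accessing the Hessian only through matrix-vector products.
    """
    H = LinearOperator((n, n), matvec=lambda v: jac_rmv(jac_mv(v)) + lam * v,
                       dtype=float)
    dx, info = cg(H, jac_rmv(res))
    assert info == 0, "CG did not converge"
    return dx
```

Because the Hessian is only ever applied to vectors, its explicit formation and inversion — prohibitive for large-scale DOT problems — is avoided, exactly as the abstract describes.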
Phase microscopy using light-field reconstruction method for cell observation.
Xiu, Peng; Zhou, Xin; Kuang, Cuifang; Xu, Yingke; Liu, Xu
2015-08-01
The refractive index (RI) distribution can serve as a natural label for undyed cell imaging. However, the majority of images obtained through quantitative phase microscopy are integrated along the illumination angle and cannot reflect additional information about the refractive map on a certain plane. Herein, a light-field reconstruction method to image the RI map within a depth of 0.2 μm is proposed. It records quantitative phase-delay images using a four-step phase shifting method in different directions and then reconstructs a similar scattered light field for the refractive sample on the focus plane. It can image the RI of samples, transparent cell samples in particular, in a manner similar to the observation of scattering characteristics. The light-field reconstruction method is therefore a powerful tool for use in cytobiology studies. Copyright © 2015 Elsevier Ltd. All rights reserved.
The e/h method of energy reconstruction for combined calorimeter
International Nuclear Information System (INIS)
Kul'chitskij, Yu.A.; Kuz'min, M.V.; Vinogradov, V.B.
1999-01-01
The new simple method of the energy reconstruction for a combined calorimeter, which we called the e/h method, is suggested. It uses only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. The method has been tested on the basis of the 1996 test beam data of the ATLAS barrel combined calorimeter and demonstrated the correctness of the reconstruction of the mean values of energies. The obtained fractional energy resolution is [(58 ± 3)%/√E ⊕ (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. This algorithm can be used for the fast energy reconstruction in the first level trigger
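The flavor of such an e/h-weighted reconstruction can be sketched schematically (the compartment e/h values and the f_pi0 ≈ 0.11 ln E parametrisation below are generic textbook numbers, not the ATLAS combined-calorimeter constants or the authors' exact formulae):

```python
import math

def reconstruct_energy(r_em, r_had, eh_em=1.74, eh_had=1.30):
    """Schematic e/h-style energy reconstruction for a two-compartment
    combined calorimeter. The e/h values and the f_pi0 parametrisation
    are illustrative, not the ATLAS constants.

    r_em, r_had: electron-scale responses of the two compartments (GeV).
    Uses e/pi = (e/h) / (1 + (e/h - 1) * f_pi0(E)) with f_pi0 ~ 0.11 ln E,
    so the corrected energy solves a fixed-point equation, iterated here.
    """
    E = r_em + r_had  # starting guess: raw electron-scale sum
    for _ in range(50):
        f_pi0 = 0.11 * math.log(max(E, 2.0))  # em fraction of hadronic shower
        corr_em = eh_em / (1.0 + (eh_em - 1.0) * f_pi0)
        corr_had = eh_had / (1.0 + (eh_had - 1.0) * f_pi0)
        E_new = r_em * corr_em + r_had * corr_had
        if abs(E_new - E) < 1e-9:
            break
        E = E_new
    return E
```

No free parameters are fitted by minimization: only the e/h ratios and the electron calibration enter, which is what makes the approach fast enough for a first-level trigger.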
Determining Accuracy of Thermal Dissipation Methods-based Sap Flux in Japanese Cedar Trees
Su, Man-Ping; Shinohara, Yoshinori; Laplace, Sophie; Lin, Song-Jin; Kume, Tomonori
2017-04-01
The thermal dissipation method, a sap flux measurement technique that can estimate individual tree transpiration, has been widely used because of its low cost and uncomplicated operation. Although the method is widespread, its accuracy has recently been questioned, because the tree species used as material in some previous studies were not suited to Granier's empirical formula owing to differences in wood characteristics. In Taiwan, Cryptomeria japonica (Japanese cedar) is one of the dominant species in mountainous areas, so quantifying the transpiration of Japanese cedar trees is indispensable for understanding water cycling there. However, the accuracy of thermal dissipation based sap flux measurements has not been tested for Japanese cedar trees in Taiwan. In this study we therefore conducted a calibration experiment using twelve Japanese cedar stem segments from six trees. By pumping water from the bottom of each segment to the top while simultaneously collecting data from probes inserted into the segments, we compared sap flux densities calculated from actual water uptake (Fd_actual) and from the empirical formula (Fd_Granier). The exact sapwood area and sapwood depth of each sample were obtained by dyeing the segments with safranin stain solution. Our results showed that Fd_Granier underestimated Fd_actual by 39% across sap flux densities ranging from 10 to 150 cm3 m-2 s-1; after applying the sapwood-depth correction formula from Clearwater, Fd_Granier underestimated Fd_actual by only 0.01%. However, for sap flux densities ranging from 10 to 50 cm3 m-2 s-1, which is comparable to field data for Japanese cedar trees in a mountainous area of Taiwan, Fd_Granier underestimated Fd_actual by 51%, and by 26% with the Clearwater sapwood-depth correction. These results suggest that sapwood depth significantly affects the accuracy of thermal dissipation
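The empirical formula and correction referenced above can be sketched as follows (a sketch using the standard published Granier calibration and Clearwater correction, not coefficients refit to the cedar data in this study):

```python
def granier_sap_flux(dT, dT_max):
    """Granier empirical sap flux density (m^3 m^-2 s^-1).

    K = (dT_max - dT) / dT is the dimensionless flow index, where dT is
    the measured probe temperature difference and dT_max its value at
    zero flow.  Coefficients are the standard published calibration.
    """
    K = (dT_max - dT) / dT
    return 119e-6 * K ** 1.231

def clearwater_corrected_dT(dT_measured, dT_max, a):
    """Clearwater sapwood-depth correction when only a fraction `a` of
    the probe lies in conducting sapwood (b = 1 - a is inactive)."""
    b = 1.0 - a
    return (dT_measured - b * dT_max) / a
```

Larger flow cools the heated probe more, so a smaller dT yields a larger flux density; at dT = dT_max the estimated flux is zero.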
Convergence analysis for column-action methods in image reconstruction
DEFF Research Database (Denmark)
Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj
2016-01-01
Column-oriented versions of algebraic iterative methods are interesting alternatives to their row-version counterparts: they converge to a least squares solution, and they provide a basis for saving computational work by skipping small updates. In this paper we consider the case of noise-free data. We present a convergence analysis of the column algorithms, we discuss two techniques (loping and flagging) for reducing the work, and we establish some convergence results for methods that utilize these techniques. The performance of the algorithms is illustrated with numerical examples from…
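A generic cyclic column-action iteration for least squares can be sketched as below (a sketch only; the paper's precise loping and flagging rules differ, and the `tol` threshold here merely illustrates the idea of skipping small updates):

```python
import numpy as np

def column_action_lsq(A, b, sweeps=200, tol=0.0):
    """Cyclic column-action iteration for min ||Ax - b||_2.

    Each step updates one component of x from the current residual;
    updates with magnitude <= tol are skipped ("loping"-style saving).
    """
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float)            # residual b - A x (x starts at 0)
    col_norms = (A ** 2).sum(axis=0)
    for _ in range(sweeps):
        for j in range(n):
            d = A[:, j] @ r / col_norms[j]
            if abs(d) <= tol:
                continue           # skip a negligible update
            x[j] += d
            r -= d * A[:, j]
    return x
```

For a full-rank overdetermined system this coordinate-wise sweep converges to the least squares solution.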
A method for measuring element fluxes in an undisturbed soil: nitrogen and carbon from earthworms
International Nuclear Information System (INIS)
Bouche, M.B.
1984-01-01
Data on chemical cycles, such as the nitrogen and carbon cycles, are extrapolated to fields or ecosystems without the possibility of checking the conclusions, i.e. from scientific knowledge alone (para-ecology). A new method is described, based on the natural introduction of an earthworm compartment into an undisturbed soil, with the earthworms labelled both with isotopes (15N, 14C) and by staining. This method allows us to measure fluxes of chemicals. The first results, gathered while refining the method under partly artificial conditions, are cross-checked against data from direct observation in the field. The measured flux (2.2 mg N/g fresh mass, empty gut, per day at 15 °C) is far larger than para-ecological estimations; animal metabolism plays an important direct role in the nitrogen and carbon cycles. (author)
Absorbed Heat-flux Method for Ground Simulation of On-orbit Thermal Environment of Satellite
Directory of Open Access Journals (Sweden)
Jeong-Soo Kim
1999-12-01
An absorbed heat-flux method for ground simulation of the on-orbit thermal environment of a satellite is addressed in this paper. For a satellite ground test, the high vacuum and extremely low temperature of deep space are achieved in a space simulation chamber, while spatial environmental heating is simulated by employing the absorbed heat-flux method. The methodology is explained in detail, together with the test requirements and the setup implemented on a satellite. The heat-load control system developed is presented with an adjusted PID control logic, and the realized system schematic is shown. Finally, a practical and successful application of the heat simulation method to the KOMPSAT (Korea Multi-Purpose Satellite) thermal environmental test is demonstrated.
Measurement of the epithermal neutron flux of the Argonauta reactor by the Sandwich method
International Nuclear Information System (INIS)
Nascimento, H.M.
1973-01-01
A common method of obtaining information about the neutron spectrum in the energy range of 1 eV to a few keV is to use resonance sandwich detectors. A sandwich detector is usually made up of three foils placed one on top of the other, each having the same thickness and being made of the same material, which has a pronounced absorption resonance. For an adequate evaluation, the sandwich method was compared with one using an isolated detector. The results obtained from approximate theoretical calculations were checked experimentally, using In, Au and Mn foils, in an isotropic 1/E flux in the Argonaut Reactor at I.E.N. As a practical application of this method, the deviation from a 1/E spectrum of the epithermal neutron flux in the core and external graphite reflector of the Argonaut Reactor was measured with sandwich foils previously calibrated in a 1/E spectrum. (author)
Burger, Martin; Dirks, Hendrik; Frerking, Lena; Hauptmann, Andreas; Helin, Tapio; Siltanen, Samuli
2017-12-01
In this paper we study the reconstruction of moving object densities from undersampled dynamic x-ray tomography in two dimensions. A particular motivation of this study is to use realistic measurement protocols for practical applications, i.e. we do not assume to have a full Radon transform in each time step, but only projections in a few angular directions. This restriction enforces a space-time reconstruction, which we perform by incorporating physical motion models and regularization of motion vectors in a variational framework. The methodology of optical flow, which is one of the most common methods to estimate motion between two images, is utilized to formulate a joint variational model for reconstruction and motion estimation. We provide a basic mathematical analysis of the forward model and the variational model for the image reconstruction. Moreover, we discuss efficient numerical minimization based on alternating minimization between images and motion vectors. A variety of results are presented for simulated and real measurement data with different sampling strategies. A key observation is that random sampling combined with our model allows reconstructions from a similar amount of measurements, and of similar quality, as a single static reconstruction.
METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY
Directory of Open Access Journals (Sweden)
Petreneva Ol’ga Vladimirovna
2016-03-01
The demand for comfortable housing has always been very high. Building density differs between regions, and sometimes there is no land for new housing construction, especially in the central districts of cities. Moreover, many cities retain cultural and historical centers that create the historical appearance of the city, so new construction is impossible in these areas. At the same time, taking depreciation and obsolescence into account, the service life of many buildings is coming to an end and they fall into disrepair. In these cases the question arises of reconstructing the existing residential, public and industrial buildings. The aim of reconstruction is to bring the existing worn-out building stock into correspondence with technical, social and sanitary requirements and with current living standards and conditions. The authors consider the relevance of and reasons for the reconstruction of residential buildings, and attempt to answer the question of which is more economically efficient: new construction or reconstruction of residential buildings. The article offers a method to calculate the efficiency of residential building reconstruction.
Energy Reconstruction Methods in the IceCube Neutrino Telescope
DEFF Research Database (Denmark)
Aartsen, M.G.; Abbasi, R.; Ackermann, M.
2014-01-01
The amount of light emitted is proportional to the deposited energy, which is approximately equal to the neutrino energy for νe and νμ charged-current interactions; it can be used to set a lower bound on neutrino energies and to measure neutrino spectra statistically in other channels. Here we describe methods…
Gaining insight into food webs reconstructed by the inverse method
Kones, J.; Soetaert, K.E.R.; Van Oevelen, D.; Owino, J.; Mavuti, K.
2006-01-01
The use of the inverse method to analyze flow patterns of organic components in ecological systems has had wide application in ecological modeling. Through this approach, an infinite number of food web flows describing the food web and satisfying biological constraints are generated, from which one
A new method to reconstruct the structure from crystal images
Li, Y
2017-01-01
Biological molecules, especially proteins, have special and important functions. We study their structure to understand their functions and, further, to make applications, as in medical research. The routine method is diffraction, but it does not work for molecules that cannot be grown into crystals, and
Standard Test Method for Measuring Heat Flux Using Surface-Mounted One-Dimensional Flat Gages
American Society for Testing and Materials. Philadelphia
2009-01-01
1.1 This test method describes the measurement of the net heat flux normal to a surface using flat gages mounted onto the surface. Conduction heat flux is not the focus of this standard. Conduction applications related to insulation materials are covered by Test Method C 518 and Practices C 1041 and C 1046. The sensors covered by this test method all use a measurement of the temperature difference between two parallel planes normal to the surface to determine the heat that is exchanged to or from the surface in keeping with Fourier’s Law. The gages operate by the same principles for heat transfer in either direction. 1.2 This test method is quite broad in its field of application, size and construction. Different sensor types are described in detail in later sections as examples of the general method for measuring heat flux from the temperature gradient normal to a surface (1). Applications include both radiation and convection heat transfer. The gages have broad application from aerospace to biomedical en...
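The differential-temperature principle described above reduces to Fourier's law across the gage's two sensing planes. A minimal sketch (the conductivity `k` and plane separation `thickness` are illustrative inputs, not values from the standard):

```python
def heat_flux(T_upper, T_lower, k, thickness):
    """Net heat flux (W/m^2) from the temperature difference between
    two parallel planes normal to the surface, per Fourier's law
    q = k * dT / dx.  Positive when heat flows from the upper (exposed)
    plane toward the surface; the same relation holds in either
    direction of heat transfer."""
    return k * (T_upper - T_lower) / thickness
```

A 5 K difference across a 1 mm layer of conductivity 0.2 W/(m·K) corresponds to 1 kW/m².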
International Nuclear Information System (INIS)
Bosevski, T.
1986-01-01
An improved collision probability method for thermal-neutron-flux calculation in a cylindrical reactor cell has been developed. Expanding the neutron flux and source into a series of even powers of the radius, one gets a convenient method for integrating the one-energy-group integral transport equation. It is shown that it is possible to perform an analytical integration in the x-y plane in one variable and to use effective Gaussian integration over the other. By choosing a convenient distribution of space points in the fuel and moderator, the transport matrix calculation and cell reaction rate integration were condensed. On the basis of the proposed method, the computer program DISKRET for the ZUSE-Z 23 K computer has been written. The suitability of the proposed method for calculating the thermal-neutron-flux distribution in a reactor cell can be seen from the test results obtained. Compared with other collision probability methods, the proposed treatment excels in mathematical simplicity and faster convergence. (author)
Development of a method for reconstruction of crowded NMR spectra from undersampled time-domain data
Energy Technology Data Exchange (ETDEWEB)
Ueda, Takumi; Yoshiura, Chie; Matsumoto, Masahiko; Kofuku, Yutaka; Okude, Junya; Kondo, Keita; Shiraishi, Yutaro [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan); Takeuchi, Koh [Japan Science and Technology Agency, Precursory Research for Embryonic Science and Technology (Japan); Shimada, Ichio, E-mail: shimada@iw-nmr.f.u-tokyo.ac.jp [The University of Tokyo, Graduate School of Pharmaceutical Sciences (Japan)
2015-05-15
NMR is a unique methodology for obtaining information about the conformational dynamics of proteins in heterogeneous biomolecular systems. In various NMR methods, such as transferred cross-saturation, relaxation dispersion, and paramagnetic relaxation enhancement experiments, fast determination of the signal intensity ratios in the NMR spectra with high accuracy is required for analyses of targets with low yields and stabilities. However, conventional methods for the reconstruction of spectra from undersampled time-domain data, such as linear prediction, spectroscopy with integration of frequency and time domain, analysis of Fourier, and compressed sensing, were not effective for the accurate determination of the signal intensity ratios of the crowded two-dimensional spectra of proteins. Here, we developed an NMR spectra reconstruction method, “conservation of experimental data in analysis of Fourier” (Co-ANAFOR), to reconstruct the crowded spectra from the undersampled time-domain data. The number of sampling points required for the transferred cross-saturation experiments between membrane proteins, photosystem I and cytochrome b6f, and their ligand, plastocyanin, with Co-ANAFOR was half of that needed for linear prediction, and the peak height reduction ratios of the spectra reconstructed from truncated time-domain data by Co-ANAFOR were more accurate than those reconstructed from non-uniformly sampled data by compressed sensing.
Statistically Consistent k-mer Methods for Phylogenetic Tree Reconstruction.
Allman, Elizabeth S; Rhodes, John A; Sullivant, Seth
2017-02-01
Frequencies of k-mers in sequences are sometimes used as a basis for inferring phylogenetic trees without first obtaining a multiple sequence alignment. We show that a standard approach of using the squared Euclidean distance between k-mer vectors to approximate a tree metric can be statistically inconsistent. To remedy this, we derive model-based distance corrections for orthologous sequences without gaps, which lead to consistent tree inference. The identifiability of model parameters from k-mer frequencies is also studied. Finally, we report simulations showing that the corrected distance outperforms many other k-mer methods, even when sequences are generated with an insertion and deletion process. These results have implications for multiple sequence alignment as well since k-mer methods are usually the first step in constructing a guide tree for such algorithms.
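The basic uncorrected statistic discussed above, the squared Euclidean distance between k-mer frequency vectors, can be sketched as follows (a sketch of the plain statistic only; the paper's model-based distance corrections are not implemented here):

```python
from collections import Counter
from itertools import product

def kmer_frequencies(seq, k, alphabet="ACGT"):
    """Normalized k-mer frequency vector, in a fixed alphabet order."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(len(seq) - k + 1, 1)
    return [counts[''.join(p)] / total
            for p in product(alphabet, repeat=k)]

def squared_euclidean_kmer_distance(seq1, seq2, k):
    """Squared Euclidean distance between two k-mer frequency vectors."""
    u = kmer_frequencies(seq1, k)
    v = kmer_frequencies(seq2, k)
    return sum((a - b) ** 2 for a, b in zip(u, v))
```

Identical sequences are at distance zero, and two homopolymers over different letters attain the maximum distance of 2 for k = 1.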
Reconstruction of the limit cycles by the delays method
International Nuclear Information System (INIS)
Castillo D, R.; Ortiz V, J.; Calleros M, G.
2003-01-01
Boiling water reactors (BWRs) are designed to operate in a stable, linear regime. In a limit cycle, the behavior of the system is no longer linear-stable. In a BWR, instabilities of a nuclear-thermohydraulic nature can take the reactor to a limit cycle. Limit cycles should be avoided, since the power oscillations can cause thermal fatigue in the fuel and/or shroud. In this work the delays method is analyzed for its application to the detection of limit cycles in a nuclear power plant. The foundations of the method and its application to power signals under different operating conditions are presented. The analyzed signals correspond to: steady state, nuclear-thermohydraulic instability, a nonlinear transient and, finally, failure of a plant controller. Among the main results, it was found that the delays method can be applied to detect limit cycles in the power monitors of BWR reactors. It was also found that, for the analyzed cases, the first zero of the autocorrelation function is an appropriate criterion for selecting the delay in the detection of limit cycles. (Author)
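The two ingredients named in the abstract, delay selection from the first zero of the autocorrelation function and delay-coordinate embedding, can be sketched as below (a generic sketch, not the plant-signal analysis itself):

```python
import numpy as np

def first_zero_autocorr(x):
    """Embedding delay chosen as the first zero crossing of the
    (unnormalized) autocorrelation of the mean-removed signal."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    for lag in range(1, n):
        if np.dot(x[:n - lag], x[lag:]) <= 0.0:
            return lag
    return n - 1

def delay_embed(x, dim, delay):
    """Delay-coordinate vectors (x(t), x(t+tau), ..., x(t+(dim-1)tau)),
    one per row; a closed curve here indicates a limit cycle."""
    x = np.asarray(x, float)
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay:i * delay + n]
                            for i in range(dim)])
```

For a pure oscillation the first autocorrelation zero falls near a quarter period, so the embedded trajectory traces an open loop rather than a line.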
Collier, A.; Lao, L. L.; Abla, G.; Chu, M. S.; Prater, R.; Smith, S. P.; St. John, H. E.; Guo, W.; Li, G.; Pan, C.; Ren, Q.; Park, J. M.; Bisai, N.; Srinivasan, R.; Sun, A. P.; Liu, Y.; Worrall, M.
2010-11-01
This presentation summarizes several useful applications provided by the IMFIT integrated modeling framework to support DIII-D and EAST research. IMFIT is based on Python and utilizes a modular task-flow architecture with a central manager and extensive GUI support to coordinate tasks among component modules. The kinetic-EFIT application allows multiple time-slice reconstructions by fetching pressure profile data directly from MDS+ or from ONETWO or PTRANSP. The stability application analyzes a given reference equilibrium for stability limits by performing parameter perturbation studies with MHD codes such as DCON, GATO, ELITE, or PEST3. The transport task includes construction of experimental energy and momentum fluxes from profile analysis and comparison against theoretical models such as MMM95, GLF23, or TGLF.
A new method for three-dimensional laparoscopic ultrasound model reconstruction
DEFF Research Database (Denmark)
Fristrup, C W; Pless, T; Durup, J
2004-01-01
The aim of this study was to perform a volumetric test and a clinical feasibility test of a new 3D method using standard laparoscopic ultrasound equipment. METHODS: Three-dimensional models were reconstructed from a series of two-dimensional ultrasound images using either electromagnetic tracking or a new 3D method. The volumetric accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetically tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed accurate models comparable to findings at surgery and pathology. CONCLUSIONS: The use of the new 3D method is technically feasible, and it is volumetrically accurate compared to 3D with electromagnetic tracking.
A new method for the reconstruction of micro- and nanoscale planar periodic structures.
Hu, Zhenxing; Xie, Huimin; Lu, Jian; Liu, Zhanwei; Wang, Qinghua
2010-08-01
In recent years, micro- and nanoscale structures and materials have been observed and characterized under microscopes with large magnification, at the cost of a small field of view. In this paper, a new phase-shifting inverse-geometry moiré method for the full-field reconstruction of micro- and nanoscale planar periodic structures is proposed. The random phase-shift techniques are realized under scanning-type microscopes. A simulation test and a practical verification experiment were performed, demonstrating that the method is feasible. As an application, the method was used to reconstruct the structure of a butterfly wing and of a holographic grating; the results verify that the reconstruction process is convenient. Compared with direct point-by-point measurement, the method is very effective, with a large field of view. The method can be extended to reconstruct other planar periodic microstructures and to locate defects in materials possessing a regular lattice structure. Furthermore, it can be applied to evaluate the quality of micro- and nanoscale planar periodic structures under various high-power scanning microscopes. 2010 Elsevier B.V. All rights reserved.
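The standard four-step phase-shifting recovery, as used in this and the earlier phase microscopy record, can be sketched as follows (a sketch of the textbook formula, not the papers' full pipelines): with four frames shifted by π/2, the wrapped phase is atan2(I4 − I2, I1 − I3).

```python
import numpy as np

def four_step_phase(I1, I2, I3, I4):
    """Recover the wrapped phase from four frames shifted by pi/2:
    I_k = A + B*cos(phi + (k-1)*pi/2), so I4 - I2 = 2B*sin(phi) and
    I1 - I3 = 2B*cos(phi); the background A and modulation B cancel."""
    return np.arctan2(I4 - I2, I1 - I3)
```

Because the formula is a ratio of frame differences, it is insensitive to the unknown background intensity and fringe contrast.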
One step linear reconstruction method for continuous wave diffuse optical tomography
Ukhrowiyah, N.; Yasin, M.
2017-09-01
A one-step linear reconstruction method for continuous-wave diffuse optical tomography is proposed and demonstrated on polyvinyl chloride based material and a breast phantom. The approximation used in this method involves selecting a regularization coefficient and evaluating the difference between two states, corresponding to data acquired without and with a change in optical properties. The method is used to recover optical parameters from measured boundary data of light propagation in the object. The approach is demonstrated with both simulated and experimental data: a numerical object is used to produce the simulation data, while the polyvinyl chloride based material and breast phantom samples provide the experimental data. Comparisons between experimental and simulation results are conducted to validate the proposed method. The reconstructed images produced by the one-step linear reconstruction method are almost the same as the original object. This approach provides a means of imaging that is sensitive to changes in optical properties, which may be particularly useful for functional imaging with continuous-wave diffuse optical tomography in the early diagnosis of breast cancer.
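Structurally, such a one-step linear recovery is a single regularized solve of a linearized forward model. A minimal sketch (assuming a known sensitivity matrix `J`; the regularization coefficient `lam` plays the role of the coefficient selected in the abstract):

```python
import numpy as np

def one_step_linear_reconstruction(J, delta_y, lam):
    """Single Tikhonov-regularized linear solve for a perturbation in
    optical properties, delta_x, from the difference delta_y between
    boundary data acquired without and with the change:

        delta_x = (J^T J + lam * I)^(-1) J^T delta_y
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ delta_y)
```

With a well-conditioned sensitivity matrix and small regularization, the solve recovers the underlying perturbation; larger `lam` trades fidelity for noise suppression.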
Erkkilä, Kukka-Maaria; Ojala, Anne; Bastviken, David; Biermann, Tobias; Heiskanen, Jouni J.; Lindroth, Anders; Peltola, Olli; Rantakari, Miitta; Vesala, Timo; Mammarella, Ivan
2018-01-01
Freshwaters bring a notable contribution to the global carbon budget by emitting both carbon dioxide (CO2) and methane (CH4) to the atmosphere. Global estimates of freshwater emissions traditionally use a wind-speed-based gas transfer velocity, kCC (introduced by Cole and Caraco, 1998), for calculating diffusive flux with the boundary layer method (BLM). We compared CH4 and CO2 fluxes from BLM with kCC and two other gas transfer velocities (kTE and kHE), which include the effects of water-side cooling to the gas transfer besides shear-induced turbulence, with simultaneous eddy covariance (EC) and floating chamber (FC) fluxes during a 16-day measurement campaign in September 2014 at Lake Kuivajärvi in Finland. The measurements included both lake stratification and water column mixing periods. Results show that BLM fluxes were mainly lower than EC, with the more recent model kTE giving the best fit with EC fluxes, whereas FC measurements resulted in higher fluxes than simultaneous EC measurements. We highly recommend using up-to-date gas transfer models, instead of kCC, for better flux estimates. BLM CO2 flux measurements had clear differences between daytime and night-time fluxes with all gas transfer models during both stratified and mixing periods, whereas EC measurements did not show a diurnal behaviour in CO2 flux. CH4 flux had higher values in daytime than night-time during lake mixing period according to EC measurements, with highest fluxes detected just before sunset. In addition, we found clear differences in daytime and night-time concentration difference between the air and surface water for both CH4 and CO2. This might lead to biased flux estimates, if only daytime values are used in BLM upscaling and flux measurements in general. FC measurements did not detect spatial variation in either CH4 or CO2 flux over Lake Kuivajärvi. EC measurements, on the other hand, did not show any spatial variation in CH4 fluxes but did show a clear difference between CO2
A comparative study of interface reconstruction methods for multi-material ALE simulations
International Nuclear Information System (INIS)
Kucharik, Milan; Garimella, Rao V.; Schofield, Samuel P.; Shashkov, Mikhail J.
2010-01-01
In this paper we compare the performance of different methods for reconstructing interfaces in multi-material compressible flow simulations. The methods compared are a material-order-dependent Volume-of-Fluid (VOF) method, a material-order-independent VOF method based on power diagram partitioning of cells and the Moment-of-Fluid method (MOF). We demonstrate that the MOF method provides the most accurate tracking of interfaces, followed by the VOF method with the right material ordering. The material-order-independent VOF method performs somewhat worse than the above two while the solutions with VOF using the wrong material order are considerably worse.
Research on assessment and improvement method of remote sensing image reconstruction
Sun, Li; Hua, Nian; Yu, Yanbo; Zhao, Zhanping
2018-01-01
Remote sensing image quality assessment and improvement is an important part of image processing. Generally, the use of compressive sampling theory in a remote sensing imaging system can compress images while sampling, which improves efficiency. In this paper, a two-dimensional principal component analysis (2DPCA) method is proposed to reconstruct the remote sensing image and improve the quality of the compressed image: it retains the useful information of the image while suppressing noise. The factors influencing remote sensing image quality are then analyzed, and evaluation parameters for quantitative assessment are introduced. On this basis, the quality of the reconstructed images is evaluated and the influence of the different factors on the reconstruction is analyzed, providing meaningful reference data for enhancing the quality of remote sensing images. The experimental results show that the evaluation results fit human visual perception, and that the proposed method has good application value in the field of remote sensing image processing.
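A 2DPCA reconstruction of the kind referenced above can be sketched as follows (a generic sketch: the image covariance is built from column products of the centered images, and each image is projected onto the top-d eigenvectors and mapped back):

```python
import numpy as np

def twodpca_reconstruct(images, d):
    """2DPCA reconstruction.

    `images` has shape (num, rows, cols).  The image covariance
    G = mean((A - mean)^T (A - mean)) is a cols-by-cols matrix; each
    image is projected onto the top-d eigenvectors X of G and
    reconstructed as mean + (A - mean) X X^T, discarding the
    low-variance (noise-dominated) directions.
    """
    mean = images.mean(axis=0)
    centered = images - mean
    G = np.mean([a.T @ a for a in centered], axis=0)
    eigvals, eigvecs = np.linalg.eigh(G)   # ascending eigenvalues
    X = eigvecs[:, -d:]                    # top-d projection axes
    return np.array([mean + (a - mean) @ X @ X.T for a in images])
```

With d equal to the full column dimension the reconstruction is exact; smaller d discards variance, which is where the noise suppression comes from.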
Hadron Energy Reconstruction for ATLAS Barrel Combined Calorimeter Using Non-Parametrical Method
Kulchitskii, Yu A
2000-01-01
Hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter in the framework of the non-parametrical method is discussed. The non-parametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to fast energy reconstruction in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values, and the fractional energy resolution is [(58 ± 3)%·√GeV/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2) GeV/E. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74 ± 0.04. Results of a study of the longitudinal hadronic shower development are also presented.
Application of information theory methods to food web reconstruction
Moniz, L.J.; Cooch, E.G.; Ellner, S.P.; Nichols, J.D.; Nichols, J.M.
2007-01-01
In this paper we use information theory techniques on time series of abundances to determine the topology of a food web. At the outset, the food web participants (two consumers, two resources) are known; in addition we know that each consumer prefers one of the resources over the other. However, we do not know which consumer prefers which resource, and whether this preference is absolute (i.e., whether or not the consumer will consume the non-preferred resource). Although the consumers and resources are identified at the beginning of the experiment, we also provide evidence that the consumers are not resources for each other, and that the resources do not consume each other. We do show that there is significant mutual information between resources; the model is seasonally forced, and some shared information between resources is expected. Similarly, because the model is seasonally forced, we expect shared information between consumers as they respond to the forcing of the resources. The model that we consider does include noise, and in an effort to demonstrate that these methods may be of use on data other than model data, we show the efficacy of our methods with decreasing time series size; in this particular case we obtain reasonably clear results with a time series length of 400 points. This approaches the lengths of ecological time series from real systems.
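The core quantity above, the mutual information between two abundance time series, can be sketched with a plug-in histogram estimator (a sketch only; the paper's estimator and binning choices may differ):

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in estimate of mutual information (in nats) between two
    time series, computed from a joint histogram as the KL divergence
    between the joint distribution and the product of its marginals."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal of x
    py = pxy.sum(axis=0, keepdims=True)    # marginal of y
    nz = pxy > 0                           # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

The estimate is nonnegative, small for independent series (up to finite-sample bias), and large for strongly coupled ones, which is what makes it usable as a link-detection statistic.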
Swarup, Aditi; Lu, Jing; DeWoody, Kathleen C; Antoniewicz, Maciek R
2014-07-01
Thermus thermophilus is an extremely thermophilic bacterium with significant biotechnological potential. In this work, we have characterized the aerobic growth of T. thermophilus HB8 at temperatures between 50 and 85°C, constructed a metabolic network model of its central carbon metabolism, and validated the model using 13C-metabolic flux analysis (13C-MFA). First, cells were grown in batch cultures in custom-constructed mini-bioreactors at different temperatures to determine optimal growth conditions. The optimal temperature for T. thermophilus grown on defined medium with glucose was 81°C, and the maximum growth rate was 0.25 h(-1). Between 50 and 81°C the growth rate increased by 7-fold, and the temperature dependence was well described by an Arrhenius model with an activation energy of 47 kJ/mol. Next, we performed a 13C-labeling experiment with [1,2-13C]glucose as the tracer and calculated intracellular metabolic fluxes using 13C-MFA. The results provided support for the constructed network model and highlighted several interesting characteristics of T. thermophilus metabolism. We found that T. thermophilus largely uses glycolysis and the TCA cycle to produce the biosynthetic precursors, ATP, and reducing equivalents needed for cell growth. Consistent with its proposed metabolic network model, we did not detect any oxidative pentose phosphate pathway flux or Entner-Doudoroff pathway activity. The biomass precursors erythrose-4-phosphate and ribose-5-phosphate were produced via the non-oxidative pentose phosphate pathway, largely via transketolase, with little contribution from transaldolase. The high biomass yield on glucose measured experimentally was also confirmed independently by 13C-MFA. The results presented here provide a solid foundation for future studies of T. thermophilus and its metabolic engineering applications. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
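The Arrhenius temperature dependence cited above can be sketched as follows (a sketch of the model form only, using the quoted activation energy for illustration; it is not claimed to reproduce the paper's fitted fold-change exactly):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_rate(T_kelvin, A, Ea):
    """Arrhenius model for growth rate: mu(T) = A * exp(-Ea / (R*T)),
    with activation energy Ea in J/mol and pre-factor A in h^-1."""
    return A * math.exp(-Ea / (R * T_kelvin))

def rate_ratio(T1, T2, Ea):
    """Fold-change in rate between temperatures T1 < T2 (Kelvin),
    independent of the pre-factor A."""
    return math.exp((Ea / R) * (1.0 / T1 - 1.0 / T2))
```

With Ea = 47 kJ/mol, the predicted fold-change between 50°C (323.15 K) and 81°C (354.15 K) is a several-fold increase, of the same order as the measured one.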
International Nuclear Information System (INIS)
Menezes, Welton Alves; Alves Filho, Hermes; Barros, Ricardo C.
2009-01-01
In this paper the X,Y-geometry SD-SGF-CN spectral nodal method, cf. spectral diamond-spectral Green's function-constant nodal, is used to determine the one-speed node-edge average angular fluxes in heterogeneous domains. This hybrid spectral nodal method uses the spectral diamond (SD) auxiliary equation for the multiplying regions and the spectral Green's function (SGF) auxiliary equation for the non-multiplying regions of the domain. Moreover, we consider constant approximations for the transverse-leakage terms in the transverse-integrated Sₙ nodal equations. We solve the SD-SGF-CN equations using the one-node block inversion (NBI) iterative scheme, which uses the most recent estimates available for the node-entering fluxes to evaluate the node-exiting fluxes in the directions that constitute the incoming fluxes for the adjacent node. Using these results, we offer an algorithm for analytical reconstruction of the coarse-mesh nodal solution within each spatial node, as localized numerical solutions are not generated by usual accurate nodal methods. Numerical results are presented to illustrate the accuracy of the present algorithm. (author)
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, California 90095 (United States); Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J.; Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)
2015-11-15
Purpose: To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). Methods: The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing their local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. Results: On phantom point clouds, their method
Li, Ruizhe; Li, Liang; Chen, Zhiqiang
2017-02-07
Accurate estimation of distortion-free spectra is important but difficult in various applications, especially for spectral computed tomography. Two key problems must be solved to reconstruct the incident spectrum. One is the acquisition of the detector energy response. It can be calculated by Monte Carlo simulation, which requires detailed modeling of the detector system and a high computational power. It can also be acquired by establishing a parametric response model and be calibrated using monochromatic x-ray sources, such as synchrotron sources or radioactive isotopes. However, these monochromatic sources are difficult to obtain. Inspired by x-ray fluorescence (XRF) spectrum modeling, we propose a feasible method to obtain the detector energy response based on an optimized parametric model for CdZnTe or CdTe detectors. The other key problem is the reconstruction of the incident spectrum with the detector response. Directly obtaining an accurate solution from noisy data is difficult because the reconstruction problem is severely ill-posed. Different from the existing spectrum stripping method, a maximum likelihood-expectation maximization iterative algorithm is developed based on the Poisson noise model of the system. Simulation and experiment results show that our method is effective for spectrum reconstruction and markedly increases the accuracy of XRF spectra compared with the spectrum stripping method. The applicability of the proposed method is discussed, and promising results are presented.
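The ML-EM iteration described here follows the standard multiplicative update for Poisson data; the toy 3×3 detector response below is hypothetical and only illustrates the scheme, not the authors' CdZnTe/CdTe model:

```python
import numpy as np

def mlem(counts, response, n_iter=500):
    """Standard ML-EM iteration for Poisson data: counts ≈ response @ spectrum.
    Each step multiplies the estimate by the backprojected measured/predicted ratio."""
    sens = response.sum(axis=0)            # column sums (detector sensitivity)
    x = np.ones(response.shape[1])
    for _ in range(n_iter):
        proj = response @ x
        x *= (response.T @ (counts / np.maximum(proj, 1e-12))) / sens
    return x

# Hypothetical 3-bin detector response (rows: measured bins, cols: incident bins)
R = np.array([[0.7, 0.2, 0.0],
              [0.3, 0.6, 0.3],
              [0.0, 0.2, 0.7]])
true = np.array([100.0, 50.0, 25.0])
y = R @ true                               # noise-free measurement
est = mlem(y, R)
print(np.round(est, 1))
```

The update preserves non-negativity automatically, which is one reason ML-EM behaves better than direct spectrum stripping on ill-posed, noisy data.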
Methods of reconstruction of multi-particle events in the new coordinate-tracking setup
Vorobyev, V. S.; Shutenko, V. V.; Zadeba, E. A.
2018-01-01
At the Unique Scientific Facility NEVOD (MEPhI), a large coordinate-tracking detector based on drift chambers for investigations of muon bundles generated by ultrahigh energy primary cosmic rays is being developed. One of the main characteristics of the bundle is muon multiplicity. Three methods of reconstruction of multiple events were investigated: the sequential search method, method of finding the straight line and method of histograms. The last method determines the number of tracks with the same zenith angle in the event. It is most suitable for the determination of muon multiplicity: because of a large distance to the point of generation of muons, their trajectories are quasiparallel. The paper presents results of application of three reconstruction methods to data from the experiment, and also first results of the detector operation.
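A minimal sketch of the histogram method as described (bin width and angle values are our own illustrative choices):

```python
import numpy as np

def multiplicity_by_histogram(zenith_angles_deg, bin_width=1.0):
    """Estimate muon multiplicity as the peak of the zenith-angle histogram.

    Because bundle muons are quasi-parallel, their tracks share one zenith
    angle, so the most populated bin counts the bundle's tracks."""
    bins = np.arange(0.0, 90.0 + bin_width, bin_width)
    counts, _ = np.histogram(zenith_angles_deg, bins=bins)
    return int(counts.max())

# Five bundle muons near 30° plus two unrelated background tracks
angles = [30.2, 30.4, 30.6, 30.3, 30.5, 12.3, 55.7]
print(multiplicity_by_histogram(angles))  # → 5
```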
Wheeler, Mary
2013-11-16
We study the numerical approximation on irregular domains with general grids of the system of poroelasticity, which describes fluid flow in deformable porous media. The flow equation is discretized by a multipoint flux mixed finite element method and the displacements are approximated by a continuous Galerkin finite element method. First-order convergence in space and time is established in appropriate norms for the pressure, velocity, and displacement. Numerical results are presented that illustrate the behavior of the method. © Springer Science+Business Media Dordrecht 2013.
Liu, Wenyang; Cheung, Yam; Sabouri, Pouya; Arai, Tatsuya J; Sawant, Amit; Ruan, Dan
2015-11-01
To accurately and efficiently reconstruct a continuous surface from noisy point clouds captured by a surface photogrammetry system (VisionRT). The authors have developed a level-set based surface reconstruction method for point clouds captured by a surface photogrammetry system (VisionRT). The proposed method reconstructs an implicit and continuous representation of the underlying patient surface by optimizing a regularized fitting energy, offering extra robustness to noise and missing measurements. In contrast to explicit/discrete meshing-type schemes, this continuous representation is particularly advantageous for subsequent surface registration and motion tracking, as it eliminates the need to maintain explicit point correspondences as in discrete models. The authors solve the proposed method with an efficient narrowband evolving scheme. The authors evaluated the proposed method on both phantom and human subject data with two sets of complementary experiments. In the first set of experiments, the authors generated a series of surfaces, each with different black patches placed on one chest phantom. The resulting VisionRT measurements from the patched areas had different degrees of noise and missing data, since VisionRT has difficulty detecting dark surfaces. The authors applied the proposed method to point clouds acquired under these different configurations, and quantitatively evaluated the reconstructed surfaces by comparing them against a high-quality reference surface with respect to root mean squared error (RMSE). In the second set of experiments, the authors applied their method to 100 clinical point clouds acquired from one human subject. In the absence of ground truth, the authors qualitatively validated the reconstructed surfaces by comparing their local geometry, specifically mean curvature distributions, against that of the surface extracted from a high-quality CT obtained from the same patient. On phantom point clouds, their method achieved submillimeter
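The RMSE comparison against a reference surface can be sketched as follows; this nearest-neighbor point-distance version is a simplified stand-in for the authors' point-to-surface evaluation:

```python
import numpy as np

def surface_rmse(points, reference):
    """RMSE of each reconstructed point's distance to its nearest reference
    point (a simple stand-in for a true point-to-surface distance)."""
    d2 = ((points[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    nearest = np.sqrt(d2.min(axis=1))
    return float(np.sqrt((nearest ** 2).mean()))

# A flat 10x10 reference grid and a reconstruction offset by 0.5 mm in z
xs, ys = np.meshgrid(np.arange(10.0), np.arange(10.0))
ref = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(100)])
rec = ref + np.array([0.0, 0.0, 0.5])
print(surface_rmse(rec, ref))  # → 0.5
```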
Knies, David; Wittmüß, Philipp; Appel, Sebastian; Sawodny, Oliver; Ederer, Michael; Feuer, Ronny
2015-10-28
The coccolithophorid unicellular alga Emiliania huxleyi is known to form large blooms, which have a strong effect on the marine carbon cycle. As a photosynthetic organism, it is subjected to a circadian rhythm due to the changing light conditions throughout the day. For a better understanding of the metabolic processes under these periodically-changing environmental conditions, a genome-scale model based on a genome reconstruction of the E. huxleyi strain CCMP 1516 was created. It comprises 410 reactions and 363 metabolites. Biomass composition is variable based on the differentiation into functional biomass components and storage metabolites. The model is analyzed with a flux balance analysis approach called diurnal flux balance analysis (diuFBA) that was designed for organisms with a circadian rhythm. It allows storage metabolites to accumulate or be consumed over the diurnal cycle, while keeping the structure of a classical FBA problem. A feature of this approach is that the production and consumption of storage metabolites is not defined externally via the biomass composition, but the result of optimal resource management adapted to the diurnally-changing environmental conditions. The model in combination with this approach is able to simulate the variable biomass composition during the diurnal cycle in proximity to literature data.
McCloskey, Rosemary M.; Liang, Richard H.; Harrigan, P. Richard; Brumme, Zabrina L.
2014-01-01
ABSTRACT A population of human immunodeficiency virus (HIV) within a host often descends from a single transmitted/founder virus. The high mutation rate of HIV, coupled with long delays between infection and diagnosis, makes isolating and characterizing this strain a challenge. In theory, ancestral reconstruction could be used to recover this strain from sequences sampled in chronic infection; however, the accuracy of phylogenetic techniques in this context is unknown. To evaluate the accuracy of these methods, we applied ancestral reconstruction to a large panel of published longitudinal clonal and/or single-genome-amplification HIV sequence data sets with at least one intrapatient sequence set sampled within 6 months of infection or seroconversion (n = 19,486 sequences, median [interquartile range] = 49 [20 to 86] sequences/set). The consensus of the earliest sequences was used as the best possible estimate of the transmitted/founder. These sequences were compared to ancestral reconstructions from sequences sampled at later time points using both phylogenetic and phylogeny-naive methods. Overall, phylogenetic methods conferred a 16% improvement in reproducing the consensus of early sequences, compared to phylogeny-naive methods. This relative advantage increased with intrapatient sequence diversity (P reconstructing ancestral indel variation, especially within indel-rich regions of the HIV genome. Although further improvements are needed, our results indicate that phylogenetic methods for ancestral reconstruction significantly outperform phylogeny-naive alternatives, and we identify experimental conditions and study designs that can enhance the accuracy of transmitted/founder virus reconstruction. IMPORTANCE When HIV is transmitted into a new host, most of the viruses fail to infect host cells. Consequently, an HIV infection tends to be descended from a single “founder” virus. A priority target for vaccine research, these transmitted/founder viruses are
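The phylogeny-naive baseline of taking the consensus of the earliest sequences can be sketched in a few lines (the toy alignment is our own):

```python
from collections import Counter

def consensus(seqs):
    """Column-wise majority consensus of equal-length aligned sequences."""
    return "".join(Counter(col).most_common(1)[0][0] for col in zip(*seqs))

# Three hypothetical aligned sequences sampled early in infection
early = ["ACGTAC",
         "ACGTAA",
         "ACTTAC"]
print(consensus(early))  # → ACGTAC
```

A real analysis would operate on a proper multiple-sequence alignment and would need a tie-breaking and gap-handling policy, which is exactly where the indel-rich regions mentioned above become difficult.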
An overview of AmeriFlux data products and methods for data acquisition, processing, and publication
Pastorello, G.; Poindexter, C.; Agarwal, D.; Papale, D.; van Ingen, C.; Torn, M. S.
2014-12-01
The AmeriFlux network encompasses independently managed field sites measuring ecosystem carbon, water, and energy fluxes across the Americas. In close coordination with ICOS in Europe, a new set of flux data and metadata products is being produced and released at the FLUXNET level, including all AmeriFlux sites. This will enable continued releases of a globally standardized set of flux data products. In this release, new formats, structures, and ancillary information are being proposed and adopted. This presentation discusses these aspects, detailing current and future solutions. One of the major revisions was to the BADM (Biological, Ancillary, and Disturbance Metadata) protocols. The updates include structure and variable changes to address new developments in data collection related to flux towers and to facilitate two-way data sharing. In particular, a new organization of templates is now in place, including changes in templates for biomass, disturbances, instrumentation, soils, and others. New variables and an extensive addition to the vocabularies used to describe BADM templates allow for a more flexible and comprehensive coverage of field sites and the data collection methods and results. Another extensive revision is in the data formats, levels, and versions for flux and micrometeorological data. A new selection and revision of data variables and an integrated new definition of data processing levels allow for a more intuitive and flexible notation for the variety of data products. For instance, all variables now include positional information that is tied to BADM instrumentation descriptions. This allows for a better characterization of the spatial representativeness of data points, e.g., individual sensors or the tower footprint. Additionally, a new definition of data levels better characterizes the types of processing and transformations applied to the data across different dimensions (e.g., spatial representativeness of a data point, data quality checks
Energy Technology Data Exchange (ETDEWEB)
Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l' Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d' Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)
2010-08-15
Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models allows validation of the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows inclusion of past changes in carbon dioxide values. As a new generation of dynamic vegetation models is available, we have developed an inversion method for one model, LPJ-GUESS. When this novel method is used with high-resolution sediment records, it allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation, and pollen samples. The inversion is realised using a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, due to the dimension considered (one precipitation value per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
International Nuclear Information System (INIS)
Milechina, L.; Cederwall, B.
2003-01-01
Gamma-ray tracking, a new detection technique for nuclear spectroscopy, requires efficient algorithms for reconstructing the interaction paths of multiple γ rays in a detector volume. In the present work, we discuss the effect of the atomic electron momentum distribution in Ge, as well as the employment of different types of figures-of-merit, within the context of the so-called backtracking method
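Backtracking relies on checking interaction points against Compton kinematics; a minimal sketch of the underlying formula (the energies below are illustrative):

```python
import math

ME_C2 = 510.999  # electron rest energy, keV

def compton_cos_theta(e_in, e_dep):
    """Scattering-angle cosine from the Compton formula for an incident
    photon of energy e_in depositing e_dep at an interaction point."""
    e_out = e_in - e_dep
    return 1.0 - ME_C2 * (1.0 / e_out - 1.0 / e_in)

# A 1332 keV photon depositing 300 keV at its first interaction
c = compton_cos_theta(1332.0, 300.0)
deg = math.degrees(math.acos(c))
print(round(deg, 1))  # → 27.3
```

A tracking figure-of-merit then compares this energy-derived angle with the angle given by the interaction-point geometry; the electron momentum distribution discussed in the abstract (Doppler broadening) smears exactly this comparison.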
A Borehole-Dilution Method for Quantifying Vertical Darcy Fluxes in the Hyporheic Zone
Augustine, S. D.; Annable, M. D.; Cho, J.
2017-12-01
The borehole dilution method has consistently and successfully been used for estimating local water fluxes; however, it can be relatively labor-intensive and expensive. The focus of this research is the development of a low-cost borehole dilution method for quantifying vertical water fluxes in the hyporheic zone at the surface water-groundwater interface. This would allow for the deployment of multiple units within a targeted surface water body and thus produce high-resolution, spatially distributed data on infiltration rates over a short period of time with minimal set-up requirements. The device consists of a 2-inch inner-diameter PVC pipe containing short screened sections in its upper and lower segments. The working unit is driven into the sediment and acts as a continuous-flow reactor, creating a pathway between the subsurface pore water and the overlying surface water, where the presence of a hydraulic gradient facilitates vertical movement. We developed a simple electrode and tracer-injection system housed within the unit to inject and measure salt tracer concentrations at the desired intervals while monitoring and storing those measurements using open-source Arduino technology. Preliminary lab- and field-scale trials provided data that were fit to both zero- and first-order reaction rate functions for analysis. The field test was conducted over approximately one day within a wet retention basin. The initial results estimated a vertical Darcy flux of 113.5 cm/d. Additional testing over a range of expected Darcy fluxes will be presented along with an evaluation considering enhanced water flow due to the high hydraulic conductivity of the device.
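A hedged sketch of the first-order dilution analysis described above (the unit volume and cross-sectional area are hypothetical placeholders, not the device's actual geometry):

```python
import numpy as np

def darcy_flux_from_dilution(t_min, conc, volume_cm3, area_cm2):
    """First-order borehole-dilution model: C(t) = C0 * exp(-(q*A/V)*t),
    so the slope of ln C versus t gives the Darcy flux q = -slope * V / A."""
    slope, _ = np.polyfit(t_min, np.log(conc), 1)   # slope in 1/min
    q_cm_per_min = -slope * volume_cm3 / area_cm2
    return q_cm_per_min * 60.0 * 24.0               # convert to cm/day

# Synthetic tracer record decaying at the field-estimated flux of 113.5 cm/d
V, A = 1000.0, 20.27                 # hypothetical mixing volume and area
q_true = 113.5 / (60.0 * 24.0)       # cm/min
t = np.linspace(0.0, 120.0, 25)
c = 500.0 * np.exp(-(q_true * A / V) * t)
print(round(darcy_flux_from_dilution(t, c, V, A), 1))  # → 113.5
```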
A FIB-nanotomography method for accurate 3D reconstruction of open nanoporous structures
Energy Technology Data Exchange (ETDEWEB)
Mangipudi, K.R., E-mail: mangipudi@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Radisch, V., E-mail: vradisch@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany); Holzer, L., E-mail: holz@zhaw.ch [Züricher Hochschule für Angewandte Wissenschaften, Institute of Computational Physics, Wildbachstrasse 21, CH-8400 Winterthur (Switzerland); Volkert, C.A., E-mail: volkert@ump.gwdg.de [Institut für Materialphysik, Georg-August-Universität Göttingen, Friedrich-Hund-Platz 1, D-37077 Göttingen (Germany)
2016-04-15
We present an automated focused ion beam nanotomography method for nanoporous microstructures with open porosity, and apply it to reconstruct nanoporous gold (np-Au) structures with ligament sizes on the order of a few tens of nanometers. This method uses serial sectioning of a well-defined wedge-shaped geometry to determine the thickness of individual slices from the changes in the sample width in successive cross-sectional images. The pore space of a selected region of the np-Au is infiltrated with ion-beam-deposited Pt composite before serial sectioning. The cross-sectional images are binarized and stacked according to the individual slice thicknesses, and then processed using standard reconstruction methods. For the image conditions and sample geometry used here, we are able to determine the thickness of individual slices with an accuracy much smaller than a pixel. The accuracy of the new method based on actual slice thickness is assessed by comparing it with (i) a reconstruction using the same cross-sectional images but assuming a constant slice thickness, and (ii) a reconstruction using the traditional FIB-tomography method employing constant slice thickness. The morphology and topology of the structures are characterized using ligament and pore size distributions, interface shape distribution functions, interface normal distributions, and genus. The results suggest that the morphology and topology of the final reconstructions are significantly influenced when a constant slice thickness is assumed. The study reveals grain-to-grain variations in the morphology and topology of np-Au. - Highlights: • FIB nanotomography of nanoporous structures with feature sizes of ∼40 nm or less. • Accurate determination of individual slice thickness with subpixel precision. • The method preserves surface topography. • Quantitative 3D microstructural analysis of materials with open porosity.
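The slice-thickness determination from the wedge geometry can be sketched as follows, assuming a symmetric wedge of known half-angle (the widths and angle below are illustrative, not measured values):

```python
import math

def slice_thicknesses(widths_nm, half_angle_deg):
    """Per-slice thickness from successive cross-section widths of a
    wedge-shaped sample: for a symmetric wedge of half-angle θ, the width
    grows by 2*tan(θ) per unit depth, so dz = dw / (2*tan(θ))."""
    k = 2.0 * math.tan(math.radians(half_angle_deg))
    return [(w2 - w1) / k for w1, w2 in zip(widths_nm, widths_nm[1:])]

# Widths (nm) measured on four successive cross sections, hypothetical values
widths = [1000.0, 1014.0, 1027.5, 1042.0]
thk = slice_thicknesses(widths, 10.0)
print([round(t, 1) for t in thk])  # → [39.7, 38.3, 41.1]
```

Because the width change is measured per slice, slice-to-slice milling variations are captured instead of being averaged away, which is the point of the method.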
Three-dimensional Reconstruction Method Study Based on Interferometric Circular SAR
Directory of Open Access Journals (Sweden)
Hou Liying
2016-10-01
Full Text Available Circular Synthetic Aperture Radar (CSAR) can acquire targets' scattering information in all directions by a 360° observation, but a single-track CSAR cannot efficiently obtain height scattering information for a strongly directive scatterer. In this study, we examine three-dimensional circular SAR interferometry theory for a typical target and validate the theory in a darkroom experiment. We present a 3D reconstruction of an actual metal tank model from interferometric CSAR for the first time, verify the validity of the method, and demonstrate the important potential applications of combining 3D reconstruction with omnidirectional observation.
Inverse Heat Conduction Methods in the CHAR Code for Aerothermal Flight Data Reconstruction
Oliver, A. Brandon; Amar, Adam J.
2016-01-01
Reconstruction of flight aerothermal environments often requires the solution of an inverse heat transfer problem, which is an ill-posed problem of determining boundary conditions from discrete measurements in the interior of the domain. This paper will present the algorithms implemented in the CHAR code for use in reconstruction of EFT-1 flight data and future testing activities. Implementation details will be discussed, and alternative hybrid-methods that are permitted by the implementation will be described. Results will be presented for a number of problems.
Baxes, Gregory A. (Inventor); Linger, Timothy C. (Inventor)
2011-01-01
Systems and methods are provided for progressive mesh storage and reconstruction using wavelet-encoded height fields. A method for progressive mesh storage includes reading raster height field data, and processing the raster height field data with a discrete wavelet transform to generate wavelet-encoded height fields. In another embodiment, a method for progressive mesh storage includes reading texture map data, and processing the texture map data with a discrete wavelet transform to generate wavelet-encoded texture map fields. A method for reconstructing a progressive mesh from wavelet-encoded height field data includes determining terrain blocks, and a level of detail required for each terrain block, based upon a viewpoint. Triangle strip constructs are generated from vertices of the terrain blocks, and an image is rendered utilizing the triangle strip constructs. Software products that implement these methods are provided.
An accelerated test method of luminous flux depreciation for LED luminaires and lamps
International Nuclear Information System (INIS)
Qian, C.; Fan, X.J.; Fan, J.J.; Yuan, C.A.; Zhang, G.Q.
2016-01-01
Light Emitting Diode (LED) luminaires and lamps are energy-saving and environmentally friendly alternatives to traditional lighting products. However, the current luminous flux depreciation test at the luminaire and lamp level requires a minimum of 6000 h of testing, which is even longer than the product development cycle time. This paper develops an accelerated test method for luminous flux depreciation that reduces the test time to within 2000 h at an elevated temperature. The method is based on a lumen maintenance boundary curve obtained from a collection of LED source lumen depreciation data, known as LM-80 data. The exponential decay model and the Arrhenius acceleration relationship are used to determine the new threshold of lumen maintenance and the acceleration factor. The proposed method has been verified by a number of simulation studies and by experimental data for a wide range of LED luminaire and lamp types from both internal and external experiments. The qualification results obtained by the accelerated test method agree well with traditional 6000 h tests. - Highlights: • We develop an accelerated test method for LED luminaires and lamps. • The method is proposed based on a “Boundary Curve” concept. • The parameters of the boundary curve are extracted from LM-80 test reports. • Qualification results from the proposed method agree with ES requirements.
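A sketch of the two ingredients named above, the exponential decay model and the Arrhenius acceleration factor (the fitted decay constant, activation energy, and temperatures are hypothetical, not values from the paper):

```python
import math

KB = 8.617e-5  # Boltzmann constant, eV/K

def time_to_threshold(b, alpha, threshold=0.70):
    """Exponential decay model: lumen maintenance phi(t) = B*exp(-alpha*t);
    returns hours until phi falls to the threshold (e.g. L70)."""
    return math.log(b / threshold) / alpha

def acceleration_factor(ea_ev, t_use_c, t_acc_c):
    """Arrhenius acceleration between use and elevated test temperatures."""
    t_use, t_acc = t_use_c + 273.15, t_acc_c + 273.15
    return math.exp(ea_ev / KB * (1.0 / t_use - 1.0 / t_acc))

# Hypothetical LM-80-like fit (B = 1.0, alpha = 6e-6 per hour)
# and a 55 °C use temperature accelerated to 105 °C
hours_at_use = time_to_threshold(1.0, 6e-6)
af = acceleration_factor(0.4, 55.0, 105.0)
print(round(hours_at_use), round(af, 2))  # → 59446 6.49
```

Dividing the use-condition lifetime by the acceleration factor is what brings a multi-thousand-hour qualification down into the 2000 h window described above.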
International Nuclear Information System (INIS)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-01-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emissions, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. In contrast to traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and thereby mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, the Adaboost.RT based BP neural network algorithm is developed to predict the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA. (paper)
Liu, Sha; Liu, Shi; Tong, Guowei
2017-11-01
In industrial areas, temperature distribution information provides powerful data support for improving system efficiency, reducing pollutant emissions, ensuring safe operation, etc. As a noninvasive measurement technology, acoustic tomography (AT) has been widely used to measure temperature distributions, where the efficiency of the reconstruction algorithm is crucial for the reliability of the measurement results. In contrast to traditional reconstruction techniques, in this paper a two-phase reconstruction method is proposed to improve the reconstruction accuracy (RA). In the first phase, the measurement domain is discretized by a coarse square grid to reduce the number of unknown variables and thereby mitigate the ill-posed nature of the AT inverse problem. Taking into consideration the inaccuracy of the measured time-of-flight data, a new cost function is constructed to improve the robustness of the estimation, and a grey wolf optimizer is used to solve the proposed cost function to obtain the temperature distribution on the coarse grid. In the second phase, the Adaboost.RT based BP neural network algorithm is developed to predict the temperature distribution on the refined grid in accordance with the temperature distribution data estimated in the first phase. Numerical simulations and experimental measurement results validate the superiority of the proposed reconstruction algorithm in improving the robustness and RA.
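The first-phase inversion can be illustrated with a drastically simplified least-squares version (the paper uses a grey wolf optimizer and a robust cost function; the two-cell geometry and the air sound-speed relation c ≈ 20.05·√T are our own illustrative assumptions):

```python
import numpy as np

def temperatures_from_tof(path_lengths, tofs):
    """Least-squares acoustic-pyrometry sketch: tof = L @ s, where L[i, j]
    is the length of ray i inside grid cell j and s is per-cell slowness.
    For air, c ≈ 20.05*sqrt(T[K]), so T = (1/(20.05*s))**2."""
    s, *_ = np.linalg.lstsq(path_lengths, tofs, rcond=None)
    return (1.0 / (20.05 * s)) ** 2

# Two cells, three rays with known in-cell path lengths (m); T = 400 K, 900 K
L = np.array([[2.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
T_true = np.array([400.0, 900.0])
s_true = 1.0 / (20.05 * np.sqrt(T_true))
y = L @ s_true                      # noise-free time-of-flight data
print(np.round(temperatures_from_tof(L, y)))  # → [400. 900.]
```

With noisy times-of-flight this linear solve degrades quickly, which motivates the robust cost function and metaheuristic solver used in the paper's first phase.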
Deep learning methods to guide CT image reconstruction and reduce metal artifacts
Gjesteby, Lars; Yang, Qingsong; Xi, Yan; Zhou, Ye; Zhang, Junping; Wang, Ge
2017-03-01
The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
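For context, a plain interpolation-based MAR step, the family of methods NMAR improves upon, can be sketched as follows (this is not the NMAR algorithm itself):

```python
import numpy as np

def interpolate_metal_trace(sinogram, metal_mask):
    """Basic interpolation-based MAR step: within each projection view,
    replace detector bins flagged as metal with values linearly
    interpolated from the unflagged bins."""
    out = sinogram.copy()
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_mask[i]
        if bad.any():
            out[i, bad] = np.interp(bins[bad], bins[~bad], sinogram[i, ~bad])
    return out

# One toy view: a smooth profile corrupted by a metal trace in bins 3-5
sino = np.array([[1.0, 2.0, 3.0, 99.0, 99.0, 99.0, 7.0, 8.0]])
mask = np.zeros_like(sino, dtype=bool)
mask[0, 3:6] = True
print(interpolate_metal_trace(sino, mask)[0])  # → [1. 2. 3. 4. 5. 6. 7. 8.]
```

NMAR replaces this raw interpolation with interpolation of a prior-normalized sinogram; the deep-learning step described above then corrects the residual errors such schemes leave behind.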
Rehanging Reynolds at the British Institution: Methods for Reconstructing Ephemeral Displays
Directory of Open Access Journals (Sweden)
Catherine Roach
2016-11-01
Full Text Available Reconstructions of historic exhibitions made with current technologies can present beguiling illusions, but they also put us in danger of recreating the past in our own image. This article and the accompanying reconstruction explore methods for representing lost displays, with an emphasis on visualizing uncertainty, illuminating process, and understanding the mediated nature of period images. These issues are highlighted in a partial recreation of a loan show held at the British Institution, London, in 1823, which featured the works of Sir Joshua Reynolds alongside continental old masters. This recreation demonstrates how speculative reconstructions can nonetheless shed light on ephemeral displays, revealing powerful visual and conceptual dialogues that took place on the crowded walls of nineteenth-century exhibitions.
Optical properties reconstruction using the adjoint method based on the radiative transfer equation
Addoum, Ahmad; Farges, Olivier; Asllanaj, Fatmir
2018-01-01
An efficient algorithm is proposed to reconstruct the spatial distribution of optical properties in heterogeneous media like biological tissues. The light transport through such media is accurately described by the radiative transfer equation in the frequency domain. The adjoint method is used to efficiently compute the objective function gradient with respect to the optical parameters. Numerical tests show that the algorithm is accurate and robust in simultaneously retrieving the absorption (μa) and scattering (μs) coefficients for both weakly and strongly absorbing media. Moreover, the simultaneous reconstruction of μs and the anisotropy factor g of the Henyey-Greenstein phase function is achieved with reasonable accuracy. The main novelty of this work is the reconstruction of g, which may open the possibility of imaging this parameter in tissues as an additional contrast agent in optical tomography.
Fast data reconstructed method of Fourier transform imaging spectrometer based on multi-core CPU
Yu, Chunchao; Du, Debiao; Xia, Zongze; Song, Li; Zheng, Weijian; Yan, Min; Lei, Zhenggang
2017-10-01
An imaging spectrometer acquires a two-dimensional spatial image and a one-dimensional spectrum at the same time, which makes it highly useful in colour and spectral measurement, true-colour image synthesis, military reconnaissance and so on. To achieve fast reconstruction of Fourier transform imaging spectrometer data, this paper designs an optimized reconstruction algorithm based on OpenMP parallel computing, which was further applied to the data of the HyperSpectral Imager on the Chinese `HJ-1' satellite. The results show that the multi-core parallel approach manages the multi-core CPU hardware resources effectively and significantly improves the efficiency of spectrum reconstruction. If the technique is applied to workstations with more cores, real-time processing of Fourier transform imaging spectrometer data on a single computer becomes feasible.
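The data-parallel structure being exploited is that every pixel's interferogram is Fourier-transformed independently, so rows of the data cube can be reconstructed concurrently. A sketch with a thread pool standing in for OpenMP (the cube size and the single injected wavenumber are assumptions for illustration):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Every pixel holds an interferogram; an FFT along the optical-path-
# difference axis recovers its spectrum. Rows are processed in parallel
# (a thread pool stands in for OpenMP; sizes are illustrative).
n_rows, n_cols, n_opd = 8, 16, 128
opd = np.arange(n_opd)
interferogram = np.cos(2 * np.pi * 10 * opd / n_opd)   # one spectral line
cube = np.broadcast_to(interferogram, (n_rows, n_cols, n_opd)).copy()

def reconstruct_row(row):
    # spectrum magnitude per pixel in this row
    return np.abs(np.fft.rfft(row, axis=-1))

with ThreadPoolExecutor(max_workers=4) as pool:
    spectra = np.stack(list(pool.map(reconstruct_row, cube)))
```

Because each row is independent, the speed-up scales with the number of cores until memory bandwidth dominates, which is the behaviour the abstract reports.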
DEFF Research Database (Denmark)
Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen
2011-01-01
This paper presents a new method for computation of the nonlinear flux linkage in 3-D finite-element models (FEMs) of electrical machines. Accurate computation of the nonlinear flux linkage in 3-D FEM is not an easy task. Compared to the existing energy-perturbation method, the new technique......-perturbation method. The new method proposed is validated using experimental results on two different permanent magnet machines....
Direct fourier methods in 3D-reconstruction from cone-beam data
International Nuclear Information System (INIS)
Axelsson, C.
1994-01-01
The problem of 3D-reconstruction is encountered in both medical and industrial applications of X-ray tomography. A method able to utilize a complete set of projections complying with Tuy's condition was proposed by Grangeat. His method is mathematically exact and consists of two distinct phases. In phase 1, cone-beam projection data are used to produce the derivative of the Radon transform. In phase 2, after interpolation, the Radon transform data are used to reconstruct the three-dimensional object function. To a large extent our method is an extension of the Grangeat method. Our aim is to reduce the computational complexity, i.e. to produce a faster method. The most taxing procedure during phase 1 is the computation of line integrals in the detector plane. By applying the direct Fourier method in reverse for this computation, we reduce the complexity of phase 1 from O(N^4) to O(N^3 log N). Phase 2 can be performed either as a straight 3D-reconstruction or as a sequence of two 2D-reconstructions in vertical and horizontal planes, respectively. Direct Fourier methods can be applied for the 2D- and for the 3D-reconstruction, which reduces the complexity of phase 2 from O(N^4) to O(N^3 log N) as well. In both cases, linogram techniques are applied. For 3D-reconstruction the inversion formula contains the second derivative filter instead of the well-known ramp filter employed in the 2D case. The derivative filter is more well-behaved than the 2D ramp filter. This implies that less zero-padding is necessary, which brings about a further reduction of the computational effort. The method has been verified by experiments on simulated data. The image quality is satisfactory and independent of cone-beam angles. For a 512^3 volume we estimate that our method is ten times faster than Grangeat's method
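The complexity reduction rests on the projection-slice theorem, which direct Fourier methods use to replace O(N) integrations per line with FFTs. A minimal axis-aligned check of the theorem (the full method also needs angular interpolation and linogram resampling, omitted here):

```python
import numpy as np

# Projection-slice theorem, axis-aligned case: the 1D FFT of a parallel
# projection equals the central row of the object's 2D FFT.
img = np.zeros((32, 32))
img[10:20, 12:25] = 1.0                 # simple test object

projection = img.sum(axis=0)            # parallel projection along y
slice_1d = np.fft.fft(projection)       # 1D FFT of the projection
central_row = np.fft.fft2(img)[0, :]    # k_y = 0 slice of the 2D FFT
```

Since the two sides agree exactly, line integrals in the detector plane can be evaluated in Fourier space at FFT cost, which is the source of the O(N^4) to O(N^3 log N) reduction.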
Flux weighted method for solution of stiff neutron dynamic equations and its application
International Nuclear Information System (INIS)
Li Huiyun; Jiao Huixian
1987-12-01
To analyze reactivity events in nuclear power plants, it is necessary to solve the neutron dynamic equations, which form a group of typically stiff ordinary differential equations. Only very small time steps can be adopted when the group of equations is solved by common methods. However, a much larger time step may be selected if the Flux Weighted Method introduced in this paper is used. Generally, the weighting factor θ_i1 is set as a constant. Naturally, this treatment decreases the accuracy of the calculation in exchange for increased stability of the solution. An accurate theoretical formula for the 4 x 4 matrix of θ_i1 is rigorously derived so that the accuracy of the calculation is ensured while the stability of the solved equations is increased. This method has advantages over the classical Runge-Kutta method and other methods: the time step can be increased by 1 to 3 orders of magnitude, saving a large amount of computing time. The program for solving the neutron dynamic equations, prepared using the Flux Weighted Method, can be used for real-time simulation in training simulators as well as for the analysis and computation of reactivity events (including rod ejection events)
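A theta-weighted implicit step for a one-delayed-group point-kinetics system illustrates why large time steps become possible (the parameters below are illustrative, and this sketch uses a scalar constant theta rather than the paper's derived 4 x 4 matrix):

```python
import numpy as np

# One-delayed-group point kinetics, advanced with a theta-weighted scheme:
#   (I - h*theta*A) x_{k+1} = (I + h*(1-theta)*A) x_k
# Parameters are illustrative; theta is a scalar here, not the 4x4 matrix.
beta, lam, Lambda = 0.0065, 0.08, 1e-5   # delayed fraction, decay const, gen. time
rho = 0.0                                # zero reactivity -> steady state
A = np.array([[(rho - beta) / Lambda, lam],
              [beta / Lambda,        -lam]])

x = np.array([1.0, beta / (lam * Lambda)])  # equilibrium (n, C)
h, theta = 0.1, 1.0                         # step far above Lambda; fully implicit
M = np.eye(2) - h * theta * A
N = np.eye(2) + h * (1 - theta) * A
for _ in range(100):
    x = np.linalg.solve(M, N @ x)           # stable despite the stiffness
```

With theta = 1 the scheme is unconditionally stable, so the step h = 0.1 s can exceed the neutron generation time (1e-5 s here) by four orders of magnitude without the solution blowing up, as an explicit method would.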
A Stochastic Geometry Method for Pylon Reconstruction from Airborne LiDAR Data
Directory of Open Access Journals (Sweden)
Bo Guo
2016-03-01
Full Text Available Object detection and reconstruction from remotely sensed data are active research topics in the photogrammetry and remote sensing communities. Monitoring power engineering devices by detecting key objects is important for power safety. In this paper, we introduce a novel method for the reconstruction of self-supporting pylons, widely used in high-voltage power-line systems, from airborne LiDAR data. Our work constructs pylons from a library of 3D parametric models, which are represented using polyhedrons based on stochastic geometry. Firstly, laser points of pylons are extracted from the dataset using an automatic classification method. An energy function made up of two terms is then defined: the first term measures the adequacy of the objects with respect to the data, and the second term favors or penalizes certain configurations based on prior knowledge. Finally, estimation is undertaken by minimizing the energy using simulated annealing. We use a Markov Chain Monte Carlo sampler, leading to an optimal configuration of objects. The two main contributions of this paper are: (1) building a framework for automatic pylon reconstruction; and (2) efficient global optimization. The pylons can be precisely reconstructed through energy optimization. Experiments producing convincing results validated the proposed method using a dataset of complex structure.
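The estimation loop — propose a configuration change, evaluate the energy, accept or reject with a temperature-dependent probability — is ordinary simulated annealing. A minimal sketch on a one-parameter toy energy (the real method perturbs polyhedral pylon models with an MCMC sampler, which is far richer than this):

```python
import math, random

# Bare-bones simulated annealing on a toy 1D energy; the data term and
# prior term of the paper are collapsed into a single quadratic here.
def energy(x):
    return (x - 3.0) ** 2          # minimum at the "true" configuration x = 3

random.seed(1)
x, T = 0.0, 5.0
while T > 1e-4:
    cand = x + random.uniform(-0.5, 0.5)        # propose a local change
    dE = energy(cand) - energy(x)
    if dE < 0 or random.random() < math.exp(-dE / T):
        x = cand                   # accept downhill always, uphill sometimes
    T *= 0.995                     # geometric cooling schedule
```

Accepting occasional uphill moves while the temperature is high is what lets the sampler escape local minima before the cooling schedule freezes it near the global optimum.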
Influence of image reconstruction methods on statistical parametric mapping of brain PET images
International Nuclear Information System (INIS)
Yin Dayi; Chen Yingmao; Yao Shulin; Shao Mingzhe; Yin Ling; Tian Jiahe; Cui Hongyan
2007-01-01
Objective: Statistical parametric mapping (SPM) is widely recognized as a useful tool in brain function studies. The aim of this study was to investigate whether the image reconstruction algorithm used for PET images could influence SPM of the brain. Methods: PET imaging of the whole brain was performed in six normal volunteers. Each volunteer had two scans, with true and false acupuncture. The PET scans were reconstructed using ordered subsets expectation maximization (OSEM) and filtered back projection (FBP), each with 3 varied parameters. The images were realigned, normalized and smoothed using the SPM program. The difference between true and false acupuncture scans was tested using a matched pair t test at every voxel. Results: In SPM analyses with uncorrected multiple comparisons (P uncorrected <0.001), the SPMs derived from images with different reconstruction methods were different. The largest difference, in the number and position of activated voxels, was noticed between the FBP and OSEM reconstruction algorithms. Conclusions: The method of PET image reconstruction could influence the results of SPM with uncorrected multiple comparisons. Attention should be paid when conclusions are drawn using SPM with uncorrected multiple comparisons. (authors)
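The voxel-wise matched-pair t test at the heart of the comparison can be sketched as follows (synthetic data with 6 subjects, 100 "voxels", and one truly activated voxel; the realignment, normalization, and smoothing steps are omitted):

```python
import numpy as np

# Voxel-wise matched-pair t test: paired differences per voxel,
# t = mean difference / standard error of the difference.
rng = np.random.default_rng(2)
n_subj, n_vox = 6, 100
false_scan = rng.normal(0.0, 0.1, (n_subj, n_vox))
true_scan = false_scan + rng.normal(0.0, 0.1, (n_subj, n_vox))
true_scan[:, 42] += 1.0                       # injected activation at voxel 42

diff = true_scan - false_scan                 # paired differences
t = diff.mean(axis=0) / (diff.std(axis=0, ddof=1) / np.sqrt(n_subj))
```

Thresholding this t map (with or without a multiple-comparisons correction across the n_vox tests) yields the activation clusters whose number and position the study found to depend on the reconstruction algorithm.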
A multipoint flux mixed finite element method on distorted quadrilaterals and hexahedra
Wheeler, Mary
2011-11-06
In this paper, we develop a new mixed finite element method for elliptic problems on general quadrilateral and hexahedral grids that reduces to a cell-centered finite difference scheme. A special non-symmetric quadrature rule is employed that yields a positive definite cell-centered system for the pressure by eliminating local velocities. The method is shown to be accurate on highly distorted rough quadrilateral and hexahedral grids, including hexahedra with non-planar faces. Theoretical and numerical results indicate first-order convergence for the pressure and face fluxes. © 2011 Springer-Verlag.
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead
Wu, Jianbo; Fang, Hui; Li, Long; Wang, Jie; Huang, Xiaoming; Kang, Yihua; Sun, Yanhua; Tang, Chaoqing
2017-01-01
To meet the great needs for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatabilit...
Negara, Ardiansyah
2013-01-01
Anisotropy of the hydraulic properties of subsurface geologic formations is an essential feature, established as a consequence of the different geologic processes that formations undergo over the long geologic time scale. In many petroleum reservoirs, anisotropy plays a significant role in dictating the direction of flow, which is no longer dependent only on the pressure gradient direction but also on the principal directions of anisotropy. Furthermore, in complex systems involving multiphase flow in which gravity and capillarity play an important role, anisotropy can also have important influences. Therefore, there has been a great deal of motivation to consider anisotropy when solving the governing conservation laws numerically. Unfortunately, the two-point flux approximation of the finite difference approach is not capable of handling full-tensor permeability fields. Lately, however, it has become possible to adapt the multipoint flux approximation, which can handle anisotropy, to the framework of finite difference schemes. In the multipoint flux approximation method, the approximation stencil is more involved, i.e., it requires a 9-point stencil for a 2-D model and a 27-point stencil for a 3-D model. This is challenging and cumbersome when building the global system of equations. In this work, we apply the experimenting pressure field approach, which breaks the solution of the global problem into the solution of a multitude of local problems, significantly reducing the complexity without affecting the accuracy of the numerical solution. This approach also reduces the computational cost of the simulation. We have applied this technique to a variety of anisotropy scenarios of 3-D subsurface flow problems, and the numerical results demonstrate that the experimenting pressure field technique fits very well with the multipoint flux approximation.
High-speed fan-beam reconstruction using direct two-dimensional Fourier transform method
International Nuclear Information System (INIS)
Niki, Noboru; Mizutani, Toshio; Takahashi, Yoshizo; Inouye, Tamon.
1984-01-01
Since the first development of X-ray computed tomography (CT), various efforts have been made to obtain high-quality, high-speed images. However, the development of high-resolution CT and of ultra-high-speed CT applicable to the heart is still desired. The X-ray beam scanning method has already changed from the parallel-beam system to the fan-beam system in order to greatly shorten the scanning time, and the direct filtered back projection (DFBP) method has been employed to process fan-beam projection data directly for reconstruction. Although the two-dimensional Fourier transform (TFT) method, significantly faster than the DFBP method, has been proposed, it has not been sufficiently examined for fan-beam projection data. Thus, the ITFT method was investigated, which first executes a rebinning algorithm to convert the fan-beam projection data to parallel-beam projection data and thereafter applies the two-dimensional Fourier transform. Although high speed is expected from this method, the reconstructed images might be degraded by the adoption of the rebinning algorithm. Therefore, the effect of the interpolation error of the rebinning algorithm on the reconstructed images has been analyzed theoretically, and finally, numerical and visual evaluation based on simulated and actual data shows that spline interpolation allows the acquisition of high-quality images with small errors. Computation time was reduced to 1/15 for an image matrix of 512 and to 1/30 for a doubled matrix. (Wakatsuki, Y.)
Adeli, Ruhollah; Kasesaz, Yaser; Shirmardi, Seyed Pezhman; Ezaty, Arsalan
2018-03-01
For designing an appropriate neutron beam, the determination of the neutron flux at any irradiation facility is a key factor. Owing to the importance of determining the thermal and epithermal neutron fluxes in a typical thermal column of a reactor, a simple and accurate technique is introduced in this study. Absolute thermal and epithermal fluxes were measured experimentally at a certain point using the foil activation method, by neutron bombardment of bare and cadmium-covered Au foils. The relative neutron fluxes were also derived by means of Monte Carlo simulation with accurate modelling of the reactor components. Finally, by normalizing the relative flux distribution to the absolute neutron flux, accurate thermal and epithermal neutron distributions were derived separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
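The bare/cadmium-covered foil arithmetic reduces to a subtraction: the cadmium cover absorbs thermal neutrons, so the covered foil responds only to the epithermal component. A sketch with invented numbers (a single activity-to-flux constant is assumed for both components, which simplifies the real activation analysis with cross sections, self-shielding, and decay corrections):

```python
# Cadmium-difference arithmetic for the foil activation method.
# All numbers are illustrative, not measured values from the paper.
A_bare = 5000.0    # saturated activity, bare Au foil [Bq]
A_cd = 1200.0      # saturated activity, Cd-covered Au foil [Bq]
k = 2.0e4          # assumed activity-to-flux calibration [n cm^-2 s^-1 per Bq]

cadmium_ratio = A_bare / A_cd          # beam-quality indicator
phi_epi = k * A_cd                     # epithermal flux (passes the Cd cover)
phi_th = k * (A_bare - A_cd)           # thermal flux (cadmium difference)
```

A large cadmium ratio indicates a well-thermalized beam; the separated phi_th and phi_epi are then used to normalize the Monte Carlo relative flux distribution.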
A Method for 3D Histopathology Reconstruction Supporting Mouse Microvasculature Analysis.
Directory of Open Access Journals (Sweden)
Yiwen Xu
Full Text Available Structural abnormalities of the microvasculature can impair perfusion and function. Conventional histology provides good spatial resolution with which to evaluate the microvascular structure but affords no 3-dimensional information; this limitation could lead to misinterpretations of the complex microvessel network in health and disease. The objective of this study was to develop and evaluate an accurate, fully automated 3D histology reconstruction method to visualize the arterioles and venules within the mouse hind-limb. Sections of the tibialis anterior muscle from C57BL/J6 mice (both normal and subjected to femoral artery excision) were reconstructed using pairwise rigid and affine registrations of 5 µm-thick, paraffin-embedded serial sections digitized at 0.25 µm/pixel. Low-resolution intensity-based rigid registration was used to initialize both the nucleus landmark-based registration and the conventional high-resolution intensity-based registration method. The affine nucleus landmark-based registration was developed in this work and was compared to the conventional affine high-resolution intensity-based registration method. Target registration errors were measured between adjacent tissue sections (pairwise error), as well as with respect to a 3D reference reconstruction (accumulated error), to capture propagation of error through the stack of sections. Accumulated error measures were lower (p < 0.01) for the nucleus landmark technique and superior vasculature continuity was observed. These findings indicate that registration based on automatic extraction and correspondence of small, homologous landmarks may support accurate 3D histology reconstruction. This technique avoids the otherwise problematic "banana-into-cylinder" effect observed using conventional methods that optimize the pairwise alignment of salient structures, forcing them to be section-orthogonal. This approach will provide a valuable tool for high-accuracy 3D histology tissue
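Target registration error, the metric used above to score the alignments, is the residual distance between homologous landmarks after applying the estimated transform. A toy 2D rigid example (the misalignment and landmark positions are invented for illustration; a perfect estimate drives TRE to zero):

```python
import numpy as np

# Target registration error (TRE): RMS distance between homologous
# landmarks after applying the estimated transform.
theta = np.deg2rad(5.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
t = np.array([0.3, -0.1])

fixed = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
moving = fixed @ R.T + t                 # adjacent section, misaligned by (R, t)

recovered = (moving - t) @ R             # apply the (here, known) inverse
tre = np.sqrt(np.mean(np.sum((recovered - fixed) ** 2, axis=1)))
```

Measuring this pairwise (adjacent sections) versus against a 3D reference (accumulated) is what separates local alignment quality from error propagation through the stack.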
Institutional Problems in Urban Planning and Modern Methods of Reconstruction for Siberian Cities
Dayneko, A. I.; Dayneko, D. V.
2017-11-01
The work presents institutional problems in Russian urban planning. The institutional structure of the current system for territorial development is discussed. The necessity of conducting this research is substantiated, and methods and tools for evaluating the effectiveness of institutional changes are suggested. The article proposes a program and tested methods of reconstruction to be adopted for Siberia, considering the climatic, seismic and ecological peculiarities of the regions.
System and method for image reconstruction, analysis, and/or de-noising
Laleg-Kirati, Taous-Meriem
2015-11-12
A method and system can analyze, reconstruct, and/or denoise an image. The method and system can include interpreting a signal as a potential of a Schrödinger operator, decomposing the signal into squared eigenfunctions, reducing a design parameter of the Schrödinger operator, analyzing discrete spectra of the Schrödinger operator and combining the analysis of the discrete spectra to construct the image.
A three-dimensional graphic reconstruction method of the vertebral column from CT scans
International Nuclear Information System (INIS)
Verbout, A.J.; Falke, T.H.M.; Tinkelenberg, J.
1983-01-01
The method of graphic reconstruction using the oblique view technique was applied to transverse CT scans of the vertebral column. In the scanning procedure the low-dose thin-slice technique was used. The method proved valuable for the construction of three-dimensional models that are reliable reproductions of the original. The results are useful for preoperative evaluation of the deformed spine as well as for anatomic research. (orig.)
Benjamin N. Sulman; Daniel Tyler Roman; Todd M. Scanlon; Lixin Wang; Kimberly A. Novick
2016-01-01
The eddy covariance (EC) method is routinely used to measure net ecosystem fluxes of carbon dioxide (CO2) and evapotranspiration (ET) in terrestrial ecosystems. It is often desirable to partition CO2 flux into gross primary production (GPP) and ecosystem respiration (RE), and to partition ET into evaporation and...
Nelson, A. J.; Koloutsou-Vakakis, S.; Rood, M. J.; Lichiheb, N.; Heuer, M.; Myles, L.
2017-12-01
Ammonia (NH3) is a precursor to fine particulate matter (PM) in the ambient atmosphere. Agricultural activities represent over 80% of anthropogenic emissions of NH3 in the United States, and the use of nitrogen-based fertilizers contributes >50% of total NH3 emissions in central Illinois. The U.S. EPA Science Advisory Board has called for improved methods to measure, model, and report atmospheric NH3 concentrations and emissions from agriculture. High uncertainties in the temporal and spatial distribution of NH3 emissions contribute to poor performance of air quality models in predicting ambient PM concentrations. This study reports and compares NH3 flux measurements of differing temporal resolution obtained with two methods: relaxed eddy accumulation (REA) and flux-gradient (FG). REA and FG systems were operated concurrently above a corn canopy at the University of Illinois at Urbana-Champaign (UIUC) Energy Biosciences Institute (EBI) Energy Farm during the 2014 corn-growing season. The REA system operated during daytime, providing average fluxes over four-hour sampling intervals, where time resolution was limited by the detection limit of the denuders. The FG system employed a cavity ring-down spectrometer and was operated continuously, reporting 30 min flux averages. A flux-footprint evaluation was used for quality control, resulting in 1,178 qualified FG measurements, 82 of which were coincident with REA measurements. Similar emission trends were observed with both systems, with peak NH3 emission observed one week after fertilization. For all coincident samples, mean NH3 flux was 205 ± 300 ng N m^-2 s^-1 and 110 ± 256 ng N m^-2 s^-1 as measured with REA and FG, respectively, where positive flux indicates emission. This is the first reported inter-comparison of REA and FG methods as used for quantifying NH3 fluxes from cropland. Preliminary analysis indicates the improved temporal resolution and continuous sampling enabled by FG allow for the identification of emission pulses
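The flux-gradient principle amounts to one line of physics: flux is proportional to the vertical concentration gradient through an eddy diffusivity. A sketch (heights, concentrations, and the constant K below are invented; real FG systems derive K from micrometeorological similarity theory rather than assuming it):

```python
# Flux-gradient method: flux = -K * dC/dz, positive flux = emission.
# All numbers are illustrative, not measurements from the study.
z1, z2 = 1.0, 3.0          # measurement heights above the canopy [m]
c1, c2 = 12.0, 10.0        # NH3 concentration at z1, z2 [ug m^-3]
K = 0.15                   # assumed eddy diffusivity [m^2 s^-1]

gradient = (c2 - c1) / (z2 - z1)   # vertical concentration gradient
flux = -K * gradient               # upward (emission) when C falls with height
```

Because the spectrometer delivers fast continuous concentration profiles, this calculation can be repeated every averaging interval, which is what gives FG its 30 min resolution compared with the four-hour denuder-limited REA intervals.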
Quartet-net: a quartet-based method to reconstruct phylogenetic networks.
Yang, Jialiang; Grünewald, Stefan; Wan, Xiu-Feng
2013-05-01
Phylogenetic networks can model reticulate evolutionary events such as hybridization, recombination, and horizontal gene transfer. However, reconstructing such networks is not trivial. Popular character-based methods are computationally inefficient, whereas distance-based methods cannot guarantee reconstruction accuracy because pairwise genetic distances only reflect partial information about a reticulate phylogeny. To balance accuracy and computational efficiency, here we introduce a quartet-based method to construct a phylogenetic network from a multiple sequence alignment. Unlike distances that only reflect the relationship between a pair of taxa, quartets contain information on the relationships among four taxa; these quartets provide adequate capacity to infer a more accurate phylogenetic network. In applications to simulated and biological data sets, we demonstrate that this novel method is robust and effective in reconstructing reticulate evolutionary events, and it has the potential to infer more accurate phylogenetic distances than other conventional phylogenetic network construction methods such as Neighbor-Joining, Neighbor-Net, and Split Decomposition. This method can be used in constructing phylogenetic networks from simple evolutionary histories involving a few reticulate events to complex evolutionary histories involving a large number of reticulate events. A software package called "Quartet-Net" has been implemented and is available at http://sysbio.cvm.msstate.edu/QuartetNet/.
A hybrid 3D SEM reconstruction method optimized for complex geologic material surfaces.
Yan, Shang; Adegbule, Aderonke; Kibbey, Tohren C G
2017-08-01
Reconstruction methods are widely used to extract three-dimensional information from scanning electron microscope (SEM) images. This paper presents a new hybrid reconstruction method that combines stereoscopic reconstruction with shape-from-shading calculations to generate highly-detailed elevation maps from SEM image pairs. The method makes use of an imaged glass sphere to determine the quantitative relationship between observed intensity and angles between the beam and surface normal, and the detector and surface normal. Two specific equations are derived to make use of image intensity information in creating the final elevation map. The equations are used together, one making use of intensities in the two images, the other making use of intensities within a single image. The method is specifically designed for SEM images captured with a single secondary electron detector, and is optimized to capture maximum detail from complex natural surfaces. The method is illustrated with a complex structured abrasive material, and a rough natural sand grain. Results show that the method is capable of capturing details such as angular surface features, varying surface roughness, and surface striations. Copyright © 2017 Elsevier Ltd. All rights reserved.
WATSFAR: numerical simulation of soil WATer and Solute fluxes using a FAst and Robust method
Crevoisier, David; Voltz, Marc
2013-04-01
To simulate the evolution of hydro- and agro-systems, numerous spatialised models are based on a multi-local approach, and improvement of simulation accuracy by data-assimilation techniques is now common in many fields of application. The latest acquisition techniques provide a large amount of experimental data, which increases the efficiency of parameter estimation and inverse modelling approaches. In turn, simulations are often run on large temporal and spatial domains, which requires a large number of model runs. Hence, despite the regular increase in computing capacity, the development of fast and robust methods describing the evolution of saturated-unsaturated soil water and solute fluxes is still a challenge. Ross (2003, Agron J; 95:1352-1361) proposed a method, solving the 1D Richards and convection-diffusion equations, that fulfils these requirements. The method is based on a non-iterative approach, which reduces the risk of numerical divergence and allows the use of coarser spatial and temporal discretisations while assuring satisfactory accuracy of the results. Crevoisier et al. (2009, Adv Wat Res; 32:936-947) proposed some technical improvements and validated this method on a wider range of agro-pedo-climatic situations. In this poster, we present the simulation code WATSFAR, which generalises the Ross method to other mathematical representations of the soil water retention curve (i.e. standard and modified van Genuchten models) and includes a dual-permeability context (preferential fluxes) for both water and solute transfers. The situations tested are those known to be the least favourable for standard numerical methods: fine-textured and extremely dry soils, intense rainfall and solute fluxes, soils near saturation, ... The results of WATSFAR have been compared with the standard finite element model Hydrus. The analysis of these comparisons highlights two main advantages of WATSFAR, i) robustness: even on fine-textured soil or high water and solute
Directory of Open Access Journals (Sweden)
Tae Joon Choi
2016-01-01
Full Text Available Titanium micro-mesh implants are widely used in orbital wall reconstructions because they have several advantageous characteristics. However, the rough and irregular marginal spurs of the cut edges of the titanium mesh sheet impede the efficacious and minimally traumatic insertion of the implant, because these spurs may catch or hook the orbital soft tissue, skin, or conjunctiva during the insertion procedure. In order to prevent this problem, we developed an easy method of inserting a titanium micro-mesh, in which it is wrapped with the aseptic transparent plastic film that is used to pack surgical instruments or is attached to one side of the inner suture package. Fifty-four patients underwent orbital wall reconstruction using a transconjunctival or transcutaneous approach. The wrapped implant was easily inserted without catching or injuring the orbital soft tissue, skin, or conjunctiva. In most cases, the implant was inserted in one attempt. Postoperative computed tomographic scans showed excellent placement of the titanium micro-mesh and adequate anatomic reconstruction of the orbital walls. This wrapping insertion method may be useful for making the insertion of titanium micro-mesh implants in the reconstruction of orbital wall fractures easier and less traumatic.
Bai, Bing
2012-03-01
There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.
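The two ingredients named in the abstract — the TV prior and the logarithmic barrier that keeps interior-point iterates strictly positive — can be sketched directly (anisotropic TV on a toy image; the Poisson likelihood and the PCG subproblem solver are omitted):

```python
import numpy as np

# Anisotropic total-variation functional: sum of absolute neighbour
# differences. The logarithmic barrier penalizes any voxel approaching
# zero, enforcing nonnegativity inside the interior-point iterations.
def tv(img):
    dx = np.diff(img, axis=1)              # horizontal differences
    dy = np.diff(img, axis=0)              # vertical differences
    return np.sum(np.abs(dx)) + np.sum(np.abs(dy))

def barrier(img, mu):
    return -mu * np.sum(np.log(img))       # -> +inf as any voxel -> 0

flat = np.ones((4, 4))                     # constant image: zero TV
noisy = flat.copy()
noisy[2, 2] = 2.0                          # one bright voxel adds TV
```

In the interior-point scheme, the barrier weight mu is driven toward zero over a sequence of subproblems, so the iterates approach the constrained optimum from strictly inside the feasible (positive) region.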
In situ methods for measuring thermal properties and heat flux on planetary bodies
Kömle, Norbert I.; Hütter, Erika S.; Macher, Wolfgang; Kaufmann, Erika; Kargl, Günter; Knollenberg, Jörg; Grott, Matthias; Spohn, Tilman; Wawrzaszek, Roman; Banaszkiewicz, Marek; Seweryn, Karoly; Hagermann, Axel
2011-01-01
The thermo-mechanical properties of planetary surface and subsurface layers control to a high extent in which way a body interacts with its environment, in particular how it responds to solar irradiation and how it interacts with a potentially existing atmosphere. Furthermore, if the natural temperature profile over a certain depth can be measured in situ, this gives important information about the heat flux from the interior and thus about the thermal evolution of the body. Therefore, in most of the recent and planned planetary lander missions experiment packages for determining thermo-mechanical properties are part of the payload. Examples are the experiment MUPUS on Rosetta's comet lander Philae, the TECP instrument aboard NASA's Mars polar lander Phoenix, and the mole-type instrument HP3 currently developed for use on upcoming lunar and Mars missions. In this review we describe several methods applied for measuring thermal conductivity and heat flux and discuss the particular difficulties faced when these properties have to be measured in a low pressure and low temperature environment. We point out the abilities and disadvantages of the different instruments and outline the evaluation procedures necessary to extract reliable thermal conductivity and heat flux data from in situ measurements. PMID:21760643
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
International Nuclear Information System (INIS)
Nobili, Flavio; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-01-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data deriving from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT) specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
Clinical correlative evaluation of an iterative method for reconstruction of brain SPECT images
Energy Technology Data Exchange (ETDEWEB)
Nobili, Flavio E-mail: fnobili@smartino.ge.it; Vitali, Paolo; Calvini, Piero; Bollati, Francesca; Girtler, Nicola; Delmonte, Marta; Mariani, Giuliano; Rodriguez, Guido
2001-08-01
Background: Brain SPECT and PET investigations have shown discrepancies in Alzheimer's disease (AD) when considering data derived from deeply located structures, such as the mesial temporal lobe. These discrepancies could be due to a variety of factors, including substantial differences in gamma-cameras and underlying technology. Mesial temporal structures are deeply located within the brain, and the commonly used Filtered Back-Projection (FBP) technique does not fully take into account either the physical parameters of gamma-cameras or the geometry of collimators. In order to overcome these limitations, alternative reconstruction methods have been proposed, such as the iterative method of the Conjugate Gradients with modified matrix (CG). However, the clinical applications of these methods have so far been only anecdotal. The present study was planned to compare perfusional SPECT data as derived from the conventional FBP method and from the iterative CG method, which takes into account the geometrical and physical characteristics of the gamma-camera, by a correlative approach with neuropsychology. Methods: Correlations were compared between perfusion of the hippocampal region, as achieved by both the FBP and the CG reconstruction methods, and a short-memory test (Selective Reminding Test, SRT), specifically addressing one of its functions. A brain-dedicated camera (CERASPECT) was used for SPECT studies with 99mTc-hexamethylpropylene-amine-oxime in 23 consecutive patients (mean age: 74.2±6.5) with mild (Mini-Mental Status Examination score ≥15, mean 20.3±3), probable AD. Counts from a hippocampal region in each hemisphere were referred to the average thalamic counts. Results: Hippocampal perfusion significantly correlated with the MMSE score with similar statistical significance (p<0.01) between the two reconstruction methods. Correlation between hippocampal perfusion and the SRT score was better with the CG method (r=0.50 for both hemispheres, p<0.01) than with
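The correlative analysis in a study like this one reduces to computing Pearson's r between region-normalized counts and a neuropsychological score. A minimal sketch with synthetic stand-in data (the perfusion ratios and SRT scores below are invented, not the study's):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float(np.sum(xm * ym) / np.sqrt(np.sum(xm**2) * np.sum(ym**2)))

# Hypothetical data: hippocampal counts normalized to average thalamic
# counts, paired with a short-memory (SRT) score for each of 23 patients.
rng = np.random.default_rng(0)
perfusion = rng.uniform(0.6, 1.0, size=23)           # hippocampus/thalamus ratio
srt = 40 * perfusion + rng.normal(0, 2, size=23)     # correlated test score

r = pearson_r(perfusion, srt)
```

With real data one would also compute a p-value (e.g. via a t-test on r with n-2 degrees of freedom) before comparing reconstruction methods.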
Accident or homicide--virtual crime scene reconstruction using 3D methods.
Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J
2013-02-10
The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on the external and internal morphological findings of the injured or deceased person. For this approach, high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole-body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver the internal medical findings of the body. These 3D data are fused into a whole-body model of the deceased. Beyond the findings on the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case, a man was shot in his bedroom, and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case, a woman was hit by a car driving backwards into a garage. It was unclear whether the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a reconstruction of the course of events based on real data. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Accelerated iterative methods of PET image reconstruction using ordered subsets technology
International Nuclear Information System (INIS)
Liu Li; Yin Yin
2004-01-01
Purpose: Positron Emission Tomography (PET) is one of the most advanced medical imaging techniques in the world. The traditional practical image reconstruction algorithm for PET is Filtered Back-Projection (FBP). Iterative methods based on statistical models and the least-squares principle have mostly remained at the research stage because of their slow convergence. Recently a new approach, Ordered Subsets Expectation Maximization (OS-EM), has been used in clinical nuclear tomography; it can greatly reduce the computing time while maintaining the good spatial resolution of the well-known Maximum Likelihood Expectation Maximization (ML-EM) iterative reconstruction method. The advantage of OS-EM over ML-EM is due to the use of the ordered subsets (OS) technique, which provides a speedup factor of about L (the number of subsets). In principle, the OS technique can be used in all iterative image algorithms. In our work, Ordered Subsets Least Squares (OS-LS) was introduced, and the reconstructed results were compared with OS-EM using both simulated data and real PET data. Methods: A 64×64 Jaszczak-like model was constructed, with maximum 70 and minimum 10 (see Fig. 1(a)), and a 64-bin by 32-angle sinogram was simulated. The real PET transmission sinogram data (160×192 in size) of a thorax phantom were also used to demonstrate the accelerating effect of OS in OS-EM and OS-LS. When the number of subsets L equals 1, OS-EM and OS-LS reduce to the traditional ML-EM and LS, respectively. Results: The reconstructed images after 1 iteration of OS-EM and OS-LS are shown in Fig. 1(b-e). From Fig. 1, we see that the accelerating effect of OS is obvious in both OS-EM and OS-LS, and the image reconstructed by OS-EM is still better than that of OS-LS, with less noise. Fig. 2 shows the reconstructed images of a thorax phantom (128×128 in size) by OS-EM and OS-LS, from a real PET transmission sinogram, which was the
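The OS-EM update described above can be sketched in a few lines: the multiplicative ML-EM update is applied per ordered subset of projection rows instead of over the whole system. The toy system matrix and data below are illustrative, not a real scanner geometry:

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iter=50, eps=1e-12):
    """Ordered-subsets EM for emission data y = A x, x >= 0.

    The projection rows are split into ordered subsets; each sub-iteration
    applies the multiplicative ML-EM update using only one subset, giving
    roughly an L-fold speed-up per pass (L = number of subsets). With
    n_subsets=1 this reduces to plain ML-EM, as the abstract notes.
    """
    m, n = A.shape
    x = np.ones(n)
    for _ in range(n_iter):
        for idx in np.array_split(np.arange(m), n_subsets):
            As = A[idx]
            ratio = y[idx] / (As @ x + eps)           # measured / estimated
            x *= (As.T @ ratio) / (As.T @ np.ones(len(idx)) + eps)
    return x

# Hypothetical toy problem with noiseless data.
rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(64, 16))   # system matrix
x_true = rng.uniform(1.0, 5.0, size=16)    # activity distribution
y = A @ x_true                             # sinogram
x_hat = os_em(A, y)
```

The update preserves nonnegativity because it is purely multiplicative from a positive starting image.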
Zhang, Haiyan; Zhang, Liyi; Sun, Yunshan; Zhang, Jingyu
2015-01-01
Reducing the X-ray tube current is one of the widely used methods for decreasing the radiation dose. Unfortunately, the signal-to-noise ratio (SNR) of the projection data degrades simultaneously. To improve the quality of the reconstructed images, a dictionary-learning-based penalized weighted least-squares (PWLS) approach is proposed for sinogram denoising. The weighted least-squares term accounts for the statistical characteristics of the noise, and the penalty models the sparsity of the sinogram based on dictionary learning. The CT image is then reconstructed from the denoised sinogram using the filtered back-projection (FBP) algorithm. The proposed method is particularly suitable for projection data with low SNR. Experimental results show that the proposed method yields high-quality CT images even when the SNR of the projection data declines sharply.
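The PWLS objective has the form (y - s)ᵀW(y - s) + β·R(s), where W carries the inverse noise variances. A minimal sketch on a 1-D sinogram row, with the paper's learned-dictionary sparsity penalty replaced by a simple quadratic smoothness penalty purely so the solve stays closed-form (an assumption for brevity):

```python
import numpy as np

def pwls_denoise(y, weights, beta):
    """Penalized weighted least-squares denoising of a 1-D sinogram row.

    Minimizes (y-s)' W (y-s) + beta * ||D s||^2, where W = diag(weights)
    encodes the (inverse-variance) noise statistics and D is a
    finite-difference operator standing in for the dictionary penalty.
    """
    n = len(y)
    W = np.diag(weights)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]   # first differences, (n-1, n)
    return np.linalg.solve(W + beta * (D.T @ D), W @ y)

# Hypothetical noisy sinogram row with known noise level.
rng = np.random.default_rng(2)
clean = np.sin(np.linspace(0, np.pi, 100))
sigma = 0.1
noisy = clean + rng.normal(0, sigma, 100)
s = pwls_denoise(noisy, weights=np.full(100, 1 / sigma**2), beta=200.0)
```

In the paper the penalty is instead a sparse-coding term over a dictionary learned from sinogram patches, which requires an iterative solver.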
Neural network CT image reconstruction method for small amount of projection data
International Nuclear Information System (INIS)
Ma, X.F.; Fukuhara, M.; Takeda, T.
2000-01-01
This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventional objective function of such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for the analysis of laboratory experiments or field observations where only a small amount of projection data is available, in comparison with the well-developed medical applications.
Neural network CT image reconstruction method for small amount of projection data
Ma, X F; Takeda, T
2000-01-01
This paper presents a new method for two-dimensional image reconstruction using a multi-layer neural network. Whereas the conventional objective function of such a neural network is composed of a sum of squared errors of the output data, we define an objective function composed of a sum of squared residuals of an integral equation. By employing an appropriate numerical line integral for this integral equation, we can construct a neural network that can be used for CT image reconstruction in cases with a small amount of projection data. We applied this method to some model problems and obtained satisfactory results. This method is especially useful for the analysis of laboratory experiments or field observations where only a small amount of projection data is available, in comparison with the well-developed medical applications.
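The key idea above is the choice of objective: minimize the squared residual of the line-integral (projection) equation itself rather than an output error against known targets. A sketch of that objective under a simplifying assumption: the image is a free parameter vector standing in for the network output (the paper optimizes multi-layer network weights instead):

```python
import numpy as np

# Residual-of-integral-equation objective with few projections.
rng = np.random.default_rng(3)
n_pix, n_rays = 25, 12                            # fewer rays than pixels
A = rng.uniform(0.0, 1.0, size=(n_rays, n_pix))   # numerical line-integral weights
f_true = rng.uniform(0.0, 1.0, size=n_pix)
p = A @ f_true                                    # small set of projection data

f = np.zeros(n_pix)
lr = 0.5 / np.linalg.norm(A, 2) ** 2              # stable gradient step size
for _ in range(5000):
    residual = A @ f - p                          # residual of the integral equation
    f -= lr * 2.0 * (A.T @ residual)              # gradient of sum of squared residuals

objective = float(np.sum((A @ f - p) ** 2))
```

Because the system is underdetermined (12 rays, 25 pixels), the residual can still be driven to near zero; regularization, implicit in the network in the paper, picks among the consistent images.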
Zhang, Dai; Hao, Shiqi; Zhao, Qingsong; Zhao, Qi; Wang, Lei; Wan, Xiongfeng
2018-03-01
Existing wavefront reconstruction methods are usually low in resolution, restricted by the structural characteristics of the Shack-Hartmann wavefront sensor (SH WFS) and the deformable mirror (DM) in the adaptive optics (AO) system, resulting in weak homodyne detection efficiency for free space optical (FSO) communication. In order to solve this problem, we first validate the feasibility of using a liquid crystal spatial light modulator (LC SLM) in an AO system. Then, a wavefront reconstruction method based on wavelet fractal interpolation is proposed, after a self-similarity analysis of the wavefront distortion caused by atmospheric turbulence. Fast wavelet decomposition is used for multiresolution analysis of the wavefront phase spectrum, during which soft-threshold denoising is carried out. The resolution of the estimated wavefront phase is then improved by fractal interpolation. Finally, fast wavelet reconstruction is used to recover the wavefront phase. Simulation results reflect the superiority of our method in homodyne detection. Compared with the minimum variance estimation (MVE) method based on interpolation techniques, the proposed method obtains superior homodyne detection efficiency with lower computational complexity. Our research findings have theoretical significance for the design of coherent FSO communication systems.
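The wavelet-decomposition-plus-soft-threshold stage of such a pipeline can be sketched with a single-level Haar transform (the fractal interpolation stage is omitted here; this is a minimal stand-in, not the paper's full method):

```python
import numpy as np

def haar_soft_denoise(signal, threshold):
    """One-level Haar wavelet decomposition with soft-threshold denoising."""
    s = np.asarray(signal, dtype=float)
    even, odd = s[0::2], s[1::2]
    approx = (even + odd) / np.sqrt(2)            # low-pass coefficients
    detail = (even - odd) / np.sqrt(2)            # high-pass coefficients
    # Soft thresholding shrinks detail coefficients toward zero.
    detail = np.sign(detail) * np.maximum(np.abs(detail) - threshold, 0.0)
    out = np.empty_like(s)
    out[0::2] = (approx + detail) / np.sqrt(2)    # inverse Haar transform
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out

# Hypothetical smooth phase profile corrupted by measurement noise.
rng = np.random.default_rng(6)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))
noisy = clean + rng.normal(0, 0.2, 256)
denoised = haar_soft_denoise(noisy, threshold=0.2 * np.sqrt(2 * np.log(256)))
```

The threshold follows the universal rule sigma*sqrt(2 ln N); a multilevel transform would apply the same shrinkage at each scale.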
Residual-based a posteriori error estimation for multipoint flux mixed finite element methods
Du, Shaohong
2015-10-26
A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.
Standard Test Method for Measuring Heat Flux Using a Water-Cooled Calorimeter
American Society for Testing and Materials. Philadelphia
2005-01-01
1.1 This test method covers the measurement of a steady heat flux to a given water-cooled surface by means of a system energy balance. 1.2 The values stated in SI units are to be regarded as standard. No other units of measurement are included in this standard. 1.3 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.
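The energy balance behind this test method is simple: absorbed power equals coolant mass flow times specific heat times temperature rise, and dividing by the gauged area gives the flux. A sketch with illustrative numbers (not values from the standard):

```python
CP_WATER = 4186.0  # J/(kg*K), specific heat of liquid water

def heat_flux(mass_flow_kg_s, t_in_c, t_out_c, area_m2):
    """Steady heat flux (W/m^2) from a water-cooled surface energy balance."""
    power_w = mass_flow_kg_s * CP_WATER * (t_out_c - t_in_c)
    return power_w / area_m2

# Example: 0.02 kg/s of water heated from 20 C to 35 C over a 0.01 m^2
# gauged surface gives a flux of about 1.26e5 W/m^2.
q = heat_flux(0.02, 20.0, 35.0, 0.01)
```

In practice the standard's accuracy hinges on measuring the flow rate and the inlet/outlet temperatures well, since the flux is linear in both.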
Reconstructing uniformly attenuated rotating slant-hole SPECT projection data using the DBH method
International Nuclear Information System (INIS)
Huang Qiu; Gullberg, Grant T; Xu Jingyan; Tsui, Benjamin M W
2009-01-01
This work applies a previously developed analytical algorithm to the reconstruction problem in a rotating multi-segment slant-hole (RMSSH) SPECT system. The RMSSH collimator has greater detection efficiency than the parallel-hole collimator with comparable spatial resolution at the expense of limited common volume-of-view (CVOV) and is therefore suitable for detecting low-contrast lesions in breast, cardiac and brain imaging. The absorption of gamma photons in both the human breast and brain can be assumed to follow an exponential rule with a constant attenuation coefficient. In this work, the RMSSH SPECT data of a digital NCAT phantom with breast attachment are modeled as the uniformly attenuated Radon transform of the activity distribution. These data are reconstructed using an analytical algorithm called the DBH method, which is an acronym for the procedure of differentiation backprojection followed by a finite weighted inverse Hilbert transform. The projection data are first differentiated along a specific direction in the projection space and then backprojected to the image space. The result from this first step is equal to a one-dimensional finite weighted Hilbert transform of the object; this transform is then numerically inverted to obtain the reconstructed image. With the limited CVOV of the RMSSH collimator, the detector captures gamma photon emissions from the breast and from parts of the torso. The simulation results show that the DBH method is capable of exactly reconstructing the activity within a well-defined region-of-interest (ROI) within the breast if the activity is confined to the breast or if the activity outside the CVOV is uniformly attenuated for each measured projection, while a conventional filtered backprojection algorithm only reconstructs the high frequency components of the activity function in the same geometry.
Hirahara, Noriyuki; Monma, Hiroyuki; Shimojo, Yoshihide; Matsubara, Takeshi; Hyakudomi, Ryoji; Yano, Seiji; Tanaka, Tsuneo
2011-01-01
Here we report the method of anastomosis based on the double stapling technique (hereinafter, DST) using a trans-oral anvil delivery system (EEA™ OrVil™) for reconstructing the esophagus and lifted jejunum following laparoscopic total gastrectomy or proximal gastric resection. As a basic technique, laparoscopic total gastrectomy employed Roux-en-Y reconstruction, laparoscopic proximal gastrectomy employed double-tract reconstruction, and end-to-side anastomosis was used for the cut-off...
International Nuclear Information System (INIS)
Hayward, Robert M.; Rahnema, Farzad; Zhang, Dingkang
2013-01-01
Highlights: ► A new hybrid stochastic–deterministic transport theory method to couple with diffusion theory. ► The method is implemented in 2D hexagonal geometry. ► The new method produces excellent results when compared with Monte Carlo reference solutions. ► The method is fast, solving all test cases in less than 12 s. - Abstract: A new hybrid stochastic–deterministic transport theory method, which is designed to couple with diffusion theory, is presented. The new method is an extension of the incident flux response expansion method, and it combines the speed of diffusion theory with the accuracy of transport theory. With ease of use in mind, the new method is derived in such a way that it can be implemented with only minimal modifications to an existing diffusion theory method. A new angular expansion, which is necessary for the diffusion theory coupling, is developed in 2D and 3D. The method is implemented in 2D hexagonal geometry, and an HTTR benchmark problem is used to test its accuracy in a standalone configuration. It is found that the new method produces excellent results (with average relative error in partial current less than 0.033%) when compared with Monte Carlo reference solutions. Furthermore, the method is fast, solving all test cases in less than 12 s
Lartizien, Carole; Kinahan, Paul E.; Comtat, Claude; Lin, Michael; Swensson, Richard G.; Trebossen, Regine; Bendriem, Bernard
2000-04-01
This work presents initial results from observer detection performance studies using the same volume visualization software tools that are used in clinical PET oncology imaging. Research into the FORE+OSEM and FORE+AWOSEM statistical image reconstruction methods tailored to whole-body 3D PET oncology imaging has indicated potential improvements in image SNR compared to currently used analytic reconstruction methods (FBP). To assess the resulting impact of these reconstruction methods on the performance of human observers in detecting and localizing tumors, we use a non-Monte Carlo technique to generate multiple statistically accurate realizations of 3D whole-body PET data, based on an extended MCAT phantom and with clinically realistic levels of statistical noise. For each realization, we add a fixed number of randomly located 1 cm diam. lesions whose contrast is varied among pre-calibrated values so that the range of true positive fractions is well sampled. The observer is told the number of tumors and, similar to the AFROC method, asked to localize all of them. The true positive fraction for the three algorithms (FBP, FORE+OSEM, FORE+AWOSEM) as a function of lesion contrast is calculated, although other protocols could be compared. A confidence level for each tumor is also recorded for incorporation into later AFROC analysis.
A reconstruction method based on AL0FGD for compressed sensing in border monitoring WSN system.
Directory of Open Access Journals (Sweden)
Yan Wang
Full Text Available In this paper, to monitor the border in real time with high efficiency and accuracy, we applied compressed sensing (CS) technology to the border monitoring wireless sensor network (WSN) system and proposed a reconstruction method for CS based on an approximate l0 norm and fast gradient descent (AL0FGD). In the front end of the system, the measurement matrix was used to sense the border information in a compressed manner, and the proposed reconstruction method was then applied to recover the border information at the monitoring terminal. To evaluate the performance of the proposed method, a helicopter sound signal was used as an example in the experimental simulation, and three other typical reconstruction algorithms, (1) the split Bregman algorithm, (2) the iterative shrinkage algorithm, and (3) smoothed approximate l0 norm (SL0), were employed for comparison. The experimental results showed that the proposed method has better performance in recovering the helicopter sound signal in most cases, which could be used as a basis for further study of the border monitoring WSN system.
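The smoothed-l0 family of CS reconstructions works by approximating the l0 norm with a Gaussian measure whose width sigma is gradually decreased, alternating sparsifying gradient steps with a projection back onto the measurement constraint. The sketch below implements the SL0 baseline named in the comparison (not AL0FGD itself), on invented data:

```python
import numpy as np

def sl0(A, y, sigma_min=0.01, sigma_decay=0.7, mu=2.0, inner=20):
    """Smoothed-l0 (SL0-style) reconstruction sketch for y = A x, x sparse."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ y                           # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            # Gradient step on the Gaussian sparsity measure...
            x = x - mu * x * np.exp(-x**2 / (2 * sigma**2))
            # ...then project back onto the constraint A x = y.
            x = x - A_pinv @ (A @ x - y)
        sigma *= sigma_decay
    return x

# Hypothetical k-sparse signal measured by a Gaussian matrix.
rng = np.random.default_rng(4)
n, m, k = 100, 40, 5
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
y = A @ x_true
x_hat = sl0(A, y)
```

Decreasing sigma slowly (graduated non-convexity) is what lets the method avoid poor local minima of the l0 surrogate.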
A comparison of reconstruction methods for undersampled atomic force microscopy images.
Luo, Yufan; Andersson, Sean B
2015-12-18
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip-sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images.
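One of the sampling patterns compared above, row subsampling followed by interpolation, is easy to sketch: keep every k-th scan line and fill the gaps by 1-D linear interpolation along the slow-scan axis. The smooth test surface below is synthetic, chosen to match the paper's "low-frequency content" regime where interpolation does well:

```python
import numpy as np

def row_subsample_and_interpolate(image, keep_every=4):
    """Simulate row-subsampled AFM scanning and inpaint by interpolation.

    Keeps every `keep_every`-th scan line (plus the last line, to avoid
    extrapolation) and fills missing rows column-by-column with np.interp.
    A minimal stand-in for the inpainting reconstructions in the paper.
    """
    n_rows = image.shape[0]
    kept = np.unique(np.r_[np.arange(0, n_rows, keep_every), n_rows - 1])
    all_rows = np.arange(n_rows)
    recon = np.empty(image.shape, dtype=float)
    for col in range(image.shape[1]):
        recon[:, col] = np.interp(all_rows, kept, image[kept, col])
    return recon

# Hypothetical low-frequency surface (one spatial period across the scan).
yy, xx = np.mgrid[0:64, 0:64]
surface = np.sin(2 * np.pi * yy / 64) + np.cos(2 * np.pi * xx / 64)
recon = row_subsample_and_interpolate(surface)
rmse = float(np.sqrt(np.mean((recon - surface) ** 2)))
```

For images with sparse but mixed frequency content, the paper's basis-pursuit route would replace the interpolation step with an l1-minimization solve.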
A comparison of reconstruction methods for undersampled atomic force microscopy images
International Nuclear Information System (INIS)
Luo, Yufan; Andersson, Sean B
2015-01-01
Non-raster scanning and undersampling of atomic force microscopy (AFM) images is a technique for improving imaging rate and reducing the amount of tip–sample interaction needed to produce an image. Generation of the final image can be done using a variety of image processing techniques based on interpolation or optimization. The choice of reconstruction method has a large impact on the quality of the recovered image and the proper choice depends on the sample under study. In this work we compare interpolation through the use of inpainting algorithms with reconstruction based on optimization through the use of the basis pursuit algorithm commonly used for signal recovery in compressive sensing. Using four different sampling patterns found in non-raster AFM, namely row subsampling, spiral scanning, Lissajous scanning, and random scanning, we subsample data from existing images and compare reconstruction performance against the original image. The results illustrate that inpainting generally produces superior results when the image contains primarily low frequency content while basis pursuit is better when the images have mixed, but sparse, frequency content. Using support vector machines, we then classify images based on their frequency content and sparsity and, from this classification, develop a fast decision strategy to select a reconstruction algorithm to be used on subsampled data. The performance of the classification and decision test are demonstrated on test AFM images. (paper)
A feasible method for clinical delivery verification and dose reconstruction in tomotherapy
International Nuclear Information System (INIS)
Kapatoes, J.M.; Olivera, G.H.; Ruchala, K.J.; Smilowitz, J.B.; Reckwerdt, P.J.; Mackie, T.R.
2001-01-01
Delivery verification is the process in which the energy fluence delivered during a treatment is verified. This verified energy fluence can be used in conjunction with an image in the treatment position to reconstruct the full three-dimensional dose deposited. A method for delivery verification that utilizes a measured database of detector signal is described in this work. This database is a function of two parameters, radiological path-length and detector-to-phantom distance, both of which are computed from a CT image taken at the time of delivery. Such a database was generated and used to perform delivery verification and dose reconstruction. Two experiments were conducted: a simulated prostate delivery on an inhomogeneous abdominal phantom, and a nasopharyngeal delivery on a dog cadaver. For both cases, it was found that the verified fluence and dose results using the database approach agreed very well with those using previously developed and proven techniques. Delivery verification with a measured database and CT image at the time of treatment is an accurate procedure for tomotherapy. The database eliminates the need for any patient-specific, pre- or post-treatment measurements. Moreover, such an approach creates an opportunity for accurate, real-time delivery verification and dose reconstruction given fast image reconstruction and dose computation tools
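The measured database described above is indexed by two parameters, radiological path-length and detector-to-phantom distance, so a verification query is a 2-D interpolation. A sketch with an illustrative analytic database (exponential attenuation, inverse-square fall-off), not measured data:

```python
import numpy as np

def lookup_signal(db, pathlengths, distances, pl, d):
    """Bilinear lookup in a detector-signal database db[i, j], where
    db[i, j] is the signal at path-length pathlengths[i] and
    detector-to-phantom distance distances[j]."""
    i = np.clip(np.searchsorted(pathlengths, pl) - 1, 0, len(pathlengths) - 2)
    j = np.clip(np.searchsorted(distances, d) - 1, 0, len(distances) - 2)
    t = (pl - pathlengths[i]) / (pathlengths[i + 1] - pathlengths[i])
    u = (d - distances[j]) / (distances[j + 1] - distances[j])
    return ((1 - t) * (1 - u) * db[i, j] + t * (1 - u) * db[i + 1, j]
            + (1 - t) * u * db[i, j + 1] + t * u * db[i + 1, j + 1])

# Toy database: plausible shape, not a measured tomotherapy dataset.
pathlengths = np.linspace(0.0, 30.0, 31)    # cm, radiological path-length grid
distances = np.linspace(10.0, 60.0, 26)     # cm, detector-to-phantom grid
PL, D = np.meshgrid(pathlengths, distances, indexing="ij")
db = np.exp(-0.02 * PL) * (30.0 / D) ** 2

signal = lookup_signal(db, pathlengths, distances, pl=12.5, d=37.5)
```

In the actual procedure, both index parameters are computed per detector channel from the CT image acquired at treatment time.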
Directory of Open Access Journals (Sweden)
Meng Lu
2013-01-01
Full Text Available The thickness of tundish cover flux (TCF) plays an important role in the continuous casting (CC) steelmaking process. The traditional measurement methods for TCF thickness are the single- and double-wire methods, which have several problems, such as risks to personnel safety, sensitivity to the operator, and poor repeatability. To solve all these problems, in this paper we designed and built a dedicated instrument and present a novel method to measure TCF thickness. The instrument is composed of a measurement bar, a mechanical device, a high-definition industrial camera, a Siemens S7-200 programmable logic controller (PLC), and a computer. Our measurement method is based on computer vision algorithms, including an image denoising method, a monocular range measurement method, the scale-invariant feature transform (SIFT), and an image gray-gradient detection method. Using the present instrument and method, images in the CC tundish can be collected by the camera and transferred to the computer for image processing. Experiments showed that our instrument and method work well on-site at steel plants, can accurately measure the thickness of TCF, and overcome the disadvantages of the traditional measurement methods, or even replace them.
SU-D-206-03: Segmentation Assisted Fast Iterative Reconstruction Method for Cone-Beam CT
International Nuclear Information System (INIS)
Wu, P; Mao, T; Gong, S; Wang, J; Niu, T; Sheng, K; Xie, Y
2016-01-01
Purpose: Total variation (TV) based iterative reconstruction (IR) methods enable accurate CT image reconstruction from low-dose measurements with sparse projection acquisition, owing to the sparsifiability of most CT images under the gradient operator. However, conventional solutions require a large number of iterations to generate a decent reconstructed image. One major reason is that the expected piecewise-constant property is not taken into consideration at the optimization starting point. In this work, we propose an iterative reconstruction method for cone-beam CT (CBCT) that uses image segmentation to guide the optimization path more efficiently with respect to the regularization term at the beginning of the optimization trajectory. Methods: Our method applies the general knowledge that one tissue component in a CT image contains a relatively uniform distribution of CT numbers. This knowledge is incorporated into the proposed reconstruction by using an image segmentation technique to generate a piecewise-constant template from the first-pass, low-quality CT image reconstructed with an analytical algorithm. The template image is then applied as the initial value of the optimization process. Results: The proposed method is evaluated on the Shepp-Logan phantom at low and high noise levels, and on a head patient. The number of iterations is reduced by about 40% overall. Moreover, our proposed method tends to generate a smoother reconstructed image with the same TV value. Conclusion: We propose a computationally efficient iterative reconstruction method for CBCT imaging. Our method achieves a better optimization trajectory and faster convergence behavior. It does not rely on prior information and can be readily incorporated into existing iterative reconstruction frameworks. Our method is thus practical and attractive as a general solution to CBCT iterative reconstruction. This work is supported by the Zhejiang Provincial Natural Science Foundation of China (Grant No. LR16F010001), National High-tech R
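The template-generation step amounts to intensity clustering of the first-pass image into a few tissue classes, each replaced by its class mean. A sketch using 1-D k-means on intensities (one plausible segmentation choice; the abstract does not specify the particular technique):

```python
import numpy as np

def piecewise_constant_template(rough, n_classes=3, n_iter=20):
    """Build a piecewise-constant initial image by k-means on intensity.

    Each pixel of a first-pass analytical reconstruction is assigned to
    one of n_classes tissue intensities; replacing pixels by their class
    means yields the piecewise-constant template used to start the IR solve.
    """
    centers = np.linspace(rough.min(), rough.max(), n_classes)
    flat = rough.ravel()
    for _ in range(n_iter):
        labels = np.argmin(np.abs(flat[:, None] - centers[None, :]), axis=1)
        for c in range(n_classes):
            if np.any(labels == c):
                centers[c] = flat[labels == c].mean()
    return centers[labels].reshape(rough.shape)

# Hypothetical noisy first-pass image of a two-tissue phantom.
rng = np.random.default_rng(5)
phantom = np.zeros((32, 32))
phantom[8:24, 8:24] = 1.0
rough = phantom + rng.normal(0, 0.1, phantom.shape)
template = piecewise_constant_template(rough, n_classes=2)
```

Starting the TV-regularized solve from this template rather than from the noisy first-pass image is what shortens the optimization trajectory.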
Ogawa, Takahiro; Haseyama, Miki
2013-03-01
A missing-texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
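The ER algorithm at the core of this method alternates between enforcing a known Fourier magnitude and an object-domain constraint, monitoring the residual error. A minimal 1-D sketch with a support constraint standing in for the known-pixels constraint (synthetic signal, not image patches):

```python
import numpy as np

def error_reduction(magnitude, support, n_iter=200, seed=0):
    """Error-reduction (ER) phase retrieval on a 1-D signal.

    Alternates between the Fourier magnitude constraint and an
    object-domain support constraint, returning the reconstruction and
    the final Fourier-magnitude error (the quantity the paper monitors).
    """
    rng = np.random.default_rng(seed)
    x = rng.normal(size=len(magnitude))
    for _ in range(n_iter):
        X = np.fft.fft(x)
        X = magnitude * np.exp(1j * np.angle(X))   # keep phase, fix magnitude
        x = np.real(np.fft.ifft(X))
        x[~support] = 0.0                          # object-domain constraint
    err = np.linalg.norm(np.abs(np.fft.fft(x)) - magnitude)
    return x, err

# Hypothetical signal with known support; its Fourier magnitude plays the
# role of the (estimated) target-patch magnitude in the paper.
n = 64
true = np.zeros(n)
true[10:20] = np.hanning(10)
support = true != 0
mag = np.abs(np.fft.fft(true))
x_hat, err = error_reduction(mag, support)
```

A classical property of ER is that this error is non-increasing over iterations, which is what makes the paper's error-monitoring patch selection meaningful.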
Leverington, David W.; Teller, James T.; Mann, Jason D.
2002-06-01
Digital reconstructions of late Quaternary landscapes can be produced using a geographic information system (GIS) method that subtracts interpolated isobase values from modern elevations and bathymetry. The principal utility of the GIS method for reconstructing late Quaternary landscapes is in the relative ease and rapidity with which high-resolution, quantitative, and georeferenced databases of paleo-topography can be generated. These databases can be used for many purposes, including the generation of paleo-topographic maps, the estimation of the areas and volumes of individual water bodies and landforms, and the approximation of paleo-shoreline positions. GIS-based estimates of the dimensions of water bodies and landforms can be used to help constrain hydrological and climatic models of the late Quaternary.
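The core GIS operation described above is a grid subtraction: interpolated isobase (differential rebound) values are removed from the modern DEM, and paleo-shorelines follow from thresholding. A sketch on synthetic stand-in grids (the DEM and isobase surfaces below are invented):

```python
import numpy as np

# Synthetic modern elevation grid and an isobase (uplift) surface that
# increases toward the former ice-load center (larger y here).
ny, nx = 50, 50
yy, xx = np.mgrid[0:ny, 0:nx]

modern_dem = 300.0 + 0.5 * xx + 2.0 * np.sin(yy / 5.0)   # m above datum
isobase = 0.8 * yy                                       # uplift since time T, m

# Paleo-topography at time T: subtract the interpolated isobase surface.
paleo_dem = modern_dem - isobase

# Approximate paleo-shoreline of a lake whose surface stood at 310 m
# (in today's datum): cells below that level were flooded.
flooded = paleo_dem < 310.0
area_cells = int(flooded.sum())   # flooded area in grid cells
```

Multiplying `area_cells` by the cell area, and integrating (310 - paleo_dem) over flooded cells, gives the lake's area and volume estimates mentioned in the abstract.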
The Nagoya cosmic-ray muon spectrometer 3, part 4: Track reconstruction method
Shibata, S.; Kamiya, Y.; Iijima, K.; Iida, S.
1985-01-01
One of the greatest problems in measuring particle trajectories with an optical or visual detector system is the reconstruction of trajectories in real space from their recorded images. In the Nagoya cosmic-ray muon spectrometer, muon tracks are detected by wide-gap spark chambers and their images are recorded on photographic film through an optical system of 10 mirrors and two cameras. For the spatial reconstruction, 42 parameters of the optical system should be known to determine the configuration of this system. It is almost impossible to measure this many parameters directly with the usual techniques. In order to solve this problem, the inverse transformation method was applied. In this method, all the optical parameters are determined from the locations of fiducial marks in real space and the locations of their images on the photographic film by non-linear least-squares fitting.
Sproson, D. A. J.; Brooks, I. M.; Norris, S. J.
2012-09-01
The eddy covariance technique is the most direct of the methods that have been used to measure the flux of sea-spray aerosol between the ocean and atmosphere, but it has been applied in only a handful of studies. However, unless the aerosol is dried before the eddy covariance measurements are made, the hygroscopic nature of sea-spray may combine with a relative humidity flux to produce a bias in the calculated aerosol flux. "Bulk" methods have been presented to account for this bias; however, they rely on assumptions about the shape of the aerosol spectra that may not be valid for near-surface measurements of sea-spray. Here we describe a method of correcting aerosol spectra for relative-humidity-induced size variations at the high-frequency (10 Hz) measurement timescale, where counting statistics are poor and the spectral shape cannot be well represented by a simple power law. Such a correction allows the effects of hygroscopicity and relative humidity flux on the aerosol flux to be explicitly evaluated and compared to the bulk corrections, both in their original form and once reformulated to better represent the measured mean aerosol spectra. In general, the bulk corrections - particularly when reformulated for the measured mean aerosol spectra - perform relatively well, producing flux corrections of the right sign and approximate magnitude. However, there are times when the bulk methods either significantly over- or underestimate the required flux correction. We thus conclude that, where possible, relative humidity corrections should be made at the measurement frequency.
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead.
Wu, Jianbo; Fang, Hui; Li, Long; Wang, Jie; Huang, Xiaoming; Kang, Yihua; Sun, Yanhua; Tang, Chaoqing
2017-01-21
To meet the great needs for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatability MFL probing system is designed and manufactured, which was embedded with the developed sensors. It can track the swing movement of drill pipes and allow the pipe ends to pass smoothly. Finally, the developed system is employed in a drilling field for drill pipe inspection. Test results show that the proposed method can fulfill the requirements for drill pipe inspection at wellheads, which is of great importance in drill pipe safety.
A Lift-Off-Tolerant Magnetic Flux Leakage Testing Method for Drill Pipes at Wellhead
Directory of Open Access Journals (Sweden)
Jianbo Wu
2017-01-01
To meet the great needs for MFL (magnetic flux leakage) inspection of drill pipes at wellheads, a lift-off-tolerant MFL testing method is proposed and investigated in this paper. Firstly, a Helmholtz coil magnetization method and the whole MFL testing scheme are proposed. Then, based on the magnetic field focusing effect of ferrite cores, a lift-off-tolerant MFL sensor is developed and tested. It shows high sensitivity at a lift-off distance of 5.0 mm. Further, the follow-up high repeatability MFL probing system is designed and manufactured, which was embedded with the developed sensors. It can track the swing movement of drill pipes and allow the pipe ends to pass smoothly. Finally, the developed system is employed in a drilling field for drill pipe inspection. Test results show that the proposed method can fulfill the requirements for drill pipe inspection at wellheads, which is of great importance in drill pipe safety.
International Nuclear Information System (INIS)
Gao, H
2016-01-01
Purpose: This work develops a general framework, namely the filtered iterative reconstruction (FIR) method, to incorporate an analytical reconstruction (AR) method into an iterative reconstruction (IR) method for enhanced CT image quality. Methods: FIR is formulated as a combination of filtered data fidelity and sparsity regularization, and is then solved by the proximal forward-backward splitting (PFBS) algorithm. As a result, the image reconstruction decouples data fidelity and image regularization with a two-step iterative scheme, during which an AR-projection step updates the filtered data fidelity term, while a denoising solver updates the sparsity regularization term. During the AR-projection step, the image is projected to the data domain to form the data residual, which is then reconstructed by a certain AR into a residual image that is in turn weighted together with the previous image iterate to form the next image iterate. Since the eigenvalues of the AR-projection operator are close to unity, PFBS-based FIR has fast convergence. Results: The proposed FIR method is validated in the setting of circular cone-beam CT with the AR being FDK and total-variation sparsity regularization, and improves image quality over both AR and IR. For example, FIR has improved visual assessment and quantitative measurement in terms of both contrast and resolution, and reduced axial and half-fan artifacts. Conclusion: FIR is proposed to incorporate AR into IR, with an efficient image reconstruction algorithm based on PFBS. The CBCT results suggest that FIR synergizes AR and IR with improved image quality and reduced axial and half-fan artifacts. The authors were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000), and the Shanghai Pujiang Talent Program (#14PJ1404500).
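The two-step PFBS iteration can be illustrated on a toy problem. In the sketch below the "AR-projection" role is played by an exact small-matrix inverse B, and the denoising prox by soft-thresholding standing in for total variation; the FDK operator and cone-beam geometry of the abstract are not modeled, and all matrices and data are invented:

```python
# Toy PFBS loop in the spirit of filtered iterative reconstruction:
#   x <- prox( x + B (y - A x) )
# where B approximates the inverse of the forward operator A (the
# "AR-projection") and prox is a soft-threshold standing in for the
# TV denoising step.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def soft(v, t):
    """Soft-threshold each entry: the proximal map of t*||.||_1."""
    return [(abs(x) - t) * (1 if x > 0 else -1) if abs(x) > t else 0.0
            for x in v]

def invert3(A):
    """Inverse of a 3x3 matrix via Gauss-Jordan elimination."""
    n = 3
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        d = M[col][col]
        M[col] = [x / d for x in M[col]]
        for r in range(n):
            if r != col:
                t = M[r][col]
                M[r] = [a - t * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]

def fir_reconstruct(A, y, iters=50, thresh=0.01):
    B = invert3(A)                    # "analytical reconstruction" operator
    x = [0.0] * len(A[0])
    for _ in range(iters):
        residual = [yi - ai for yi, ai in zip(y, matvec(A, x))]
        x = [xi + bi for xi, bi in zip(x, matvec(B, residual))]  # AR-projection
        x = soft(x, thresh)           # denoising / regularization step
    return x
```

Because B here is an exact inverse, the eigenvalues of the AR-projection operator are exactly unity and the loop settles in essentially one step, which is the fast-convergence property the abstract describes in idealized form.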
Kesteren, van A.J.H.; Hartogensis, O.K.; Dinther, van D.; Moene, A.F.; Bruin, de H.A.R.
2013-01-01
This study introduces four methods for determining turbulent water vapour and carbon dioxide flux densities, i.e. the evapotranspiration and the CO2 flux, respectively. These methods combine scintillometer measurements with point-sampling measurements of scalar quantities and consequently have a faster
Grant, K.; Rohling, E. J.; Amies, J.
2017-12-01
Sea-level (SL) reconstructions over glacial-interglacial timeframes are critical for understanding the equilibrium response of ice sheets to sustained warming. In particular, continuous and high-resolution SL records are essential for accurately quantifying 'natural' rates of SL rise. Global SL changes are well-constrained since the last glacial maximum (~20,000 years ago; ky = thousand years) by radiometrically-dated corals and paleoshoreline data, and fairly well-constrained over the last glacial cycle (~150 ky). Prior to that, however, studies of ice-volume:SL relationships tend to rely on benthic δ18O, as geomorphological evidence is far more sparse and less reliably dated. An alternative SL reconstruction method (the 'marginal basin' approach) was developed for the Red Sea over 500 ky, and recently attempted for the Mediterranean over 5 My (Rohling et al., 2014, Nature). This method exploits the strong sensitivity of seawater δ18O in these basins to SL changes in the relatively narrow and shallow straits which connect the basins with the open ocean. However, the initial Mediterranean SL method did not resolve sea-level highstands during Northern Hemisphere insolation maxima, when African monsoon run-off, strongly depleted in δ18O, reached the Mediterranean. Here, we present improvements to the 'marginal basin' sea-level reconstruction method. These include a new 'Med-Red SL stack', which combines new probabilistic Mediterranean and Red Sea sea-level stacks spanning the last 500 ky. We also show how a box model-data comparison of water-column δ18O changes over a monsoon interval allows us to quantify the monsoon versus SL δ18O imprint on Mediterranean foraminiferal carbonate δ18O records. This paves the way for a more accurate and fully continuous SL reconstruction extending back through the Pliocene.
Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D
2010-10-01
Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.
Directory of Open Access Journals (Sweden)
Kravtsenyuk Olga V
2007-01-01
The possibility of improving the spatial resolution of diffuse optical tomograms reconstructed by the photon average trajectories (PAT) method is substantiated. The PAT method recently presented by us is based on the concept of an average statistical trajectory for the transfer of light energy, the photon average trajectory (PAT). The inverse problem of diffuse optical tomography is reduced to the solution of an integral equation with integration along a conditional PAT. As a result, the conventional algorithms of projection computed tomography can be used for fast reconstruction of diffuse optical images. The shortcoming of the PAT method is that it reconstructs images blurred due to averaging over the spatial distributions of photons which form the signal measured by the receiver. To improve the resolution, we apply a spatially variant blur model based on an interpolation of the spatially invariant point spread functions simulated for different small subregions of the image domain. Two iterative algorithms for solving a system of linear algebraic equations, the conjugate gradient algorithm for the least-squares problem and the modified residual norm steepest descent algorithm, are used for deblurring. It is shown that a gain in spatial resolution can be obtained.
Pulsed magnetic flux leakage method for hairline crack detection and characterization
Okolo, Chukwunonso K.; Meydan, Turgut
2018-04-01
The magnetic flux leakage (MFL) method is a well-established branch of electromagnetic non-destructive testing (NDT), extensively used for evaluating defects both on the surface and in the far surface of pipeline structures. However, the conventional techniques are not capable of estimating the approximate size, location and orientation of a defect, hence an additional transducer is required to provide the extra information needed. This research addresses the inevitable problem of granular bond separation, which occurs during manufacturing and leaves pipeline structures with miniature cracks. It reports on a quantitative approach based on the pulsed magnetic flux leakage (PMFL) method for the detection and characterization of the signals produced by tangentially oriented rectangular surface and far-surface hairline cracks. This was achieved through visualization and 3D imaging of the leakage field. The investigation compared finite element numerical simulation with experimental data. Experiments were carried out using a 10 mm thick low-carbon steel plate containing artificial hairline cracks with various depth sizes, and different features were extracted from the transient signal. The influence of sensor lift-off and pulse width variation on the magnetic field distribution, which affects the detection capability for hairline cracks located at different depths in the specimen, is explored. The findings show that the proposed technique can be used to classify both surface and far-surface hairline cracks and can form the basis for enhanced hairline crack detection and characterization for pipeline health monitoring.
International Nuclear Information System (INIS)
Knob, P.J.
1982-07-01
This work is concerned with the detection of flux disturbances in pebble bed high temperature reactors by means of flux measurements in the side reflector. Included among the disturbances studied are xenon oscillations, rod group insertions, and individual rod insertions. Using the three-dimensional diffusion code CITATION, core calculations for both a very small reactor (KAHTER) and a large reactor (PNP-3000) were carried out to determine the neutron fluxes at the detector positions. These flux values were then used in flux mapping codes for reconstructing the flux distribution in the core. As an extension of the already existing two-dimensional MOFA code, which maps azimuthal disturbances, a new three-dimensional flux mapping code ZELT was developed for handling axial disturbances as well. It was found that both flux mapping programs give satisfactory results for small and large pebble bed reactors alike. (orig.) [de]
Developing a framework for evaluating tallgrass prairie reconstruction methods and management
Larson, Diane L.; Ahlering, Marissa; Drobney, Pauline; Esser, Rebecca; Larson, Jennifer L.; Viste-Sparkman, Karen
2018-01-01
The thousands of hectares of prairie reconstructed each year in the tallgrass prairie biome can provide a valuable resource for evaluation of seed mixes, planting methods, and post-planting management if methods used and resulting characteristics of the prairies are recorded and compiled in a publicly accessible database. The objective of this study was to evaluate the use of such data to understand the outcomes of reconstructions over a 10-year period at two U.S. Fish and Wildlife Service refuges. Variables included number of species planted, seed source (combine-harvest or combine-harvest plus hand-collected), fire history, and planting method and season. In 2015 we surveyed vegetation on 81 reconstructions and calculated proportion of planted species observed; introduced species richness; native species richness, evenness and diversity; and mean coefficient of conservatism. We conducted exploratory analyses to learn how implied communities based on seed mix compared with observed vegetation; which seeding or management variables were influential in the outcome of the reconstructions; and consistency of responses between the two refuges. Insights from this analysis include: 1) proportion of planted species observed in 2015 declined as planted richness increased, but lack of data on seeding rate per species limited conclusions about value of added species; 2) differing responses to seeding and management between the two refuges suggest the importance of geographic variability that could be addressed using a public database; and 3) variables such as fire history are difficult to quantify consistently and should be carefully evaluated in the context of a public data repository.
Prediction of critical heat flux in fuel assemblies using a CHF table method
Energy Technology Data Exchange (ETDEWEB)
Chun, Tae Hyun; Hwang, Dae Hyun; Bang, Je Geon [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)]; Baek, Won Pil; Chang, Soon Heung [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of)]
1997-12-31
A CHF table method has been assessed in this study for rod bundle CHF predictions. At the conceptual design stage for a new reactor, a general critical heat flux (CHF) prediction method with a wide applicable range and reasonable accuracy is essential to the thermal-hydraulic design and safety analysis. In many aspects, a CHF table method (i.e., the use of a round tube CHF table with appropriate bundle correction factors) can be a promising way to fulfill this need. So the assessment of the CHF table method has been performed with the bundle CHF data relevant to pressurized water reactors (PWRs). For comparison purposes, W-3R and EPRI-1 were also applied to the same data base. Data analysis has been conducted with the subchannel code COBRA-IV-I. The CHF table method shows the best predictions based on the direct substitution method. Improvements of the bundle correction factors, especially for the spacer grid and cold wall effects, are desirable for better predictions. Though the present assessment is somewhat limited in both fuel geometries and operating conditions, the CHF table method clearly shows potential to be a general CHF predictor. 8 refs., 3 figs., 3 tabs. (Author)
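The table-lookup core of such a method is simple enough to sketch: interpolate a round-tube CHF table in its independent variables, then apply multiplicative bundle correction factors. The grid values and correction factors below are invented for illustration; production look-up tables are far larger and also depend on further variables such as quality:

```python
# Sketch of a CHF table method: bilinear interpolation in a small
# round-tube look-up table indexed by pressure and mass flux, then
# multiplicative bundle correction factors (spacer grid, cold wall).
# All numbers are invented for illustration only.

P_AXIS = [10.0, 12.0, 14.0, 16.0]        # pressure, MPa
G_AXIS = [1000.0, 2000.0, 3000.0]        # mass flux, kg m^-2 s^-1
CHF_TABLE = [                            # CHF, kW m^-2 (rows follow P_AXIS)
    [4200.0, 3600.0, 3100.0],
    [3900.0, 3300.0, 2800.0],
    [3500.0, 2900.0, 2400.0],
    [3000.0, 2500.0, 2000.0],
]

def _bracket(axis, v):
    """Index i and weight w so that v ~ (1-w)*axis[i] + w*axis[i+1]."""
    i = 0
    while i < len(axis) - 2 and v > axis[i + 1]:
        i += 1
    w = (v - axis[i]) / (axis[i + 1] - axis[i])
    return i, w

def chf_tube(p, g):
    """Round-tube CHF by bilinear interpolation of the table."""
    i, wp = _bracket(P_AXIS, p)
    j, wg = _bracket(G_AXIS, g)
    return ((1 - wp) * (1 - wg) * CHF_TABLE[i][j]
            + (1 - wp) * wg * CHF_TABLE[i][j + 1]
            + wp * (1 - wg) * CHF_TABLE[i + 1][j]
            + wp * wg * CHF_TABLE[i + 1][j + 1])

def chf_bundle(p, g, k_grid=1.1, k_coldwall=0.95):
    """Tube CHF times hypothetical bundle correction factors."""
    return chf_tube(p, g) * k_grid * k_coldwall
```

In an assessment like the one above, `chf_bundle` would be evaluated on subchannel-code local conditions and compared with measured bundle CHF data.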
System Characterizations and Optimized Reconstruction Methods for Novel X-ray Imaging Modalities
Guan, Huifeng
In the past decade many new X-ray based imaging technologies have emerged for different diagnostic purposes or imaging tasks. However, there exist one or more specific problems that prevent them from being effectively or efficiently employed. In this dissertation, four different novel X-ray based imaging technologies are discussed: propagation-based phase-contrast (PB-XPC) tomosynthesis, differential X-ray phase-contrast tomography (D-XPCT), projection-based dual-energy computed radiography (DECR), and tetrahedron beam computed tomography (TBCT). System characteristics are analyzed or optimized reconstruction methods are proposed for these imaging modalities. In the first part, we investigated the unique properties of the propagation-based phase-contrast imaging technique when combined with X-ray tomosynthesis. The Fourier slice theorem implies that the high-frequency components collected in the tomosynthesis data can be more reliably reconstructed. It is observed that the fringes or boundary enhancement introduced by the phase-contrast effects can serve as an accurate indicator of the true depth position in the tomosynthesis in-plane image. In the second part, we derived a sub-space framework to reconstruct images from few-view D-XPCT data sets. By introducing a proper mask, the high-frequency contents of the image can be theoretically preserved in a certain region of interest. A two-step reconstruction strategy is developed to mitigate the risk of subtle structures being oversmoothed when the commonly used total-variation regularization is employed in the conventional iterative framework. In the third part, we proposed a practical method to improve the quantitative accuracy of projection-based dual-energy material decomposition. It is demonstrated that applying a total-projection-length constraint along with the dual-energy measurements can achieve a stabilized numerical solution of the decomposition problem, thus overcoming the
International Nuclear Information System (INIS)
Kheymits, M D; Leonov, A A; Zverev, V G; Galper, A M; Arkhangelskaya, I V; Arkhangelskiy, A I; Yurkin, Yu T; Bakaldin, A V; Suchkov, S I; Topchiev, N P; Dalkarov, O D
2016-01-01
The GAMMA-400 gamma-ray space-based telescope has as its main goals to measure cosmic γ-ray fluxes and the electron-positron cosmic-ray component produced, theoretically, in dark-matter-particle decay or annihilation processes, to search for discrete γ-ray sources and study them in detail, to examine the energy spectra of diffuse γ-rays, both galactic and extragalactic, and to study gamma-ray bursts (GRBs) and γ-rays from the active Sun. The scientific goals of the GAMMA-400 telescope require fine angular resolution. The telescope is of the pair-production type. In the converter-tracker, the incident gamma-ray photon converts into an electron-positron pair in a tungsten layer, and the tracks are then detected by silicon-strip position-sensitive detectors. Multiple scattering becomes a significant obstacle to reconstruction of the incident gamma direction for energies below several gigaelectronvolts. A method of utilising this process to improve the resolution is proposed in the present work. (paper)
A limited-angle CT reconstruction method based on anisotropic TV minimization
International Nuclear Information System (INIS)
Chen Zhiqiang; Jin Xin; Li Liang; Wang Ge
2013-01-01
This paper presents a compressed sensing (CS)-inspired reconstruction method for limited-angle computed tomography (CT). Currently, CS-inspired CT reconstructions are often performed by minimizing the total variation (TV) of a CT image subject to data consistency. A key to obtaining high image quality is to optimize the balance between TV-based smoothing and data fidelity. In the case of the limited-angle CT problem, the strength of data consistency is angularly varying. For example, given a parallel beam of x-rays, information extracted in the Fourier domain is mostly orthogonal to the direction of x-rays, while little is probed otherwise. However, the TV minimization process is isotropic, suggesting that it is unfit for limited-angle CT. Here we introduce an anisotropic TV minimization method to address this challenge. The advantage of our approach is demonstrated in numerical simulation with both phantom and real CT images, relative to the TV-based reconstruction. (paper)
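A toy version of direction-weighted TV can make the idea concrete. Below, horizontal and vertical first differences carry separate weights, and a small denoising problem is solved by plain gradient descent with a numerically evaluated gradient; the weights, smoothing parameter and 4x4 image are invented, and the CT projection operator and data-fidelity geometry of the paper are not modeled:

```python
# Anisotropic TV: weight horizontal and vertical first differences
# differently, mimicking the angularly varying strength of data
# consistency in limited-angle CT. Denoising toy objective:
#   E(u) = 0.5*||u - y||^2 + lam*(wh*TVh(u) + wv*TVv(u))
# with |t| smoothed as sqrt(t^2 + EPS) so gradient descent applies.

EPS = 1e-2

def _sabs(t):
    return (t * t + EPS) ** 0.5

def objective(u, y, lam, wh, wv):
    rows, cols = len(u), len(u[0])
    data = 0.5 * sum((u[i][j] - y[i][j]) ** 2
                     for i in range(rows) for j in range(cols))
    tvh = sum(_sabs(u[i][j + 1] - u[i][j])
              for i in range(rows) for j in range(cols - 1))
    tvv = sum(_sabs(u[i + 1][j] - u[i][j])
              for i in range(rows - 1) for j in range(cols))
    return data + lam * (wh * tvh + wv * tvv)

def denoise(y, lam=0.2, wh=1.0, wv=0.3, step=0.05, iters=60, h=1e-5):
    """Gradient descent with a central-difference numeric gradient."""
    u = [row[:] for row in y]
    for _ in range(iters):
        g = [[0.0] * len(u[0]) for _ in u]
        for i in range(len(u)):
            for j in range(len(u[0])):
                u[i][j] += h
                e_plus = objective(u, y, lam, wh, wv)
                u[i][j] -= 2 * h
                e_minus = objective(u, y, lam, wh, wv)
                u[i][j] += h
                g[i][j] = (e_plus - e_minus) / (2 * h)
        for i in range(len(u)):
            for j in range(len(u[0])):
                u[i][j] -= step * g[i][j]
    return u
```

Setting wv below wh penalizes vertical differences less, so edges aligned with the weakly constrained direction are smoothed less aggressively, which is the qualitative behaviour the paper exploits for limited-angle data.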
Research on image matching method of big data image of three-dimensional reconstruction
Zhang, Chunsen; Qiu, Zhenguo; Zhu, Shihuan; Wang, Xiqi; Xu, Xiaolei; Zhong, Sidong
2015-12-01
Image matching is the main stage of three-dimensional reconstruction. With the development of computer processing technology, seeking the images to be matched from large data image sets acquired in different image formats, at different scales and at different locations has put forward a new requirement for image matching. To establish three-dimensional reconstruction based on image matching from big data images, this paper puts forward a new, effective matching method based on the visual bag-of-words model. The main technologies include building the bag-of-words model and image matching. First, we extract the SIFT feature points from the images in the database and cluster the feature points to generate the bag-of-words model. We establish inverted files based on the bag of words; the inverted files record, for each visual word, all images containing that word. We then match only images that share the same word, to improve the efficiency of image matching. Finally, we build the three-dimensional model from those images. Experimental results indicate that this method is able to improve matching efficiency and is suitable for the requirements of large-data reconstruction.
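A minimal sketch of the inverted-file matching described above, with toy 2-D "descriptors" and a fixed three-word codebook standing in for SIFT features and a k-means vocabulary:

```python
# Minimal visual bag-of-words index: each descriptor is assigned to
# its nearest codebook word; an inverted file maps words to the images
# containing them, so matching only compares images sharing a word.
# The 2-D descriptors and tiny codebook are illustrative stand-ins.

CODEBOOK = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]  # visual words

def word_id(desc):
    """Nearest codebook word by squared Euclidean distance."""
    return min(range(len(CODEBOOK)),
               key=lambda w: sum((d - c) ** 2
                                 for d, c in zip(desc, CODEBOOK[w])))

def build_inverted_file(images):
    """images: dict name -> list of descriptors."""
    inv = {}
    for name, descs in images.items():
        for d in descs:
            inv.setdefault(word_id(d), set()).add(name)
    return inv

def match(query_descs, inv):
    """Rank database images by the number of shared visual words."""
    votes = {}
    for d in query_descs:
        for name in inv.get(word_id(d), ()):
            votes[name] = votes.get(name, 0) + 1
    return sorted(votes.items(), key=lambda kv: -kv[1])
```

Candidate pairs returned by `match` would then go through full geometric verification before being used for 3-D reconstruction; the inverted file only prunes the search.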
A novel reconstruction method for giant incisional hernia: Hybrid laparoscopic technique
Directory of Open Access Journals (Sweden)
G Ozturk
2015-01-01
Background and Objectives: Laparoscopic reconstruction of ventral hernia is a popular technique today. Patients with large defects present various difficulties for the laparoscopic approach. In this study, we aimed to present a new reconstruction technique that combines laparoscopic and open approaches in giant incisional hernias. Materials and Methods: Between January 2006 and August 2012, 28 patients who were operated on consecutively for incisional hernia with a defect size over 10 cm were included in this study and separated into two groups. Group 1 (n = 12) comprises patients operated on with the standard laparoscopic approach, whereas group 2 (n = 16) comprises those in whom the laparoscopic technique was combined with an open approach. Patients were evaluated in terms of age, gender, body mass index (BMI), mean operation time, length of hospital stay, surgical site infection (SSI) and recurrence rate. Results: There were 12 patients in group 1 and 16 patients in group 2. Mean length of hospital stay and SSI rates were similar in both groups. Postoperative seroma formation was observed in six patients in group 1 and in only one patient in group 2. Group 1 had one patient who suffered from recurrence, whereas group 2 had no recurrence. Discussion: The laparoscopic technique combined with an open approach may safely be used as an alternative method for reconstruction of giant incisional hernias.
Schneider, Simon; Thomas, Christine; Dokht, Ramin M. H.; Gu, Yu Jeffrey; Chen, Yunfeng
2018-02-01
Due to uneven earthquake source and receiver distributions, our ability to isolate weak signals from interfering phases and to reconstruct missing data is fundamental to improving the resolution of seismic imaging techniques. In this study, we introduce a modified frequency-wavenumber (fk) domain based approach using a 'Projection Onto Convex Sets' (POCS) algorithm. POCS takes advantage of the sparsity of the dominating energies of phase arrivals in the fk domain, which enables effective detection and reconstruction of weak seismic signals. Moreover, our algorithm utilizes the 2-D Fourier transform to perform noise removal, interpolation and weak-phase extraction. To improve the directional resolution of the reconstructed data, we introduce a band-stop 2-D Fourier filter to remove the energy of unwanted, interfering phases in the fk domain, which significantly increases the robustness of the signal of interest. The effectiveness and benefits of this method are clearly demonstrated using both simulated and actual broadband recordings of PP precursors from an array located in Tanzania. When used properly, this method could significantly enhance the resolution of weak crust and mantle seismic phases.
International Nuclear Information System (INIS)
Pill-Hoon Choung
1999-01-01
Although there are various applications of allogenic bone grafts, a new technique of prevascularized lyophilized allogenic bone grafting for maxillo-mandibular reconstruction will be presented. Allogenic bone has been prepared according to the author's protocol for jaw defects as powder, chip or block bone. The author used lyophilized allogenic block bone grafts for discontinuity defects. In those cases, neovascularization and resorption of the allogenic bone were important factors for the success of grafting. To overcome these problems, the author designed a technique of prefabricated vascularization of allogenic bone, using lyophilized cranial bone with or without application of bovine BMP. The lyophilized cranial bone was shaped for the defect and implanted under the scalp. After confirming a hot spot via scintigram several months later, the vascularized allogenic bone was harvested pedicled on the parietotemporal fascia, based on the superficial temporal artery and vein. The vascularized allogenic cranial bone was rotated into the defect and fixed rigidly. Postoperatively, there was no severe resorption or functional disturbance of the mandible. In this technique, BMP seems to play an important role in promoting osteogenesis and neovascularization. Eight patients underwent prefabricated vascularization of allogenic bone grafts. Among them, four cases of reconstruction of mandibular discontinuity defects and one case of reconstruction of a maxillectomy defect underwent this method, which will be presented with good results. This method may be an alternative to the microvascular free bone graft.
Analytical methods for quantifying greenhouse gas flux in animal production systems.
Powers, W; Capelari, M
2016-08-01
Given increased interest by all stakeholders to better understand the contribution of animal agriculture to climate change, it is important that appropriate methodologies be used when measuring greenhouse gas (GHG) emissions from animal agriculture. Similarly, a fundamental understanding of the differences between methods is necessary to appropriately compare data collected using different approaches and to design meaningful experiments. Sources of carbon dioxide, methane, and nitrous oxide emissions in animal production systems include the animals, feed storage areas, manure deposition and storage areas, and feed and forage production fields. These 3 gases make up the primary GHG emissions from animal feeding operations. Each GHG may be more or less prominent from each emitting source. Similarly, the species dictates the importance of methane emissions from the animals themselves. Measures of GHG flux from animals are often made using respiration chambers, head boxes, tracer gas techniques, or in vitro gas production techniques. In some cases, a combination of techniques is used (i.e., head boxes in combination with tracer gas). The prominent methods for measuring GHG emissions from housing include the use of tracer gas techniques or direct or indirect ventilation measures coupled with concentration measures of the gases of interest. Methods for collecting and measuring GHG emissions from manure storage and/or production lots include the use of downwind measures, often using photoacoustic or open-path Fourier transform infrared spectroscopy, combined with modeling techniques, or the use of static chambers or flux hood methods. Similar methods can be deployed for determining GHG emissions from fields. Each method identified has its own benefits and challenges for the stated application. Considerations for use include the intended goal, equipment investment and maintenance, and the frequency and duration of sampling needed to achieve the desired representativeness.
DEFF Research Database (Denmark)
Lu, Kaiyuan; Rasmussen, Peter Omand; Ritchie, Ewen
2009-01-01
Knowledge of actual flux linkage versus current profiles plays an important role in design verification and performance prediction for switched reluctance motors (SRM's) and permanent magnet motors (PMM's). Various measurement methods have been proposed and discussed so far but each method has its...... the described AC method on an SRM and on a PM motor. For these two motors, the measured flux-linkage-current curves are compared to those measured using other methods. The comparison results show good effectiveness of the proposed AC method for both the SRM and the PM motor....
Bayesian network reconstruction using systems genetics data: comparison of MCMC methods.
Tasaki, Shinya; Sauerwine, Ben; Hoff, Bruce; Toyoshiba, Hiroyoshi; Gaiteri, Chris; Chaibub Neto, Elias
2015-04-01
Reconstructing biological networks using high-throughput technologies has the potential to produce condition-specific interactomes. But are these reconstructed networks a reliable source of biological interactions? Do some network inference methods offer dramatically improved performance on certain types of networks? To facilitate the use of network inference methods in systems biology, we report a large-scale simulation study comparing the ability of Markov chain Monte Carlo (MCMC) samplers to reverse engineer Bayesian networks. The MCMC samplers we investigated included foundational and state-of-the-art Metropolis-Hastings and Gibbs sampling approaches, as well as novel samplers we have designed. To enable a comprehensive comparison, we simulated gene expression and genetics data from known network structures under a range of biologically plausible scenarios. We examine the overall quality of network inference via different methods, as well as how their performance is affected by network characteristics. Our simulations reveal that network size, edge density, and strength of gene-to-gene signaling are major parameters that differentiate the performance of various samplers. Specifically, more recent samplers including our novel methods outperform traditional samplers for highly interconnected large networks with strong gene-to-gene signaling. Our newly developed samplers show comparable or superior performance to the top existing methods. Moreover, this performance gain is strongest in networks with biologically oriented topology, which indicates that our novel samplers are suitable for inferring biological networks. The performance of MCMC samplers in this simulation framework can guide the choice of methods for network reconstruction using systems genetics data. Copyright © 2015 by the Genetics Society of America.
Central Russia agroecosystem monitoring with CO2 fluxes analysis by eddy covariance method
Directory of Open Access Journals (Sweden)
Joulia Meshalkina
2015-07-01
The eddy covariance (EC) technique, a powerful statistics-based method for measuring and calculating the vertical turbulent fluxes of greenhouse gases within the atmospheric boundary layer, provides continuous, long-term flux information integrated at the ecosystem scale. An attractive way to compare the influence of agricultural practices on GHG fluxes is to divide a crop area into subplots managed in different ways. The research was carried out in the Precision Farming Experimental Field of the Russian Timiryazev State Agricultural University (RTSAU, Moscow) in 2013 with the support of RF Government grant # 11.G34.31.0079, EU grant # 603542 LUC4C (FP7) and RF Ministry of Education and Science grant # 14-120-14-4266-ScSh. The arable Umbric Albeluvisols have around 1% SOC, pH (KCl) 5.4 and medium-enhanced NPK contents in the sandy loam topsoil. Seasonal monitoring of the CO2 flux was carried out by two eddy covariance stations located 108 m apart. The LI-COR instrumental equipment was the same for both stations; the stations differ only in the current crop: barley, or vetch and oats. At both sites, diurnal patterns of NEE in different months were very similar in shape but varied slightly in amplitude. NEE values were about zero during spring. CO2 fluxes intensified after crop emergence, from values of 3 to 7 µmol m−2 s−1 for emission, and from 5 to 20 µmol m−2 s−1 for sink. The fluxes stabilized once the plants reached a height of 10-12 cm. Average NEE was negative only in June and July. Maximum uptake was observed in June, with average values of about 8 µmol CO2 m−2 s−1. Although different crops were planted on fields A and B, GPP dynamics were quite similar for both sites: after reaching peak values in mid June, GPP decreased from 4 to 0.5 g C CO2 m−2 d−1 at the end of July. The difference in crop harvesting times, which was equal to two weeks, did not significantly influence the daily
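At its core, the EC flux is the covariance of vertical wind speed and the scalar of interest over the averaging interval, after Reynolds decomposition. A minimal sketch on a synthetic series, omitting the coordinate rotation, detrending and density (WPL) corrections that real processing requires:

```python
# Eddy-covariance flux as the mean product of the fluctuating parts
# of vertical wind speed w and a scalar concentration c:
#   F = mean(w' * c'),  with  w' = w - mean(w),  c' = c - mean(c).
# The series here are synthetic; real 10-20 Hz records also need
# rotation, detrending, spectral and density corrections.

def ec_flux(w, c):
    n = len(w)
    w_mean = sum(w) / n
    c_mean = sum(c) / n
    return sum((wi - w_mean) * (ci - c_mean)
               for wi, ci in zip(w, c)) / n
```

Applied to 30-minute blocks of the sonic-anemometer and gas-analyzer records, this covariance is what instruments such as the LI-COR systems mentioned above ultimately report as NEE after the standard corrections.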
Impact of reconstruction methods and pathological factors on survival after pancreaticoduodenectomy
Directory of Open Access Journals (Sweden)
Salah Binziad
2013-01-01
Full Text Available Background: Surgery remains the mainstay of therapy for pancreatic head (PH) and periampullary carcinoma (PC) and provides the only chance of cure. Improvements in surgical technique, increased surgical experience and advances in anesthesia, intensive care and parenteral nutrition have substantially decreased surgical complications and increased survival. We evaluated the effects of reconstruction type, complications and pathological factors on survival and quality of life. Materials and Methods: This is a prospective study to evaluate the impact of various reconstruction methods of the pancreatic remnant after pancreaticoduodenectomy and the pathological characteristics of PC patients over 3.5 years. Patient characteristics and descriptive analyses of the three reconstruction methods, either with or without a stent, were compared with the Chi-square test. Multivariate analysis was performed with logistic regression and multinomial logistic regression tests. Survival was analyzed with the Kaplan-Meier test. Results: Forty-one consecutive patients with PC were enrolled. There were 23 men (56.1%) and 18 women (43.9%), with a median age of 56 years (16 to 70 years). There were 24 cases of PH cancer, eight cases of PC, four cases of distal CBD cancer and five cases of duodenal carcinoma. Nine patients underwent duct-to-mucosa pancreaticojejunostomy (PJ), 17 patients underwent telescoping PJ and 15 patients pancreaticogastrostomy (PG). The pancreatic duct was stented in 30 patients; in 11 patients, the duct was not stented. Duct-to-mucosa PJ caused significantly less leakage, but longer operative and reconstructive times. Telescoping PJ was associated with the shortest hospital stay. There were 5 postoperative mortalities, while postoperative morbidities included pancreatic fistula in 6 patients, delayed gastric emptying in 11, GI fistula in 3, wound infection in 12, burst abdomen in 6 and pulmonary infection in 2. Factors
International Nuclear Information System (INIS)
D'Orazio, A; Karimipour, A; Nezhad, A H; Shirani, E
2014-01-01
Laminar mixed convective heat transfer in a two-dimensional rectangular inclined driven cavity is studied numerically by means of a double-population thermal lattice Boltzmann method. The heat flux enters the cavity through the top moving lid and leaves the system through the bottom wall; the side walls are adiabatic. The counter-slip internal energy density boundary condition, able to simulate an imposed non-zero heat flux at the wall, is applied in order to demonstrate that it can also be used effectively to simulate heat transfer phenomena in the case of moving walls. Results are analyzed over a range of Richardson numbers and tilting angles of the enclosure, encompassing the dominating forced convection, mixed convection, and dominating natural convection flow regimes. As expected, the heat transfer rate increases as the inclination angle increases, but this effect is significant only for higher Richardson numbers, when buoyancy forces dominate the problem; for the horizontal cavity, the average Nusselt number decreases with increasing Richardson number because of the stratified field configuration
A comparison of recent methods for modelling mercury fluxes at the air-water interface
Directory of Open Access Journals (Sweden)
Fantozzi L.
2013-04-01
Full Text Available The atmospheric pathway of the global mercury flux is known to be the primary source of mercury contamination for most threatened aquatic ecosystems. Nevertheless, the emission of mercury from surface water to the atmosphere amounts to as much as 50% of the total annual emissions of this metal into the atmosphere. In recent years, much effort has been devoted to theoretical and experimental research to quantify the total mass flux of mercury to the atmosphere. In this study, the most recent atmospheric modelling methods and the information obtained from them are presented and compared using experimental data collected during the Oceanographic Campaign Fenice 2011 (25 October – 8 November 2011), performed on board the Research Vessel (RV) Urania of the CNR in the framework of the ongoing MEDOCEANOR program. A strategy for future numerical model development is proposed that is intended to gain a better knowledge of the long-term effects of meteo-climatic drivers on mercury evasion processes, and that would provide key information on gaseous Hg exchange rates at the air-water interface.
A novel approach to evaluate soil heat flux calculation: An analytical review of nine methods
Gao, Zhongming; Russell, Eric S.; Missik, Justine E. C.; Huang, Maoyi; Chen, Xingyuan; Strickland, Chris E.; Clayton, Ray; Arntzen, Evan; Ma, Yulong; Liu, Heping
2017-07-01
There are no direct methods to evaluate the calculated soil heat flux (SHF) at the surface (G0). Instead, validation and cross evaluation of methods for calculating G0 usually rely on the conventional calorimetric method or the degree of surface energy balance closure. However, there is uncertainty in the calorimetric method itself, and factors apart from G0 also contribute to nonclosure of the surface energy balance. Here we used a novel approach to evaluate nine different methods for calculating SHF, including the calorimetric method and methods based on analytical solutions of the heat diffusion equation. The SHF (Gz) measured by a self-calibrating SHF plate at a depth of z = 5 cm below the surface (hereafter Gm_5cm) was used as a reference. Each SHF calculation method was assessed by comparing the calculated Gz at the same depth (hereafter Gc_5cm) with Gm_5cm. The calorimetric method and simple measurement method performed best in determining Gc_5cm but still underestimated Gm_5cm by 19% during the daytime. Possible causes for this underestimation include errors and uncertainties in SHF measurements and soil thermal properties, as well as the phase lag between Gc_5cm and Gm_5cm. Our results indicate that the calorimetric method achieves the most accurate SHF estimates if self-calibrating SHF plates are deployed at two depths (e.g., 5 cm and 10 cm), soil temperature and water content measurements are made at a few depths between the two plates, and soil thermal properties are accurately quantified.
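The calorimetric method referred to above combines a plate measurement at depth with the heat-storage change of the soil column above the plate. A hedged sketch; the layer temperatures, heat capacity, and plate reading below are invented for illustration:

```python
import numpy as np

def surface_heat_flux(G_z, T_layers_t0, T_layers_t1, dt, dz, C_v):
    """Calorimetric estimate of the surface soil heat flux G0 (W/m2):
    the plate-measured flux G_z at depth z plus the heat-storage change
    of the soil column above the plate. T_layers_* are mean layer
    temperatures (K) at two times, dz the layer thicknesses (m), and
    C_v the volumetric heat capacity (J m-3 K-1)."""
    dT_dt = (np.asarray(T_layers_t1) - np.asarray(T_layers_t0)) / dt
    storage = np.sum(C_v * dT_dt * np.asarray(dz))
    return G_z + storage

# Plate at 5 cm reading 60 W/m2 while the 0-5 cm column (two 2.5 cm
# layers) warms by 0.5 K over 30 min: storage adds to the surface flux.
G0 = surface_heat_flux(
    G_z=60.0,
    T_layers_t0=[293.0, 292.5],
    T_layers_t1=[293.5, 293.0],
    dt=1800.0,
    dz=[0.025, 0.025],
    C_v=2.5e6,
)
print(G0)  # ~95 W/m2, i.e. larger than the plate reading alone
```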
Iriana, Windy; Tonokura, Kenichi; Kawasaki, Masahiro; Inoue, Gen; Kusin, Kitso; Limin, Suwido H.
2016-09-01
Evaluation of the CO2 flux from peatland soil respiration is important for understanding the effect of land use change on the global carbon cycle and climate change, and particularly for supporting carbon emission reduction policies. However, quantitative estimation of emitted CO2 fluxes in Indonesia is constrained by the limited field data available. Current methods for CO2 measurement are limited by high initial cost, manpower requirements, and difficulties associated with construction. Measurement campaigns were performed using a newly developed nocturnal temperature-inversion trap method, which measures the amount of CO2 trapped beneath the nocturnal inversion layer, in the dry season of 2013 at a drained tropical peatland near Palangkaraya, Central Kalimantan, Indonesia. This method is cost-effective, and data processing is easier than in other flux estimation methods. We compared CO2 fluxes measured using this method with published data from the existing eddy covariance and closed chamber methods. The maximum value of our measurements was 10% lower than the maximum value of the eddy covariance method, and the average value was 6% higher than the average of the chamber method in drained tropical peatlands. In addition, the measurement results show a good correlation with the groundwater table. This comparison suggests that the methodology is useful for CO2 flux measurement in field research in tropical peatlands.
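The inversion-trap estimate rests on a simple mass balance: CO2 accumulating beneath a nocturnal inversion layer of height h implies a surface flux F ≈ h·dC/dt. A sketch under the assumption of a single well-mixed layer; all numbers are invented for illustration:

```python
def inversion_trap_flux(c0_ppm, c1_ppm, dt_s, h_m, T_K=300.0, P_Pa=101325.0):
    """CO2 flux from the concentration build-up under a nocturnal
    inversion layer of height h: F = h * dC/dt, with ppm converted to
    mol/m3 via the ideal gas law. Assumes one well-mixed layer."""
    R = 8.314                                      # J mol-1 K-1
    air_molar_density = P_Pa / (R * T_K)           # mol air per m3
    dC = (c1_ppm - c0_ppm) * 1e-6 * air_molar_density  # mol CO2 / m3
    return h_m * dC / dt_s * 1e6                   # umol CO2 m-2 s-1

# A 20 ppm rise over 8 h beneath a 100 m inversion layer
flux = inversion_trap_flux(c0_ppm=400.0, c1_ppm=420.0, dt_s=8 * 3600, h_m=100.0)
print(flux)  # a few umol CO2 m-2 s-1
```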
Terahertz digital holography using angular spectrum and dual wavelength reconstruction methods.
Heimbeck, Martin S; Kim, Myung K; Gregory, Don A; Everitt, Henry O
2011-05-09
Terahertz digital off-axis holography is demonstrated using a Mach-Zehnder interferometer with a highly coherent, frequency-tunable, continuous-wave terahertz source emitting around 0.7 THz and a single, spatially scanned Schottky diode detector. The reconstruction of amplitude and phase objects is performed digitally using the angular spectrum method in conjunction with Fourier-space filtering to reduce noise from the twin image and DC term. Phase unwrapping is achieved using the dual-wavelength method, which offers an automated approach to overcoming the 2π phase ambiguity. Potential applications for nondestructive testing and evaluation of visually opaque dielectric and composite objects are discussed. © 2011 Optical Society of America
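The dual-wavelength idea can be illustrated numerically: the difference of two wrapped phase maps behaves like a single measurement at a much longer synthetic wavelength Λ = λ1λ2/|λ1 − λ2|, removing the 2π ambiguity for optical path differences up to Λ. The wavelengths and step height below are invented, not the paper's values:

```python
import numpy as np

def dual_wavelength_phase(phi1, phi2, lam1, lam2):
    """Dual-wavelength unwrapping sketch: the wrapped difference of two
    phase maps corresponds to the synthetic wavelength
    Lambda = lam1*lam2/|lam1-lam2|, which extends the unambiguous range
    far beyond 2*pi of either single wavelength."""
    dphi = np.mod(phi1 - phi2, 2 * np.pi)        # wrapped phase difference
    lam_synth = lam1 * lam2 / abs(lam1 - lam2)   # synthetic wavelength (m)
    height = dphi * lam_synth / (2 * np.pi)      # recovered path difference
    return lam_synth, height

# Two hypothetical tunable wavelengths near 0.7 THz (c/f ~ 0.43 mm)
lam1, lam2 = 0.428e-3, 0.436e-3                  # metres
true_height = 0.012                              # 12 mm step, >> lam1
phi1 = np.mod(2 * np.pi * true_height / lam1, 2 * np.pi)
phi2 = np.mod(2 * np.pi * true_height / lam2, 2 * np.pi)
lam_synth, height = dual_wavelength_phase(phi1, phi2, lam1, lam2)
print(lam_synth, height)  # Lambda ~ 23 mm, height recovered as 12 mm
```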
FBP and BPF reconstruction methods for circular X-ray tomography with off-center detector
International Nuclear Information System (INIS)
Schaefer, Dirk; Grass, Michael; Haar, Peter van de
2011-01-01
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that makes it possible to save detector area while keeping a large field of view (FOV). Several filtered back-projection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed back-projection filtration (BPF) variants and to evaluate the image quality of these methods compared to the existing state-of-the-art FBP methods. Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation in a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and back-projection. An averaging scheme is presented to mitigate streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. Image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the Forbild head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution trade-off with a wire phantom in the full-detector and off-center scenarios. Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods with the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For a wider overlap of about 40-50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical
Gradient heat flux measurement as monitoring method for the diesel engine
Sapozhnikov, S. Z.; Mityakov, V. Yu; Mityakov, A. V.; Vintsarevich, A. V.; Pavlov, A. V.; Nalyotov, I. D.
2017-11-01
The use of gradient heat flux measurement for monitoring the heat flux on the combustion chamber surface and optimizing the diesel work process is proposed. Heterogeneous gradient heat flux sensors can be used at various regimes for an appreciable length of time. Fuel injection timing is set by the position of the maximum point on the angular heat flux diagram; the absolute value of the heat flux itself need not be considered. The development of such an approach can be productive for remote monitoring of the work process in the cylinders of high-power marine engines.
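Locating the injection timing as the crank-angle position of the heat-flux maximum reduces, numerically, to an argmax over the angular diagram. A toy sketch with a synthetic Gaussian-shaped diagram (not engine data):

```python
import numpy as np

# Synthetic angular heat flux diagram: one value per crank-angle degree,
# 0 deg = top dead centre; the invented peak sits at 8 deg after TDC.
crank_angle = np.arange(-30, 31)                      # deg
heat_flux = np.exp(-((crank_angle - 8) / 6.0) ** 2)   # arbitrary units

# The timing indicator is only the *position* of the maximum, not the
# heat flux magnitude, matching the approach described above.
injection_ok = crank_angle[np.argmax(heat_flux)]
print(injection_ok)  # -> 8 (deg after TDC)
```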
Yoshikawa, K.; Ueyama, M.; Takagi, K.; Kominami, Y.
2015-12-01
The methane (CH4) budget of forest ecosystems has not been accurately quantified due to limited measurements and considerable spatiotemporal heterogeneity. In order to quantify CH4 fluxes in temperate forests at various spatiotemporal scales, we have continuously measured CH4 fluxes at two upland forests using the micrometeorological hyperbolic relaxed eddy accumulation (HREA) method and automated dynamic closed chambers. The measurements have been conducted at Teshio experimental forest (TSE) since September 2013 and at Yamashiro forest meteorology research site (YMS) since November 2014. Three automated chambers were installed at each site. Our system measures the CH4 flux by micrometeorological HREA, the vertical concentration profile at four heights, and chamber fluxes, all with a laser-based gas analyzer (FGGA-24r-EP, Los Gatos Research Inc., USA). Seasonal variations of canopy-scale CH4 fluxes differed between the sites. At TSE, CH4 was consumed during the summer but emitted during the fall and winter; consequently, the site acted as a net annual CH4 source. At YMS, CH4 was steadily consumed during the winter, while fluxes fluctuated between uptake and emission during the spring and summer; YMS acted as a net annual CH4 sink. Canopy-scale CH4 uptake generally decreased with rising soil temperature and increased under drier conditions at both sites. CH4 fluxes measured by most chambers showed sensitivities to the environmental variables consistent with the canopy scale, whereas fluxes from a few chambers located in wet spots were independent of variations in soil temperature and moisture at both sites. The magnitude of soil CH4 uptake was higher than the canopy-scale CH4 uptake. Our results show that the canopy-scale CH4 fluxes differed substantially from the plot-scale chamber fluxes, suggesting considerable spatial heterogeneity of CH4 flux in temperate forests.
Wensveen, Paul J; Thomas, Len; Miller, Patrick J O
2015-01-01
Detailed information about animal location and movement is often crucial in studies of natural behaviour and how animals respond to anthropogenic activities. Dead-reckoning can be used to infer such detailed information, but without additional positional data this method results in uncertainty that grows with time. Combining dead-reckoning with new Fastloc-GPS technology should provide good opportunities for reconstructing georeferenced fine-scale tracks, and should be particularly useful for marine animals that spend most of their time under water. We developed a computationally efficient, Bayesian state-space modelling technique to estimate humpback whale locations through time, integrating dead-reckoning using on-animal sensors with measurements of whale locations using on-animal Fastloc-GPS and visual observations. Positional observation models were based upon error measurements made during calibrations. High-resolution 3-dimensional movement tracks were produced for 13 whales using a simple process model in which errors caused by water current movements, non-location sensor errors, and other dead-reckoning errors were accumulated into a combined error term. Positional uncertainty quantified by the track reconstruction model was much greater for tracks with visual positions and few or no GPS positions, indicating a strong benefit to using Fastloc-GPS for track reconstruction. Compared to tracks derived only from position fixes, the inclusion of dead-reckoning data greatly improved the level of detail in the reconstructed tracks of humpback whales. Using cross-validation, a clear improvement in the predictability of out-of-set Fastloc-GPS data was observed compared to more conventional track reconstruction methods. Fastloc-GPS observation errors during calibrations were found to vary by number of GPS satellites received and by orthogonal dimension analysed; visual observation errors varied most by distance to the whale. By systematically accounting for the
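The dead-reckoning component that the state-space model builds on can be sketched as a simple integration of speed along heading; the Bayesian model in the paper additionally corrects the accumulated error with Fastloc-GPS and visual fixes. The inputs below are invented:

```python
import numpy as np

def dead_reckon(x0, y0, headings_rad, speeds_ms, dt):
    """Simple 2-D dead-reckoning: integrate speed along heading from a
    known starting fix. Without new position fixes, the error of this
    estimate grows with time, which motivates fusing it with GPS."""
    x, y = [x0], [y0]
    for h, s in zip(headings_rad, speeds_ms):
        x.append(x[-1] + s * dt * np.sin(h))  # east displacement
        y.append(y[-1] + s * dt * np.cos(h))  # north displacement
    return np.array(x), np.array(y)

# Toy track: due north (heading 0) at 1.5 m/s for ten 1 s steps
x, y = dead_reckon(0.0, 0.0, [0.0] * 10, [1.5] * 10, dt=1.0)
print(x[-1], y[-1])  # -> 0.0 15.0
```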
A Temporoparietal Fascia Pocket Method in Elevation of Reconstructed Auricle for Microtia.
Kurabayashi, Takashi; Asato, Hirotaka; Suzuki, Yasutoshi; Kaji, Nobuyuki; Mitoma, Yoko
2017-04-01
In two-stage procedures for reconstruction of microtia, an axial flap of temporoparietal fascia is widely used to cover the costal cartilage blocks placed behind the framework. Although a temporoparietal fascia flap is undoubtedly reliable, use of the flap is associated with some morbidity and comes at the expense of the option for salvage surgery. The authors devised a simplified procedure for covering the cartilage blocks by creating a pocket in the postauricular temporoparietal fascia. In this procedure, the constructed auricle is elevated from the head superficially to the temporoparietal fascia, and a pocket is created under the temporoparietal fascia and the capsule of the auricle framework. Cartilage blocks are then inserted into the pocket and fixed. A total of 38 reconstructed ears in 38 patients with microtia, aged 9 to 19 years, were elevated using the authors' method from 2002 to 2014 and followed for at least 5 months. To evaluate the long-term stability of the method, two-way analysis of variance was used (p … fascia flap method versus a temporoparietal fascia pocket method) over long-term follow-up. Good projection of the auricles and creation of well-defined temporoauricular sulci were achieved. Furthermore, the sulci tended to hold their steep profile over a long period. The temporoparietal fascia pocket method is simple but produces superior results. Moreover, pocket creation is less invasive and has the benefit of sparing temporoparietal fascia flap elevation. Therapeutic, IV.
Statistical image reconstruction methods for simultaneous emission/transmission PET scans
International Nuclear Information System (INIS)
Erdogan, H.; Fessler, J.A.
1996-01-01
Transmission scans are necessary for estimating the attenuation correction factors (ACFs) to yield quantitatively accurate PET emission images. To reduce the total scan time, post-injection transmission scans have been proposed in which one can simultaneously acquire emission and transmission data using rod sources and sinogram windowing. However, since the post-injection transmission scans are corrupted by emission coincidences, accurate correction for attenuation becomes more challenging. Conventional methods (emission subtraction) for ACF computation from post-injection scans are suboptimal and require relatively long scan times. We introduce statistical methods based on penalized-likelihood objectives to compute ACFs and then use them to reconstruct lower noise PET emission images from simultaneous transmission/emission scans. Simulations show the efficacy of the proposed methods. These methods improve image quality and SNR of the estimates as compared to conventional methods
Reconstruction methods for sound visualization based on acousto-optic tomography
DEFF Research Database (Denmark)
Torras Rosell, Antoni; Lylloff, Oliver; Barrera Figueroa, Salvador
2013-01-01
The visualization of acoustic fields using acousto-optic tomography has recently proved to yield satisfactory results in the audible frequency range. The current implementation of this visualization technique uses a laser Doppler vibrometer (LDV) to measure the acousto-optic effect, that is … tomographic techniques. The filtered back projection (FBP) method is the most popular reconstruction algorithm used for tomography in many fields of science. The present study takes the performance of the FBP method in sound visualization as a reference and investigates the use of alternative methods commonly used in inverse problems, e.g., the singular value decomposition and the conjugate gradient methods. A generic formulation for describing the acousto-optic measurement as an inverse problem is thus derived, and the performance of the numerical methods is assessed by means of simulations
Directory of Open Access Journals (Sweden)
Bakhtiari Jalal
2012-12-01
Full Text Available Abstract Background Laparoscopic gastrectomy is a new and technically challenging surgical procedure with potential benefit. The objective of this study was to investigate the clinical and para-clinical consequences of Roux-en-Y and jejunal loop interposition reconstructive techniques for subtotal gastrectomy using laparoscopic-assisted surgery. Results Following resection of the stomach attachments through a laparoscopic approach, the stomach was removed and reconstruction was performed with either the standard Roux-en-Y (n = 5) or jejunal loop interposition (n = 5) method. Weight changes were monitored daily and blood samples were collected on Days 0, 7 and 21 post surgery. A fecal sample was collected on Day 28 after surgery to evaluate fat content. One month post surgery, positive-contrast radiography was conducted at 5, 10, 20, 40, 60 and 90 minutes after oral administration of barium sulfate to evaluate postoperative complications. There was a gradual decline in body weight in both experimental groups after surgery (P 0.05). Fecal fat content increased in the Roux-en-Y group compared to the jejunal loop interposition technique (P 0.05). Conclusion Roux-en-Y and jejunal loop interposition techniques might be considered suitable approaches for reconstructing the gastrointestinal tract following gastrectomy in dogs. The results of this study warrant further investigation with a larger number of animals.
International Nuclear Information System (INIS)
Kollár, László E; Lucas, Gary P; Zhang, Zhichao
2014-01-01
An analytical method is developed for the reconstruction of velocity profiles using measured potential distributions obtained around the boundary of a multi-electrode electromagnetic flow meter (EMFM). The method is based on the discrete Fourier transform (DFT) and is implemented in Matlab. The method represents the velocity profile in a section of a pipe as a superposition of polynomials up to sixth order. Each polynomial component is defined along a specific direction in the plane of the pipe section. For a potential distribution obtained in a uniform magnetic field, this direction is not unique for quadratic and higher-order components; thus, multiple possible solutions exist for the reconstructed velocity profile. A procedure for choosing the optimum velocity profile is proposed. It is applicable to single-phase or two-phase flows, and requires measurement of the potential distribution in a non-uniform magnetic field. The potential distribution in this non-uniform magnetic field is also calculated for the possible solutions using weight values. Then, the velocity profile whose calculated potential distribution is closest to the measured one provides the optimum solution. The reliability of the method is first demonstrated by reconstructing an artificial velocity profile defined by polynomial functions. Next, velocity profiles in different two-phase flows, based on results from the literature, are used to define the input velocity fields. In all cases, COMSOL Multiphysics is used to model the physical specifications of the EMFM and to simulate the measurements; thus, COMSOL simulations produce the potential distributions on the internal circumference of the flow pipe. These potential distributions serve as inputs for the analytical method. The reconstructed velocity profiles show satisfactory agreement with the input velocity profiles. The method described in this paper is most suitable for stratified flows and is not applicable to axisymmetric flows in
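The role of the DFT can be illustrated with a toy version of the boundary-potential idea: in a uniform transverse magnetic field, a uniform axial velocity contributes a cos θ variation of the boundary potential, and higher-order profile components contribute higher harmonics, so a DFT of the electrode potentials separates them. This sketch uses invented amplitudes and ignores the weight functions and the non-uniform-field disambiguation step of the actual method:

```python
import numpy as np

# 16 hypothetical electrodes equally spaced around the pipe boundary
n_electrodes = 16
theta = 2 * np.pi * np.arange(n_electrodes) / n_electrodes

# Synthetic boundary potential: a mean-flow (first harmonic) term plus
# a smaller second harmonic standing in for a quadratic profile term.
U_mean, U_quad = 1.0, 0.2
potential = U_mean * np.cos(theta) + U_quad * np.cos(2 * theta)

# The DFT separates the harmonics; normalising by N/2 recovers the
# amplitude of each cosine component.
spectrum = np.fft.rfft(potential) / (n_electrodes / 2)
first, second = spectrum[1].real, spectrum[2].real
print(first, second)  # -> ~1.0 and ~0.2
```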
Critical flux determination by flux-stepping
DEFF Research Database (Denmark)
Beier, Søren; Jonsson, Gunnar Eigil
2010-01-01
In the membrane filtration scientific literature, step-by-step determined critical fluxes are often reported. Using a dynamic microfiltration device, it is shown that critical fluxes determined from two different flux-stepping methods are dependent upon operational parameters such as step … Such values are more or less useless in themselves as critical flux predictors, and constant-flux verification experiments have to be conducted to check whether the determined critical fluxes can predict sustainable flux regimes. However, it is shown that using the step-by-step predicted critical fluxes as start…
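A step-by-step critical-flux determination of the kind discussed can be sketched as follows: the flux is raised in steps while the transmembrane pressure (TMP) is monitored, and the last step with a stable TMP is reported. The threshold and numbers are invented, which reflects the abstract's point that the result depends on the chosen operational parameters:

```python
def critical_flux(flux_steps, tmp_slopes, slope_threshold=1.0):
    """Step-by-step critical flux estimate: return the last flux step
    whose TMP slope (here in mbar/min) stays below a threshold, i.e.
    the last step before fouling sets in. Illustrative numbers only."""
    critical = None
    for flux, slope in zip(flux_steps, tmp_slopes):
        if slope < slope_threshold:
            critical = flux
        else:
            break
    return critical

fluxes = [20, 30, 40, 50, 60]            # L m-2 h-1 (invented steps)
tmp_slopes = [0.1, 0.2, 0.4, 2.5, 8.0]   # mbar/min, TMP drifts from 50 on
print(critical_flux(fluxes, tmp_slopes))  # -> 40
```

As the abstract stresses, a value obtained this way still needs constant-flux verification experiments before it can be trusted as a sustainable-flux predictor.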
Kuiper, Justin J; Zimmerman, M Bridget; Pagedar, Nitin A; Carter, Keith D; Allen, Richard C; Shriver, Erin M
2016-08-01
This article compares the perception of health and beauty of patients after exenteration reconstruction with free flap, eyelid-sparing, or split-thickness skin graft techniques, or with a prosthesis. Cross-sectional evaluation was performed through a survey sent to all students enrolled at the University of Iowa Carver College of Medicine. The survey included inquiries about observer comfort, perceived patient health, difficulty of social interactions, and which patient appearance was least bothersome. Responses were scored from 0 to 4 for each method of reconstruction and an orbital prosthesis. A Friedman test was used to compare responses among each method of repair and the orbital prosthesis for each of the four questions; where significant, post-hoc pairwise comparison was performed with p values adjusted using Bonferroni's method. One hundred and thirty-two students responded to the survey, and 125 completed all four questions. Favorable response for all questions was highest for the orbital prosthesis and lowest for the split-thickness skin graft. Patient appearance with an orbital prosthesis had significantly higher scores compared to patient appearance with each of the other methods for all questions (p value < 0.0001). The second-highest scores were for the free flap, which were higher than eyelid-sparing and significantly higher compared to split-thickness skin grafting (p value: Question 1: < 0.0001; Question 2: 0.0005; Question 3: 0.006; and Question 4: 0.019). The orbital prosthesis was the preferred post-operative appearance for the exenterated socket for each question. Free flap was the preferred appearance for reconstruction without an orbital prosthesis. Split-thickness skin graft was least preferred for all questions.
Distenfeld, Carl H.
1978-01-01
A method for measuring the dose-equivalent for exposure to an unknown and/or time-varying neutron flux which comprises simultaneously exposing a plurality of neutron detecting elements of different types to a neutron flux and combining the measured responses of the various detecting elements by means of a function, whose value is an approximate measure of the dose-equivalent, which is substantially independent of the energy spectrum of the flux. Also, a personnel neutron dosimeter, which is useful in carrying out the above method, comprising a plurality of various neutron detecting elements in a single housing suitable for personnel to wear while working in a radiation area.
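The combining function described above is, in its simplest conceivable form, a fixed linear combination of the detector responses with weights chosen to flatten the energy dependence. A hypothetical sketch; the weights and counts are invented for illustration, not taken from the patent:

```python
import numpy as np

def dose_equivalent(responses, weights):
    """Dose-equivalent as a linear combination of the responses of
    several neutron detector types. In the patented method the
    combination is chosen to be approximately independent of the
    neutron energy spectrum; the weights here are purely illustrative."""
    return float(np.dot(responses, weights))

responses = [120.0, 45.0, 10.0]  # counts from three detector types (invented)
weights = [0.01, 0.05, 0.2]      # mSv per count (hypothetical calibration)
dose = dose_equivalent(responses, weights)
print(dose)  # -> 5.45 (mSv, toy value)
```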
Energy Technology Data Exchange (ETDEWEB)
Zhao, Weizhao; Ginsberg, M. (Univ. of Miami, FL (United States). Cerebral Vascular Disease Research Center); Young, T.Y. (Univ. of Miami, Coral Gables, FL (United States). Dept. of Electrical and Computer Engineering)
1993-12-01
Quantitative autoradiography is a powerful radioisotopic imaging method for neuroscientists to study local cerebral blood flow and glucose metabolic rate at rest, in response to physiologic activation of the visual, auditory, somatosensory, and motor systems, and in pathologic conditions. Most autoradiographic studies analyze glucose utilization and blood flow in two-dimensional (2-D) coronal sections. With modern digital computers and image-processing techniques, a large number of closely spaced coronal sections can be stacked appropriately to form a three-dimensional (3-D) image. 3-D autoradiography allows investigators to observe cerebral sections and surfaces from any viewing angle. A fundamental problem in 3-D reconstruction is the alignment (registration) of the coronal sections. A new alignment method based on disparity analysis is presented which can overcome many of the difficulties encountered by previous methods. The disparity analysis method can deal with asymmetric, damaged, or tilted coronal sections under the same general framework, and it can be used to match coronal sections of different sizes and shapes. Experimental results on alignment and 3-D reconstruction are presented.
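A minimal stand-in for section alignment is translation-only registration via FFT-based cross-correlation; the disparity-analysis method of the paper goes further, handling asymmetric, damaged, tilted, and differently sized sections. The images below are synthetic:

```python
import numpy as np

def align_translation(ref, moving):
    """Estimate the integer (dy, dx) translation aligning `moving` to
    `ref` from the peak of the FFT-based cross-correlation. This is a
    simplified stand-in for the disparity-analysis registration."""
    f = np.fft.fft2(moving) * np.conj(np.fft.fft2(ref))
    corr = np.fft.ifft2(f).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the symmetric range around zero
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)

rng = np.random.default_rng(1)
ref = rng.random((64, 64))                          # synthetic "section"
moving = np.roll(ref, shift=(3, -5), axis=(0, 1))   # known displacement
shift = align_translation(ref, moving)
print(shift)  # -> (3, -5)
```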
Energy Technology Data Exchange (ETDEWEB)
Nakajima, Shin; Kato, Amami; Yoshimine, Toshiki; Taneda, Mamoru; Hayakawa, Toru (Osaka Univ. (Japan). Faculty of Medicine)
1994-03-01
The authors have developed a new, practical method to reconstruct cerebral surface anatomical images for better surgical orientation and surgical planning. Using a personal computer and commercially available image-handling software, an area encompassing the surface gyri and sulci is selected from the most superficial slice of T1-weighted MR images, after which this selected area, with the alignment adjusted, is overlaid onto the next superficial slice. By repeating this procedure 4 to 7 times, the resulting brain surface image clearly displays the gyri and sulci. A vascular image of the cerebral surface can also be obtained by the same method using T2-weighted images or MR angiograms. Then, by combining the brain surface and vascular images, an anatomically reconstructed image of the cerebral surface is achieved. The outlines of the lesion or ventricles can also be added if necessary, and the entire procedure takes an hour or less. The authors believe that this method is superior to conventional surface anatomy scanning for discriminating anatomical structures close to a lesion. This surface anatomical imaging method has been used for surgical planning, and its use has helped to minimize surgical damage to the eloquent areas. (author).
Implementation of a fast running full core pin power reconstruction method in DYN3D
International Nuclear Information System (INIS)
Gomez-Torres, Armando Miguel; Sanchez-Espinoza, Victor Hugo; Kliem, Sören; Gommlich, Andre
2014-01-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single fuel assembly, a group of assemblies, or all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a REA in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D, with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the subchannel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FA. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions
Implementation of a fast running full core pin power reconstruction method in DYN3D
Energy Technology Data Exchange (ETDEWEB)
Gomez-Torres, Armando Miguel [Instituto Nacional de Investigaciones Nucleares, Department of Nuclear Systems, Carretera Mexico – Toluca s/n, La Marquesa, 52750 Ocoyoacac (Mexico); Sanchez-Espinoza, Victor Hugo, E-mail: victor.sanchez@kit.edu [Karlsruhe Institute of Technology, Institute for Neutron Physics and Reactor Technology, Hermann-vom-Helmhotz-Platz 1, D-76344 Eggenstein-Leopoldshafen (Germany); Kliem, Sören; Gommlich, Andre [Helmholtz-Zentrum Dresden-Rossendorf, Bautzner Landstraße 400, 01328 Dresden (Germany)
2014-07-01
Highlights: • New pin power reconstruction (PPR) method for the nodal diffusion code DYN3D. • Flexible PPR method applicable to a single, a group or to all fuel assemblies (square, hex). • Combination of nodal with pin-wise solutions (non-conform geometry). • PPR capabilities shown for a rod ejection accident (REA) in a PWR minicore and for a whole core. - Abstract: This paper presents a substantial extension of the pin power reconstruction (PPR) method used in the reactor dynamics code DYN3D, with the aim of better describing the heterogeneity within the fuel assembly during reactor simulations. The flexibility of the newly implemented PPR permits the local spatial refinement of one fuel assembly, of a cluster of fuel assemblies, of a quarter or an eighth of a core, or even of a whole core. The application of PPR in core regions of interest will pave the way for coupling with sub-channel codes, enabling the prediction of local safety parameters. One of the main advantages of considering regions, and not only a hot fuel assembly (FA), is that the cross flow within the region can be taken into account by the subchannel code. The implementation of the new PPR method has been tested by analysing a rod ejection accident (REA) in a PWR minicore consisting of 3 × 3 FA. Finally, the new capabilities of DYN3D are demonstrated by analysing a boron dilution transient in a PWR MOX core and the pin power of a VVER-1000 reactor at stationary conditions.
Han, Zhaolong; Li, Jiasong; Singh, Manmohan; Wu, Chen; Liu, Chih-hao; Wang, Shang; Idugboe, Rita; Raghunathan, Raksha; Sudheendran, Narendran; Aglyamov, Salavat R.; Twa, Michael D.; Larin, Kirill V.
2015-01-01
We present a systematic analysis of the accuracy of five different methods for extracting the biomechanical properties of soft samples using optical coherence elastography (OCE). OCE is an emerging noninvasive technique that allows assessing the biomechanical properties of tissues with micrometer spatial resolution. However, to accurately extract biomechanical properties from OCE measurements, a proper mechanical model must be applied. In this study, we utilize tissue-mimicking phantoms with controlled elastic properties and investigate the feasibility of four available methods for reconstructing elasticity (Young's modulus) based on OCE measurements of an air-pulse induced elastic wave. The approaches are based on the shear wave equation (SWE), the surface wave equation (SuWE), the Rayleigh-Lamb frequency equation (RLFE), and the finite element method (FEM). Elasticity values were compared with uniaxial mechanical testing. The results show that the RLFE and the FEM are more robust in quantitatively assessing elasticity than the other simplified models. This study provides a foundation and reference for reconstructing the biomechanical properties of tissues from OCE data, which is important for the further development of noninvasive elastography methods. PMID:25860076
A new near-lossless EEG compression method using ANN-based reconstruction technique.
Hejrati, Behzad; Fathi, Abdolhossein; Abdali-Mohammadi, Fardin
2017-08-01
Compression algorithms are an essential part of telemedicine systems for storing and transmitting large amounts of medical signals. Most existing compression methods utilize fixed transforms such as the discrete cosine transform (DCT) and the wavelet transform, and usually cannot efficiently extract signal redundancy, especially for non-stationary signals such as the electroencephalogram (EEG). In this paper, we first propose a learning-based adaptive transform using a combination of DCT and an artificial neural network (ANN) reconstruction technique. This adaptive ANN-based transform is applied to the DCT coefficients of EEG data to reduce their dimensionality and also to estimate the original DCT coefficients of the EEG in the reconstruction phase. To develop a near-lossless compression method, the difference between the original DCT coefficients and the estimated ones is also quantized. The quantized error is coded using arithmetic coding and sent along with the estimated DCT coefficients as compressed data. The proposed method was applied to various datasets and the results show a higher compression rate compared to state-of-the-art methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
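The residual-quantization step described in this abstract can be sketched in a few lines. Here a fixed low-frequency truncation stands in for the paper's ANN estimator (a deliberate simplification; the signal, `keep`, and `q_step` values are illustrative):

```python
import numpy as np
from scipy.fft import dct, idct

def compress(signal, keep=64, q_step=0.5):
    # Transform to the DCT domain; the leading coefficients stand in
    # for the ANN-based estimate used in the paper.
    c = dct(signal, norm="ortho")
    estimate = np.zeros_like(c)
    estimate[:keep] = c[:keep]
    # Quantize the estimation error so reconstruction is near-lossless.
    q_residual = np.round((c - estimate) / q_step).astype(np.int32)
    return estimate[:keep], q_residual, q_step

def decompress(est_head, q_residual, q_step):
    c = q_residual.astype(float) * q_step   # dequantized residual
    c[:len(est_head)] += est_head           # add back the estimate
    return idct(c, norm="ortho")

rng = np.random.default_rng(0)
eeg = np.cumsum(rng.standard_normal(256))   # toy non-stationary signal
head, resid, q = compress(eeg)
rec = decompress(head, resid, q)
max_err = np.max(np.abs(rec - eeg))         # bounded by the quantization grid
```

The reconstruction error is bounded by the quantization step in the DCT domain, which is what makes such a scheme near-lossless rather than lossless.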
Jha, Abhinav K.; Song, Na; Caffo, Brian; Frey, Eric C.
2015-03-01
Quantitative single-photon emission computed tomography (SPECT) imaging is emerging as an important tool in clinical studies and biomedical research. There is thus a need for optimization and evaluation of systems and algorithms that are being developed for quantitative SPECT imaging. An appropriate objective way to evaluate these systems is to compare their performance on the end task required in quantitative SPECT imaging, such as estimating the mean activity concentration in a volume of interest (VOI) in a patient image. This objective evaluation can be performed if the true value of the estimated parameter is known, i.e. we have a gold standard. However, this gold standard is very rarely known in human studies. Thus, no-gold-standard techniques are required to optimize and evaluate systems and algorithms in the absence of a gold standard. In this work, we developed a no-gold-standard technique to objectively evaluate reconstruction methods used in quantitative SPECT when the parameter to be estimated is the mean activity concentration in a VOI. We studied the performance of the technique with realistic simulated image data generated from an object database consisting of five phantom anatomies with all possible combinations of five sets of organ uptakes, where each anatomy consisted of eight different organ VOIs. Results indicate that the method provided accurate ranking of the reconstruction methods. We also demonstrated the application of consistency checks to test the no-gold-standard output.
Energy Technology Data Exchange (ETDEWEB)
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
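Two of the four 4D ROOSTER regularization steps (positivity, and temporal averaging outside the motion mask) are simple enough to sketch directly; the two total-variation steps and the conjugate-gradient data step are omitted here, and the array shapes are illustrative:

```python
import numpy as np

def rooster_step(vol, motion_mask):
    """One pass of two of the four 4D ROOSTER-style regularization
    steps: enforce positivity, then freeze voxels outside the motion
    mask at their temporal mean. vol: (x, y, z, t); motion_mask: bool."""
    vol = np.maximum(vol, 0.0)                  # enforce positivity
    mean_t = vol.mean(axis=-1, keepdims=True)   # average along time
    static = ~motion_mask[..., None]            # voxels outside the mask
    return np.where(static, mean_t, vol)        # static voxels: no motion

rng = np.random.default_rng(1)
vol = rng.standard_normal((4, 4, 4, 6))         # toy 3D + time volume
mask = np.zeros((4, 4, 4), dtype=bool)
mask[1:3, 1:3, 1:3] = True                      # heart/vessel region
out = rooster_step(vol, mask)
```

In the full algorithm these steps alternate with a conjugate-gradient reconstruction step and with 3D spatial and 1D temporal total variation minimization.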
Hummelink, S.; Hofer, S.; Hameeteman, M.; Hoogeveen, Y.; Slump, Cornelis H.; Ulrich, D.J.O.; Schultze Kool, L.J.
Introduction: In a deep inferior epigastric perforator (DIEP) flap breast reconstruction, computed tomography angiography (CTA) is currently considered as the gold standard in preoperative imaging for this procedure. Unidirectional Doppler ultrasound (US) is frequently used; however, this method
Hummelink, S.L.; Hameeteman, M.; Hoogeveen, Y.L.; Slump, C.H.; Ulrich, D.J.O.; Schultze Kool, L.J.
2015-01-01
INTRODUCTION: In a deep inferior epigastric perforator (DIEP) flap breast reconstruction, computed tomography angiography (CTA) is currently considered as the gold standard in preoperative imaging for this procedure. Unidirectional Doppler ultrasound (US) is frequently used; however, this method
Yuan, Zhen; Jiang, Huabei
2007-05-10
What we believe to be a novel 3D diffuse optical tomography scheme is developed to reconstruct images of both absorption and scattering coefficients of finger joint systems. Compared with our previous reconstruction method, the improved 3D algorithm employs both modified Newton methods and an enhanced initial value optimization scheme to recover the optical properties of highly heterogeneous media. The developed approach is tested using simulated, phantom, and in vivo measurement data. The recovered results suggest that the improved approach is able to provide quantitatively better images than our previous algorithm for optical tomography reconstruction.
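The "modified Newton" update underlying inverse schemes of this kind is, in essence, a damped Gauss-Newton step. A minimal sketch on a toy one-parameter-pair fitting problem (the exponential model, data, damping value, and starting point are illustrative, not the paper's):

```python
import numpy as np

def damped_newton(residual, jacobian, x0, lam=1e-3, n_iter=50):
    """Damped (Gauss-)Newton iteration for a nonlinear inverse problem:
    x <- x + (J^T J + lam I)^(-1) J^T r, with r the data residual."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        r = residual(x)
        J = jacobian(x)
        H = J.T @ J + lam * np.eye(len(x))   # damped normal equations
        x = x + np.linalg.solve(H, J.T @ r)
    return x

# Toy problem: recover (a, b) of a*exp(-b*t) from noiseless samples.
t = np.linspace(0.0, 4.0, 20)
y = 2.0 * np.exp(-0.5 * t)
res = lambda x: y - x[0] * np.exp(-x[1] * t)
jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                 -x[0] * t * np.exp(-x[1] * t)])
x_hat = damped_newton(res, jac, [1.8, 0.6])
```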
A Novel Method for Magnetic Resonance Ocular Imaging Using Super-Resolution Reconstruction
Directory of Open Access Journals (Sweden)
LI Yu-zhou
2017-12-01
Full Text Available Magnetic resonance imaging (MRI) is a noninvasive intraocular tumor detection method without ionizing radiation. However, resolution limitations and motion artifacts are difficult to overcome in the imaging process. Conventional scanning methods inevitably introduce motion artifacts, or require the subjects to cooperate for accurate eye fixation, increasing the difficulty of imaging and giving the subject an uncomfortable experience. In this work, a new MRI method based on super-resolution theory is proposed, which uses a specialized orbit coil to scan a series of dynamic images of the eyeball such that the acquisition resolution in different directions is complementary. High-resolution eyeball images with minimal motion artifacts can then be obtained after pre-processing, registration, super-resolution reconstruction and other operations. The study showed that the proposed method can be used to obtain clear eyeball images without the requirement of eye fixation.
Novel Direction Of Arrival Estimation Method Based on Coherent Accumulation Matrix Reconstruction
Directory of Open Access Journals (Sweden)
Li Lei
2015-04-01
Full Text Available Based on coherent accumulation matrix reconstruction, a novel Direction Of Arrival (DOA) estimation decorrelation method for coherent signals is proposed using a small sample. First, the Signal to Noise Ratio (SNR) is improved by performing a coherent accumulation operation on the array of observed data. Then, according to the structural characteristics of the accumulated snapshot vector, an equivalent covariance matrix, whose rank is the same as the number of array elements, is constructed. The rank of this matrix is proved to be determined solely by the number of incident signals, which realizes the decorrelation of coherent signals. Compared with the spatial smoothing method, the proposed method performs better, effectively avoiding aperture loss while offering high-resolution characteristics and low computational complexity. Simulation results demonstrate the efficiency of the proposed method.
A Dictionary Learning Method with Total Generalized Variation for MRI Reconstruction
Directory of Open Access Journals (Sweden)
Hongyang Lu
2016-01-01
Full Text Available Reconstructing images from noisy and incomplete measurements is always a challenge, especially for medical MR images with important details and features. This work proposes a novel dictionary learning model that integrates two sparse regularization methods: the total generalized variation (TGV) approach and adaptive dictionary learning (DL). In the proposed method, the TGV selectively regularizes different image regions at different levels to largely avoid oil-painting artifacts. At the same time, the dictionary learning adaptively represents the image features sparsely and effectively recovers details of the images. The proposed model is solved by a variable splitting technique and the alternating direction method of multipliers. Extensive simulation results demonstrate that the proposed method consistently recovers MR images efficiently and outperforms the current state-of-the-art approaches in terms of higher PSNR and lower HFEN values.
Liu, Xueqi; Wang, Hong-Wei
2011-03-28
Single particle electron microscopy (EM) reconstruction has recently become a popular tool to obtain the three-dimensional (3D) structure of large macromolecular complexes. Compared to X-ray crystallography, it has some unique advantages. First, single particle EM reconstruction does not require crystallizing the protein sample, which is the bottleneck in X-ray crystallography, especially for large macromolecular complexes. Second, it does not need large amounts of protein sample. Compared with the milligrams of protein necessary for crystallization, single particle EM reconstruction needs only several microliters of protein solution at nanomolar concentrations, using the negative staining EM method. However, apart from a few macromolecular assemblies with high symmetry, single particle EM is limited to relatively low resolution (lower than 1 nm) for many specimens, especially those without symmetry. The technique is also limited by the size of the molecules under study, i.e. in general 100 kDa for negatively stained specimens and 300 kDa for frozen-hydrated specimens. For a new sample of unknown structure, we generally use a heavy metal solution to embed the molecules by negative staining. The specimen is then examined in a transmission electron microscope to take two-dimensional (2D) micrographs of the molecules. Ideally, the protein molecules have a homogeneous 3D structure but exhibit different orientations in the micrographs. These micrographs are digitized and processed in computers as "single particles". Using two-dimensional alignment and classification techniques, homogeneous molecules in the same views are clustered into classes. Their averages enhance the signal of the molecule's 2D shapes. After we assign the particles the proper relative orientations (Euler angles), we are able to reconstruct the 2D particle images into a 3D virtual volume. In single particle 3D reconstruction, an essential step is to correctly assign the proper orientation
International Nuclear Information System (INIS)
Rezac, K.; Klir, D.; Kubes, P.; Kravarik, J.
2009-01-01
We present the reconstruction of neutron energy spectra from time-of-flight signals. This technique is useful in experiments where the neutron production time is in the range of tens or hundreds of nanoseconds. The neutron signals were obtained by common fast plastic scintillation detectors sensitive to both hard X-rays and neutrons. The reconstruction is based on the Monte Carlo method, which has been improved by the simultaneous use of neutron detectors placed on two opposite sides of the neutron source. Although the reconstruction from detectors placed on two opposite sides is more difficult and somewhat less accurate (a consequence of several assumptions made when including both sides of detection), it has some advantages. The most important advantage is the smaller influence of scattered neutrons on the reconstruction. Finally, we describe the estimation of the error of this reconstruction.
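The time-of-flight unfolding described above rests on a simple kinematic relation between flight time and neutron energy; a minimal classical sketch (the detector distance and timing values are illustrative only):

```python
C_M_PER_S = 2.99792458e8      # speed of light, m/s
NEUTRON_MC2_MEV = 939.565     # neutron rest energy, MeV

def tof_to_energy_mev(distance_m, tof_ns):
    """Classical neutron kinetic energy from its time of flight --
    the kinematic relation at the core of unfolding a TOF signal
    into an energy spectrum: E = (1/2) m_n c^2 beta^2."""
    velocity = distance_m / (tof_ns * 1e-9)   # m/s
    beta = velocity / C_M_PER_S
    return 0.5 * NEUTRON_MC2_MEV * beta ** 2

# A 2.45 MeV D-D fusion neutron needs roughly 462 ns to fly 10 m.
e_dd = tof_to_energy_mev(10.0, 461.9)
```

The Monte Carlo reconstruction samples candidate spectra, propagates each through this relation (plus detector response), and keeps those consistent with the measured signals on both sides of the source.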
Tang, Cuong Q; Humphreys, Aelys M; Fontaneto, Diego; Barraclough, Timothy G; Paradis, Emmanuel
2014-10-01
Coalescent-based species delimitation methods combine population genetic and phylogenetic theory to provide an objective means for delineating evolutionarily significant units of diversity. The generalised mixed Yule coalescent (GMYC) and the Poisson tree process (PTP) are methods that use ultrametric (GMYC or PTP) or non-ultrametric (PTP) gene trees as input, intended for use mostly with single-locus data such as DNA barcodes. Here, we assess how robust the GMYC and PTP are to different phylogenetic reconstruction and branch smoothing methods. We reconstruct over 400 ultrametric trees using up to 30 different combinations of phylogenetic and smoothing methods and perform over 2000 separate species delimitation analyses across 16 empirical data sets. We then assess how variable diversity estimates are, in terms of richness and identity, with respect to species delimitation, phylogenetic and smoothing methods. The PTP method generally generates diversity estimates that are more robust to different phylogenetic methods. The GMYC is more sensitive, but provides consistent estimates for BEAST trees. The lower consistency of GMYC estimates is likely a result of differences among gene trees introduced by the smoothing step. Unresolved nodes (real anomalies or methodological artefacts) affect both GMYC and PTP estimates, but have a greater effect on GMYC estimates. Branch smoothing is a difficult step and perhaps an underappreciated source of bias that may be widespread among studies of diversity and diversification. Nevertheless, careful choice of phylogenetic method does produce equivalent PTP and GMYC diversity estimates. We recommend simultaneous use of the PTP model with any model-based gene tree (e.g. RAxML) and GMYC approaches with BEAST trees for obtaining species hypotheses.
Methods of reconstruction of perineal wounds after abdominoperineal resection. Literature review
Directory of Open Access Journals (Sweden)
S. S. Gordeev
2017-01-01
Full Text Available The problem of wound closure after abdominoperineal resection for oncological diseases remains unsolved. Formation of a primary suture in the perineal wound can lead to multiple postoperative complications: seroma, abscess, and wound disruption with subsequent perineal hernia. Chemoradiation therapy, the standard for locally advanced rectal or anal cancer, does not improve the treatment results of perineal wounds and increases the duration of their healing. Currently, surgeons have several reconstructive and plastic techniques to improve both immediate and long-term functional treatment results. In this article, the most common methods of allo- and autotransplantation are considered, and the benefits and deficiencies of the various techniques are evaluated and analyzed.
International Nuclear Information System (INIS)
Dong, Xiangyuan; Guo, Shuqing
2008-01-01
In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on a combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that the method produces higher-quality images than algorithms based on the parallel or series models alone for the cases tested in this paper. It provides a new algorithm for ECT applications.
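The regularization step mentioned here is commonly of Tikhonov type; a minimal sketch of a regularized linearized inversion, with a random matrix standing in for the ECT sensitivity matrix (an assumption, since the abstract does not specify the regularizer):

```python
import numpy as np

def tikhonov_reconstruct(S, c, lam=1e-2):
    """Regularized solution of the linearized ECT inverse problem
    S g = c: minimize ||S g - c||^2 + lam ||g||^2 (Tikhonov)."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ c)

# Toy sensitivity matrix and a noiseless capacitance vector.
rng = np.random.default_rng(2)
S = rng.standard_normal((20, 10))
g_true = rng.random(10)              # normalized permittivity values
c = S @ g_true
g_hat = tikhonov_reconstruct(S, c, lam=1e-8)
```

With noisy data `lam` trades fidelity against stability; here a tiny value suffices because the toy system is noiseless and well conditioned.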
A new method for estimating heat flux in superheater and reheater tubes
Energy Technology Data Exchange (ETDEWEB)
Purbolaksono, J. [Department of Mechanical Engineering, Universiti Tenaga Nasional, km 7 Jalan Kajang-Puchong, Kajang 43009, Selangor (Malaysia)], E-mail: judha@uniten.edu.my; Khinani, A.; Rashid, A.Z.; Ali, A.A. [Department of Mechanical Engineering, Universiti Tenaga Nasional, km 7 Jalan Kajang-Puchong, Kajang 43009, Selangor (Malaysia); Ahmad, J. [Kapar Energy Ventures Sdn Bhd, Jalan Tok Muda, Kapar 42200, Selangor (Malaysia); Nordin, N.F. [TNB Research Sdn Bhd, No. 1 Lorong Air Hitam, Kajang 43000, Selangor (Malaysia)
2009-10-15
In this paper, a procedure for estimating the heat flux in superheater and reheater tubes using an empirical formula and finite element modelling is proposed. An iterative procedure combining empirical formulae and numerical simulation is used to determine the heat flux as both temperature and scale thickness increase over a period of time. Estimation results for the heat flux over a period of time are presented for two different steam design temperatures and different heat transfer parameters.
International Nuclear Information System (INIS)
Kobayashi, Fujio; Yamaguchi, Shoichiro
1982-01-01
A method for the reconstruction of computed tomographic images was proposed to reduce the X-ray exposure dose. The method reconstructs images from a small number of X-ray projections using an accelerated gradient method. The computational procedures are described. The algorithm is simple, the computation converges quickly, and the required memory capacity is small. Numerical simulation was carried out to confirm the validity of this method. A sample of simple shape was considered, projection data were given, and images were reconstructed from 6 views. Good results were obtained, and the method is considered to be useful. (Kato, T.)
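An accelerated gradient iteration for the few-view least-squares problem can be sketched as follows; the Nesterov-style momentum variant and the toy system matrix are illustrative stand-ins for the paper's "accelerative gradient method":

```python
import numpy as np

def accelerated_gradient(A, b, n_iter=400):
    """Minimize ||A x - b||^2 with Nesterov momentum for strongly
    convex quadratics: constant step 1/L and constant momentum beta."""
    L = np.linalg.norm(A, 2) ** 2                       # Lipschitz constant
    mu = np.linalg.svd(A, compute_uv=False)[-1] ** 2    # strong convexity
    beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    x = np.zeros(A.shape[1])
    y = x.copy()
    for _ in range(n_iter):
        x_new = y - (A.T @ (A @ y - b)) / L    # gradient step at y
        y = x_new + beta * (x_new - x)         # momentum extrapolation
        x = x_new
    return x

# Toy consistent system standing in for a 6-view projection geometry.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 16))
x_true = rng.random(16)
b = A @ x_true
x_hat = accelerated_gradient(A, b)
```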
Choi, Joonsung; Seo, Hyunseok; Lim, Yongwan; Han, Yeji; Park, HyunWook
2015-03-01
To obtain three-dimensional (3D) MR angiography with high contrast between vessels and stationary background tissue, a novel technique called sliding time of flight (TOF) is proposed. The proposed method relies on the property that flow-related enhancement (FRE) is maximized at the blood-entering slice of an imaging slab. For the proposed sliding TOF, a sliding stack-of-stars sampling scheme and a dynamic MR image reconstruction algorithm were developed. To verify the performance of the proposed method, an in vivo study was performed and the results were compared with multiple overlapping thin 3D slab acquisition (MOTSA) and sliding interleaved ky (SLINKY). In MOTSA and SLINKY, the variation of FRE resulted in severe venetian blind (MOTSA) or ghost (SLINKY) artifacts, while the vessel contrast increased as the flip angle of the radiofrequency (RF) pulses increased. The proposed method, on the other hand, could provide high-contrast angiograms with reduced FRE-related artifacts. The sliding TOF can provide 3D angiography without image artifacts even if high flip angle RF pulses with thick slab excitation are used. Although residual subsampling artifacts may be present in the reconstructed images, they can be reduced by the MIP operation and further resolved by regularization techniques. © 2014 Wiley Periodicals, Inc.
A new method to reconstruct the ionospheric convection patterns in the polar cap
Directory of Open Access Journals (Sweden)
P. L. Israelevich
1999-06-01
Full Text Available A new method to reconstruct the instantaneous convection pattern in the Earth's polar ionosphere is suggested. Plasma convection in the polar cap ionosphere is described as a hydrodynamic incompressible flow. This description is valid in the region where the electric currents are field aligned (and hence the Lorentz body force vanishes). The problem becomes two-dimensional and may be described by means of a stream function. The flow pattern may be found as the solution of a boundary value problem for the stream function. Boundary conditions should be provided by measurements of the electric field or plasma velocity vectors along the satellite orbits. It is shown that the convection pattern may be reconstructed with reasonable accuracy by means of this method, using only a minimum number of satellite crossings of the polar cap. The method enables us to obtain a reasonable estimate of the convection pattern without knowledge of the ionospheric conductivity. Key words: Ionosphere (modelling and forecasting); plasma convection; polar ionosphere
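The boundary value problem for the stream function reduces, for an incompressible irrotational sketch, to a Laplace problem with Dirichlet data; a minimal finite-difference illustration (Jacobi iteration, toy harmonic boundary data, not the paper's satellite-track boundaries):

```python
import numpy as np

def solve_stream_function(psi_bc, n_iter=5000):
    """Jacobi solution of Laplace's equation for the stream function
    on a square grid with Dirichlet boundary data on the edges."""
    psi = psi_bc.copy()
    for _ in range(n_iter):
        # Five-point average of the neighbours (Jacobi sweep).
        psi[1:-1, 1:-1] = 0.25 * (psi[:-2, 1:-1] + psi[2:, 1:-1]
                                  + psi[1:-1, :-2] + psi[1:-1, 2:])
    return psi

# Boundary data taken from the harmonic function x^2 - y^2; the
# interior solution must reproduce it exactly on this grid.
x = np.linspace(-1.0, 1.0, 21)
X, Y = np.meshgrid(x, x, indexing="ij")
psi_true = X**2 - Y**2
psi0 = psi_true.copy()
psi0[1:-1, 1:-1] = 0.0           # unknown interior starts at zero
psi = solve_stream_function(psi0)
```

In the paper's setting the boundary values come from electric field or plasma velocity measurements along satellite orbits rather than from an analytic function.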
International Nuclear Information System (INIS)
Guedouar, R.; Zarrad, B.
2010-01-01
For algebraic reconstruction techniques, both forward and back projection operators are needed. The ability to perform accurate reconstruction relies fundamentally on the forward and back projection methods, which are usually the transpose of each other. Even though mis-matched pairs may introduce additional errors during the iterative process, the usefulness of mis-matched projector/back projector pairs has been proved in image reconstruction. This work investigates the performance of matched and mis-matched reconstruction pairs using popular forward projectors and their transposes in reconstruction tasks with additive simultaneous iterative reconstruction techniques (ASIRT) in a parallel beam approach. Simulated noiseless phantoms are used to compare the performance of the investigated pairs in terms of the root mean squared error (RMSE), calculated between the reconstructed slices and the reference in different regions. Results show that mis-matched projection/back projection pairs can yield more accurate reconstructed images than matched ones. The performance of the forward projection operator seems independent of the choice of the back projection operator, and vice versa.
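A SIRT-type iteration with an explicit, swappable back projector makes the matched/mis-matched distinction concrete. This sketch uses the standard inverse row/column-sum normalizations and a random nonnegative matrix in place of a real projector (both are illustrative assumptions):

```python
import numpy as np

def sirt(A, b, back=None, n_iter=2000, relax=1.0):
    """SIRT-style iteration x <- x + relax * C * B(R * (b - A x)).
    B = A.T gives the matched pair; passing any other back projector
    B gives a mis-matched pair. C, R are inverse column/row sums."""
    B = A.T if back is None else back
    R = 1.0 / A.sum(axis=1)          # inverse row sums
    C = 1.0 / A.sum(axis=0)          # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + relax * C * (B @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(4)
A = rng.random((30, 10))             # nonnegative stand-in projector
x_true = rng.random(10)
b = A @ x_true
x_matched = sirt(A, b)               # matched pair (B = A.T)
```

Replacing `back` with the transpose of a slightly different projector model reproduces the mis-matched setting studied in the abstract.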
Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2014-11-01
Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction nonlinear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be implemented very efficiently and therefore does not considerably affect the overall reconstruction time.
Kravchenko, V. F.; Ponomaryov, V. I.; Pustovoit, V. I.; Sadovnychiy, S. N.
2017-08-01
A novel method for the reconstruction of disparity maps (DMs) that is robust to non-ideal registration conditions, reflections, and noise in stereo color image pairs is substantiated for the first time. The novel approach proposes a scheme for image DM reconstruction in which the Jaccard distance metric is used as the proximity criterion in stereo image pair matching. A physical interpretation of the method, which allows the quality of the formed DMs to be improved significantly, is given. A processing block diagram has been developed in accordance with the novel approach. Simulations of the novel DM reconstruction method have shown the advantage of the proposed scheme in terms of generally recognized criteria, such as the structural similarity index measure and the bad matching pixels, and in visual comparison of the formed DMs.
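The Jaccard distance used as the matching criterion has a simple fuzzy-set form for non-negative patches; a one-dimensional matching sketch (the window size, disparity range, and synthetic rows are arbitrary choices for illustration):

```python
import numpy as np

def jaccard_distance(a, b):
    """Fuzzy Jaccard distance between two non-negative patches:
    1 - sum(min)/sum(max), used here as the matching cost."""
    den = np.maximum(a, b).sum()
    return 1.0 - np.minimum(a, b).sum() / den if den > 0 else 0.0

def best_disparity(left_row, right_row, x, window, max_disp):
    """Disparity minimizing the Jaccard distance between the left
    patch at x and right-image patches shifted by d (1-D sketch)."""
    patch = left_row[x:x + window]
    costs = [jaccard_distance(patch, right_row[x - d:x - d + window])
             for d in range(max_disp + 1)]
    return int(np.argmin(costs))

# Synthetic pair: the right row is the left row shifted by 3 pixels.
rng = np.random.default_rng(5)
left = rng.random(40) + 0.1          # positive intensities
right = left[3:]                     # true disparity = 3
d_hat = best_disparity(left, right, x=10, window=7, max_disp=8)
```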
Evaluation of the reconstruction method and effect of partial volume in brain scintiscanning
International Nuclear Information System (INIS)
Pinheiro, Monica Araujo
2016-01-01
Alzheimer's disease is a neurodegenerative disorder in which a progressive and irreversible destruction of neurons occurs. According to the World Health Organization (WHO), 35.6 million people are living with dementia, and it is recommended that governments prioritize early diagnosis techniques. Laboratory and psychological tests for cognitive assessment are conducted and further complemented by neurological imaging from nuclear medicine exams in order to establish an accurate diagnosis. Image quality evaluation and the effects of the reconstruction process are important tools in clinical routine. In the present work, these quality parameters were studied, along with the partial volume effect (PVE) for lesions of different sizes and geometries, which is attributed to the limited resolution of the equipment. In dementia diagnosis, this effect can be confused with uptake losses due to cerebral cortex atrophy. The evaluation was conducted with two phantoms of different shapes, as suggested by (a) the American College of Radiology (ACR) and (b) the National Electrical Manufacturers Association (NEMA), for calculation of Contrast, Contrast-to-Noise Ratio (CNR) and Recovery Coefficient (RC) versus lesion shape and size. The technetium-99m radionuclide was used in a local brain scintigraphy protocol, for lesion-to-background proportions of 2:1, 4:1, 6:1, 8:1 and 10:1. Fourteen reconstruction methods were used for each concentration, applying different filters and algorithms. From the analysis of all image properties, the conclusion is that the predominant effect is the partial volume effect, leading to measurement errors of more than 80%. Furthermore, it was demonstrated that the most effective reconstruction method is FBP with a Metz filter, providing better contrast and contrast-to-noise ratio results. In addition, this method shows the best Recovery Coefficient correction for each lesion. The ACR phantom showed the best results, attributed to a more precise reconstruction of a cylinder, which does not
Energy Technology Data Exchange (ETDEWEB)
Klymenko, Oleksiy V.; Svir, Irina [Mathematical and Computer Modelling Laboratory, Kharkov National University of Radioelectronics, 14 Lenin Avenue, Kharkov 61166 (Ukraine)]; Oleinick, Alexander I. [Mathematical and Computer Modelling Laboratory, Kharkov National University of Radioelectronics, 14 Lenin Avenue, Kharkov 61166 (Ukraine); Département de Chimie, École Normale Supérieure, UMR CNRS 8640 ''PASTEUR'', 24 rue Lhomond, 75231 Paris Cedex 05 (France)]; Amatore, Christian [Département de Chimie, École Normale Supérieure, UMR CNRS 8640 ''PASTEUR'', 24 rue Lhomond, 75231 Paris Cedex 05 (France)]
2007-12-20
We propose a theoretical method for reconstructing the shape of a hydrodynamic flow profile occurring locally within a rectangular microfluidic channel, based on experimental currents measured at double microband electrodes embedded in one channel wall and operating in the generator-collector regime. The ranges of geometrical and flow parameters providing the best conditions for flow profile determination are indicated. The solution of the convection-diffusion equation (the direct problem) is achieved through a specifically designed conformal mapping of the spatial coordinates and an exponentially expanding time grid, which yield accurate concentration and current distributions. The inverse problem (determination of the flow profile) is approached using a variational formulation whose solution is obtained by the Ritz method. The method may be extended to any number of electrodes in the channel and/or different operating regimes of the system (e.g. generator-generator). (author)
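The Ritz idea used for the inverse problem can be illustrated on a much simpler model problem. The sketch below is a hypothetical illustration, not the authors' flow-profile functional: it minimizes the energy functional of -u'' = f on (0,1) with u(0) = u(1) = 0 over a sine trial basis, for which the stiffness matrix is diagonal.

```python
import numpy as np

def ritz_solve(f, n_terms=10, n_quad=1000):
    """Ritz solution of -u'' = f on (0,1), u(0)=u(1)=0, over span{sin(n*pi*x)}.
    Minimizing J[u] = 1/2*int(u'^2) - int(f*u) gives c_n = <f,phi_n>/a(phi_n,phi_n)."""
    x = (np.arange(n_quad) + 0.5) / n_quad   # midpoint quadrature nodes on (0,1)
    coeffs = []
    for n in range(1, n_terms + 1):
        phi = np.sin(n * np.pi * x)
        load = np.mean(f(x) * phi)           # approximates int f(x) phi_n(x) dx
        stiff = (n * np.pi) ** 2 / 2.0       # a(phi_n, phi_n); basis is a-orthogonal
        coeffs.append(load / stiff)

    def u(xe):
        xe = np.asarray(xe, dtype=float)
        return sum(c * np.sin((n + 1) * np.pi * xe) for n, c in enumerate(coeffs))

    return u
```

For f = pi^2*sin(pi*x) the exact solution is u = sin(pi*x), which the first basis coefficient captures exactly.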
Multiframe super resolution reconstruction method based on light field angular images
Zhou, Shubo; Yuan, Yan; Su, Lijuan; Ding, Xiaomin; Wang, Jichao
2017-12-01
The plenoptic camera can directly obtain 4-dimensional light field information from a 2-dimensional sensor. However, based on the sampling theorem, the spatial resolution is greatly limited by the microlenses. In this paper, we present a method of reconstructing high-resolution images from the angular images. First, the ray tracing method is used to model the telecentric-based light field imaging process. Then, we analyze the subpixel shifts between the angular images extracted from the defocused light field data and the blur in the angular images. According to the analysis above, we construct the observation model from the ideal high-resolution image to the angular images. Applying the regularized super resolution method, we can obtain the super resolution result with a magnification ratio of 8. The results demonstrate the effectiveness of the proposed observation model.
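As a rough sketch of such an observation model and a regularized inversion, the following uses integer shifts and a box blur as stand-ins for the subpixel shifts and defocus blur analyzed in the paper (an illustrative assumption, not the authors' model), and solves the Tikhonov-regularized least-squares problem by gradient descent.

```python
import numpy as np

def degrade(x, shift, scale):
    """Observation model A_k: shift the high-resolution image, then blur and
    decimate (average pooling combines the box blur and the downsampling)."""
    s = np.roll(np.roll(x, shift[0], axis=0), shift[1], axis=1)
    k = scale
    return s.reshape(s.shape[0] // k, k, s.shape[1] // k, k).mean(axis=(1, 3))

def degrade_adj(y, shift, scale):
    """Adjoint of degrade: replicate each low-res pixel, rescale, unshift."""
    k = scale
    x = np.repeat(np.repeat(y, k, axis=0), k, axis=1) / (k * k)
    return np.roll(np.roll(x, -shift[1], axis=1), -shift[0], axis=0)

def super_resolve(frames, shifts, scale, lam=1e-3, n_iter=300, step=1.0):
    """Gradient descent on sum_k ||A_k x - y_k||^2/2 + lam*||x||^2/2."""
    h, w = frames[0].shape[0] * scale, frames[0].shape[1] * scale
    x = np.zeros((h, w))
    for _ in range(n_iter):
        g = lam * x
        for y, sh in zip(frames, shifts):
            g += degrade_adj(degrade(x, sh, scale) - y, sh, scale)
        x -= step * g
    return x
```

With four frames at distinct shifts and scale 2, the reconstruction recovers most of the high-resolution content that simple upsampling of a single frame cannot.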
Matros, Evan; Albornoz, Claudia R; Rensberger, Michael; Weimer, Katherine; Garfein, Evan S
2014-06-01
There is increased clinical use of computer-assisted design (CAD) and computer-assisted modeling (CAM) for osseous flap reconstruction, particularly in the head and neck region. Limited information exists about methods to optimize the application of this new technology and about cases in which it may be advantageous over existing methods of osseous flap shaping. A consecutive series of osseous reconstructions planned with CAD/CAM over the past 5 years was analyzed. Conceptual considerations and refinements in the CAD/CAM process were evaluated. A total of 48 reconstructions were performed using CAD/CAM. The majority of cases were performed for head and neck tumor reconstruction or related complications, whereas the remainder (4%) were performed for penetrating trauma. Defect location was the mandible (85%), maxilla (12.5%), and pelvis (2%). Reconstruction was performed immediately in 73% of the cases and delayed in 27%. The mean number of osseous flap bone segments used in reconstruction was 2.41. Areas of optimization include the following: mandible cutting guide placement, osteotomy creation, alternative planning, and saw blade optimization. Identified benefits of CAD/CAM over current techniques include the following: delayed timing, anterior mandible defects, specimen distortion, osteotomy creation in three dimensions, osteotomy junction overlap, plate adaptation, and maxillary reconstruction. Experience with CAD/CAM for osseous reconstruction has identified tools for technique optimization and cases where this technology may prove beneficial over existing methods. Knowledge of these facts may contribute to improved use and mainstream adoption of CAD/CAM virtual surgical planning by reconstructive surgeons.
Measurement of Intestinal and Peripheral Cholesterol Fluxes by a Dual-Tracer Balance Method
Ronda, Onne A H O; van Dijk, Theo H; Verkade, H J; Groen, Albert K
2016-01-01
Long-term elevated plasma cholesterol levels put individuals at risk for developing atherosclerosis. Plasma cholesterol levels are determined by the balance between cholesterol input and output fluxes. Here we describe in detail the methodology to determine the different cholesterol fluxes in mice.
SUMOFLUX: A Generalized Method for Targeted 13C Metabolic Flux Ratio Analysis.
Directory of Open Access Journals (Sweden)
Maria Kogadeeva
2016-09-01
Full Text Available Metabolic fluxes are a cornerstone of cellular physiology that emerge from a complex interplay of enzymes, carriers, and nutrients. The experimental assessment of in vivo intracellular fluxes using stable isotopic tracers is essential if we are to understand metabolic function and regulation. Flux estimation based on 13C or 2H labeling relies on complex simulation and iterative fitting; processes that necessitate a level of expertise that ordinarily preclude the non-expert user. To overcome this, we have developed SUMOFLUX, a methodology that is broadly applicable to the targeted analysis of 13C-metabolic fluxes. By combining surrogate modeling and machine learning, we trained a predictor to specialize in estimating flux ratios from measurable 13C-data. SUMOFLUX targets specific flux features individually, which makes it fast, user-friendly, applicable to experimental design and robust in terms of experimental noise and exchange flux magnitude. Collectively, we predict that SUMOFLUX's properties realistically pave the way to high-throughput flux analyses.
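The surrogate-modeling-plus-learning idea can be sketched with a deliberately toy forward model. The pathway signature distributions and the ridge-regularized linear predictor below are illustrative assumptions, not SUMOFLUX's actual simulator or learner: labeling data are simulated for sampled flux ratios, then a predictor is fit to map measurable data back to the ratio.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_mdv(ratio, rng):
    """Toy forward model (assumed for illustration): the product's
    mass-isotopomer distribution is a ratio-weighted mixture of two
    hypothetical pathway signatures, plus measurement noise."""
    path_a = np.array([0.1, 0.6, 0.3])
    path_b = np.array([0.7, 0.2, 0.1])
    return ratio * path_a + (1.0 - ratio) * path_b + rng.normal(0.0, 0.01, 3)

# surrogate-modeling step: sample flux ratios and simulate measurable data
train_ratios = rng.uniform(0.0, 1.0, 2000)
train_x = np.array([simulate_mdv(r, rng) for r in train_ratios])

# learning step: fit a ridge-regularized linear predictor
design = np.hstack([train_x, np.ones((len(train_x), 1))])
w = np.linalg.solve(design.T @ design + 1e-6 * np.eye(4),
                    design.T @ train_ratios)

def predict_ratio(mdv):
    """Predict the flux ratio from a measured mass-isotopomer distribution."""
    return float(np.append(mdv, 1.0) @ w)
```

Because the trained predictor only needs a forward pass at analysis time, the expensive simulation happens once, up front, which is the point of the surrogate approach.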
An optimized photoelectron track reconstruction method for photoelectric X-ray polarimeters
Kitaguchi, Takao; Black, Kevin; Enoto, Teruaki; Fukazawa, Yasushi; Hayato, Asami; Hill, Joanne E.; Iwakiri, Wataru B.; Jahoda, Keith; Kaaret, Philip; McCurdy, Ross; Mizuno, Tsunefumi; Nakano, Toshio; Tamagawa, Toru
2018-02-01
We present a data processing algorithm for angular reconstruction and event selection applied to 2-D photoelectron track images from X-ray polarimeters. The method reconstructs the initial emission angle of a photoelectron from the initial portion of the track, which is obtained by continuously cutting the track until the image moments or the number of pixels fall below tunable thresholds. In addition, event selection that rejects round tracks, quantified by eccentricity and circularity, is performed so that the polarimetric sensitivity, considering the trade-off between modulation factor and signal acceptance, is maximized. The modulation factors with track selection applied are 26.6 ± 0.4, 46.1 ± 0.4, 62.3 ± 0.4, and 61.8 ± 0.3% at 2.7, 4.5, 6.4, and 8.0 keV, respectively, using the same data previously analyzed by Iwakiri et al. (2016), where the corresponding numbers are 26.9 ± 0.4, 43.4 ± 0.4, 54.4 ± 0.3, and 59.1 ± 0.3%. The method improves polarimeter sensitivity by 5%-10% at the high-energy end of the band previously presented (Iwakiri et al. 2016).
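For context, the modulation factor of a cos-squared modulation curve can be estimated directly from reconstructed emission angles via the 2-phi Fourier moment. This is a generic estimator sketch, not the instrument pipeline from the paper.

```python
import numpy as np

def modulation_factor(angles):
    """For angles distributed as p(phi) ~ 1 + mu*cos(2*(phi - phi0)),
    E[exp(2j*phi)] = (mu/2)*exp(2j*phi0), so mu = 2*|mean(exp(2j*phi))|."""
    z = np.exp(2j * np.asarray(angles))
    return 2.0 * abs(z.mean())

def polarization_angle(angles):
    """Polarization phase phi0 from the same Fourier moment."""
    z = np.exp(2j * np.asarray(angles))
    return 0.5 * np.angle(z.mean())
```

The same moment gives both the modulation amplitude and the polarization angle, without binning the angles into a histogram and fitting.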
Study of reconstruction methods for a time projection chamber with GEM gas amplification system
International Nuclear Information System (INIS)
Diener, R.
2006-12-01
A new e+e− linear collider with an energy range up to 1 TeV is planned in an international collaboration: the International Linear Collider (ILC). This collider will be able to do precision measurements of the Higgs particle and of physics beyond the Standard Model. In the Large Detector Concept (LDC) - which is one proposal for a detector at the ILC - a Time Projection Chamber (TPC) is foreseen as the main tracking device. To meet the requirements on the resolution and to be able to work in the environment at the ILC, the application of new gas amplification technologies in the TPC is necessary. One option is an amplification system based on Gas Electron Multipliers (GEMs). Due to the small spatial width of the signals, in comparison with older technologies, this technology poses new requirements on the readout structures and the reconstruction methods. In this work, the performance and the systematics of different reconstruction methods have been studied, based on data measured with a TPC prototype in high magnetic fields of up to 4 T and data from a Monte Carlo simulation. The latest results on the achievable point resolution are presented and their limitations have been investigated. (orig.)
A new method for reconstruction of cross-sections using Tucker decomposition
Luu, Thi Hieu; Maday, Yvon; Guillo, Matthieu; Guérin, Pierre
2017-09-01
The full representation of a d-variate function requires storage that grows exponentially with the dimension d, and entails high computational cost. In order to reduce these complexities, function approximation methods (called reconstruction in our context) are proposed, such as interpolation and approximation. Traditional interpolation models, like the multilinear one, suffer from this dimensionality problem. To deal with it, we propose a new model based on the Tucker format, a low-rank tensor approximation method, called here the Tucker decomposition. The Tucker decomposition is built as a tensor product of one-dimensional spaces whose one-variate basis functions are constructed by an extension of the Karhunen-Loève decomposition to high-dimensional spaces. Using this technique, we can acquire, direction by direction, the most important information of the function and convert it into a small number of basis functions. Hence, the approximation of a given function needs less data than the multilinear model. Results of a test case on neutron cross-section reconstruction demonstrate that the Tucker decomposition achieves better accuracy while using less data than multilinear interpolation.
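A common way to build a Tucker approximation is the truncated higher-order SVD; this is an assumed stand-in for illustration, since the paper constructs its one-variate bases via a Karhunen-Loève extension instead.

```python
import numpy as np

def hosvd(t, ranks):
    """Truncated HOSVD: t ~ core x1 U1 x2 U2 x3 U3 with per-mode bases U_n."""
    factors = []
    for mode, r in enumerate(ranks):
        unfold = np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(u[:, :r])
    core = t
    for u in factors:
        # contract the current leading mode with its basis; tensordot cycles
        # the modes, so the original ordering is restored after d contractions
        core = np.tensordot(core, u, axes=([0], [0]))
    return core, factors

def tucker_to_full(core, factors):
    """Expand a (core, factors) Tucker representation back to a full tensor."""
    t = core
    for u in factors:
        t = np.tensordot(t, u, axes=([0], [1]))
    return t
```

For a tensor of exact multilinear rank equal to the chosen ranks, the truncated HOSVD is exact, and the (core, factors) pair needs far fewer entries than the full tensor, which is the storage saving the abstract refers to.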
Community Phylogenetics: Assessing Tree Reconstruction Methods and the Utility of DNA Barcodes
Boyle, Elizabeth E.; Adamowicz, Sarah J.
2015-01-01
Studies examining phylogenetic community structure have become increasingly prevalent, yet little attention has been given to the influence of the input phylogeny on metrics that describe phylogenetic patterns of co-occurrence. Here, we examine the influence of branch length, tree reconstruction method, and amount of sequence data on measures of phylogenetic community structure, as well as the phylogenetic signal (Pagel’s λ) in morphological traits, using Trichoptera larval communities from Churchill, Manitoba, Canada. We find that model-based tree reconstruction methods and the use of a backbone family-level phylogeny improve estimations of phylogenetic community structure. In addition, trees built using the barcode region of cytochrome c oxidase subunit I (COI) alone accurately predict metrics of phylogenetic community structure obtained from a multi-gene phylogeny. Input tree did not alter overall conclusions drawn for phylogenetic signal, as significant phylogenetic structure was detected in two body size traits across input trees. As the discipline of community phylogenetics continues to expand, it is important to investigate the best approaches to accurately estimate patterns. Our results suggest that emerging large datasets of DNA barcode sequences provide a vast resource for studying the structure of biological communities. PMID:26110886
Three-dimensional reconstruction volume: a novel method for volume measurement in kidney cancer.
Durso, Timothy A; Carnell, Jonathan; Turk, Thomas T; Gupta, Gopal N
2014-06-01
The role of volumetric estimation is becoming increasingly important in the staging, management, and prognostication of benign and cancerous conditions of the kidney. We evaluated the use of three-dimensional reconstruction volume (3DV) in determining renal parenchymal volumes (RPV) and renal tumor volumes (RTV). We compared 3DV with the currently available methods of volume assessment and determined its interuser reliability. RPV and RTV were assessed in 28 patients who underwent robot-assisted laparoscopic partial nephrectomy for kidney cancer. Patients with a preoperative creatinine level of kidney pre- and postsurgery overestimated 3D reconstruction volumes by 15% to 102% and 12% to 101%, respectively. In addition, volumes obtained from 3DV displayed high interuser reliability regardless of experience. 3DV provides a highly reliable way of assessing kidney volumes. Given that 3DV takes into account visible anatomy, the differences observed using previously published methods can be attributed to the failure of geometry to accurately approximate kidney or tumor shape. 3DV provides a more accurate, reproducible, and clinically useful tool for urologists looking to improve patient care using analysis related to volume.
Akhmadaliev, S Z; Ambrosini, G; Amorim, A; Anderson, K; Andrieux, M L; Aubert, Bernard; Augé, E; Badaud, F; Baisin, L; Barreiro, F; Battistoni, G; Bazan, A; Bazizi, K; Belymam, A; Benchekroun, D; Berglund, S R; Berset, J C; Blanchot, G; Bogush, A A; Bohm, C; Boldea, V; Bonivento, W; Bosman, M; Bouhemaid, N; Breton, D; Brette, P; Bromberg, C; Budagov, Yu A; Burdin, S V; Calôba, L P; Camarena, F; Camin, D V; Canton, B; Caprini, M; Carvalho, J; Casado, M P; Castillo, M V; Cavalli, D; Cavalli-Sforza, M; Cavasinni, V; Chadelas, R; Chalifour, M; Chekhtman, A; Chevalley, J L; Chirikov-Zorin, I E; Chlachidze, G; Citterio, M; Cleland, W E; Clément, C; Cobal, M; Cogswell, F; Colas, Jacques; Collot, J; Cologna, S; Constantinescu, S; Costa, G; Costanzo, D; Crouau, M; Daudon, F; David, J; David, M; Davidek, T; Dawson, J; De, K; de La Taille, C; Del Peso, J; Del Prete, T; de Saintignon, P; Di Girolamo, B; Dinkespiler, B; Dita, S; Dodd, J; Dolejsi, J; Dolezal, Z; Downing, R; Dugne, J J; Dzahini, D; Efthymiopoulos, I; Errede, D; Errede, S; Evans, H; Eynard, G; Fassi, F; Fassnacht, P; Ferrari, A; Ferrer, A; Flaminio, Vincenzo; Fournier, D; Fumagalli, G; Gallas, E; Gaspar, M; Giakoumopoulou, V; Gianotti, F; Gildemeister, O; Giokaris, N; Glagolev, V; Glebov, V Yu; Gomes, A; González, V; González de la Hoz, S; Grabskii, V; Graugès-Pous, E; Grenier, P; Hakopian, H H; Haney, M; Hébrard, C; Henriques, A; Hervás, L; Higón, E; Holmgren, Sven Olof; Hostachy, J Y; Hoummada, A; Huston, J; Imbault, D; Ivanyushenkov, Yu M; Jézéquel, S; Johansson, E K; Jon-And, K; Jones, R; Juste, A; Kakurin, S; Karyukhin, A N; Khokhlov, Yu A; Khubua, J I; Klioukhine, V I; Kolachev, G M; Kopikov, S V; Kostrikov, M E; Kozlov, V; Krivkova, P; Kukhtin, V V; Kulagin, M; Kulchitskii, Yu A; Kuzmin, M V; Labarga, L; Laborie, G; Lacour, D; Laforge, B; Lami, S; Lapin, V; Le Dortz, O; Lefebvre, M; Le Flour, T; Leitner, R; Leltchouk, M; Li, J; Liablin, M V; Linossier, O; Lissauer, D; Lobkowicz, F; Lokajícek, M; Lomakin, Yu 
F; López-Amengual, J M; Lund-Jensen, B; Maio, A; Makowiecki, D S; Malyukov, S N; Mandelli, L; Mansoulié, B; Mapelli, Livio P; Marin, C P; Marrocchesi, P S; Marroquim, F; Martin, P; Maslennikov, A L; Massol, N; Mataix, L; Mazzanti, M; Mazzoni, E; Merritt, F S; Michel, B; Miller, R; Minashvili, I A; Miralles, L; Mnatzakanian, E A; Monnier, E; Montarou, G; Mornacchi, Giuseppe; Moynot, M; Muanza, G S; Nayman, P; Némécek, S; Nessi, Marzio; Nicoleau, S; Niculescu, M; Noppe, J M; Onofre, A; Pallin, D; Pantea, D; Paoletti, R; Park, I C; Parrour, G; Parsons, J; Pereira, A; Perini, L; Perlas, J A; Perrodo, P; Pilcher, J E; Pinhão, J; Plothow-Besch, Hartmute; Poggioli, Luc; Poirot, S; Price, L; Protopopov, Yu; Proudfoot, J; Puzo, P; Radeka, V; Rahm, David Charles; Reinmuth, G; Renzoni, G; Rescia, S; Resconi, S; Richards, R; Richer, J P; Roda, C; Rodier, S; Roldán, J; Romance, J B; Romanov, V; Romero, P; Rossel, F; Rusakovitch, N A; Sala, P; Sanchis, E; Sanders, H; Santoni, C; Santos, J; Sauvage, D; Sauvage, G; Sawyer, L; Says, L P; Schaffer, A C; Schwemling, P; Schwindling, J; Seguin-Moreau, N; Seidl, W; Seixas, J M; Selldén, B; Seman, M; Semenov, A; Serin, L; Shaldaev, E; Shochet, M J; Sidorov, V; Silva, J; Simaitis, V J; Simion, S; Sissakian, A N; Snopkov, R; Söderqvist, J; Solodkov, A A; Soloviev, A; Soloviev, I V; Sonderegger, P; Soustruznik, K; Spanó, F; Spiwoks, R; Stanek, R; Starchenko, E A; Stavina, P; Stephens, R; Suk, M; Surkov, A; Sykora, I; Takai, H; Tang, F; Tardell, S; Tartarelli, F; Tas, P; Teiger, J; Thaler, J; Thion, J; Tikhonov, Yu A; Tisserant, S; Tokar, S; Topilin, N D; Trka, Z; Turcotte, M; Valkár, S; Varanda, M J; Vartapetian, A H; Vazeille, F; Vichou, I; Vinogradov, V; Vorozhtsov, S B; Vuillemin, V; White, A; Wielers, M; Wingerter-Seez, I; Wolters, H; Yamdagni, N; Yosef, C; Zaitsev, A; Zitoun, R; Zolnierowski, Y
2002-01-01
This paper discusses hadron energy reconstruction for the ATLAS barrel prototype combined calorimeter (consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part) in the framework of the nonparametrical method. The nonparametrical method utilizes only the known e/h ratios and the electron calibration constants and does not require the determination of any parameters by a minimization technique. Thus, this technique lends itself to easy use in a first-level trigger. The reconstructed mean values of the hadron energies are within ±1% of the true values, and the fractional energy resolution is [(58 ± 3)%/√E + (2.5 ± 0.3)%] ⊕ (1.7 ± 0.2)/E, where ⊕ denotes addition in quadrature and E is in GeV. The value of the e/h ratio obtained for the electromagnetic compartment of the combined calorimeter is 1.74 ± 0.04 and agrees with the prediction that e/h > 1.66 for this electromagnetic calorimeter. Results of a study of the longitudinal hadronic shower development are also presented. The data have been taken in the H8 beam...
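For reference, the quoted resolution parametrization can be evaluated with the central values above, reading the bracketed terms as a linear sum combined in quadrature with the noise term (an assumption consistent with standard calorimetry notation, with E in GeV):

```python
import numpy as np

def fractional_resolution(e_gev, a=0.58, b=0.025, c=1.7):
    """sigma/E = [a/sqrt(E) + b] (+) c/E, where (+) is a quadrature sum.
    Central values: a = 58%, b = 2.5%, c = 1.7 GeV (uncertainties omitted)."""
    return np.sqrt((a / np.sqrt(e_gev) + b) ** 2 + (c / e_gev) ** 2)
```

At high energy the stochastic and noise terms die away and the resolution approaches the constant term b.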
Least-square NUFFT methods applied to 2-D and 3-D radially encoded MR image reconstruction.
Song, Jiayu; Liu, Yanhui; Gewalt, Sally L; Cofer, Gary; Johnson, G Allan; Liu, Qing Huo
2009-04-01
Radially encoded MRI has gained increasing attention due to its motion insensitivity and reduced artifacts. However, because its samples are collected nonuniformly in k-space, multidimensional (especially 3-D) radially sampled MRI image reconstruction is challenging. The objective of this paper is to develop a reconstruction technique in high dimensions with on-the-fly kernel calculation. It implements general multidimensional nonuniform fast Fourier transform (NUFFT) algorithms and incorporates them into a k-space image reconstruction framework. The method is then applied to reconstruction from radially encoded k-space data, although it is applicable to any non-Cartesian pattern. Performance comparisons are made against the conventional Kaiser-Bessel (KB) gridding method for 2-D and 3-D radially encoded computer-simulated phantoms and physically scanned phantoms. The results show that the NUFFT reconstruction method has a better accuracy-efficiency tradeoff than the KB gridding method when the kernel weights are calculated on the fly. It is found that for a particular conventional kernel function, using its corresponding deapodization function as a scaling factor in the NUFFT framework has the potential to improve accuracy. In particular, when a cosine scaling factor is used, the NUFFT method is faster than the KB gridding method, since a closed-form solution is available and is less computationally expensive than the KB kernel (KB gridding requires computation of Bessel functions). The NUFFT method has been successfully applied to 2-D and 3-D in vivo studies on small animals.
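To make the operator that a NUFFT approximates concrete, here is a deliberately slow direct nonuniform DFT and its exact adjoint; a real NUFFT replaces these O(MN) sums with gridding/FFT plus deapodization. This is an illustrative sketch, not the paper's implementation.

```python
import numpy as np

def ndft2(image, kx, ky):
    """Direct 2-D nonuniform DFT: sample the image spectrum at (kx, ky),
    with k in cycles/pixel and spatial coordinates centered in the image."""
    ny, nx = image.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    yy = yy - ny // 2
    xx = xx - nx // 2
    phase = np.exp(-2j * np.pi * (np.outer(kx, xx.ravel()) +
                                  np.outer(ky, yy.ravel())))
    return phase @ image.ravel().astype(complex)

def ndft2_adjoint(data, kx, ky, shape):
    """Exact adjoint (conjugate transpose) of ndft2."""
    ny, nx = shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    yy = yy - ny // 2
    xx = xx - nx // 2
    phase = np.exp(2j * np.pi * (np.outer(xx.ravel(), kx) +
                                 np.outer(yy.ravel(), ky)))
    return (phase @ data).reshape(shape)
```

The adjoint pairing <A x, d> = <x, A^H d> is what iterative k-space reconstruction frameworks rely on, and it is easy to verify numerically for this pair.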
Energy Technology Data Exchange (ETDEWEB)
Lewicki, J.L.; Bergfeld, D.; Cardellini, C.; Chiodini, G.; Granieri, D.; Varley, N.; Werner, C.
2004-04-27
We present a comparative study of soil CO₂ flux (F_CO2) measured by five groups (Groups 1-5) at the IAVCEI-CCVG Eighth Workshop on Volcanic Gases on Masaya volcano, Nicaragua. Groups 1-5 measured F_CO2 using the accumulation chamber method at 5-m spacing within a 900 m² grid during a morning (AM) period. These measurements were repeated by Groups 1-3 during an afternoon (PM) period. All measured F_CO2 ranged from 218 to 14,719 g m⁻² d⁻¹. Arithmetic means and associated CO₂ emission rate estimates for the AM data sets varied between groups by ±22%. The variability of the five measurements made at each grid point ranged from ±5 to 167% and increased with the arithmetic mean. Based on a comparison of measurements made by Groups 1-3 during AM and PM times, this variability is likely due in large part to natural temporal variability of gas flow, rather than to measurement error. We compared six geostatistical methods (arithmetic and minimum variance unbiased estimator means of uninterpolated data, and arithmetic means of data interpolated by the multiquadric radial basis function, ordinary kriging, multi-Gaussian kriging, and sequential Gaussian simulation methods) to estimate the mean and associated CO₂ emission rate of one data set and to map the spatial F_CO2 distribution. While the CO₂ emission rates estimated using the different techniques varied by only ±1.1%, the F_CO2 maps showed important differences. We suggest that the sequential Gaussian simulation method yields the most realistic representation of the spatial distribution of F_CO2 and is most appropriate for volcano monitoring applications.
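One of the interpolators compared above, the multiquadric radial basis function, fits in a few lines. This is a generic sketch, not the authors' geostatistical code, and the shape parameter eps is an assumed value.

```python
import numpy as np

def multiquadric_interpolate(pts, vals, query, eps=5.0):
    """Exact multiquadric RBF interpolation, phi(r) = sqrt(r^2 + eps^2).
    pts: (n, 2) measurement locations; vals: (n,) fluxes; query: (m, 2)."""
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    weights = np.linalg.solve(np.sqrt(d ** 2 + eps ** 2), vals)
    dq = np.linalg.norm(query[:, None, :] - pts[None, :, :], axis=-1)
    return np.sqrt(dq ** 2 + eps ** 2) @ weights
```

A grid-averaged emission-rate estimate then follows as the mean of the interpolated flux over a dense query grid times the grid area; by construction the interpolant reproduces the measured values at the station locations.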
Directory of Open Access Journals (Sweden)
H. Hasegawa
2004-04-01
A recently developed technique for reconstructing approximately two-dimensional (∂/∂z ≈ 0), time-stationary magnetic field structures in space is applied to two magnetopause traversals on the dawnside flank by the four Cluster spacecraft, when the spacecraft separation was about 2000 km. The method consists of solving the Grad-Shafranov equation for magnetohydrostatic structures, using plasma and magnetic field data measured along a single spacecraft trajectory as spatial initial values. We assess the usefulness of this single-spacecraft-based technique by comparing the magnetic field maps produced from one spacecraft with the field vectors that the other spacecraft actually observed. For an optimally selected invariant (z) axis, the correlation between the field components predicted from the reconstructed map and the corresponding measured components reaches more than 0.97. This result indicates that the reconstruction technique predicts conditions at the other spacecraft locations quite well.
The optimal invariant axis is relatively close to the intermediate variance direction, computed from minimum variance analysis of the measured magnetic field, and is generally well determined with respect to rotations about the maximum variance direction but less well with respect to rotations about the minimum variance direction. In one of the events, field maps recovered individually for two of the spacecraft, which crossed the magnetopause with an interval of a few tens of seconds, show substantial differences in configuration. By comparing these field maps, time evolution of the magnetopause structures, such as the formation of magnetic islands, motion of the structures, and thickening of the magnetopause current layer, is discussed.
Key words. Magnetospheric physics (Magnetopause, cusp, and boundary layers) – Space plasma physics (Experimental and mathematical techniques; Magnetic reconnection)
Samoilova, Svetlana V; Balin, Yurii S; Krekova, Margarita M; Winker, David M
2005-06-10
Inversion of polarization lidar sensing data based on the lidar sensing equation with allowance for multiple-scattering contributions calls for a priori information on the scattering phase matrix. In the present study, the parameters of the Stokes vectors for various propagation media, including those with scattering phase matrices that vary along the measuring range, are investigated. It is demonstrated that, in spaceborne lidar sensing, a simple parameterization of the multiple-scattering contribution is applicable and that the polarization signal's characteristics depend mainly on the lidar and depolarization ratios, whereas differences in the angular dependences of the matrix components are no longer determining factors. An algorithm for simultaneous reconstruction of the profiles of the backscattering coefficient and the depolarization and lidar ratios in an inhomogeneous medium is suggested. Specific features of the method are analyzed on examples of interpretation of lidar signal profiles calculated by the Monte Carlo method and measured experimentally.
DEFF Research Database (Denmark)
Svendsen, Peter Limkilde; Andersen, Ole Baltazar; Nielsen, Allan Aasbjerg
2012-01-01
Ocean satellite altimetry has provided global sets of sea level data for the last two decades, allowing determination of spatial patterns in global sea level. For reconstructions going back further than this period, tide gauge data can be used as a proxy for the model. We examine different methods...... to spatial distribution, and tide gauge data are available around the Arctic Ocean, which may be important for a later high-latitude reconstruction....
MO-DE-207A-11: Sparse-View CT Reconstruction Via a Novel Non-Local Means Method
International Nuclear Information System (INIS)
Chen, Z; Qi, H; Wu, S; Xu, Y; Zhou, L
2016-01-01
Purpose: Sparse-view computed tomography (CT) reconstruction is an effective strategy to reduce the radiation dose delivered to patients. Due to the insufficiency of measurements, traditional non-local means (NLM) based reconstruction methods often lead to over-smoothness in image edges. To address this problem, an adaptive NLM reconstruction method based on rotational invariance (RIANLM) is proposed. Methods: The method consists of four steps: 1) initializing parameters; 2) algebraic reconstruction technique (ART) reconstruction using raw projection data; 3) positivity constraint of the image reconstructed by ART; 4) update of the reconstructed image by RIANLM filtering. In RIANLM, a novel rotationally invariant similarity metric is proposed and used to calculate the distance between two patches. In this way, any patch with similar structure but different orientation to the reference patch receives a relatively large weight, avoiding an over-smoothed image. Moreover, the parameter h in RIANLM, which controls the decay of the weights, is adaptive to avoid over-smoothness, while in NLM it is not adaptive during the reconstruction process. The proposed method, named ART-RIANLM, is validated on a Shepp-Logan phantom and clinical projection data. Results: In our experiments, the searching neighborhood size is set to 15 by 15 and the similarity window to 3 by 3. For the simulated case with a 256 by 256 Shepp-Logan phantom, ART-RIANLM produces a higher SNR (35.38 dB versus 24.00 dB) and lower MAE (0.0006 versus 0.0023) than ART-NLM. Visual inspection demonstrated that the proposed method suppresses artifacts and noise more effectively and preserves image edges better. Similar results were found for the clinical data case. Conclusion: A novel ART-RIANLM method for sparse-view CT reconstruction is presented with superior image quality. Compared to the conventional ART-NLM method, the SNR from ART-RIANLM increases by 47% and the MAE decreases by 74%.
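The ART-plus-positivity portion of such a loop can be sketched generically. These are illustrative Kaczmarz sweeps over a dense system matrix, not the clinical implementation, and the RIANLM filtering step is omitted.

```python
import numpy as np

def art_sweep(a_mat, b, x, relax=1.0):
    """One ART (Kaczmarz) sweep: project x onto each row's hyperplane."""
    for i in range(a_mat.shape[0]):
        row = a_mat[i]
        nrm = row @ row
        if nrm > 0:
            x = x + relax * (b[i] - row @ x) / nrm * row
    return x

def art_reconstruct(a_mat, b, n_sweeps=50, relax=1.0):
    """Alternate ART sweeps with the positivity constraint (step 3 above)."""
    x = np.zeros(a_mat.shape[1])
    for _ in range(n_sweeps):
        x = art_sweep(a_mat, b, x, relax)
        np.maximum(x, 0.0, out=x)   # positivity constraint
    return x
```

Because the nonnegative orthant contains the true image, the positivity projection never increases the distance to the solution, so it can be interleaved freely with the sweeps.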
Jeong, Seung Jun; Hong, Chung Ki
2008-06-01
We present an effective method for pixel-size-maintained reconstruction of images on arbitrarily tilted planes in digital holography. The method is based on the plane wave expansion of the diffraction wave fields and a three-axis rotation of the wave vectors. The images on the tilted planes are reconstructed without loss of the frequency content of the hologram and have the same pixel sizes. Our method gives good results in the extreme cases of large tilt angles and in regions closer than the paraxial approximation allows. The effectiveness of the method is demonstrated by both simulation and experiment.
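The plane-wave (angular spectrum) decomposition that the method builds on can be sketched for the untilted case; the paper's contribution, the three-axis rotation of the wave vectors, would act on (kx, ky, kz) between the forward and inverse transforms. Grid units below are illustrative.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, dz):
    """Propagate a sampled complex field a distance dz to a parallel plane
    by decomposing it into plane waves and advancing each phase by kz*dz."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)   # spatial frequencies, cycles per unit
    fy = np.fft.fftfreq(ny, d=dx)
    fxx, fyy = np.meshgrid(fx, fy)
    kz_sq = (1.0 / wavelength) ** 2 - fxx ** 2 - fyy ** 2
    prop = np.where(kz_sq > 0,
                    np.exp(2j * np.pi * np.sqrt(np.abs(kz_sq)) * dz),
                    0.0)            # evanescent components are discarded
    return np.fft.ifft2(np.fft.fft2(field) * prop)
```

Since the transfer function is a pure phase for propagating components, propagating forward by dz and back by -dz returns the original field, a convenient sanity check.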
Comparison of myocardial potassium and thallium flux as studied by tracer methods.
Nitsch, J; Steinbeck, G; Lüderitz, B
1980-06-01
Although myocardial scintigraphy with thallium-201 is widely applied in humans, the behavior of thallium at the cellular level is still under discussion. We compared the transmembrane fluxes of potassium and thallium in the isolated papillary muscle of guinea pigs. A qualitative conformity exists between potassium and thallium fluxes with respect to heart rate and temperature. Quantitative comparison revealed a decreased efflux rate for thallium when compared with potassium. The time dependence of thallium influx indicates that thallium scintigraphy of the myocardium reflects mainly an extracellular distribution.
Optimized method for ureteric reconstruction in a mouse kidney transplant model.
Wang, Zane Z; Wang, Chuanmin; Cunningham, Eithne C; Allen, Richard D M; Sharland, Alexandra F; Bishop, George Alexander
2014-06-01
Murine kidney transplantation is an important model for studies of transplantation immunobiology. The most challenging aspect of the difficult surgical procedure is the ureteric anastomosis. Two different approaches to ureteric reconstruction are compared here. Method 1, Patch: this involves anastomosis of the donor ureter, together with a patch of donor bladder, to the recipient bladder. Method 2, Implant: this utilizes a 5-0 suture to pull the ureter through the bladder wall; the ureter's peripheral tissue is then fixed to the bladder wall at the implant site with 10-0 micro-sutures. In animals transplanted with the patch method, the initial success rate, defined as survival up to the third post-operative day, was 79% (n = 62), whereas the initial success rate for the implant method was 86.1% (n = 101; P = 0.28). The death rate from unknown and/or unspecified causes in the initial period was 16.1% (10/62) for the patch method and 8.9% (9/101) for the implant method (P = 0.21). The average donor/recipient operation time with the implant method was 14.8 ± 2.2/61.4 ± 4.7 min (76 min per transplant), whereas operation time with the patch method was 28.3 ± 2.4/77.8 ± 5.5 min (106 min per transplant). The implant method resulted in a lower rate of urinary leak compared with the patch method (1.1% versus 10.2%; P = 0.02). The ureteric implant method for mouse kidney transplantation is a reliable approach with at least as high a success rate as the bladder patch method and with a shorter operation time. © 2013 Royal Australasian College of Surgeons.
International Nuclear Information System (INIS)
Viana, Rodrigo Sartorelo Salemi
2014-01-01
NSECT (Neutron Stimulated Emission Computed Tomography) is a new spectrographic technique able to evaluate in vivo the concentration of elements using the inelastic scattering reaction (n,n'). Since its introduction, several improvements have been proposed with the aim of investigating applications for clinical diagnosis and reducing the absorbed dose associated with CT acquisition. In this context, two new diagnostic applications are presented, using spectroscopic and tomographic approaches from NSECT. A new methodology has also been proposed to optimize the sinogram sampling, which is directly related to the quality of the reconstruction achieved by the irradiation protocol. The studies were developed based on simulations with the MCNP5 code. Diagnosis of Renal Cell Carcinoma (RCC) and detection of breast microcalcifications were evaluated in studies conducted using a human phantom. The obtained results demonstrate the ability of the NSECT technique to detect changes in the composition of the modeled tissues as a function of the development of the evaluated pathologies. The proposed method for optimizing sinograms was able to analytically simulate the composition of the irradiated medium, allowing assessment of the reconstruction quality and effective dose in terms of the sampling rate. However, future research must be conducted to quantify the detection sensitivity according to the selected elements. (author)
Energy Technology Data Exchange (ETDEWEB)
Liu, Wenyang [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 (United States); Cheung, Yam [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas 75390 (United States); Sawant, Amit [Department of Radiation Oncology, University of Texas Southwestern, Dallas, Texas, 75390 and Department of Radiation Oncology, University of Maryland, College Park, Maryland 20742 (United States); Ruan, Dan, E-mail: druan@mednet.ucla.edu [Department of Bioengineering, University of California, Los Angeles, Los Angeles, California 90095 and Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California 90095 (United States)
2016-05-15
Purpose: To develop a robust and real-time surface reconstruction method on point clouds captured from a 3D surface photogrammetry system. Methods: The authors have developed a robust and fast surface reconstruction method on point clouds acquired by the photogrammetry system, without explicitly solving the partial differential equation required by a typical variational approach. Taking advantage of the overcomplete nature of the acquired point clouds, their method solves and propagates a sparse linear relationship from the point cloud manifold to the surface manifold, assuming both manifolds share similar local geometry. With relatively consistent point cloud acquisitions, the authors propose a sparse regression (SR) model to directly approximate the target point cloud as a sparse linear combination from the training set, assuming that the point correspondences built by the iterative closest point (ICP) algorithm are reasonably accurate and have residual errors following a Gaussian distribution. To accommodate changing noise levels and/or the presence of inconsistent occlusions during acquisition, the authors further propose a modified sparse regression (MSR) model to model the potentially large and sparse error built by ICP with a Laplacian prior. The authors evaluated the proposed method both on clinical point clouds acquired under consistent acquisition conditions and on point clouds with inconsistent occlusions. The authors quantitatively evaluated the reconstruction performance with respect to root-mean-squared error by comparing the reconstruction results against those from the variational method. Results: On clinical point clouds, both the SR and MSR models achieved sub-millimeter reconstruction accuracy and reduced the reconstruction time by two orders of magnitude, to under a second. On point clouds with inconsistent occlusions, the MSR model demonstrated its advantage in achieving consistent and robust performance despite the introduced occlusions.
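The sparse regression idea, approximating a target as a sparse linear combination of training examples, can be sketched with a generic L1-regularized solver. This is a textbook ISTA implementation, not the authors' formulation; the training clouds are assumed to be vectorized as columns of a dictionary matrix D.

```python
import numpy as np

def sparse_regression(D, y, lam=0.1, n_iter=3000):
    """Solve min_w 0.5*||D w - y||^2 + lam*||w||_1 by ISTA (iterative
    soft thresholding). Columns of D play the role of vectorized
    training point clouds; w picks a sparse combination of them."""
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ w - y)                  # gradient of the quadratic term
        w = w - g / L                          # gradient step
        w = np.sign(w) * np.maximum(np.abs(w) - lam / L, 0.0)  # soft threshold
    return w
```

With a small regularization weight, a target that truly is a sparse combination of the training set is recovered almost exactly; the MSR variant in the paper additionally models large sparse ICP errors, which this sketch omits.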
Automated gas bubble imaging at sea floor - a new method of in situ gas flux quantification
Thomanek, K.; Zielinski, O.; Sahling, H.; Bohrmann, G.
2010-06-01
Photo-optical systems are common in marine sciences and have been extensively used in coastal and deep-sea research. However, due to past technical limitations, photo images had to be processed manually or semi-automatically. Recent advances in technology have rapidly improved image recording, storage and processing capabilities, which are used in a new concept of automated in situ gas quantification by photo-optical detection. The design of an in situ high-speed image acquisition and automated data processing system ("Bubblemeter") is reported. New strategies have been followed with regard to back-light illumination, bubble extraction, automated image processing and data management. This paper presents the design of the novel method, its validation procedures and calibration experiments. The system is positioned on and recovered from the sea floor using a remotely operated vehicle (ROV). It is able to measure bubble flux rates up to 10 L/min with a maximum error of 33% under worst-case conditions. The Bubblemeter was successfully deployed at a water depth of 1023 m at the Makran accretionary prism offshore Pakistan during a research expedition with R/V Meteor in November 2007.
3D RECONSTRUCTION FROM MULTI-VIEW MEDICAL X-RAY IMAGES – REVIEW AND EVALUATION OF EXISTING METHODS
Directory of Open Access Journals (Sweden)
S. Hosseinian
2015-12-01
Full Text Available The 3D concept is extremely important in clinical studies of the human body. Accurate 3D models of bony structures are currently required in clinical routine for diagnosis, patient follow-up, surgical planning, computer-assisted surgery and biomechanical applications. However, conventional 3D medical imaging techniques such as computed tomography (CT) and magnetic resonance imaging (MRI) have serious limitations, such as imaging in non-weight-bearing positions, cost, and high radiation dose (for CT). Therefore, 3D reconstruction methods from biplanar X-ray images have been taken into consideration as reliable alternatives for obtaining accurate 3D models with low radiation dose in weight-bearing positions. Different methods have been offered for 3D reconstruction from X-ray images using photogrammetry, and these should be assessed. In this paper, after demonstrating the principles of 3D reconstruction from X-ray images, the different existing methods of 3D reconstruction of bony structures from radiographs are classified and evaluated with various metrics, and their advantages and disadvantages are mentioned. Finally, the presented methods are compared with respect to several metrics such as accuracy, reconstruction time and their applications. Each method has advantages and disadvantages which should be considered for a specific application.
INAA analysis of rocks: A routine method using Fe as an internal flux monitor
International Nuclear Information System (INIS)
Kay, R.W.; Kay, S. Mahlburg
1992-01-01
Over the past decade at Cornell, trace elements in over 2500 rocks have been analyzed by INAA. The samples, mainly volcanic rocks, have known concentrations of major elements (e.g. Si, Ti, Al, Mg, Ca, K, Fe, Na), and the last two of these (Fe and Na) are also determined by activation, using rock standards (e.g. USGS standards BCR-1, BHVO, etc.). Differences between Fe determined by INAA and that determined as part of the major element analysis are mainly attributed to volatile (H2O, CO2) loss (especially when major element analyses were done by electron microprobe on fused powders, whereas the INAA analyses were done on the powders), and to flux variability during irradiation. Instead of reporting two values for Fe, we use Fe as an internal flux monitor, with Na and the trace elements being reported relative to the given Fe value. The ratio Na/Fe is used as a sensitive check on the identity of the sample and as a monitor of alkali loss affecting the major element analysis. Other than this modification (Kay et al., 1987; also reported in Chappell and Hergt, 1989), we use an INAA method similar to that practiced by many labs. Powdered samples (about 0.5 g) are sealed in high-purity silica tubes and irradiated in the Cornell Triga reactor. Samples are counted for a minimum of 2 hours (up to 10 hours) at 7 and 40 days after irradiation. Data are reduced using a program written at Cornell, with peak and background regions that have been checked for interferences. Corrections are routinely applied for Ce (Fe), Nd (Br), Tb (Th), Eu (Ba), Lu (U), and Yb (Th) (the interfering element is in parentheses). A U fission yield correction is applied to La, Ce, Nd, and Ba. A correction for Ta introduced by grinding in WC containers can be made using known Ta/W ratios in the grinding containers; the correction amounted to 10-20% of the Ta gross peak. Recently, samples have been prepared in ceramic grinding containers; for these, no Ta correction is needed. Trace elements determined
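The internal-flux-monitor idea reduces to a single scaling step: any flux variability shifts the INAA-measured Fe and every trace element by the same factor, so rescaling to the known Fe value removes it. A minimal sketch (the concentration values below are invented for illustration):

```python
def flux_corrected(conc_inaa, fe_inaa, fe_known):
    """Report an INAA concentration relative to the known Fe value:
    multiply by the ratio of the reference Fe concentration to the
    INAA-measured one. Units of the two Fe values must match."""
    return conc_inaa * fe_known / fe_inaa

# If the irradiation flux ran 5% high, the INAA Fe reads 5% high and
# the same ratio rescales every trace element back to its true value.
la_reported = flux_corrected(conc_inaa=10.5, fe_inaa=8.4, fe_known=8.0)
```

The same ratio applied to Na then lets the Na/Fe check described above flag sample mix-ups or alkali loss independently of the flux.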
A Coarse Alignment Method Based on Digital Filters and Reconstructed Observation Vectors
Directory of Open Access Journals (Sweden)
Xiang Xu
2017-03-01
In this paper, a coarse alignment method based on apparent gravitational motion is proposed. Owing to interference from complex operating conditions, the true observation vectors, which are calculated from the apparent gravity, are contaminated. The sources of the interference are analyzed in detail, and a low-pass digital filter is then designed to eliminate the high-frequency noise of the measured observation vectors. To extract the effective observation vectors from the inertial sensors' outputs, a parameter recognition and vector reconstruction method is designed, in which an adaptive Kalman filter is employed to estimate the unknown parameters. Furthermore, a robust filter based on Huber's M-estimation theory is developed to address outliers in the measured observation vectors caused by vehicle maneuvers. A comprehensive experiment, comprising a simulation test and a physical test, is designed to verify the performance of the proposed method. The results show that the proposed method is equivalent to the popular apparent velocity method in swaying mode, but superior to current methods in moving mode when the strapdown inertial navigation system (SINS) is under entirely self-contained conditions.
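The Huber M-estimation ingredient of the robust filter can be sketched by its weight function: residuals below a threshold are treated quadratically (full weight), while larger residuals are down-weighted linearly. This is the generic Huber form, not the authors' full filter; the threshold k = 1.345 is the common tuning constant for normal errors.

```python
import numpy as np

def huber_weights(residuals, k=1.345):
    """Huber M-estimation weights: 1 for |r| <= k (quadratic regime),
    k/|r| for |r| > k (linear regime), so outliers caused by vehicle
    maneuvers influence the estimate only weakly."""
    r = np.abs(np.asarray(residuals, dtype=float))
    w = np.ones_like(r)
    mask = r > k
    w[mask] = k / r[mask]
    return w
```

In a robust filter these weights scale the measurement update, so a single maneuver-induced outlier cannot drag the alignment estimate the way it would in a plain least-squares or Kalman update.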
Kim, Jinhyuk; Kikuchi, Hiroe; Yamamoto, Yoshiharu
2013-02-01
While both ecological momentary assessment (EMA) and the day reconstruction method (DRM) have been used to overcome recall bias, a full systematic comparison of these methods has not been conducted. This study aimed to investigate the differences and correlations between momentary fatigue and mood states recorded by EMA and reconstructed ones recorded by simultaneous DRM in healthy adults. Two different designs (time-based and episode-based) of EMA and DRM were each conducted simultaneously. Twenty-five healthy adults recorded momentary fatigue and mood states with EMA and then reconstructed them with DRM. Differences between the mean and the variability of momentary and reconstructed recordings, and the correlations between them, were analysed for the different EMA designs. No significant differences were found between the mean or the variability of EMA and DRM estimated over the monitoring period. However, correlations between EMA and DRM were low, albeit statistically significant. Although the overall mean and variability of EMA recordings may be accessible with DRM, detailed changes over time of momentary fatigue and mood states are not retrieved by DRM. Statement of contribution. What is already known on this subject? The day reconstruction method (DRM) may be a reliable substitute for ecological momentary assessment (EMA) in the measurement of subjective symptoms. Remembering the context of daily activities with DRM is assumed to help in reconstructing subjective symptoms without recall bias. What does this study add? We are not able to reconstruct the diurnal time course (i.e., detailed changes over time) of subjective symptoms (e.g., fatigue and mood states in this study) with DRM, although their approximate mean and overall variability during the study period may be accessible with DRM. Reconstructed depression by DRM could be biased when subjects remembered whether their behaviour was active or inactive. © 2012 The British
DEFF Research Database (Denmark)
Sørensen, Lars Schiøtt
, established from a Bunsen burner pilot flame. This principle is somewhat in contrast to the more typical radiation-established fluxes. For instance, the ISO 9239 (DS 2000) test method is based on a gas-fired radiant panel, and in the ISO 5657 standard the ignition properties are investigated on test
Debruin, H.A.R.; Hartogensis, O.K.
2005-01-01
Evidence is presented that in the stable atmospheric surface layer turbulent fluxes of heat and momentum can be determined from the standard deviations of longitudinal wind velocity and temperature, σu and σT respectively, measured at a single level. An attractive aspect of this method is that it
Becker, F.; Seguin, B.
Climate being the result of many interconnected processes, it can hardly be understood without models which describe these various processes as quantitatively as possible and define the parameters relevant for climate studies. Among those, surface processes, and therefore surface parameters, are now recognized to be of great importance. Some examples are discussed in the first part, showing the great interest in measuring the relevant parameters on a multi-year basis, over large areas, with a sufficiently dense array and on a stable basis, in order to monitor climate changes or to study the climatic impact of modifications of some of the relevant parameters analysed here. Since space observations from satellites fulfil these requirements, it is clear that they will very soon become a fundamental tool for climate studies. Unfortunately, as discussed in the second part, satellites measure only spectral radiances at the top of the atmosphere, and the determination of the relevant surface parameters (or fluxes) from these radiances still raises many problems which have to be solved, although much progress has already been made. The aim of this paper is therefore to review and discuss these problems and the various ways they have been tackled until now. The first part is devoted to an overview of what needs to be measured and why, while the existing methods for determining the most important surface parameters from space observations are presented in the second part, where particular attention is given to the theoretical and experimental validation of these methods, their limits and the problems still to be solved.
A closed-chamber method to measure greenhouse gas fluxes from dry aquatic sediments
Directory of Open Access Journals (Sweden)
L. Lesmeister
2017-06-01
Recent research indicates that greenhouse gas (GHG) emissions from dry aquatic sediments are a relevant process in the freshwater carbon cycle. However, fluxes are difficult to measure because of the often rocky substrate and the dynamic nature of the habitat. Here we tested the performance of different materials for sealing a closed chamber to stony ground in both laboratory and field experiments. Using on-site material consistently resulted in elevated fluxes. The artefact was caused both by outgassing of the material and by production of gas. The magnitude of the artefact was site dependent: the measured CO2 flux increased by between 10 and 208%. Errors due to incomplete sealing proved to be more severe than errors due to non-inert sealing material. Pottery clay as sealing material provided a tight seal between the chamber and the ground, and no production of gases was detected. With this approach it is possible to obtain reliable gas fluxes from hard-substrate sites without using a permanent collar. Our test experiments confirmed that CO2 fluxes from dry aquatic sediments are similar to CO2 fluxes from terrestrial soils.
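The flux computation behind any closed-chamber measurement is the standard relation F = (dC/dt)·V/A, with the concentration slope converted from ppm to molar units via the ideal gas law. This is the textbook formula, not one taken from this paper, and the chamber dimensions below are illustrative.

```python
import numpy as np

def chamber_flux(t_s, c_ppm, volume_m3, area_m2,
                 temp_k=293.15, pressure_pa=101325.0):
    """Closed-chamber gas flux (mol m^-2 s^-1): fit a line to the
    concentration time series, convert the slope from ppm/s to
    mol m^-3 s^-1 with the ideal gas law, and scale by V/A."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    slope_ppm_s = np.polyfit(np.asarray(t_s, float), np.asarray(c_ppm, float), 1)[0]
    mol_per_m3_per_ppm = 1e-6 * pressure_pa / (R * temp_k)
    return slope_ppm_s * mol_per_m3_per_ppm * volume_m3 / area_m2
```

The sealing artefacts described above enter through the slope term: outgassing or leakage changes dC/dt directly, which is why a tight, inert seal matters more than any other part of the setup.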
Directory of Open Access Journals (Sweden)
Mohammad Hamiruce Marhaban
2012-10-01
With their highly robust nature and simple design, switched reluctance machines are finding their way into numerous modern-day applications. However, they produce oscillatory torque that generates torque ripple and mechanical vibrations. A double-rotor structure that maximizes the flux linkage, and thereby increases the torque-generating capability, is proposed. As the machine operates close to saturation, the torque computation depends heavily on the energy conversion as the rotor rolls over the stator for a fixed pole pitch. The flux linkage characteristics are highly non-linear, so estimation of the magnetic and mechanical parameters is extremely cumbersome. Magnetic circuit analysis, by interpretation of the number of flux tubes using integration techniques at different positions of the machine, is presented to develop the flux linkage characteristics of the double-rotor structure. Computation of the inductances during the movement of the rotor from unaligned to aligned positions is crucial in determining the generated torque. The relevant equations for inductance and flux linkage in the aligned, partially aligned and unaligned positions are computed. The partially aligned computation is based on the average of two intermediate positions, namely the 1/4-aligned and 3/4-aligned conditions. The static torque characteristics based on energy conversion principles are used to compute the torque value. Results from simulation and experiments used for performance evaluation of the proposed flux tube analysis for computation of the electromagnetic torque are presented.
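The link between the position-dependent inductance and the generated torque can be sketched in the magnetically linear case, where T = ½·i²·dL/dθ. The paper's flux-tube analysis is needed precisely because the real machine operates near saturation, where this simple formula no longer holds; the sinusoidal inductance profile below is an illustrative assumption.

```python
import numpy as np

def reluctance_torque(i, theta, L_of_theta, dtheta=1e-4):
    """Static torque of a reluctance machine from the co-energy in the
    magnetically linear case: T = 0.5 * i^2 * dL/dtheta, with the
    inductance derivative taken by central difference."""
    dL = (L_of_theta(theta + dtheta) - L_of_theta(theta - dtheta)) / (2 * dtheta)
    return 0.5 * i ** 2 * dL

# Illustrative inductance profile between unaligned (L_u) and aligned (L_a).
L_u, L_a = 0.01, 0.05  # henries, assumed values
L = lambda th: 0.5 * (L_a + L_u) - 0.5 * (L_a - L_u) * np.cos(th)
```

Torque is zero at the fully aligned and unaligned positions (dL/dθ = 0 there) and peaks in between, which is the origin of the torque ripple the abstract mentions.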
Qin, Shunda; Ge, Hongxia; Cheng, Rongjun
2018-02-01
In this paper, a new lattice hydrodynamic model is proposed by taking the delay feedback and flux change rate effect into account in a single lane. The linear stability condition of the new model is derived by control theory. Using the nonlinear analysis method, the mKdV equation near the critical point is deduced to describe the traffic congestion. Numerical simulations are carried out to demonstrate the advantage of the new model in suppressing traffic jams when the flux change rate effect is considered in the delay feedback model.
A DATA DRIVEN METHOD FOR FLAT ROOF BUILDING RECONSTRUCTION FROM LiDAR POINT CLOUDS
Directory of Open Access Journals (Sweden)
A. Mahphood
2017-09-01
3D building modeling is one of the most important applications in photogrammetry and remote sensing. Airborne LiDAR (Light Detection And Ranging) is one of the primary information sources for building modeling. In this paper, a new data-driven method is proposed for 3D building modeling of flat roofs. First, roof segmentation is implemented using a region growing method; the distance between roof points and the height difference of the points are utilized in this step. Next, the building edge points are detected using a new method that employs grid data, and the roof lines are then regularized using a straight-line approximation; the centroid point and direction of each line are estimated in this step. Finally, the 3D model is reconstructed by integrating the roof and wall models. In the end, a qualitative and quantitative assessment of the proposed method is carried out. The results show that the proposed method can successfully and automatically model flat-roof buildings from LiDAR point clouds.
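The region growing step can be illustrated on a rasterized height grid: starting from a seed cell, neighbours are added while their height difference stays below a threshold. This is a simplified grid version of the point-based criterion in the paper; the grid, seed and threshold below are illustrative.

```python
import numpy as np
from collections import deque

def grow_roof(height, seed, dz=0.2):
    """Region growing on a height grid: starting from a seed cell, add
    4-neighbours whose height differs from the current cell by less
    than dz, so a flat roof is separated from the ground below it."""
    h, w = height.shape
    seg = np.zeros((h, w), dtype=bool)
    seg[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not seg[nr, nc]
                    and abs(height[nr, nc] - height[r, c]) < dz):
                seg[nr, nc] = True
                queue.append((nr, nc))
    return seg

# A 6x6 grid with a flat 10 m roof patch surrounded by 0 m ground.
z = np.zeros((6, 6)); z[1:4, 1:5] = 10.0
roof = grow_roof(z, seed=(2, 2))
```

The 10 m jump at the roof edge exceeds dz, so growth stops exactly at the building outline; the detected boundary cells then feed the edge detection and line regularization steps.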
New method of three-dimensional reconstruction from two-dimensional MR data sets
International Nuclear Information System (INIS)
Wrazidlo, W.; Schneider, S.; Brambs, H.J.; Richter, G.M.; Kauffmann, G.W.; Geiger, B.; Fischer, C.
1989-01-01
In medical diagnosis and therapy, cross-sectional images are obtained by means of US, CT, or MR imaging. The authors propose a new solution to the problem of constructing a shape over a set of cross-sectional contours from two-dimensional (2D) MR data sets. The authors' method reduces the problem of constructing a shape over the cross sections to one of constructing a sequence of partial shapes, each of them connecting two cross sections lying on adjacent planes. The solution makes use of the Delaunay triangulation, which is isomorphic in that specific situation. The authors compute this Delaunay triangulation. Shape reconstruction is then achieved section by section by pruning the Delaunay triangulations.
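The idea of connecting two adjacent contours through a Delaunay triangulation can be shown with a toy example: triangulate the combined planar points and keep only the triangles that use vertices from both contours. This is an illustrative simplification, not the paper's exact construction or pruning rule.

```python
import numpy as np
from scipy.spatial import Delaunay

def connect_contours(c0, c1):
    """Build a triangulated band between two stacked contours: compute
    the 2D Delaunay triangulation of the combined (x, y) points and
    keep (a crude 'pruning') only triangles touching both contours."""
    pts = np.vstack([c0, c1])
    tri = Delaunay(pts)
    n0 = len(c0)
    band = [t for t in tri.simplices if (t < n0).any() and (t >= n0).any()]
    return np.array(band)

# Two concentric circular contours standing in for adjacent MR slices.
th = np.linspace(0, 2 * np.pi, 16, endpoint=False)
inner = np.c_[np.cos(th), np.sin(th)]           # contour on slice z
outer = np.c_[1.5 * np.cos(th), 1.5 * np.sin(th)]  # contour on slice z+1
band = connect_contours(inner, outer)
```

Lifting each contour to its own z plane turns the retained triangles into the partial surface between the two sections; stacking such bands section by section yields the full shape.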
Tomographic apparatus and method for reconstructing planar slices from non-absorbed radiation
International Nuclear Information System (INIS)
1976-01-01
In a tomographic apparatus and method for reconstructing two-dimensional planar slices from linear projections of non-absorbed radiation, useful in the fields of medical radiology, microscopy, and non-destructive testing, a beam of radiation in the shape of a fan is passed through an object lying in the same quasi-plane as the object slice, and the non-absorption thereof is recorded on oppositely situated detectors aligned with the source of radiation. There is relative rotation between the source-detector configuration and the object within the quasi-plane. Periodic values of the detected radiation are taken, convolved with certain functions, and back-projected to produce a two-dimensional output picture on a visual display illustrating a facsimile of the object slice. A series of two-dimensional pictures obtained simultaneously or serially can be combined to produce a three-dimensional portrayal of the entire object.
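The convolve-and-back-project step can be sketched with a minimal filtered backprojection. Note the apparatus described uses a fan beam; this sketch uses the simpler parallel-beam geometry (with a ramp filter applied in the frequency domain), so it illustrates the principle only.

```python
import numpy as np

def fbp(sinogram, angles_deg):
    """Minimal parallel-beam filtered backprojection: filter each
    projection with a Ram-Lak (ramp) filter, then smear ('back-project')
    it across the image along its viewing angle."""
    n_ang, n_det = sinogram.shape
    filt = np.abs(np.fft.fftfreq(n_det))        # ramp filter
    recon = np.zeros((n_det, n_det))
    x = np.arange(n_det) - n_det / 2
    xx, yy = np.meshgrid(x, x)
    for proj, ang in zip(sinogram, np.deg2rad(angles_deg)):
        p = np.fft.ifft(np.fft.fft(proj) * filt).real   # convolved projection
        s = xx * np.cos(ang) + yy * np.sin(ang) + n_det / 2
        recon += np.interp(s.ravel(), np.arange(n_det), p).reshape(n_det, n_det)
    return recon * np.pi / n_ang
```

Back-projecting the projections of a point object reproduces a sharp peak at the point's location, which is exactly the "facsimile of the object slice" the patent text describes.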
Nilsen, Gørill
2016-08-01
Seal hunting and whaling have played an important role in people's livelihoods throughout prehistory, as evidenced by rock carvings, bone remains, artifacts from aquatic animals, and hunting tools. This paper focuses on one of the more elusive resources relating to such activities: marine mammal blubber. Although marine blubber easily decomposes, the organic material has been documented from the Mesolithic Period onwards. Of particular interest in this article are the many structures from the Iron Age in Northern Norway, and from both the Bronze and Early Iron Ages on Kökar, Åland, in Finland, that exhibit traits interpreted as being related to oil rendering from marine mammal blubber. The article discusses methods used in this oil production based on historical sources, archaeological investigations and experimental reconstruction of Iron Age slab-lined pits from Northern Norway.
Uranium distribution in Baikal sediments using SSNTD method for paleoclimate reconstruction
Zhmodik, S M; Nemirovskaya, N A; Zhatnuev, N S
1999-01-01
First data on the local distribution of uranium in the core of Lake Baikal floor sediments (Academician ridge, VER-95-2, St 3 BC, 53 deg. 113'12'N/108 deg. 25'01'E) are presented in this paper. They have been obtained using (n,f)-radiography. Various forms of U-occurrence in the floor sediments are shown: evenly disseminated uranium associated with the clayey and diatomaceous components, and micro- and macroinclusions of uranium-bearing minerals, i.e. microlocations with uranium contents 10-50 times higher than the U-concentrations associated with the clayey and diatomaceous components. Relative and absolute U-concentrations can be determined for every mineral. Signs of periodicity of various orders in the U-distribution in the core of Lake Baikal floor sediments have been found. Using the (n,f)-radiography method to study Baikal floor sediments permits the gathering of new information that can be used in paleoclimate reconstruction.
Energy Technology Data Exchange (ETDEWEB)
Palacios, D.; Greaves, E. D.; Sajo B, L.; Barros, H. [Universidad Simon Bolivar, Laboratorio de Fisica Nuclear, Apdo. Postal 89000, Caracas (Venezuela, Bolivarian Republic of); Ingles, R. [Universidad Nacional de San Antonio Abad del Cusco, Av. de la Cultura No. 733, Cusco (Peru)
2010-02-15
A method to determine the flux and angular distribution of thermal neutrons using LR-115 detectors was developed. The method involves the exposure of a pressed boric acid sample (tablet) as a target, in tight contact with the track detector, to a flux of thermalized neutrons. The self-absorption effects in thin-film or foil-type thermal neutron detectors can be neglected by using the LR-115 detector and boric acid tablet setup operated via backside irradiation. The energy window and the critical angle-residual energy curve were determined by comparison of the experimental and simulated track parameters. A computer program was developed to calculate the detector registration efficiency, so that the thermal neutron flux can be calculated from the track densities induced in the LR-115 detector using the derived empirical formula. The proposed setup can serve as a directional detector of thermal neutrons. (Author)
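The final conversion from track density to flux has a generic form, sketched below: flux equals track density divided by the registration efficiency times the exposure time. This is the standard relation for track detectors, not the paper's derived empirical formula, and the numbers in the example are invented.

```python
def thermal_flux(track_density_cm2, efficiency, exposure_s):
    """Thermal neutron flux (n cm^-2 s^-1) from the track density
    (tracks cm^-2) induced in an LR-115 detector:
    phi = rho / (eps * t), where eps is the registration efficiency
    (tracks per incident neutron) computed by the detector model."""
    return track_density_cm2 / (efficiency * exposure_s)

# Illustrative: 1e4 tracks/cm^2 after a 1e4 s exposure at eps = 1e-3.
phi = thermal_flux(track_density_cm2=1e4, efficiency=1e-3, exposure_s=1e4)
```

The heavy lifting in the paper is computing the efficiency (via the energy window and critical-angle curve); once it is known, the flux itself follows from this one-line relation.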
[Reconstruction method of language pathways in the preoperative planning of brain tumor surgery].
Yan, Jing; Lu, Junfeng; Cheng, Jingliang; Wu, Jinsong; Zhang, Jie; Wang, Chaoyan; Nie, Yunfei; Pang, Beibei; Liu, Xianzhi
2015-05-01
To propose a clinically practical and simple fiber tracking method for language pathways, and to explore its feasibility in preoperative planning for brain tumors adjacent to the language cortex. Diffusion tensor imaging was performed in 18 healthy subjects and 13 patients with brain tumors adjacent to the language cortex between December 2013 and June 2014. The associated fibers of the language pathways were reconstructed using commercial software (Syngo workstation). First, the feasibility of the fiber tracking method for language pathways was studied in healthy subjects, and its application was then assessed in patients with brain tumors. The anatomic relationship between the tumors and the associated fibers was analyzed. By selecting appropriate regions of interest, the associated fibers in the dorsal pathways (superior longitudinal fasciculus/arcuate fasciculus, including both direct and indirect pathways) and ventral pathways (uncinate fasciculus, middle longitudinal fasciculus, inferior longitudinal fasciculus and inferior fronto-occipital fasciculus) were reconstructed in all 18 healthy subjects. In patients with brain tumors, the relationships between the tumors and the adjacent associated fibers fell into two types: the adjacent associated fibers could be displaced or separated, involving the superior longitudinal fasciculus/arcuate fasciculus (n=6), middle longitudinal fasciculus (n=4), uncinate fasciculus (n=3), inferior longitudinal fasciculus (n=3) and inferior fronto-occipital fasciculus (n=2); alternatively, the adjacent associated fibers were infiltrated or destroyed, involving the inferior fronto-occipital fasciculus (n=10), uncinate fasciculus (n=8), middle longitudinal fasciculus (n=5), inferior longitudinal fasciculus (n=4) and superior longitudinal fasciculus/arcuate fasciculus (n=3). The associated fibers of the language pathways could be visualized rapidly and in real time by fiber tracking based on diffusion tensor imaging. This is
International Nuclear Information System (INIS)
Müller, Kerstin; Schwemmer, Chris; Hornegger, Joachim; Zheng Yefeng; Wang Yang; Lauritsch, Günter; Rohkohl, Christopher; Maier, Andreas K.; Schultz, Carl; Fahrig, Rebecca
2013-01-01
Purpose: For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated. Methods: Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results: The quantitative evaluation of all
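Of the four interpolators compared for densifying the sparse motion vector field, Shepard's method is the simplest to illustrate. The sketch below is a minimal, hypothetical 2D version (the paper's motion vector fields are 3D and time-resolved; the control points, vectors, and power parameter are made up):

```python
def shepard_interpolate(points, vectors, query, power=2, eps=1e-12):
    """Shepard's method: inverse-distance-weighted average of sparse
    motion vectors defined at control `points`, evaluated at `query`."""
    weights, accum = 0.0, [0.0] * len(vectors[0])
    for p, v in zip(points, vectors):
        d2 = sum((qi - pi) ** 2 for qi, pi in zip(query, p))
        if d2 < eps:                      # query coincides with a control point
            return list(v)
        w = d2 ** (-power / 2.0)          # weight = 1 / distance^power
        weights += w
        accum = [a + w * vi for a, vi in zip(accum, v)]
    return [a / weights for a in accum]

# Two control points with opposite x-motion; the midpoint motion averages out.
pts = [(0.0, 0.0), (2.0, 0.0)]
vecs = [(1.0, 0.0), (-1.0, 0.0)]
print(shepard_interpolate(pts, vecs, (1.0, 0.0)))  # [0.0, 0.0]
```

Thin-plate splines and the smoothed weighting function evaluated in the paper differ only in how these weights are constructed.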
Pin power reconstruction of HANARO fuel assembly via gamma scanning and tomography method
International Nuclear Information System (INIS)
Seo, Chul Gyo; Park, Chang Je; Cho, Nam Zin; Kim, Hark Rho
2001-01-01
To determine the pin power distribution without disassembling the fuel, HANARO fuel assemblies are gamma-scanned and the distribution is then reconstructed using a tomography method. The iterative least squares method (ILSM) and the wavelet singular value decomposition method (WSVD) are chosen to solve the problem. An optimal convergence criterion is used to stop the iteration algorithm and overcome the potential divergence in ILSM. WSVD gives better results than ILSM, and the average values from the two methods give the best results. The root mean square errors (RMSE) relative to the reference data are 5.1, 6.6, 5.0, 6.5, and 6.4% and the maximum relative errors are 10.2, 13.7, 12.2, 13.6, and 14.3%, respectively. It is found that the effect of random positions of the pins is important. Although this effect can be accommodated by iterative calculations simulating the random positions, the use of experimental equipment with a slit covering the whole horizontal range of the assembly is recommended to obtain more accurate results. We built a new apparatus based on the results of this study and are conducting an experiment to obtain more accurate results. (author)
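The iterative least squares step can be illustrated with a toy Landweber iteration, a generic form of iterative least-squares solving (the 2x2 "system matrix", measurements, and step size below are illustrative, not the HANARO scan geometry):

```python
def landweber(A, b, steps=500, tau=0.1):
    """Iterative least squares (Landweber): x <- x + tau * A^T (b - A x),
    a simple stand-in for an ILSM-style solver on a tiny linear system."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += tau * sum(A[i][j] * r[i] for i in range(m))
    return x

# Two "pins" with powers 2 and 3, measured as a row sum and the first pin alone.
A = [[1.0, 1.0],
     [1.0, 0.0]]
b = [5.0, 2.0]
print([round(v, 3) for v in landweber(A, b)])  # [2.0, 3.0]
```

Stopping such an iteration early, as with the paper's optimal convergence criterion, regularizes the solution when the measurements are noisy.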
International Nuclear Information System (INIS)
Asazu, Akira; Hayashi, Masuo; Arai, Mami; Kumai, Yoshiaki; Akagi, Hiroyuki; Okayama, Katsuyoshi; Narumi, Yoshifumi
2013-01-01
In cerebral blood flow tests using N-isopropyl-p-[123I]iodoamphetamine (123I-IMP), quantitative results more accurate than those of the autoradiography (ARG) method can be obtained with attenuation and scatter correction and image reconstruction by filtered back projection (FBP). However, the cutoff frequency of the preprocessing Butterworth filter affects the quantitative value; hence, we sought an optimal cutoff frequency, derived from the correlation between the FBP method and Xenon-enhanced computed tomography (XeCT) cerebral blood flow (CBF). In this study, we reconstructed images using ordered subsets expectation maximization (OSEM), a successive-approximation method that has recently come into wide use, and also three-dimensional (3D)-OSEM, which corrects resolution by adding collimator broadening correction, to examine how changing the cutoff frequency affects the regional cerebral blood flow (rCBF) quantitative value and to determine whether successive approximation is applicable to cerebral blood flow quantification. Our results showed that more accurate quantification was obtained with 3D-OSEM reconstruction using a cutoff frequency set near 0.75-0.85 cycles/cm, which is higher than the frequency used in image reconstruction by the ordinary FBP method. (author)
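The cutoff dependence follows directly from the Butterworth magnitude response; a minimal sketch (the filter order and the non-cutoff frequencies below are illustrative, not the study's settings):

```python
def butterworth_gain(f, cutoff, order=8):
    """Magnitude response of a Butterworth low-pass filter:
    |H(f)| = 1 / sqrt(1 + (f / cutoff)^(2 * order))."""
    return 1.0 / (1.0 + (f / cutoff) ** (2 * order)) ** 0.5

# At the cutoff the gain is always 1/sqrt(2) ~ 0.707; raising the cutoff
# (e.g. 0.85 vs 0.45 cycles/cm) preserves more of the mid frequencies
# that carry quantitative information.
print(round(butterworth_gain(0.85, 0.85), 3))   # 0.707 at the cutoff
print(butterworth_gain(0.6, 0.45) < butterworth_gain(0.6, 0.85))  # True
```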
Water flux in animals: analysis of potential errors in the tritiated water method
International Nuclear Information System (INIS)
Nagy, K.A.; Costa, D.
1979-03-01
Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.
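The equation-choice error discussed above concerns the standard steady-state calculation; a minimal sketch of the constant-pool form (the pool size, activities, and interval are illustrative values, not data from the study):

```python
import math

def water_flux_ml_per_day(body_water_ml, act_initial, act_final, days):
    """Water efflux assuming a constant body-water pool W and exponential
    decline of tritium specific activity between two samples:
    flux = W * ln(A1 / A2) / t.
    Using a variant that does not match the animal's actual pool dynamics
    is the kind of equation mismatch that can exceed 100% error."""
    return body_water_ml * math.log(act_initial / act_final) / days

# A 1000 mL pool whose specific activity halves in 5 days:
# flux = 1000 * ln(2) / 5
print(round(water_flux_ml_per_day(1000.0, 800.0, 400.0, 5.0), 1))  # 138.6 mL/day
```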
Water flux in animals: analysis of potential errors in the tritiated water method
Energy Technology Data Exchange (ETDEWEB)
Nagy, K.A.; Costa, D.
1979-03-01
Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.
Shi, Xiangming; Mason, Robert P.; Charette, Matthew A.; Mazrui, Nashaat M.; Cai, Pinghe
2018-02-01
In aquatic environments, sediments are the main location of mercury methylation. Thus, accurate quantification of methylmercury (MeHg) fluxes at the sediment-water interface is vital to understanding the biogeochemical cycling of mercury, especially the toxic MeHg species, and their bioaccumulation. Traditional approaches, such as core incubations, are difficult to maintain at in-situ conditions during assays, leading to over- or underestimation of benthic fluxes. Alternatively, the 224Ra/228Th disequilibrium method for tracing the transfer of dissolved substances across the sediment-water interface has proven to be a reliable approach for quantifying benthic fluxes. In this study, the 224Ra/228Th disequilibrium and core incubation methods were compared to examine the benthic fluxes of both 224Ra and MeHg in salt marsh sediments of Barn Island, Connecticut, USA from May to August 2016. The two methods were comparable for 224Ra but contradictory for MeHg. The radiotracer approach indicated that sediments were always the dominant source of both total mercury (THg) and MeHg. The core incubation method for MeHg produced similar results in May and August, but an opposite pattern in June and July, suggesting that sediments were a sink of MeHg, contrary to the evidence of significant MeHg gradients between overlying water and porewater at the sediment-water interface. The potential reasons for such differences are discussed. Overall, we conclude that the 224Ra/228Th disequilibrium approach is preferred for estimating the benthic flux of MeHg and that sediment is indeed an important MeHg source in this marshland, and likely in other shallow coastal waters.
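The radiotracer calculation rests on a steady-state balance between the sediment 224Ra deficit and its radioactive decay; a hedged sketch (the inventory value is made up, and real applications integrate measured activity profiles over depth):

```python
import math

# Half-life of 224Ra in days; decay constant lambda = ln(2) / t_half.
T_HALF_RA224 = 3.66
LAMBDA_RA224 = math.log(2) / T_HALF_RA224

def benthic_ra224_flux(deficit_inventory_dpm_per_m2):
    """At steady state the benthic 224Ra flux balances decay of the
    depth-integrated 224Ra deficit relative to its parent 228Th:
    F = lambda_224 * integral(A_Th228 - A_Ra224) dz.
    Inventory in dpm per m^2 gives a flux in dpm per m^2 per day."""
    return LAMBDA_RA224 * deficit_inventory_dpm_per_m2

print(round(benthic_ra224_flux(1000.0), 1))  # 189.4 dpm m^-2 d^-1
```

The MeHg flux then follows by scaling the 224Ra-derived exchange rate by the MeHg concentration gradient, which is why the method avoids the incubation artifacts described above.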
Directory of Open Access Journals (Sweden)
Yi-Jun Zhou
2016-03-01
Hemipelvic resections for primary bone tumours require reconstruction to restore weight bearing along anatomic axes. However, reconstruction of the pelvic arch remains a major surgical challenge because of the high rate of associated complications. We used the pedicle screw-rod system to reconstruct the pelvis; the purpose of this study was to investigate the operative indications and technique of the pedicle screw-rod system in reconstructing the stability of the sacroiliac joint after resection of sacroiliac joint tumours, and to assess the oncological and functional outcomes and the complication rate following this procedure. The average MSTS (Musculoskeletal Tumour Society) score was 26.5 at three months after surgery or at the latest follow-up. Seven patients had surgery-related complications, including wound dehiscence in one, infection in two, local necrosis in four (including infection in two), sciatic nerve palsy in one and pubic symphysis subluxation in one. There was no screw loosening or deep vein thrombosis in this series. Using a pedicle screw-rod after resection of a sacroiliac joint tumour is an acceptable method of pelvic reconstruction because of its reduced risk of complications and satisfactory functional outcome, as well as its feasibility for reconstruction after type IV pelvic tumour resection without elaborate preoperative customisation. Level of evidence: Level IV, therapeutic study.
Inertial-dissipation methods and turbulent fluxes at the air-ocean interface
DEFF Research Database (Denmark)
Fairall, C. W.; Larsen, Søren Ejling
1986-01-01
The use of high frequency atmospheric turbulence properties (inertial subrange spectra, structure function parameters or dissipation rates) to infer surface fluxes of momentum, sensible heat and latent heat is more practical for most ocean going platforms than direct covariance measurement. The relationships required to deduce the fluxes from such data are examined in detail in this paper and several ambiguities and uncertainties are identified. It is noted that, over water, data on water vapor properties (the dimensionless functions for the mean profile, the structure function parameter and the variance transport term) are extremely sparse and the influence of sea spray is largely unknown. Special attention is given to flux estimation on the basis of the structure function formalism. Existing knowledge about the relevant similarity functions is summarized and discussed in light of the ambiguities.
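Stripped of the stability and humidity corrections the paper examines, the core of the inertial-dissipation approach is the neutral-limit relation between the measured dissipation rate and the friction velocity; a sketch under those simplifying assumptions (the example numbers are illustrative):

```python
KAPPA = 0.4  # von Karman constant

def friction_velocity(dissipation, height_m):
    """Inertial-dissipation estimate of the momentum flux scale under
    neutral stratification: epsilon = u*^3 / (kappa * z), hence
    u* = (kappa * z * epsilon)^(1/3).
    The dimensionless stability functions (phi) discussed in the paper
    are omitted here; they multiply the balance in non-neutral conditions."""
    return (KAPPA * height_m * dissipation) ** (1.0 / 3.0)

# epsilon = 1e-3 m^2 s^-3 measured at 10 m height
print(round(friction_velocity(1e-3, 10.0), 3))  # 0.159 m/s
```

The momentum flux then follows as rho * u*^2; sensible and latent heat fluxes use the analogous structure-function or dissipation relations for temperature and humidity.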
International Nuclear Information System (INIS)
Li Jia; Long Pengcheng; Huang Shanqing; Li Gui; Song Gang; Luo Yuetong; Yan Feng; Wu Yican; Fds Team
2010-01-01
In radiotherapy treatment planning, in order to deliver a high dose to the tumor accurately while maintaining an acceptably low dose to the normal tissues, particularly those adjacent to the target, it is necessary to reconstruct the three-dimensional anatomical structure from planar contour information. Existing methods could not satisfy clinical demand in terms of speed and accuracy. By improving the isosurface extraction algorithm, we designed a fast 3D-reconstruction pipeline implemented with the Visualization Tool Kit (VTK). A series of test results on real patient image datasets shows that this method can reconstruct surfaces smoothly and effectively avoid the 'ladder effect'. The numbers of points and triangles were greatly reduced, and the rendering time decreased from 8 seconds to less than 3 seconds compared with the standard isosurface extraction algorithm. While preserving the original anatomical structure, this method improved the reconstruction quality and accelerated rendering, and it can be applied not only to accurate radiotherapy treatment planning systems but also to other fields that need 3D reconstruction. (authors)
Radon flux measurement methodologies
International Nuclear Information System (INIS)
Nielson, K.K.; Rogers, V.C.
1984-01-01
Five methods for measuring radon fluxes are evaluated: the accumulator can, a small charcoal sampler, a large-area charcoal sampler, the "Big Louie" charcoal sampler, and the charcoal tent sampler. An experimental comparison of the five flux measurement techniques was also conducted. Excellent agreement was obtained between the measured radon fluxes and fluxes predicted from radium and emanation measurements.
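The accumulator-can method reduces to a volume-to-area scaling of the initial concentration growth rate; a minimal sketch with made-up numbers (real measurements must stop before back-diffusion and leakage flatten the curve):

```python
def radon_flux(can_volume_m3, can_area_m2, conc_slope_bq_per_m3_s):
    """Accumulator-can estimate: during the initial linear build-up,
    flux J = (V / A) * dC/dt, where dC/dt is the slope of the radon
    concentration inside the sealed can."""
    return can_volume_m3 / can_area_m2 * conc_slope_bq_per_m3_s

# A 20 L can covering 0.1 m^2, concentration rising 0.05 Bq m^-3 s^-1
print(round(radon_flux(0.02, 0.1, 0.05), 4))  # 0.01 Bq m^-2 s^-1
```

The charcoal samplers instead integrate the flux over the exposure period and are read out by gamma counting of the adsorbed radon progeny.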
Breast Reconstruction with Enhanced Stromal Vascular Fraction Fat Grafting: What Is the Best Method?
Gentile, Pietro; Scioli, Maria Giovanna; Orlandi, Augusto; Cervelli, Valerio
2015-06-01
Actually, there are 2 main methods to obtain stromal vascular fraction (SVF): enzymatic digestion and mechanical filtration; however, the available systems report heterogeneous and sometimes not univocal results. The aim of this study is to evaluate different procedures for SVF isolation and compare their clinical efficacy in the treatment of soft-tissue defects in plastic and reconstructive surgery. The authors evaluated Celution and Medikhan, enzymatic systems, and Fatstem and Mystem system, mechanical separation systems. Fifty patients affected by breast soft-tissue defects were treated in the Plastic and Reconstructive Surgery Department of Tor Vergata University of Rome. Four groups of 10 patients were managed with enhanced SVF fat grafts using cells obtained by Celution (Cytori Therapeutics, Inc., San Diego, Calif.), Medikhan (Medi-Khan Inc., West Hollywood, Calif.), Fatstem (Fatstem CORIOS Soc. Coop, San Giuliano Milanese, Italy), and Mystem (Mystem evo Bi-Medica, Treviolo, Italy) systems. A control group of 10 patients was treated with only centrifuged fat according to Coleman's technique. In enhanced SVF-treated patients treated with cells obtained by Celution system, we observed a 63% ± 6.2% maintenance of contour restoring after 1 year, compared with 39% ± 4.4% of control group. In patients treated with SVF obtained by Medikhan system, we observed a 39% ± 3.5% maintenance, whereas enhanced SVF with Fatstem and Mystem systems gave a 52% ± 4.6% and 43% ± 3.8% maintenance of contour restoring, respectively. SVF cell counting indicated that Celution and Fatstem were the most efficient systems to obtain SVF cells. Celution and Fatstem were the 2 best automatic systems to obtain SVF and to improve maintenance of fat volume and prevent the reabsorption.
International Nuclear Information System (INIS)
Laurent, C.; Chassery, J.M.; Peyrin, F.; Girerd, C.
1996-01-01
This paper deals with parallel implementations of reconstruction methods in 3D tomography. 3D tomography requires voluminous data and long computation times. Parallel computing, on MIMD computers, seems to be a good approach to manage this problem. In this study, we present the different steps of the parallelization on an abstract parallel computer. Depending on the method, we use two main approaches to parallelize the algorithms: the local approach and the global approach. Experimental results on MIMD computers are presented. Two 3D images reconstructed from realistic data are shown.
McElrone, Andrew J; Shapland, Thomas M; Calderon, Arturo; Fitzmaurice, Li; Paw U, Kyaw Tha; Snyder, Richard L
2013-12-12
Advanced micrometeorological methods have become increasingly important in soil, crop, and environmental sciences. For many scientists without formal training in atmospheric science, these techniques are relatively inaccessible. Surface renewal and other flux measurement methods require an understanding of boundary layer meteorology and extensive training in instrumentation and multiple data management programs. To improve accessibility of these techniques, we describe the underlying theory of surface renewal measurements, demonstrate how to set up a field station for surface renewal with eddy covariance calibration, and utilize our open-source turnkey data logger program to perform flux data acquisition and processing. The new turnkey program returns to the user a simple data table with the corrected fluxes and quality control parameters, and eliminates the need for researchers to shuttle between multiple processing programs to obtain the final flux data. An example of data generated from these measurements demonstrates how crop water use is measured with this technique. The output information is useful to growers for making irrigation decisions in a variety of agricultural ecosystems. These stations are currently deployed in numerous field experiments by researchers in our group and the California Department of Water Resources in the following crops: rice, wine and raisin grape vineyards, alfalfa, almond, walnut, peach, lemon, avocado, and corn.
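The eddy covariance reference against which the surface renewal stations are calibrated boils down to a covariance of vertical wind and temperature; a toy sketch (four samples and approximate constants; real stations average high-frequency data over blocks of about 30 minutes):

```python
def sensible_heat_flux(w, t, rho=1.2, cp=1005.0):
    """Eddy-covariance sensible heat flux H = rho * cp * mean(w'T'),
    where primes denote deviations from the block means.
    `w` is vertical wind (m/s), `t` air temperature (deg C), same length;
    rho (kg/m^3) and cp (J/kg/K) are nominal air properties."""
    n = len(w)
    w_mean = sum(w) / n
    t_mean = sum(t) / n
    cov = sum((wi - w_mean) * (ti - t_mean) for wi, ti in zip(w, t)) / n
    return rho * cp * cov

# Warm updrafts paired with cool downdrafts -> positive (upward) heat flux.
w = [0.5, -0.5, 0.5, -0.5]
t = [25.2, 24.8, 25.2, 24.8]
print(round(sensible_heat_flux(w, t), 1))  # 120.6 W m^-2
```

Surface renewal estimates the same flux from temperature ramp statistics alone, which is why a one-time eddy covariance calibration of the ramp coefficient suffices afterwards.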
Lei Liu; Feng Zhou; Xue-Ru Bai; Ming-Liang Tao; Zi-Jing Zhang
2016-04-01
Traditionally, the factorization method is applied to reconstruct the 3D geometry of a target from its sequential inverse synthetic aperture radar images. However, this method requires performing cross-range scaling to all the sub-images and thus has a large computational burden. To tackle this problem, this paper proposes a novel method for joint cross-range scaling and 3D geometry reconstruction of steadily moving targets. In this method, we model the equivalent rotational angular velocity (RAV) by a linear polynomial with time, and set its coefficients randomly to perform sub-image cross-range scaling. Then, we generate the initial trajectory matrix of the scattering centers, and solve the 3D geometry and projection vectors by the factorization method with relaxed constraints. After that, the coefficients of the polynomial are estimated from the projection vectors to obtain the RAV. Finally, the trajectory matrix is re-scaled using the estimated rotational angle, and accurate 3D geometry is reconstructed. The two major steps, i.e., the cross-range scaling and the factorization, are performed repeatedly to achieve precise 3D geometry reconstruction. Simulation results have proved the effectiveness and robustness of the proposed method.
Sandhu, Ali Imran
2016-04-10
A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and a sparsity constraint is enforced on each linear problem using thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.
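The thresholded Landweber iterations are the classic iterative soft-thresholding scheme; a tiny generic sketch on a linear system (not scattering data; the matrix, step size, and threshold below are illustrative):

```python
def ista(A, b, lam=0.1, tau=0.2, steps=200):
    """Thresholded Landweber (ISTA): a gradient step x + tau * A^T (b - A x)
    followed by soft-thresholding at tau * lam, promoting sparse solutions
    of A x = b, as in the inner linear problems of the sparsity-regularized
    BIM (here on a toy system, not derivative samples of a profile)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        g = [x[j] + tau * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(abs(v) - tau * lam, 0.0) * (1 if v > 0 else -1) for v in g]
    return x

# Sparse truth ~ [3, 0]: the threshold drives the small second entry to zero
# (and slightly shrinks the large one, a known bias of soft-thresholding).
A = [[1.0, 0.0], [0.0, 1.0]]
b = [3.0, 0.01]
print([round(v, 2) for v in ista(A, b)])  # [2.9, 0.0]
```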
Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Experiment
Energy Technology Data Exchange (ETDEWEB)
Psihas Olmedo, Silvia Fernanda [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States)
2015-01-01
Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A experiment.
Muon Energy Reconstruction Through the Multiple Scattering Method in the NO$\mathrm{\nu}$A Experiment
Energy Technology Data Exchange (ETDEWEB)
Psihas Olmedo, Silvia Fernanda [Univ. of Minnesota, Duluth, MN (United States)
2013-06-01
Neutrino energy measurements are a crucial component in the experimental study of neutrino oscillations. These measurements are done through the reconstruction of neutrino interactions and energy measurements of their products. This thesis presents the development of a technique to reconstruct the energy of muons from neutrino interactions in the NO$\mathrm{\nu}$A experiment.
Effects of attenuation correction and reconstruction method on PET activation studies
Mesina, Catalina T.; Boellaard, Ronald; van den Heuvel, Odile A.; Veltman, Dick J.; Jongbloed, Geurt; van der Vaart, Aad W.; Lammertsma, Adriaan A.
2003-01-01
The outcome of Statistical Parametric Mapping (SPM) analyses of PET activation studies depends, among other factors, on the quality of the reconstructed data. In general, filtered back-projection (FBP) is used for reconstruction in PET activation studies. There is, however, increasing interest in iterative
Impact of Alternative Inputs and Grooming Methods on Large-R Jet Reconstruction in ATLAS
The ATLAS collaboration
2017-01-01
During Run 1 of the LHC, the optimal reconstruction algorithm for large-$R$ jets in ATLAS, characterized in terms of the ability to discriminate signal from background and robust reconstruction in the presence of pileup, was found to be anti-$k_{t}$ jets with a radius parameter of 1.0, formed from locally calibrated topological calorimeter cell clusters and groomed with the trimming algorithm to remove contributions from pileup and underlying event. Since that time, much theoretical, phenomenological, and experimental work has been performed to improve both the reconstruction of the jet inputs as well as the grooming techniques applied to reconstructed jets. In this work, an inclusive survey of both pileup mitigation algorithms applied to calorimeter cell clusters and grooming algorithms is done to study their pileup stability and ability to identify hadronically decaying W bosons within the ATLAS experiment. It is found that compared to the conventional reconstruction algorithm of large-$R$ trimmed jets form...
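Once subjets are formed, the trimming algorithm referenced above reduces to a momentum-fraction selection; a schematic sketch (the subjet pT values and fcut are made up, and real trimming reclusters the jet constituents into kt subjets before applying the cut):

```python
def trim_jet(subjet_pts, fcut=0.05):
    """Trimming sketch: discard subjets carrying less than a fraction
    fcut of the full jet pT, removing soft contamination from pileup
    and the underlying event while keeping the hard decay prongs."""
    total = sum(subjet_pts)
    return [pt for pt in subjet_pts if pt / total >= fcut]

# Hard W-decay prongs survive; soft pileup-like subjets are removed.
subjets = [250.0, 180.0, 12.0, 8.0]   # GeV
print(trim_jet(subjets))  # [250.0, 180.0]
```

The survey in the note varies both this grooming step and the pileup mitigation applied to the calorimeter cell clusters that feed the jet finding.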
Xue, Songchao; Gong, Hui; Jiang, Tao; Luo, Weihua; Meng, Yuanzheng; Liu, Qian; Chen, Shangbin; Li, Anan
2014-01-01
The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined the improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography for imaging the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vessels was acquired. The voxel resolution is 0.35×0.4×2.0 µm³ for the whole brain. Besides the observations of fine and complex vascular networks in the reconstructed slices and entire brain views, a representative continuous vascular tracking has been demonstrated in the deep thalamus. This study provided an effective method for studying the entire macro and micro vascular networks of mouse brain simultaneously. PMID:24498247
Directory of Open Access Journals (Sweden)
Songchao Xue
The topology of the cerebral vasculature, which is the energy transport corridor of the brain, can be used to study cerebral circulatory pathways. Limited by the restrictions of the vascular markers and imaging methods, studies on cerebral vascular structure now mainly focus on either observation of the macro vessels in a whole brain or imaging of the micro vessels in a small region. Simultaneous vascular studies of arteries, veins and capillaries have not been achieved in the whole brain of mammals. Here, we have combined the improved gelatin-Indian ink vessel perfusion process with Micro-Optical Sectioning Tomography for imaging the vessel network of an entire mouse brain. With 17 days of work, an integral dataset for the entire cerebral vessels was acquired. The voxel resolution is 0.35×0.4×2.0 µm³ for the whole brain. Besides the observations of fine and complex vascular networks in the reconstructed slices and entire brain views, a representative continuous vascular tracking has been demonstrated in the deep thalamus. This study provided an effective method for studying the entire macro and micro vascular networks of mouse brain simultaneously.
Directory of Open Access Journals (Sweden)
Shi Jun
2015-02-01
Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by the sizes of platforms, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to blurring of Three-Dimensional (3D) images in the linear array direction, and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on sparse recovery techniques, in which case the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Simulation experiments show that the variational model is more suitable for DEM enhancement over all kinds of terrain than the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.
Duchêne, Sebastian; Lanfear, Robert
2015-09-01
Ancestral state reconstruction (ASR) is a popular method for exploring the evolutionary history of traits that leave little or no trace in the fossil record. For example, it has been used to test hypotheses about the number of evolutionary origins of key life-history traits such as oviparity, or key morphological structures such as wings. Many studies that use ASR have suggested that the number of evolutionary origins of such traits is higher than was previously thought. The scope of such inferences is increasing rapidly, facilitated by the construction of very large phylogenies and life-history databases. In this paper, we use simulations to show that the number of evolutionary origins of a trait tends to be overestimated when the phylogeny is not perfect. In some cases, the estimated number of transitions can be several fold higher than the true value. Furthermore, we show that the bias is not always corrected by standard approaches to account for phylogenetic uncertainty, such as repeating the analysis on a large collection of possible trees. These findings have important implications for studies that seek to estimate the number of origins of a trait, particularly those that use large phylogenies that are associated with considerable uncertainty. We discuss the implications of this bias, and methods to ameliorate it. © 2015 Wiley Periodicals, Inc.
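The quantity being overestimated, the number of evolutionary origins of a trait, can be illustrated with Fitch parsimony on a toy tree (the paper's simulations use model-based ancestral state reconstruction, but the counting idea is the same; the tree and trait names below are made up):

```python
def fitch_transitions(tree, states):
    """Minimum number of state changes (Fitch parsimony) on a rooted
    binary tree given tip states. `tree` is a nested tuple of tip names;
    `states` maps tip name -> character state."""
    def walk(node):
        if isinstance(node, str):             # tip: its state set, zero cost
            return {states[node]}, 0
        (s1, c1), (s2, c2) = walk(node[0]), walk(node[1])
        inter = s1 & s2
        if inter:                             # children agree: no new change
            return inter, c1 + c2
        return s1 | s2, c1 + c2 + 1           # disagreement costs one change

    return walk(tree)[1]

# On this 4-tip tree, parsimony infers two independent origins of 'winged';
# errors in the tree topology tend to inflate exactly this count.
tree = (("a", "b"), ("c", "d"))
states = {"a": "winged", "b": "wingless", "c": "winged", "d": "wingless"}
print(fitch_transitions(tree, states))  # 2
```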
A new herbarium-based method for reconstructing the phenology of plant species across large areas.
Lavoie, Claude; Lachance, Daniel
2006-04-01
Phenological data have recently emerged as particularly effective tools for studying the impact of climate change on plants, but long phenological records are rare. The lack of phenological observations can nevertheless be filled by herbarium specimens as long as some correction procedures are applied to take into account the different climatic conditions associated with sampling locations. In this study, we propose a new herbarium-based method for reconstructing the flowering dates of plant species that have been collected across large areas. Coltsfoot (Tussilago farfara L.) specimens from southern Quebec were used to test the method. Flowering dates for coltsfoot herbarium specimens were adjusted accord
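The correction idea, mapping each specimen's collection date to a common reference climate, can be sketched as a simple linear adjustment (the slope and reference temperature below are hypothetical illustration values, not the paper's fitted coefficients):

```python
def corrected_flowering_day(day_of_year, spring_temp_c, ref_temp_c=10.0,
                            days_per_degree=-3.0):
    """Illustrative climate correction for a herbarium specimen's
    collection date: shift each date to the equivalent at a common
    reference spring temperature, so specimens from warm and cool sites
    across a large area become comparable."""
    return day_of_year + days_per_degree * (ref_temp_c - spring_temp_c)

# A specimen collected on day 120 at a site 2 degC cooler than reference
# maps to an earlier flowering day at the reference climate.
print(corrected_flowering_day(120, 8.0))  # 114.0
```

In practice the slope would itself be estimated by regressing collection dates against local climate variables before pooling the specimens.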