WorldWideScience

Sample records for source reconstruction problem

  1. Point source reconstruction principle of linear inverse problems

    International Nuclear Information System (INIS)

    Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu

    2010-01-01

    Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, the elements of a source vector are partitioned into blocks; accordingly, the leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the ℓp-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of the source blocks, in other words, the block-wisely extended leadfield-weighted ℓ1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
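
    As a concrete illustration of the cost function this result singles out, the following NumPy sketch evaluates the block-wise leadfield-weighted ℓ1-norm, i.e. the sum over source blocks of the Euclidean norm of each block's observation. The shapes, names, and toy numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def blockwise_leadfield_l1(x, L, block_size):
    """Sum of ||L_b @ x_b||_2 over source blocks: the block-wisely
    extended leadfield-weighted l1-norm described above (sketch)."""
    n = x.size
    assert n % block_size == 0, "source vector must split into equal blocks"
    total = 0.0
    for b in range(0, n, block_size):
        x_b = x[b:b + block_size]            # one source block (e.g. one dipole)
        L_b = L[:, b:b + block_size]         # the leadfield columns of that block
        total += np.linalg.norm(L_b @ x_b)   # cost of this block's observation
    return total

# toy example: 3 blocks of size 3 (three dipoles), 5 sensors
rng = np.random.default_rng(0)
L = rng.standard_normal((5, 9))
x = np.zeros(9)
x[3:6] = [1.0, -0.5, 2.0]        # a "point source": exactly one nonzero block
print(blockwise_leadfield_l1(x, L, block_size=3))
```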

  2. Stable source reconstruction from a finite number of measurements in the multi-frequency inverse source problem

    DEFF Research Database (Denmark)

    Karamehmedovic, Mirza; Kirkeby, Adrian; Knudsen, Kim

    2018-01-01

    setting: From measurements made at a finite set of frequencies we uniquely determine and reconstruct sources in a subspace spanned by finitely many Fourier-Bessel functions. Further, we obtain a constructive criterion for identifying a minimal set of measurement frequencies sufficient for reconstruction......, and under an additional, mild assumption, the reconstruction method is shown to be stable. Our analysis is based on a singular value decomposition of the source-to-measurement forward operators and the distribution of positive zeros of the Bessel functions of the first kind. The reconstruction method...

  3. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    in the different areas of the brain when noise is present. Results: Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i-th forward field associated with the true source and the j-th forward field in the experimental forward...... representation of the signal. Conclusions: This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  4. Source Signals Separation and Reconstruction Following Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    WANG Cheng

    2014-02-01

    Full Text Available For the problem of separating and reconstructing source signals from observed signals, the physical meaning of the blind source separation model and of independent component analysis is not entirely clear, and the solution is not unique. To address these shortcomings, a new linear instantaneous mixing model and a novel method for separating and reconstructing source signals from observed signals based on principal component analysis (PCA) are put forward. Unlike the traditional blind source separation model, the new model assumes that the source signals are statistically uncorrelated rather than independent. A one-to-one relationship between the linear instantaneous mixing matrix of the new model and the linear compound matrix of PCA, and a one-to-one relationship between the uncorrelated source signals and the principal components, are demonstrated using the concepts of the linear separation matrix and the uncorrelatedness of the source signals. This theoretical link turns the source signal separation and reconstruction problem into a PCA of the observed signals. Theoretical derivation and numerical simulation show that, despite Gaussian measurement noise, both the waveform and the amplitude of the uncorrelated source signals can be separated and reconstructed by PCA when the linear mixing matrix is column-orthogonal and normalized; only the waveform can be separated and reconstructed when the linear mixing matrix is column-orthogonal but not normalized; and the source signals cannot be separated and reconstructed by PCA when the mixing is not linear or the mixing matrix is not column-orthogonal.
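
    A minimal NumPy sketch of the favourable case described above: uncorrelated sources mixed by a column-orthogonal, normalized (orthonormal) matrix are recovered, up to sign and ordering, as the principal components of the observations. The toy signals and all names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
# two zero-mean, approximately uncorrelated source signals
S = np.vstack([np.sin(2 * np.pi * 5 * t),
               np.sign(np.sin(2 * np.pi * 13 * t))])

A, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # column-orthogonal, normalized mixing
X = A @ S + 0.01 * rng.standard_normal(S.shape)   # observed mixtures + Gaussian noise

# PCA of the observations: eigen-decomposition of the sample covariance
_, U = np.linalg.eigh(np.cov(X))
S_hat = U.T @ X            # principal components ~ sources (up to sign/order)

corr = np.corrcoef(np.vstack([S_hat, S]))[:2, 2:]
print(np.round(np.abs(corr), 2))   # each row should contain one entry near 1
```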

  5. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector-component MCG data. The results show that a distributed source model gives the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the source models to be resolved.

  6. Reconstruction of multiple line source attenuation maps

    International Nuclear Information System (INIS)

    Celler, A.; Sitek, A.; Harrop, R.

    1996-01-01

    A simple transmission-source configuration for single photon emission computed tomography (SPECT) was proposed, which utilizes a series of collimated line sources parallel to the axis of rotation of the camera. The detector is equipped with a standard parallel-hole collimator. We have demonstrated that this type of source configuration can generate sufficient data for the reconstruction of the attenuation map when using 8-10 line sources spaced by 3.5-4.5 cm for a 30 x 40 cm detector at a 65 cm distance from the sources. Transmission data for a nonuniform thorax phantom were simulated, then binned and reconstructed using filtered backprojection (FBP) and iterative methods. The optimum maps are obtained with data binned into 2-3 bins and FBP reconstruction. The source activity was investigated for uniform and exponential activity distributions, as was the effect of gaps and overlaps of the neighboring fan beams. A prototype of the line source has been built and experimental verification of the technique has started.

  7. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2012-01-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
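
    The data-fitting criterion described above can be written down compactly. The sketch below evaluates one plausible form of it, assuming measurements Y (sensors x time), leadfield L, source matrix X (sources x time), an ℓ1 penalty for focality and a squared temporal-difference penalty for smoothness; this illustrates the criterion only, not the authors' two-stage algorithm.

```python
import numpy as np

def twr_objective(X, Y, L, lam_sparse, lam_smooth):
    """Two-way regularized criterion: data fit + focality + temporal smoothness."""
    fit = np.linalg.norm(Y - L @ X, 'fro') ** 2                 # data fidelity
    sparsity = np.abs(X).sum()                                  # sparsity-inducing penalty
    roughness = np.linalg.norm(np.diff(X, axis=1), 'fro') ** 2  # roughness penalty
    return fit + lam_sparse * sparsity + lam_smooth * roughness

# toy shapes: 32 sensors, 100 candidate sources, 50 time samples
rng = np.random.default_rng(0)
L = rng.standard_normal((32, 100))
X = rng.standard_normal((100, 50))
Y = L @ X
print(twr_objective(X, Y, L, lam_sparse=1.0, lam_smooth=1.0))
```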

  8. An analytical statistical approach to the 3D reconstruction problem

    Energy Technology Data Exchange (ETDEWEB)

    Cierniak, Robert [Czestochowa Univ. of Technology (Poland). Inst. of Computer Engineering

    2011-07-01

    The approach presented here is concerned with the reconstruction problem for 3D spiral X-ray tomography. The reconstruction problem is formulated taking into consideration the statistical properties of the signals obtained in X-ray CT. Additionally, the image processing performed in our approach is embedded in an analytical methodology. This conception significantly improves the quality of the reconstructed images and decreases the complexity of the reconstruction problem in comparison with other approaches. Computer simulations proved that the reconstruction algorithm schematically described here outperforms conventional analytical methods in the obtained image quality. (orig.)

  9. On a full Bayesian inference for force reconstruction problems

    Science.gov (United States)

    Aucejo, M.; De Smet, O.

    2018-05-01

    In a previous paper, the authors introduced a flexible methodology for reconstructing mechanical sources in the frequency domain from prior local information on both their nature and location over a linear and time-invariant structure. The proposed approach was derived from Bayesian statistics, because of its ability to mathematically account for the experimenter's prior knowledge. However, since only the Maximum a Posteriori estimate was computed, the posterior uncertainty about the regularized solution given the measured vibration field, the mechanical model and the regularization parameter was not assessed. To address this question, this paper fully exploits the Bayesian framework to provide, from a Markov Chain Monte Carlo algorithm, credible intervals and other statistical measures (mean, median, mode) for all the parameters of the force reconstruction problem.
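
    A minimal random-walk Metropolis sketch of the idea of sampling the posterior instead of stopping at the MAP estimate, on a toy one-parameter force-amplitude problem. The model, prior, and all numbers are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
h = rng.standard_normal(50)                   # toy frequency-response vector
y = 2.5 * h + 0.3 * rng.standard_normal(50)   # measured response, true force = 2.5

def log_post(f, sigma=0.3, prior_scale=10.0):
    """Gaussian likelihood plus zero-mean Gaussian prior on the force amplitude."""
    return (-0.5 * np.sum((y - f * h) ** 2) / sigma ** 2
            - 0.5 * f ** 2 / prior_scale ** 2)

samples, f = [], 0.0
for _ in range(20000):
    prop = f + 0.1 * rng.standard_normal()    # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(f):
        f = prop                              # accept
    samples.append(f)

post = np.array(samples[5000:])               # discard burn-in
print(post.mean(), np.median(post), np.percentile(post, [2.5, 97.5]))
```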

  10. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density...... impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two...

  11. Exact iterative reconstruction for the interior problem

    International Nuclear Information System (INIS)

    Zeng, Gengsheng L; Gullberg, Grant T

    2009-01-01

    There is a trend in single photon emission computed tomography (SPECT) toward small, dedicated imaging systems. For example, many companies are developing small dedicated cardiac SPECT systems with different designs. These dedicated systems have a smaller field of view (FOV) than a full-size clinical system, so data truncation has become the norm rather than the exception. It is therefore important to develop region-of-interest (ROI) reconstruction algorithms that use truncated data, and this paper is a stepping stone in that direction. This paper shows that the common generic iterative image reconstruction algorithms are able to exactly reconstruct the ROI under the conditions that the convex ROI is fully sampled and the image value in a sub-region within the ROI is known. If the ROI includes a sub-region that is outside the patient body, then the conditions can be easily satisfied.

  12. Inverse source problems in elastodynamics

    Science.gov (United States)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
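
    As a concrete sketch of the Landweber step mentioned above, the following applies the generic iteration x_{k+1} = x_k + tau * A^T (y - A x_k) to a discretized linear source-to-data map. The matrix A here is a random stand-in for the paper's frequency-domain operator, and the step size tau is kept below 2/||A||^2 for convergence.

```python
import numpy as np

def landweber(A, y, n_iter=500, tau=None):
    """Landweber iteration for A x = y (generic sketch)."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step size: tau < 2/||A||^2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + tau * A.T @ (y - A @ x)         # gradient step on 0.5*||Ax - y||^2
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((40, 20))               # toy overdetermined forward map
x_true = rng.standard_normal(20)
x_rec = landweber(A, A @ x_true)
print(np.linalg.norm(x_rec - x_true))           # should be small on clean data
```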

  13. The problem of the architectural heritage reconstruction

    Directory of Open Access Journals (Sweden)

    Alfazhr M.A.

    2017-02-01

    Full Text Available The subject of this research is modern technology for the restoration of architectural monuments, which makes it possible to improve the design, the performance, and the durability of historical objects. Choosing the most efficient, cost-effective and durable technologies for the recovery and expansion of architectural monuments is a priority for historical cities. Adopting faster and sounder monument restoration technologies is necessary because many historical Russian cities are in need of repair and reconstruction. It is therefore essential that new methods and technologies for improving renovation work be found on the basis of Western construction experience.

  14. Honey bee-inspired algorithms for SNP haplotype reconstruction problem

    Science.gov (United States)

    PourkamaliAnaraki, Maryam; Sadeghi, Mehdi

    2016-03-01

    Reconstructing haplotypes from SNP fragments is an important problem in computational biology. There has been a lot of interest in this field because haplotypes have been shown to contain promising data for disease association research. Haplotype reconstruction in the Minimum Error Correction model is proved to be an NP-hard problem. Therefore, several methods such as clustering techniques, evolutionary algorithms, neural networks and swarm intelligence approaches have been proposed in order to solve this problem in reasonable time. In this paper, we focus on various evolutionary clustering techniques and try to find an efficient technique for solving the haplotype reconstruction problem. Our experiments indicate that clustering methods relying on the behaviour of honey bee colonies in nature, specifically the bees algorithm and artificial bee colony methods, can be expected to give the most efficient solutions. An application program implementing the methods is available at the following link. http://www.bioinf.cs.ipm.ir/software/haprs/

  15. Time-dependent problems in quantum-mechanical state reconstruction

    International Nuclear Information System (INIS)

    Leonhardt, U.; Bardroff, P. J.

    1997-01-01

    We study the state reconstruction of wave packets that travel in time-dependent potentials. We solve the problem for explicitly time-dependent harmonic oscillators and sketch a general adaptive technique for finding the wave function that matches an observed evolution. (authors)

  16. The cophylogeny reconstruction problem is NP-complete.

    Science.gov (United States)

    Ovadia, Y; Fielder, D; Conow, C; Libeskind-Hadas, R

    2011-01-01

    The cophylogeny reconstruction problem is that of finding minimum-cost explanations of differences between historical associations. The problem arises in parasitology, molecular systematics, and biogeography. Existing software tools for this problem either have worst-case exponential time or use heuristics that do not guarantee optimal solutions. To date, no polynomial-time optimal algorithms have been found for this problem. In this article, we prove that the problem is NP-complete, suggesting that future research on algorithms for this problem should seek better polynomial-time approximation algorithms and heuristics rather than optimal solutions.

  17. Pathgroups, a dynamic data structure for genome reconstruction problems.

    Science.gov (United States)

    Zheng, Chunfang

    2010-07-01

    Ancestral gene order reconstruction problems, including the median problem, quartet construction, small phylogeny, guided genome halving and genome aliquoting, are NP-hard. Available heuristics dedicated to each of these problems are computationally costly even for small instances. We present a data structure enabling rapid heuristic solution to all these ancestral genome reconstruction problems. A generic greedy algorithm with look-ahead, based on an automatically generated priority system, suffices for all the problems using this data structure. The efficiency of the algorithm is due to fast updating of the structure at run time and to the simplicity of the priority scheme. We illustrate with the first rapid algorithm for quartet construction and apply this to a set of yeast genomes to corroborate a recent gene sequence-based phylogeny. Availability: http://albuquerque.bioinformatics.uottawa.ca/pathgroup/Quartet.html. Contact: chunfang313@gmail.com. Supplementary data are available at Bioinformatics online.

  18. Mandible reconstruction: History, state of the art and persistent problems.

    Science.gov (United States)

    Ferreira, José J; Zagalo, Carlos M; Oliveira, Marta L; Correia, André M; Reis, Ana R

    2015-06-01

    Mandibular reconstruction has undergone an amazing evolution. Several different approaches are used to reconstruct this bone, which has a fundamental role in the recovery of oral functions. This review aims to highlight the persistent problems associated with the approaches identified, whether bone grafts or prosthetic devices are used. A brief summary of the historical evolution of the surgical procedures is presented, as well as an insight into possible future pathways. A literature review was conducted from September to December 2012 using the PubMed database. The keyword used was "mandible reconstruction." Articles published in the last three years were included, as well as the relevant references from those articles, and "historical articles" were also consulted. This research resulted in a monograph that this article aims to summarize. Titanium plates, bone grafts, pediculate flaps, free osteomyocutaneous flaps, rapid prototyping, and tissue engineering strategies are some of the identified possibilities. The classical approaches present considerable donor-site-related morbidity. Research that results in the development of new prosthetic devices is needed. A new prosthetic approach could minimize the identified problems and offer patients more predictable, affordable, and comfortable solutions. This review, while affirming the evolution and the good results achieved with the current approaches, emphasizes the negative aspects that still persist. Thus, it shows that mandible reconstruction is not a closed issue. On the contrary, it remains a research field where new findings could have a direct positive impact on patients' quality of life. The identification of the persistent problems reveals the characteristics to be considered in a new prosthetic device. This could overcome the current difficulties and result in more comfortable solutions. Medical teams have the responsibility to keep patients informed about the predictable

  19. Gadgetron: An Open Source Framework for Medical Image Reconstruction

    DEFF Research Database (Denmark)

    Hansen, Michael Schacht; Sørensen, Thomas Sangild

    2013-01-01

    This work presents a new open source framework for medical image reconstruction called the “Gadgetron.” The framework implements a flexible system for creating streaming data processing pipelines where data pass through a series of modules or “Gadgets” from raw data to reconstructed images...... with a set of dedicated toolboxes in shared libraries for medical image reconstruction. This includes generic toolboxes for data-parallel (e.g., GPU-based) execution of compute-intensive components. The basic framework architecture is independent of medical imaging modality, but this article focuses on its...

  1. Women and post-conflict reconstruction: Issues and sources

    OpenAIRE

    Sørensen, Birgitte

    1998-01-01

    Women and Post-Conflict Reconstruction: Issues and Sources is a review of literature dealing with political, economic and social reconstruction from a gender perspective. One of its objectives is to go beyond conventional images of women as victims of war, and to document the many different ways in which women make a contribution to the rebuilding of countries emerging from armed conflicts. Special attention is given to women's priority concerns, to their resources and capacities, and to stru...

  2. Reconstructing the Hopfield network as an inverse Ising problem

    International Nuclear Information System (INIS)

    Huang Haiping

    2010-01-01

    We test four fast mean-field-type algorithms on Hopfield networks as an inverse Ising problem. The equilibrium behavior of Hopfield networks is simulated through Glauber dynamics; in the low-temperature regime, the simulated annealing technique is adopted. Although the performance of these network reconstruction algorithms on simulated networks of spiking neurons has been extensively studied recently, a corresponding analysis of Hopfield networks has so far been lacking. For the Hopfield network, we find that in the retrieval phase, favored when the network retrieves one of the stored patterns, all the reconstruction algorithms fail to extract the interactions within the desired accuracy; the same failure occurs in the spin-glass phase, where spurious minima show up. In the paramagnetic phase, by contrast, albeit unfavored during the retrieval dynamics, the algorithms work well to reconstruct the network itself. This implies that, as an inverse problem, the paramagnetic phase is conversely useful for reconstructing the network, while the retrieval phase loses all the information about the interactions in the network except in the case where only one pattern is stored. The performance of the algorithms is studied with respect to the system size, memory load, and temperature; sample-to-sample fluctuations are also considered.
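
    A sketch of the simplest of the mean-field-type reconstructions alluded to above, the naive mean-field (nMF) inversion, which estimates couplings as the negated off-diagonal entries of the inverse connected-correlation matrix of sampled spin configurations. The samples below are synthetic stand-ins for Glauber-dynamics output; the whole snippet is illustrative rather than the paper's exact procedure.

```python
import numpy as np

def nmf_couplings(spins):
    """Naive mean-field inverse Ising: couplings from -(C^-1), off-diagonal,
    where C is the connected correlation matrix of +/-1 spin samples."""
    C = np.cov(spins, rowvar=False)    # spins has shape (n_samples, n_spins)
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)           # no self-couplings
    return J

# synthetic stand-in for Glauber-dynamics samples of a 10-spin network
rng = np.random.default_rng(4)
spins = np.sign(rng.standard_normal((5000, 10)))
J_hat = nmf_couplings(spins)
print(J_hat.shape, np.abs(J_hat).max())   # near-zero couplings for independent spins
```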

  3. Inverse source problems for eddy current equations

    International Nuclear Information System (INIS)

    Rodríguez, Ana Alonso; Valli, Alberto; Camaño, Jessika

    2012-01-01

    We study the inverse source problem for the eddy current approximation of Maxwell equations. As for the full system of Maxwell equations, we show that a volume current source cannot be uniquely identified by knowledge of the tangential components of the electromagnetic fields on the boundary, and we characterize the space of non-radiating sources. On the other hand, we prove that the inverse source problem has a unique solution if the source is supported on the boundary of a subdomain or if it is the sum of a finite number of dipoles. We address the applicability of this result for the localization of brain activity from electroencephalography and magnetoencephalography measurements. (paper)

  4. Reconstructing source-sink dynamics in a population with a pelagic dispersal phase.

    Directory of Open Access Journals (Sweden)

    Kun Chen

    Full Text Available For many organisms, the reconstruction of source-sink dynamics is hampered by limited knowledge of the spatial assemblage of either the source or sink components or lack of information on the strength of the linkage for any source-sink pair. In the case of marine species with a pelagic dispersal phase, these problems may be mitigated through the use of particle drift simulations based on an ocean circulation model. However, when simulated particle trajectories do not intersect sampling sites, the corroboration of model drift simulations with field data is hampered. Here, we apply a new statistical approach for reconstructing source-sink dynamics that overcomes the aforementioned problems. Our research is motivated by the need for understanding observed changes in jellyfish distributions in the eastern Bering Sea since 1990. By contrasting the source-sink dynamics reconstructed with data from the pre-1990 period with that from the post-1990 period, it appears that changes in jellyfish distribution resulted from the combined effects of higher jellyfish productivity and longer dispersal of jellyfish resulting from a shift in the ocean circulation starting in 1991. A sensitivity analysis suggests that the source-sink reconstruction is robust to typical systematic and random errors in the ocean circulation model driving the particle drift simulations. The jellyfish analysis illustrates that new insights can be gained by studying structural changes in source-sink dynamics. The proposed approach is applicable for the spatial source-sink reconstruction of other species and even abiotic processes, such as sediment transport.

  5. A Solution to Hammer's X-ray Reconstruction Problem

    DEFF Research Database (Denmark)

    Gardner, Richard J.; Kiderlen, Markus

    2007-01-01

    We propose algorithms for reconstructing a planar convex body K from possibly noisy measurements of either its parallel X-rays taken in a fixed finite set of directions or its point X-rays taken at a fixed finite set of points, in known situations that guarantee a unique solution when the data is...... to K in the Hausdorff metric as k tends to infinity. This solves, for the first time in the strongest sense, Hammer’s X-ray problem published in 1963....

  6. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    Science.gov (United States)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum reconstruction frequency hinges on the size of the array and whose maximum frequency depends on the spacing between the microphones. To enlarge the frequency range of the reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without references, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adapts to practical scenarios of acoustical measurement, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated in different simulation and experimental setups and is then illustrated on an industrial case.
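
    A generic FISTA sketch for the lasso-type problem min_x 0.5*||Ax - b||^2 + lam*||x||_1, the algorithmic core named in the title; the propagation-based spatial basis of the paper is not reproduced here, and A, b and all numbers are toy assumptions.

```python
import numpy as np

def soft(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)   # soft-thresholding prox

def fista(A, b, lam, n_iter=200):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (generic sketch)."""
    Lip = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = z = np.zeros(A.shape[1])
    t = 1.0
    for _ in range(n_iter):
        x_new = soft(z - A.T @ (A @ z - b) / Lip, lam / Lip)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = x_new + (t - 1.0) / t_new * (x_new - x)       # momentum extrapolation
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((30, 100))
x_true = np.zeros(100)
x_true[[7, 42]] = [3.0, -2.0]                   # sparse ground truth
x_hat = fista(A, A @ x_true, lam=0.1)
print(np.nonzero(np.round(x_hat, 1))[0])        # support should be {7, 42}
```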

  7. Source localization in electromyography using the inverse potential problem

    Science.gov (United States)

    van den Doel, Kees; Ascher, Uri M.; Pai, Dinesh K.

    2011-02-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting.

  8. Source localization in electromyography using the inverse potential problem

    International Nuclear Information System (INIS)

    Van den Doel, Kees; Ascher, Uri M; Pai, Dinesh K

    2011-01-01

    We describe an efficient method for reconstructing the activity in human muscles from an array of voltage sensors on the skin surface. MRI is used to obtain morphometric data which are segmented into muscle tissue, fat, bone and skin, from which a finite element model for volume conduction is constructed. The inverse problem of finding the current sources in the muscles is solved using a careful regularization technique which adds a priori information, yielding physically reasonable solutions from among those that satisfy the basic potential problem. Several regularization functionals are considered and numerical experiments on a 2D test model are performed to determine which performs best. The resulting scheme leads to numerical difficulties when applied to large-scale 3D problems. We clarify the nature of these difficulties and provide a method to overcome them, which is shown to perform well in the large-scale problem setting

  9. Dual-Source Swept-Source Optical Coherence Tomography Reconstructed on Integrated Spectrum

    Directory of Open Access Journals (Sweden)

    Shoude Chang

    2012-01-01

    Full Text Available Dual-source swept-source optical coherence tomography (DS-SSOCT) has two individual sources with different central wavelengths, linewidths, and bandwidths. Because of the differences between the two sources, the tomograms reconstructed individually from each source have different aspect ratios, which makes comparison and integration difficult. We report a method to merge two sets of DS-SSOCT raw data onto a common spectrum, on which both data sets have the same spectral density and the correct separation. The reconstructed tomographic image seamlessly integrates the two bands of OCT data. The final image has higher axial resolution and richer spectroscopic information than either individually reconstructed tomographic image.

  10. Dynamic MRI reconstruction as a moment problem. Pt. 1

    International Nuclear Information System (INIS)

    Zwaan, M.

    1989-03-01

    This paper deals with some mathematical aspects of magnetic resonance imaging (MRI) of the beating heart. Some of the basic theory behind magnetic resonance is given. Of special interest is the mathematical theory concerning MRI, and the ideas and problems will be formulated in mathematical terms. If one uses MRI to measure and display a so-called 'dynamic' organ, like the beating heart, the situation is more complex than in the case of a static organ. A strategy is described for how a cross section of a beating human heart is measured in practice and how the measurements are arranged before an image can be made. This technique is called retrospective synchronization. If the beating heart is measured and displayed with the help of this method, artefacts often deteriorate the image quality. Some of these artefacts have a physical cause, while others are caused by the reconstruction algorithm. Mathematical techniques may perhaps be used to improve the algorithms currently used in practice. The aim of this paper is not to solve problems, but to give an adequate mathematical formulation of the inversion problem concerning retrospective synchronization. (author). 3 refs.; 4 figs

  11. Jane: a new tool for the cophylogeny reconstruction problem

    Directory of Open Access Journals (Sweden)

    Ovadia Yaniv

    2010-02-01

    Full Text Available Abstract Background: This paper describes the theory and implementation of a new software tool, called Jane, for the study of historical associations. This problem arises in parasitology (associations of hosts and parasites), molecular systematics (associations of orderings and genes), and biogeography (associations of regions and orderings). The underlying problem is that of reconciling pairs of trees subject to biologically plausible events and costs associated with these events. Existing software tools for this problem have strengths and limitations, and the new Jane tool described here provides functionality that complements existing tools. Results: The Jane software tool uses a polynomial time dynamic programming algorithm in conjunction with a genetic algorithm to find very good, and often optimal, solutions even for relatively large pairs of trees. The tool allows the user to provide rich timing information on both the host and parasite trees. In addition the user can limit host switch distance and specify multiple host switch costs by specifying regions in the host tree and costs for host switches between pairs of regions. Jane also provides a graphical user interface that allows the user to interactively experiment with modifications to the solutions found by the program. Conclusions: Jane is shown to be a useful tool for cophylogenetic reconstruction. Its functionality complements existing tools and it is therefore likely to be of use to researchers in the areas of parasitology, molecular systematics, and biogeography.

  12. Jane: a new tool for the cophylogeny reconstruction problem.

    Science.gov (United States)

    Conow, Chris; Fielder, Daniel; Ovadia, Yaniv; Libeskind-Hadas, Ran

    2010-02-03

    This paper describes the theory and implementation of a new software tool, called Jane, for the study of historical associations. This problem arises in parasitology (associations of hosts and parasites), molecular systematics (associations of orderings and genes), and biogeography (associations of regions and orderings). The underlying problem is that of reconciling pairs of trees subject to biologically plausible events and costs associated with these events. Existing software tools for this problem have strengths and limitations, and the new Jane tool described here provides functionality that complements existing tools. The Jane software tool uses a polynomial time dynamic programming algorithm in conjunction with a genetic algorithm to find very good, and often optimal, solutions even for relatively large pairs of trees. The tool allows the user to provide rich timing information on both the host and parasite trees. In addition the user can limit host switch distance and specify multiple host switch costs by specifying regions in the host tree and costs for host switches between pairs of regions. Jane also provides a graphical user interface that allows the user to interactively experiment with modifications to the solutions found by the program. Jane is shown to be a useful tool for cophylogenetic reconstruction. Its functionality complements existing tools and it is therefore likely to be of use to researchers in the areas of parasitology, molecular systematics, and biogeography.

  13. An inverse source problem of the Poisson equation with Cauchy data

    Directory of Open Access Journals (Sweden)

    Ji-Chuan Liu

    2017-05-01

    Full Text Available In this article, we study an inverse source problem of the Poisson equation with Cauchy data. We want to find iterative algorithms to detect the hidden source within a body from measurements on the boundary. Our goal is to reconstruct the location, the size and the shape of the hidden source. This problem is ill-posed, so regularization techniques should be employed to obtain the regularized solution. Numerical examples show that our proposed algorithms are valid and effective.

  14. Orientation Estimation and Signal Reconstruction of a Directional Sound Source

    DEFF Research Database (Denmark)

    Guarato, Francesco

    , one for each call emission, were compared to those calculated through a pre-existing technique based on interpolation of sound-pressure levels at microphone locations. The application of the method to the bat calls could provide knowledge on bat behaviour that may be useful for a bat-inspired sensor......Previous works in the literature about one tone or broadband sound sources mainly deal with algorithms and methods developed in order to localize the source and, occasionally, estimate the source bearing angle (with respect to a global reference frame). The problem setting assumes, in these cases......, omnidirectional receivers collecting the acoustic signal from the source: analysis of arrival times in the recordings together with microphone positions and source directivity cues allows to get information about source position and bearing. Moreover, sound sources have been included into sensor systems together...

  15. Simultaneous reconstruction of material and transient source parameters using the invariant imbedding method

    International Nuclear Information System (INIS)

    Corones, J.; Sun, Z.

    1993-01-01

    This paper extends the time-domain wave splitting and invariant imbedding method to an inhomogeneous wave equation with a source term: u_{xx} - u_{tt} + A(x)u_x = 2D(x)i'(t). The direct scattering and inverse source problems of this equation are studied. Operators J± that map the source function into the scattered waves at the edges of the slab are defined. A system of coupled nonlinear integrodifferential equations for these scattering operator kernels is obtained. The direct scattering problem is to obtain the scattering operator kernels J± and R+ when the parameters A and D are given. The inverse problem is to simultaneously reconstruct A(x) and D(x) from the scattering operator kernels R+(0,t), 0 ≤ t ≤ 2, and J-(0,t), 0 ≤ t ≤ 1. Both numerical inversion algorithms and the small-time approximate reconstruction method are presented. A Green's function technique is used to derive Green's operator kernel equations for the calculation of the internal field. It provides an alternative, effective and fast way to compute the scattering kernels J±. For constant A and D the Green's operator kernels and source scattering kernels are expressed in closed form. Several numerical examples are given.

  16. Two-way regularization for MEG source reconstruction via multilevel coordinate descent

    KAUST Repository

    Siva Tian, Tian

    2013-12-01

    Magnetoencephalography (MEG) source reconstruction refers to the inverse problem of recovering the neural activity from the MEG time course measurements. A spatiotemporal two-way regularization (TWR) method was recently proposed by Tian et al. to solve this inverse problem and was shown to outperform several one-way regularization methods and spatiotemporal methods. This TWR method is a two-stage procedure that first obtains a raw estimate of the source signals and then refines the raw estimate to ensure spatial focality and temporal smoothness using spatiotemporal regularized matrix decomposition. Although proven to be effective, the performance of two-stage TWR depends on the quality of the raw estimate. In this paper we directly solve the MEG source reconstruction problem using a multivariate penalized regression where the number of variables is much larger than the number of cases. A special feature of this regression is that the regression coefficient matrix has a spatiotemporal two-way structure that naturally invites a two-way penalty. Making use of this structure, we develop a computationally efficient multilevel coordinate descent algorithm to implement the method. This new one-stage TWR method has shown its superiority to the two-stage TWR method in three simulation studies with different levels of complexity and a real-world MEG data analysis. © 2013 Wiley Periodicals, Inc., A Wiley Company.

  17. Strategy for fitting source strength and reconstruction procedure in radioactive particle tracking

    International Nuclear Information System (INIS)

    Mosorov, Volodymyr

    2015-01-01

    The Radioactive Particle Tracking (RPT) technique is widely applied to study the dynamic properties of flows inside a reactor. Usually, a single radioactive particle that is neutrally buoyant with respect to the phase is used as a tracker. The particle moves inside a 3D volume of interest, and its positions are determined by an array of scintillation detectors, which count the incoming photons. The particle position coordinates are calculated by using a reconstruction procedure that solves a minimization problem between the measured counts and calibration data. Although previous studies have described the influence of specific factors on RPT resolution and sensitivity, the question of how to choose an appropriate source strength and reconstruction procedure for a given RPT setup has remained unsolved. This work describes and applies an original strategy for fitting both the source strength and the sampling time interval to a specified RPT setup so as to guarantee a required measurement accuracy. Additionally, the measurement accuracy of an RPT setup can be significantly increased by changing the reconstruction procedure. The results of simulations based on the Monte Carlo approach demonstrate that the proposed strategy allows for the successful implementation of the As Low As Reasonably Achievable (ALARA) principle when designing the RPT setup. The limitations and drawbacks of the proposed procedure are also presented. - Highlights: • We develop an original strategy for fitting the source strength and measurement time interval in the radioactive particle tracking (RPT) technique. • The proposed strategy allows the ALARA (As Low As Reasonably Achievable) principle to be implemented successfully in the design of an RPT setup. • The measurement accuracy of an RPT setup can be significantly increased by improving the reconstruction procedure. • The algorithm can be applied to monitor the motion of a radioactive tracer in a reactor

  18. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small-field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm^2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
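
    A generic MLEM sketch for a nonnegative linear model y ≈ Ax, the update family the paper applies to source-intensity reconstruction. The ray-tracing forward model is replaced here by an assumed random matrix A; everything in the snippet is illustrative.

```python
import numpy as np

def mlem(A, y, n_iter=200):
    """MLEM for y ~ Poisson(A x): multiplicative updates keep x nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                        # sensitivity image (column sums)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # measured / predicted fluence
        x *= (A.T @ ratio) / sens               # back-project the correction
    return x

rng = np.random.default_rng(6)
A = rng.uniform(0.0, 1.0, (50, 20))             # assumed ray-trace system matrix
x_true = rng.uniform(0.5, 2.0, 20)              # assumed source intensities
x_rec = mlem(A, A @ x_true)
print(np.max(np.abs(x_rec - x_true)))           # residual shrinks with iterations
```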

  19. Source reconstruction using phase space beam summation technique

    International Nuclear Information System (INIS)

    Graubart, Gideon.

    1990-10-01

    In this work, the phase-space beam summation technique (PSBS) is applied to back propagation and inverse source problems. The PSBS expresses the field as a superposition of shifted and tilted beams. This phase-space spectrum of beams is matched to the source distribution via an amplitude function which expresses the local spectrum of the source function in terms of a local Fourier transform. In this work, the emphasis is on the phase-space processing of the data, on the information content of this data and on the back propagation scheme. More work is still required to combine this back propagation approach into a full, multi-experiment inverse scattering scheme. It is shown that the phase-space distribution of the data, computed via the local spectrum transform, is localized along lines that define the local arrival direction of the wave data. We explore how the choice of the beam width affects the compactification of this distribution, and derive criteria for choosing a window that optimizes it. It should be emphasized that a compact distribution implies fewer beams in the back propagation scheme and therefore higher numerical efficiency and better physical insight. Furthermore, it is shown how the local information property of the phase-space representation can be used to improve the performance of this simple back propagation problem, in particular with regard to axial resolution; the distance to the source can be determined by back propagating only the large-angle phase-space beams that focus on the source. The information concerning the transverse distribution of the source, on the other hand, is contained in the axial phase-space region and can therefore be determined by the corresponding back propagating beams. Because of the global nature of the plane-wave propagators, the conventional plane-wave back propagation scheme does not have the same 'focusing' property, and therefore suffers from a lack of information localization and axial resolution. The

  20. Scaled nonuniform Fourier transform for image reconstruction in swept source optical coherence tomography

    Science.gov (United States)

    Mezgebo, Biniyam; Nagib, Karim; Fernando, Namal; Kordi, Behzad; Sherif, Sherif

    2018-02-01

    Swept-source optical coherence tomography (SS-OCT) is an important imaging modality for both medical and industrial diagnostic applications. A cross-sectional SS-OCT image is obtained by applying an inverse discrete Fourier transform (DFT) to axial interferograms measured in the frequency domain (k-space). This inverse DFT is typically implemented as a fast Fourier transform (FFT), which requires the data samples to be equidistant in k-space. As the frequency of light produced by a typical wavelength-swept laser is nonlinear in time, the recorded interferogram samples will not be uniformly spaced in k-space. Many image reconstruction methods have been proposed to overcome this problem. Most such methods rely on oversampling the measured interferogram and then use either hardware, e.g., a Mach-Zehnder interferometer as a frequency clock module, or software, e.g., interpolation in k-space, to obtain equally spaced samples that are suitable for the FFT. To overcome the problem of nonuniform sampling in k-space without any need for interferogram oversampling, an earlier method demonstrated the use of the nonuniform discrete Fourier transform (NDFT) for image reconstruction in SS-OCT. In this paper, we present a more accurate method for SS-OCT image reconstruction from nonuniform samples in k-space using a scaled nonuniform Fourier transform. The result is demonstrated using SS-OCT images of Axolotl salamander eggs.
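
    A brute-force NDFT sketch of the underlying idea: interferogram samples s(k_m) taken at nonuniform wavenumbers k_m are transformed directly to depth points z_n, with no interpolation onto a uniform grid. The sweep nonlinearity, normalization, and all numbers are toy assumptions; the paper's scaled transform is not reproduced.

```python
import numpy as np

def ndft(s, k, z):
    """Evaluate a(z_n) = sum_m s(k_m) exp(i k_m z_n) for nonuniform k_m.
    O(M*N) direct matrix product, no FFT and no k-space resampling."""
    return np.exp(1j * np.outer(z, k)) @ s

# toy sweep: wavenumber nonlinear in sample index, single reflector at depth z0
m = np.arange(1024)
k = 2 * np.pi * (1.0 + 0.2 * (m / 1024) ** 2) * m / 1024   # nonuniform k-grid
z0 = 30.0
s = np.cos(k * z0)                       # ideal single-reflector interferogram
z = np.linspace(0.0, 100.0, 512)
a_scan = np.abs(ndft(s, k, z))
print(z[np.argmax(a_scan)])              # peak should appear near z0
```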

  1. Reconstruction formula for a 3-d phaseless inverse scattering problem for the Schrödinger equation

    OpenAIRE

    Klibanov, Michael V.; Romanov, Vladimir G.

    2014-01-01

    The inverse scattering problem of the reconstruction of the unknown potential with compact support in the 3-d Schrödinger equation is considered. Only the modulus of the scattering complex-valued wave field is known, whereas the phase is unknown. It is shown that the unknown potential can be reconstructed via the inverse Radon transform. Therefore, a long-standing problem posed in 1977 by K. Chadan and P.C. Sabatier in their book "Inverse Problems in Quantum Scattering Theory" is solved.

  2. Reconstruction of extended sources for the Helmholtz equation

    KAUST Repository

    Kress, Rainer

    2013-02-26

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Our underlying model is that of inverse acoustic scattering based on the Helmholtz equation. Our inclusions are interior forces with compact support and our data consist of a single measurement of near-field Cauchy data on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler 'equivalent point source' problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2013 IOP Publishing Ltd.

  3. Architectural and town-planning reconstruction problems of the city of Voronezh

    Science.gov (United States)

    Mikhaylova, Tatyana; Parshin, Dmitriy; Shoshinov, Vitaly; Trebukhin, Anatoliy

    2018-03-01

    An analysis of the state of the historically developed urban district of the city of Voronezh is presented. Ways of solving the identified architectural and town-planning problems of reconstructing historically developed buildings are proposed. A concept for the reconstruction of a territory with historical buildings along Vaytsekhovsky Street is presented.

  4. Alternative sources in Slovakia, problems and resorts

    International Nuclear Information System (INIS)

    Hanzel, A.

    2005-01-01

    In this presentation the author deals with power generation from renewable energy sources in several countries (USA, Germany, Japan, Denmark, the European Union) and in the Slovak Republic. The cost of electric power from different renewable energy sources is compared.

  5. Ensemble-based data assimilation and optimal sensor placement for scalar source reconstruction

    Science.gov (United States)

    Mons, Vincent; Wang, Qi; Zaki, Tamer

    2017-11-01

    Reconstructing the characteristics of a scalar source from limited remote measurements in a turbulent flow is a problem of great interest for environmental monitoring, and is challenging in several respects. Firstly, the numerical estimation of scalar dispersion in a turbulent flow requires significant computational resources. Secondly, in practice only a limited number of observations are available, which generally makes the corresponding inverse problem ill-posed. Ensemble-based variational data assimilation techniques are adopted to solve the problem of scalar source localization in a turbulent channel flow at Re_tau = 180. This approach combines the components of variational data assimilation and ensemble Kalman filtering, inheriting its robustness from the former and its ease of implementation from the latter. An ensemble-based methodology for optimal sensor placement is also proposed in order to improve the conditioning of the inverse problem, which enhances the performance of the data assimilation scheme. This work has been partially funded by the Office of Naval Research (Grant N00014-16-1-2542) and by the National Science Foundation (Grant 1461870).
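
    One building block of such an ensemble-based scheme is the Kalman analysis step, in which the gain is built from ensemble statistics. The stochastic-EnKF sketch below updates an ensemble of source-parameter vectors with perturbed observations; the dimensions, observation operator, and noise level are toy assumptions.

```python
import numpy as np

def enkf_update(X, y, H, r, rng):
    """Stochastic EnKF analysis step; X is (n_state, n_members)."""
    Y = H @ X                                        # ensemble in observation space
    Xc = X - X.mean(axis=1, keepdims=True)
    Yc = Y - Y.mean(axis=1, keepdims=True)
    n = X.shape[1]
    Pxy = Xc @ Yc.T / (n - 1)                        # state-observation covariance
    Pyy = Yc @ Yc.T / (n - 1) + r * np.eye(len(y))   # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                     # ensemble Kalman gain
    y_pert = y[:, None] + np.sqrt(r) * rng.standard_normal((len(y), n))
    return X + K @ (y_pert - Y)                      # shift members toward the data

rng = np.random.default_rng(7)
X = rng.standard_normal((10, 100))    # prior ensemble of source parameters
H = rng.standard_normal((4, 10))      # toy linear observation operator
x_true = rng.standard_normal(10)
Xa = enkf_update(X, H @ x_true, H, r=0.01, rng=rng)
print(np.linalg.norm(Xa.mean(axis=1) - x_true),
      np.linalg.norm(X.mean(axis=1) - x_true))   # analysis mean moves toward truth
```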

  6. Numerical reconstruction of tsunami source using combined seismic, satellite and DART data

    Science.gov (United States)

    Krivorotko, Olga; Kabanikhin, Sergey; Marinin, Igor

    2014-05-01

    function, the adjoint problem is solved. Conservative finite-difference schemes for solving the direct and adjoint problems in the shallow-water approximation are constructed. Results of numerical experiments on tsunami source reconstruction are presented and discussed. We show that using a combination of three different types of data allows one to increase the stability and efficiency of the tsunami source reconstruction. The non-profit organization WAPMERR (World Agency of Planetary Monitoring and Earthquake Risk Reduction), in collaboration with the Informap software development department, developed the Integrated Tsunami Research and Information System (ITRIS) to simulate tsunami waves and earthquakes, river course changes, coastal zone floods, and risk estimates for coastal constructions under wave run-ups and earthquakes. Special scientific plug-in components are embedded in a specially developed GIS-type graphic shell for easy data retrieval, visualization and processing. This work was supported by the Russian Foundation for Basic Research (project No. 12-01-00773 'Theory and Numerical Methods for Solving Combined Inverse Problems of Mathematical Physics') and interdisciplinary project of SB RAS 14 'Inverse Problems and Applications: Theory, Algorithms, Software'.

  7. Reconstruction Methods for Inverse Problems with Partial Data

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer

    This thesis presents a theoretical and numerical analysis of a general mathematical formulation of hybrid inverse problems in impedance tomography. This includes problems from several existing hybrid imaging modalities such as Current Density Impedance Imaging, Magnetic Resonance Electrical...... Impedance Tomography, and Ultrasound Modulated Electrical Impedance Tomography. After giving an introduction to hybrid inverse problems in impedance tomography and the mathematical tools that facilitate the related analysis, we explain in detail the stability properties associated with the classification...... of a linearised hybrid inverse problem. This is done using pseudo-differential calculus and the theory of overdetermined boundary value problems. Using microlocal analysis we then present novel results on the propagation of singularities, which give a precise description of the distinct features of solutions...

  8. Polyquant CT: direct electron and mass density reconstruction from a single polyenergetic source

    Science.gov (United States)

    Mason, Jonathan H.; Perelli, Alessandro; Nailon, William H.; Davies, Mike E.

    2017-11-01

    Quantifying material mass and electron density from computed tomography (CT) reconstructions can be highly valuable in certain medical practices, such as radiation therapy planning. However, uniquely parameterising the x-ray attenuation in terms of mass or electron density is an ill-posed problem when a single polyenergetic source is used with a spectrally indiscriminate detector. Existing approaches to single source polyenergetic modelling often impose consistency with a physical model, such as water-bone or photoelectric-Compton decompositions, which will either require detailed prior segmentation or restrictive energy dependencies, and may require further calibration to the quantity of interest. In this work, we introduce a data centric approach to fitting the attenuation with piecewise-linear functions directly to mass or electron density, and present a segmentation-free statistical reconstruction algorithm for exploiting it, with the same order of complexity as other iterative methods. We show how this allows both higher accuracy in attenuation modelling, and demonstrate its superior quantitative imaging, with numerical chest and metal implant data, and validate it with real cone-beam CT measurements.
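
    The core parameterisation idea, fitting attenuation as a piecewise-linear function of the density of interest, can be sketched with np.interp. The knot values below are illustrative placeholders, not the paper's calibration.

```python
import numpy as np

# assumed knots: relative electron density -> linear attenuation (1/cm);
# the values are illustrative placeholders, not the paper's fitted model
rho_knots = np.array([0.0, 0.2, 1.0, 1.1, 1.7])   # ~air, lung, water, muscle, bone
mu_knots = np.array([0.0, 0.04, 0.20, 0.22, 0.48])

def mu_of_rho(rho):
    """Piecewise-linear attenuation model: cheap to evaluate, differentiable
    almost everywhere, and invertible wherever the slope is positive."""
    return np.interp(rho, rho_knots, mu_knots)

print(mu_of_rho(np.array([0.3, 1.0, 1.5])))
```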

  9. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM.

    Science.gov (United States)

    López, J D; Litvak, V; Espinosa, J J; Friston, K; Barnes, G R

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy, an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm.
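
    For orientation, here is a minimal numpy sketch of the classical Minimum Norm scheme mentioned above: a ridge-regularised pseudo-inverse of the lead field. The accompanying scripts of the paper are in Matlab; this is an independent illustration with a random toy lead field and an invented regularisation choice.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy M/EEG setup: n_chan sensors, n_src candidate dipole amplitudes.
n_chan, n_src, n_time = 32, 200, 100
L = rng.normal(size=(n_chan, n_src))          # lead field (forward model)
J_true = np.zeros((n_src, n_time))
J_true[40] = np.sin(np.linspace(0, 4 * np.pi, n_time))   # one active source
Y = L @ J_true + 0.1 * rng.normal(size=(n_chan, n_time)) # sensor data

# Classical Minimum Norm estimate:
#   J_hat = L.T @ (L @ L.T + lam * I)^{-1} @ Y
lam = 0.1 * np.trace(L @ L.T) / n_chan        # ad hoc regularisation level
J_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_chan), Y)

print("most active estimated source:", np.argmax(np.sum(J_hat**2, axis=1)))
```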

  10. Algorithmic procedures for Bayesian MEG/EEG source reconstruction in SPM☆

    Science.gov (United States)

    López, J.D.; Litvak, V.; Espinosa, J.J.; Friston, K.; Barnes, G.R.

    2014-01-01

    The MEG/EEG inverse problem is ill-posed, giving different source reconstructions depending on the initial assumption sets. Parametric Empirical Bayes allows one to implement most popular MEG/EEG inversion schemes (Minimum Norm, LORETA, etc.) within the same generic Bayesian framework. It also provides a cost-function in terms of the variational Free energy—an approximation to the marginal likelihood or evidence of the solution. In this manuscript, we revisit the algorithm for MEG/EEG source reconstruction with a view to providing a didactic and practical guide. The aim is to promote and help standardise the development and consolidation of other schemes within the same framework. We describe the implementation in the Statistical Parametric Mapping (SPM) software package, carefully explaining each of its stages with the help of a simple simulated data example. We focus on the Multiple Sparse Priors (MSP) model, which we compare with the well-known Minimum Norm and LORETA models, using the negative variational Free energy for model comparison. The manuscript is accompanied by Matlab scripts to allow the reader to test and explore the underlying algorithm. PMID:24041874

  11. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle–Pock algorithm

    DEFF Research Database (Denmark)

    Sidky, Emil Y.; Jørgensen, Jakob Heide; Pan, Xiaochuan

    2012-01-01

    The primal–dual optimization algorithm developed in Chambolle and Pock (CP) (2011 J. Math. Imag. Vis. 40 1–26) is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems...... for the purpose of designing iterative image reconstruction algorithms for CT. The primal–dual algorithm is briefly summarized in this paper, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application...
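
    As one concrete CP instance, the sketch below solves min_x ½‖Ax − b‖² + λ‖x‖₁: the dual step applies the proximal operator of the conjugate of the quadratic data term, and the primal step reduces to soft-thresholding. The toy matrix A stands in for a CT system matrix; all sizes and parameters are illustrative assumptions, not derived from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy instance of min_x 0.5*||A x - b||^2 + lam*||x||_1 solved with the
# Chambolle-Pock primal-dual iteration (F(y) = 0.5*||y - b||^2, G = lam*|.|_1).
m, n = 80, 120
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, 8, replace=False)] = 1.0
b = A @ x_true + 0.01 * rng.normal(size=m)
lam = 0.05

Lnorm = np.linalg.norm(A, 2)          # operator norm ||A||
tau = sigma = 0.9 / Lnorm             # step sizes with tau*sigma*||A||^2 < 1
x = np.zeros(n); x_bar = x.copy(); y = np.zeros(m)

for _ in range(400):
    # dual step: prox of sigma*F*, where F(y) = 0.5*||y - b||^2
    y = (y + sigma * (A @ x_bar - b)) / (1.0 + sigma)
    # primal step: prox of tau*G is soft-thresholding
    x_new = x - tau * (A.T @ y)
    x_new = np.sign(x_new) * np.maximum(np.abs(x_new) - tau * lam, 0.0)
    x_bar = 2 * x_new - x             # over-relaxation step
    x = x_new

print("support recovered:", np.nonzero(np.abs(x) > 0.1)[0])
```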

  12. Public water supply sources - the practical problems

    International Nuclear Information System (INIS)

    Chambers, E.G.W.

    1990-01-01

    A complex system of reservoirs, streams, treatment works and pipe networks is used to provide the public water supply to consumers in Strathclyde. The manner in which a nuclear event would affect the quality of water available from this supply would depend on a wide variety of factors. The extent to which the quality from each source could be maintained or improved if found to be unsatisfactory would depend on the extent of contamination and the particular characteristics of each source. Development of contingency plans will incorporate monitoring of supplies and development of effective communications both internally and externally. (author)

  13. Sparse deconvolution for the large-scale ill-posed inverse problem of impact force reconstruction

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Gao, Jiawei; Liu, Ruonan; Chen, Xuefeng

    2017-01-01

    Most previous regularization methods for solving the inverse problem of force reconstruction minimize the l2-norm of the desired force. However, such traditional regularization methods, for example Tikhonov regularization and truncated singular value decomposition, commonly fail to solve the large-scale ill-posed inverse problem at moderate computational cost. In this paper, taking into account the sparse character of impact forces, the idea of sparse deconvolution is first introduced to the field of impact force reconstruction and a general sparse deconvolution model of impact force is constructed. Second, a novel impact force reconstruction method based on the primal-dual interior point method (PDIPM) is proposed to solve such a large-scale sparse deconvolution model, where minimizing the l2-norm is replaced by minimizing the l1-norm. Meanwhile, the preconditioned conjugate gradient algorithm is used to compute the search direction of PDIPM with high computational efficiency. Finally, two experiments, involving small- to medium-scale single impact force reconstruction and relatively large-scale consecutive impact force reconstruction, are conducted on a composite wind turbine blade and a shell structure to illustrate the advantages of PDIPM. Compared with Tikhonov regularization, PDIPM is more efficient, accurate and robust in both single and consecutive impact force reconstruction.
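
    The sparse deconvolution model itself is easy to state in code. The sketch below solves min_f ½‖Hf − y‖² + λ‖f‖₁ for a toy impulse response using a simple proximal-gradient (ISTA) iteration rather than the paper's PDIPM; the impulse response, impact locations and parameters are invented for illustration.

```python
import numpy as np
from scipy.linalg import toeplitz

# Sparse impact-force deconvolution: y = H f + noise, with f sparse in time.
n = 400
t = np.arange(n) * 1e-3
h = np.exp(-t / 0.01) * np.sin(2 * np.pi * 120 * t)   # toy impulse response
H = toeplitz(h, np.zeros(n))                           # causal convolution matrix

f_true = np.zeros(n); f_true[[60, 250]] = [1.0, 0.6]   # two impacts
rng = np.random.default_rng(4)
y = H @ f_true + 0.01 * rng.normal(size=n)

# ISTA: gradient step on the quadratic term, then soft-thresholding.
lam = 0.02
L = np.linalg.norm(H, 2) ** 2          # Lipschitz constant of the gradient
f = np.zeros(n)
for _ in range(2000):
    g = H.T @ (H @ f - y)
    f = f - g / L
    f = np.sign(f) * np.maximum(np.abs(f) - lam / L, 0.0)

print("detected impacts at samples:", np.nonzero(f > 0.1)[0])
```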

  14. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva; Huang, Jianhua Z.; Shen, Haipeng; Li, Zhimin

    2013-01-01

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously

  15. The Reconstruction Toolkit (RTK), an open-source cone-beam CT reconstruction toolkit based on the Insight Toolkit (ITK)

    International Nuclear Information System (INIS)

    Rit, S; Vila Oliva, M; Sarrut, D; Brousmiche, S; Labarbe, R; Sharp, G C

    2014-01-01

    We propose the Reconstruction Toolkit (RTK, http://www.openrtk.org), an open-source toolkit for fast cone-beam CT reconstruction, based on the Insight Toolkit (ITK) and using GPU code extracted from Plastimatch. RTK is developed by an open consortium (see affiliations) under the non-contaminating Apache 2.0 license. The quality of the platform is checked daily with regression tests in partnership with Kitware, the company supporting ITK. Several features are already available: Elekta, Varian and IBA inputs, multi-threaded Feldkamp-Davis-Kress reconstruction on CPU and GPU, Parker short-scan weighting, multi-threaded CPU and GPU forward projectors, etc. Each feature is accessible either through command-line tools or through C++ classes that can be included in independent software. A MIDAS community has been opened to share CatPhan datasets of several vendors (Elekta, Varian and IBA). RTK will be used in the upcoming cone-beam CT scanner developed by IBA for proton therapy rooms. Many features are under development: new input format support, iterative reconstruction, hybrid Monte Carlo / deterministic CBCT simulation, etc. RTK has been built to freely share tomographic reconstruction developments between researchers and is open for new contributions.

  16. Reconstruction of sound source signal by analytical passive TR in the environment with airflow

    Science.gov (United States)

    Wei, Long; Li, Min; Yang, Debin; Niu, Feng; Zeng, Wu

    2017-03-01

    In the acoustic design of air vehicles, the time-domain signals of noise sources on the vehicle surface can serve as data support to reveal the noise generation mechanism, analyze acoustic fatigue, and guide noise insulation and reduction measures. To rapidly reconstruct time-domain sound source signals in an environment with flow, a method combining the analytical passive time reversal mirror (AP-TR) with a shear flow correction is proposed. In this method, the negative influence of flow on sound wave propagation is suppressed by the shear flow correction, yielding corrected acoustic propagation time delays and paths. These corrected time delays and paths, together with the microphone array signals, are then fed to the AP-TR, which reconstructs more accurate sound source signals in the environment with airflow. As an analytical method, AP-TR offers an alternative to numerical TR for reconstructing sound source signals in 3D space in an environment with airflow. Experiments on the reconstruction of the sound source signals of a pair of loudspeakers are conducted in an anechoic wind tunnel with subsonic airflow to validate the effectiveness and advantages of the proposed method. Moreover, AP-TR is compared with time-domain beamforming, both theoretically and experimentally, in terms of sound source signal reconstruction.

  17. On a problem of reconstruction of a discontinuous function by its Radon transform

    Energy Technology Data Exchange (ETDEWEB)

    Derevtsov, Evgeny Yu.; Maltseva, Svetlana V.; Svetov, Ivan E. [Sobolev Institute of Mathematics of SB RAS, 630090, Novosibirsk (Russian Federation); Novosibirsk State University, 630090, Novosibirsk (Russian Federation); Sultanov, Murat A. [H. A. Yassawe International Kazakh-Turkish University, 161200, Turkestan (Kazakhstan)

    2016-08-10

    A problem of reconstruction of a discontinuous function from its Radon transform is considered. One approach to the numerical solution of the problem consists of the following sequential steps: visualization of the set of breaking points; identification of this set; determination of jump values; elimination of discontinuities. We consider three of the listed problems, all except the determination of jump values. The problems are investigated by mathematical modeling using numerical experiments. The simulation results are satisfactory and encourage further development of the approach.

  18. Radiation source reconstruction with known geometry and materials using the adjoint

    International Nuclear Information System (INIS)

    Hykes, Joshua M.; Azmy, Yousry Y.

    2011-01-01

    We present a method to estimate an unknown isotropic source distribution, in space and energy, using detector measurements when the geometry and material composition are known. The estimated source distribution minimizes the difference between the measured and computed responses of detectors located at a selected number of points within the domain. In typical methods, a forward flux calculation is performed for each source guess in an iterative process. In contrast, we use the adjoint flux to compute the responses. Potential applications of the proposed method include determining the distribution of radio-contaminants following a nuclear event, monitoring the flow of radioactive fluids in pipes to determine hold-up locations, and retroactive reconstruction of radiation fields using workers' detectors' readings. After presenting the method, we describe a numerical test problem to demonstrate the preliminary viability of the method. As expected, using the adjoint flux reduces the number of transport solves to be proportional to the number of detector measurements, in contrast to methods using the forward flux that require a typically larger number proportional to the number of spatial mesh cells. (author)
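
    The computational pattern is a small linear system once the adjoint fluxes are in hand: each detector response is the inner product of its adjoint flux with the source, so the estimation reduces to a (non-negative) least-squares fit. A minimal sketch with synthetic adjoint fluxes standing in for the transport solutions:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)

# Each row of Phi_adj holds a precomputed adjoint flux for one detector,
# discretised over the source mesh cells: response_d = sum_c Phi_adj[d,c]*q[c].
n_det, n_cell = 12, 40
Phi_adj = rng.uniform(0.0, 1.0, size=(n_det, n_cell))

q_true = np.zeros(n_cell); q_true[10:14] = 5.0          # localized source
measured = Phi_adj @ q_true + 0.05 * rng.normal(size=n_det)

# Source estimate minimising ||Phi_adj q - measured|| subject to q >= 0;
# only n_det adjoint transport solves were needed to build Phi_adj.
q_hat, _ = nnls(Phi_adj, measured)
print("estimated source cells:", np.nonzero(q_hat > 0.5)[0])
```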

  19. Atmospheric dispersion and inverse modelling for the reconstruction of accidental sources of pollutants

    International Nuclear Information System (INIS)

    Winiarek, Victor

    2014-01-01

    Uncontrolled releases of pollutants into the atmosphere may be the consequence of various situations: accidents, for instance leaks or explosions in an industrial plant, or terrorist attacks such as biological bombs, especially in urban areas. In the event of such situations, the authorities' objectives are various: predicting the contaminated zones in order to apply first countermeasures such as evacuation of the affected population; determining the source location; and assessing the long-term polluted areas, for instance by deposition of persistent pollutants in the soil. To achieve these objectives, numerical models can be used to model the atmospheric dispersion of pollutants. We first present the different processes that govern the transport of pollutants in the atmosphere, then the different numerical models that are commonly used in this context. The choice between these models mainly depends on the scale and the details one seeks to take into account. We then present several inverse modeling methods to estimate the emission, as well as statistical methods to estimate the prior errors to which the inversion is very sensitive. Several case studies are presented, using synthetic data as well as real data such as the estimation of source terms from the Fukushima accident in March 2011. From our results, we estimate the Cesium-137 emission to be between 12 and 19 PBq with a standard deviation between 15 and 65%, and the Iodine-131 emission to be between 190 and 380 PBq with a standard deviation between 5 and 10%. Concerning the localization of an unknown source of pollutant, two strategies can be considered. On the one hand, parametric methods use a limited number of parameters to characterize the source term to be reconstructed; to do so, strong assumptions are made on the nature of the source, and the inverse problem is then to estimate these parameters. On the other hand, nonparametric methods attempt to reconstruct a full emission field. Several parametric and nonparametric methods are...

  20. A high-throughput system for high-quality tomographic reconstruction of large datasets at Diamond Light Source.

    Science.gov (United States)

    Atwood, Robert C; Bodey, Andrew J; Price, Stephen W T; Basham, Mark; Drakopoulos, Michael

    2015-06-13

    Tomographic datasets collected at synchrotrons are becoming very large and complex, and, therefore, need to be managed efficiently. Raw images may have high pixel counts, and each pixel can be multidimensional and associated with additional data such as those derived from spectroscopy. In time-resolved studies, hundreds of tomographic datasets can be collected in sequence, yielding terabytes of data. Users of tomographic beamlines are drawn from various scientific disciplines, and many are keen to use tomographic reconstruction software that does not require a deep understanding of reconstruction principles. We have developed Savu, a reconstruction pipeline that enables users to rapidly reconstruct data to consistently create high-quality results. Savu is designed to work in an 'orthogonal' fashion, meaning that data can be converted between projection and sinogram space throughout the processing workflow as required. The Savu pipeline is modular and allows processing strategies to be optimized for users' purposes. In addition to the reconstruction algorithms themselves, it can include modules for identification of experimental problems, artefact correction, general image processing and data quality assessment. Savu is open source, open licensed and 'facility-independent': it can run on standard cluster infrastructure at any institution.

  1. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computed tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking conducting skull defects into account during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  2. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  3. Reconstruction of Sound Source Pressures in an Enclosure Using the Phased Beam Tracing Method

    DEFF Research Database (Denmark)

    Jeong, Cheol-Ho; Ih, Jeong-Guon

    2009-01-01

    First, surfaces of an extended source are divided into reasonably small segments. From each source segment, one beam is projected into the field and all emitted beams are traced. Radiated beams from the source reach array sensors after traveling various paths including the wall reflections. Collecting...... all the pressure histories at the field points, source-observer relations can be constructed in a matrix-vector form for each frequency. By multiplying the measured field data with the pseudo-inverse of the calculated transfer function, one obtains the distribution of source pressure. An omni-directional sphere and a cubic source in a rectangular enclosure were taken as examples in the simulation tests. A reconstruction error was investigated by Monte Carlo simulation in terms of field point locations. When the source information was reconstructed by the present method, it was shown that the sound power...
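
    The final inversion step described above amounts to a regularised pseudo-inverse of the computed transfer matrix. A minimal sketch with a random complex transfer matrix standing in for the beam-traced one; the Tikhonov parameter and all sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

# H[i, j]: computed transfer function from source segment j to field
# microphone i at one frequency (complex); measurements obey p = H q.
n_mic, n_seg = 24, 16
H = rng.normal(size=(n_mic, n_seg)) + 1j * rng.normal(size=(n_mic, n_seg))
q_true = rng.normal(size=n_seg) + 1j * rng.normal(size=n_seg)
p = H @ q_true + 0.01 * (rng.normal(size=n_mic) + 1j * rng.normal(size=n_mic))

# Tikhonov-regularised pseudo-inverse; beta guards against noise blow-up.
beta = 1e-2
q_hat = np.linalg.solve(H.conj().T @ H + beta * np.eye(n_seg), H.conj().T @ p)

print("relative error:", np.linalg.norm(q_hat - q_true) / np.linalg.norm(q_true))
```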

  4. Automatic selection of optimal systolic and diastolic reconstruction windows for dual-source CT coronary angiography

    International Nuclear Information System (INIS)

    Seifarth, H.; Puesken, M.; Wienbeck, S.; Maintz, D.; Heindel, W.; Juergens, K.U.; Fischbach, R.

    2009-01-01

    The aim of this study was to assess the performance of a motion-map algorithm that automatically determines optimal reconstruction windows for dual-source coronary CT angiography. In datasets from 50 consecutive patients, optimal systolic and diastolic reconstruction windows were determined using the motion-map algorithm. For manual determination of the optimal reconstruction window, datasets were reconstructed in 5% steps throughout the RR interval. Motion artifacts were rated for each major coronary vessel using a five-point scale. Mean motion scores using the motion-map algorithm were 2.4 ± 0.8 for systolic reconstructions and 1.9 ± 0.8 for diastolic reconstructions. Using the manual approach, overall motion scores were significantly better (1.9 ± 0.5 and 1.7 ± 0.6). Diagnostic image quality was achieved in more than 90% of cases using either approach. Using the automated approach, there was a negative correlation between heart rate and motion scores for systolic reconstructions (ρ = -0.26), making the systolic window the better choice at heart rates above 80 bpm. (orig.)

  5. Solution to the inversely stated transient source-receptor problem

    International Nuclear Information System (INIS)

    Sajo, E.; Sheff, J.R.

    1995-01-01

    Transient source-receptor problems are traditionally handled via the Boltzmann equation or through one of its variants. In the atmospheric transport of pollutants, meteorological uncertainties in the planetary boundary layer render only a few approximations to the Boltzmann equation useful. Often, due to the high number of unknowns, the atmospheric source-receptor problem is ill-posed. Moreover, models to estimate downwind concentration invariably assume that the source term is known. In this paper, an inverse methodology is developed, based on downwind measurements of concentration and of meteorological parameters, to estimate the source term.

  6. Sound source reconstruction using inverse boundary element calculations

    DEFF Research Database (Denmark)

    Schuhmacher, Andreas; Hald, Jørgen; Rasmussen, Karsten Bo

    2003-01-01

    Whereas standard boundary element calculations focus on the forward problem of computing the radiated acoustic field from a vibrating structure, the aim in this work is to reverse the process, i.e., to determine vibration from acoustic field data. This inverse problem is brought into a form suited ...... it is demonstrated that the L-curve criterion is robust with respect to the errors in a real measurement situation. In particular, it is shown that the L-curve criterion is superior to the more conventional generalized cross-validation (GCV) approach for the present tire noise studies....
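
    A minimal sketch of the L-curve criterion on a synthetic ill-conditioned problem: sweep the Tikhonov parameter, record log residual norm against log solution norm, and pick the point of maximum curvature. All quantities below are illustrative assumptions, not the paper's BEM setup.

```python
import numpy as np

rng = np.random.default_rng(7)

# Ill-conditioned toy problem A x = b with noisy data.
n = 60
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
s = 10.0 ** np.linspace(0, -6, n)              # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
x_true = V[:, 0] + 0.5 * V[:, 1]
b = A @ x_true + 1e-4 * rng.normal(size=n)

# Sweep Tikhonov parameters and record the L-curve in log-log coordinates.
lams = 10.0 ** np.linspace(-10, 1, 60)
res, sol = [], []
for lam in lams:
    x = V @ (s / (s**2 + lam**2) * (U.T @ b))  # SVD form of Tikhonov solution
    res.append(np.log(np.linalg.norm(A @ x - b)))
    sol.append(np.log(np.linalg.norm(x)))
res, sol = np.array(res), np.array(sol)

# Pick the corner as the interior point of maximum curvature of (res, sol).
d1r, d1s = np.gradient(res), np.gradient(sol)
d2r, d2s = np.gradient(d1r), np.gradient(d1s)
kappa = np.abs(d1r * d2s - d1s * d2r) / ((d1r**2 + d1s**2) ** 1.5 + 1e-30)
corner = 2 + np.argmax(kappa[2:-2])
print("L-curve corner at lambda =", lams[corner])
```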

  7. Reconstruction of Chernobyl source parameters using gamma dose rate measurements in town Pripjat

    Directory of Open Access Journals (Sweden)

    M. M. Talerko

    2010-06-01

    With the help of mathematical modeling of atmospheric transport, the dispersion of the accidental release from the Chernobyl NPP towards the town of Pripjat during the period from 26 to 29 April 1986 has been calculated. Gamma dose rate measurements made at 31 points in the town were used. Based on the solution of the inverse problem of atmospheric transport, the Chernobyl source parameters, including release intensity and effective source height, have been reconstructed. The contribution of the main dose-forming radionuclides to the exposure dose during the first 40 hours after the accident (the period of population residence in the town before the evacuation) has been estimated. According to the calculations, the ¹³¹I deposition density averaged over the town territory was about 5.2 × 10⁴ kBq/m² (on 29.04.86). The minimum and maximum ¹³¹I deposition values were 2.8 × 10⁴ kBq/m² (western part of the town, 4.5 km from the unit) and 1.2 × 10⁵ kBq/m² (north-eastern part of the town, 2 km from the unit), respectively. At the moment of the evacuation on 27 April, deposition values were about 90 percent of these values.

  8. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    International Nuclear Information System (INIS)

    Guo, Yumeng; Zeng, Li

    2017-01-01

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  9. Improved iterative image reconstruction algorithm for the exterior problem of computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yumeng [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China); Zeng, Li, E-mail: drlizeng@cqu.edu.cn [Chongqing University, College of Mathematics and Statistics, Chongqing 401331 (China); Chongqing University, ICT Research Center, Key Laboratory of Optoelectronic Technology and System of the Education Ministry of China, Chongqing 400044 (China)

    2017-01-11

    In industrial applications that are limited by the angle of a fan-beam and the length of a detector, the exterior problem of computed tomography (CT) uses only the projection data that correspond to the external annulus of the objects to reconstruct an image. Because the reconstructions are not affected by the projection data that correspond to the interior of the objects, the exterior problem is widely applied to detect cracks in the outer wall of large-sized objects, such as in-service pipelines. However, image reconstruction in the exterior problem is still a challenging problem due to truncated projection data and beam-hardening, both of which can lead to distortions and artifacts. Thus, developing an effective algorithm and adopting a scanning trajectory suited for the exterior problem may be valuable. In this study, an improved iterative algorithm that combines total variation minimization (TVM) with a region scalable fitting (RSF) model was developed for a unilateral off-centered scanning trajectory and can be utilized to inspect large-sized objects for defects. Experiments involving simulated phantoms and real projection data were conducted to validate the practicality of our algorithm. Furthermore, comparative experiments show that our algorithm outperforms others in suppressing the artifacts caused by truncated projection data and beam-hardening.

  10. Great Problems of Mathematics: A Course Based on Original Sources.

    Science.gov (United States)

    Laubenbacher, Reinhard C.; Pengelley, David J.

    1992-01-01

    Describes the history of five selected problems from mathematics that are included in an undergraduate honors course designed to utilize original sources for demonstrating the evolution of ideas developed in solving these problems: area and the definite integral, the beginnings of set theory, solutions of algebraic equations, Fermat's last…

  11. The use of hamstring tendon graft for the anterior cruciate ligament reconstruction (benefits, problems and their solutions)

    Directory of Open Access Journals (Sweden)

    V. V. Slastinin

    2017-01-01

    The search for the optimal graft for anterior cruciate ligament reconstruction continues. Donor site morbidity remains one of the major problems when using autografts. The article provides an overview of the advantages and disadvantages of using hamstring tendon autografts for anterior cruciate ligament reconstruction, and of the ways of solving the problems associated with this type of graft.

  12. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model including the tissue conductivity distribution, the cortical surface, and ele......

  13. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation maximization problems based on quantum-mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems such as CT, PET, SPECT, NMR, Compton camera imaging and tomosynthesis, as well as to any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton camera.
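
    ITEM itself is specified in the cited report; for orientation, the sketch below shows the classical MLEM fixed-point update for Poisson data, the standard expectation-maximization baseline that such energy-minimization variants address. The system matrix and phantom are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Classical MLEM for emission tomography: y ~ Poisson(A x), with the
# multiplicative update x <- x / (A^T 1) * A^T (y / (A x)).
n_pix, n_bins = 64, 128
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix)); A /= A.sum(axis=0)
x_true = np.zeros(n_pix); x_true[20:30] = 10.0
y = rng.poisson(A @ x_true)

x = np.ones(n_pix)
sens = A.sum(axis=0)                      # sensitivity image A^T 1
for _ in range(200):
    ratio = y / np.maximum(A @ x, 1e-12)  # guard against division by zero
    x *= (A.T @ ratio) / sens

print("reconstructed activity in true region:", x[20:30].round(1))
```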

  14. Reconstruction of extended sources for the Helmholtz equation

    International Nuclear Information System (INIS)

    Kress, Rainer; Rundell, William

    2013-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Our underlying model is that of inverse acoustic scattering based on the Helmholtz equation. Our inclusions are interior forces with compact support and our data consist of a single measurement of near-field Cauchy data on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler ‘equivalent point source’ problem, and which uses a Newton scheme to improve the corresponding initial approximation. (paper)

  15. Nature and magnitude of the problem of spent radiation sources

    International Nuclear Information System (INIS)

    1991-09-01

    Various types of sealed radiation sources are widely used in industry, medicine and research. Virtually all countries have some sealed sources. The activity in the sources varies from kilobecquerels in consumer products to hundreds of petabecquerels in facilities for food irradiation. Loss or misuse of sealed sources can give rise to accidents resulting in radiation exposure of workers and members of the general public, and can also give rise to extensive contamination of land, equipment and buildings. In extreme cases the exposure can be lethal. Problems of safety relating to spent radiation sources have been under consideration within the Agency for some years. The first objective of the project has been to prepare a comprehensive report reviewing the nature and background of the problem, also giving an overview of existing practices for the management of spent radiation sources. This report is the fulfilment of this first objective. The safe management of spent radiation sources cannot be studied in isolation from their normal use, so it has been necessary to include some details which are relevant to the use of radiation sources in general, although that area is outside the scope of this report. The report is limited to radiation sources made up of radioactive material. The Agency is implementing a comprehensive action plan for assistance to Member States, especially the developing countries, in all aspects of the safe management of spent radiation sources. The Agency is further seeking to establish regional or global solutions to the problems of long-term storage of spent radiation sources, as well as finding routes for the disposal of sources when it is not feasible to set up safe national solutions. The cost of remedial actions after an accident with radiation sources can be very high indeed: millions of dollars. If the Agency can help to prevent even one such accident, the cost of its whole programme in this field would be more than covered. Refs

  16. An evolutionary algorithm for tomographic reconstructions in limited data sets problems

    International Nuclear Information System (INIS)

    Turcanu, Catrinel; Craciunescu, Teddy

    2000-01-01

    The paper proposes a new method for tomographic reconstruction. Unlike in nuclear medicine applications, in physical science problems we are often confronted with limited data sets: constraints on the number of projections or limited angular views. The problem of image reconstruction from projections may be considered as a problem of finding an image (solution) having projections that match the experimental ones. In our approach, we choose a statistical correlation coefficient to evaluate the fitness of any potential solution. The optimization process is carried out by an evolutionary algorithm. Our algorithm has some problem-oriented characteristics. One of them is that a chromosome, representing a potential solution, is not linear but coded as a matrix of pixels corresponding to a two-dimensional image. This internal representation mirrors the underlying object: slight differences between two points in the original problem space give rise to similarly slight differences once they are coded. Another particular feature is a newly built crossover operator, the grid-based crossover, suitable for large two-dimensional chromosomes. Except for the population size and the dimension of the cutting grid for the grid-based crossover, all the other parameters of the algorithm are independent of the geometry of the tomographic reconstruction. The performance of the method is evaluated in comparison with a traditional tomographic method, based on maximization of the entropy of the image, that has proved to work well with limited data sets. The test phantom is typical of an application with limited data sets: the determination of neutron energy spectra with time resolution in the case of short-pulsed neutron emission. Both the qualitative assessment and the quantitative one, based on several figures of merit, indicate that the proposed method ensures improved reconstruction of shapes, sizes and resolution in the image, even in the presence of noise.
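
    A compact sketch of the scheme under strongly simplified assumptions (projections reduced to row and column sums, i.e., two views): the chromosome is a 2D pixel array, fitness is the statistical correlation between computed and measured projections, and crossover swaps blocks of a cutting grid. Everything below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(9)
N = 16

# Ground-truth phantom and its (very limited) projection data.
truth = np.zeros((N, N)); truth[4:9, 6:12] = 1.0
data = np.concatenate([truth.sum(axis=0), truth.sum(axis=1)])

def fitness(img):
    """Statistical correlation between computed and measured projections."""
    calc = np.concatenate([img.sum(axis=0), img.sum(axis=1)])
    return np.corrcoef(calc, data)[0, 1]

def grid_crossover(a, b, cells=4):
    """Swap randomly chosen blocks of a cells x cells cutting grid."""
    child = a.copy()
    step = N // cells
    for i in range(0, N, step):
        for j in range(0, N, step):
            if rng.random() < 0.5:
                child[i:i+step, j:j+step] = b[i:i+step, j:j+step]
    return child

pop = [rng.random((N, N)) for _ in range(40)]
for _ in range(300):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:20]                          # elitist selection
    children = []
    for _ in range(20):
        i, j = rng.integers(0, 20, size=2)
        c = grid_crossover(parents[i], parents[j])
        c += 0.05 * rng.normal(size=c.shape)    # mutation
        children.append(np.clip(c, 0.0, 1.0))
    pop = parents + children

print("best fitness:", round(fitness(max(pop, key=fitness)), 3))
```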

  17. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing

    NARCIS (Netherlands)

    Cohen, M.X.; Ridderinkhof, K.R.

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict

  18. BoneSource hydroxyapatite cement: a novel biomaterial for craniofacial skeletal tissue engineering and reconstruction.

    Science.gov (United States)

    Friedman, C D; Costantino, P D; Takagi, S; Chow, L C

    1998-01-01

    BoneSource hydroxyapatite cement is a new self-setting calcium phosphate cement biomaterial. Its unique and innovative physical chemistry, coupled with enhanced biocompatibility, makes it useful for craniofacial skeletal reconstruction. The general properties and clinical use guidelines are reviewed. The biomaterial and surgical applications offer insight into improved outcomes and potential new uses for hydroxyapatite cement systems.

  19. Dual-source CT coronary imaging in heart transplant recipients: image quality and optimal reconstruction interval

    International Nuclear Information System (INIS)

    Bastarrika, Gorka; Arraiza, Maria; Pueyo, Jesus C.; Cecco, Carlo N. de; Ubilla, Matias; Mastrobuoni, Stefano; Rabago, Gregorio

    2008-01-01

    The image quality and optimal reconstruction interval for coronary arteries in heart transplant recipients undergoing non-invasive dual-source computed tomography (DSCT) coronary angiography were evaluated. Twenty consecutive heart transplant recipients who underwent DSCT coronary angiography were included (19 male, one female; mean age 63.1 ± 10.7 years). Data sets were reconstructed in 5% steps from 30% to 80% of the R-R interval. Two blinded independent observers assessed the image quality of each coronary segment using a five-point scale (from 0 = not evaluable to 4 = excellent quality). A total of 289 coronary segments in 20 heart transplant recipients were evaluated. Mean heart rate during the scan was 89.1 ± 10.4 bpm. At the best reconstruction interval, diagnostic image quality (score ≥ 2) was obtained in 93.4% of the coronary segments (270/289) with a mean image quality score of 3.04 ± 0.63. Systolic reconstruction intervals provided better image quality scores than diastolic reconstruction intervals (overall mean quality scores obtained with the systolic and diastolic reconstructions were 3.03 ± 1.06 and 2.73 ± 1.11, respectively; P < 0.001). Different systolic reconstruction intervals (35%, 40%, 45% of the RR interval) did not yield significant differences in image quality scores for the coronary segments (P = 0.74). Reconstructions obtained at the systolic phase of the cardiac cycle yielded coronary angiograms of excellent diagnostic image quality in heart transplant recipients undergoing DSCT coronary angiography. (orig.)

  20. Monte Carlo source convergence and the Whitesides problem

    International Nuclear Information System (INIS)

    Blomquist, R. N.

    2000-01-01

    The issue of fission source convergence in Monte Carlo eigenvalue calculations is of interest because of the potential consequences of erroneous criticality safety calculations. In this work, the authors compare two different techniques to improve the source convergence behavior of standard Monte Carlo calculations applied to challenging source convergence problems. The first method, super-history powering, attempts to avoid discarding important fission sites between generations by delaying stochastic sampling of the fission site bank until after several generations of multiplication. The second method, stratified sampling of the fission site bank, explicitly keeps the important sites even if conventional sampling would have eliminated them. The test problems are variants of Whitesides' Criticality of the World problem, in which the fission site phase space was intentionally undersampled in order to induce marginally intolerable variability in local fission site populations. Three variants of the problem were studied, each with a different degree of coupling between fissionable pieces. Both the super-history powering method and the stratified sampling method were shown to improve convergence behavior, although stratified sampling is more robust for the extreme case of no coupling. Neither algorithm completely eliminates the loss of the most important fissionable piece, and if coupling is absent, the lost piece cannot be recovered unless its sites from earlier generations have been retained. Finally, criteria for measuring source convergence reliability are proposed and applied to the test problems.
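
    A minimal sketch of the stratified-sampling idea: resample the fission-site bank within each fissionable piece (stratum) so that no piece is lost, even when it holds only a handful of sites. The bank layout below is a toy assumption, not the reference implementation.

```python
import numpy as np

rng = np.random.default_rng(10)

def stratified_fission_bank(sites, pieces, target_size):
    """Resample a fission-site bank while keeping every fissionable piece
    represented: sample within each piece (stratum) instead of globally."""
    sites = np.asarray(sites); pieces = np.asarray(pieces)
    new_sites = []
    for p in np.unique(pieces):
        stratum = sites[pieces == p]
        # proportional allocation, but never drop a stratum entirely
        k = max(1, round(target_size * len(stratum) / len(sites)))
        new_sites.append(rng.choice(stratum, size=k, replace=True))
    return np.concatenate(new_sites)

# Toy bank: piece B is badly undersampled, as in the Whitesides problem.
sites = np.arange(1000.0)
pieces = np.where(np.arange(1000) < 997, "A", "B")
bank = stratified_fission_bank(sites, pieces, target_size=500)
print("piece B sites kept:", int(np.sum(bank >= 997)))
```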

  1. Radiation protection problems with sealed Pu radiation sources

    International Nuclear Information System (INIS)

    Naumann, M.; Wels, C.

    1982-01-01

    A brief outline of the production methods and most important properties of Pu-238 and Pu-239 is given, followed by an overview of possibilities for utilizing the different types of radiation emitted, a description of problems involved in the safe handling of Pu radiation sources, and an assessment of the design principles for Pu-containing alpha, photon, neutron and energy sources from the radiation protection point of view. (author)

  2. Some statistical problems inherent in radioactive-source detection

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1978-01-01

    Some of the statistical questions associated with problems of detecting random-point-process signals embedded in random-point-process noise are examined. An example of such a problem is that of searching for a lost radioactive source with a moving detection system. The emphasis is on theoretical questions, but some experimental and Monte Carlo results are used to test the theoretical results. Several idealized binary decision problems are treated by starting with simple, specific situations and progressing toward more general problems. This sequence of decision problems culminates in the minimum-cost-expectation rule for deciding between two Poisson processes with arbitrary intensity functions. As an example, this rule is then specialized to the detector-passing-a-point-source decision problem. Finally, Monte Carlo techniques are used to develop and test one estimation procedure: the maximum-likelihood estimation of a parameter in the intensity function of a Poisson process. For the Monte Carlo test this estimation procedure is specialized to the detector-passing-a-point-source case. Introductory material from probability theory is included so as to make the report accessible to those not especially conversant with probabilistic concepts and methods. 16 figures
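
    For the decision problem between two Poisson processes, the log-likelihood ratio given event times t_i on [0, T] is Σ_i log(λ₁(t_i)/λ₀(t_i)) − ∫(λ₁ − λ₀)dt, thresholded according to the decision costs. Below is a sketch for the detector-passing-a-point-source case with invented rates and a zero threshold, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(11)

# Detector passing a point source: decide between background-only (lam0)
# and background-plus-source (lam1) from recorded event times on [0, T].
T = 10.0
lam0 = lambda t: 5.0 * np.ones_like(t)                       # background rate
lam1 = lambda t: 5.0 + 20.0 * np.exp(-0.5 * ((t - 5.0) / 0.5) ** 2)

def simulate(lam, lam_max):
    """Thinning method: simulate an inhomogeneous Poisson process."""
    cand = np.sort(rng.uniform(0, T, rng.poisson(lam_max * T)))
    return cand[rng.uniform(0, lam_max, cand.size) < lam(cand)]

def log_lr(events, dt=1e-3):
    """Log-likelihood ratio of the rates lam1 vs lam0 given event times."""
    grid = np.arange(0, T, dt)
    integral = np.sum(lam1(grid) - lam0(grid)) * dt
    return np.sum(np.log(lam1(events) / lam0(events))) - integral

events = simulate(lam1, 26.0)   # lam1 peaks at 25
print("log-LR:", round(log_lr(events), 2), "-> decide source present if > 0")
```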

  3. Constrained Null Space Component Analysis for Semiblind Source Separation Problem.

    Science.gov (United States)

    Hwang, Wen-Liang; Lu, Keng-Shih; Ho, Jinn

    2018-02-01

    The blind source separation (BSS) problem extracts unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information on the sources or on how they are mixed. The constrained independent component analysis (ICA) approach has been studied to impose constraints on the famous ICA framework. We introduced an alternative approach based on the null space component analysis (NCA) framework and referred to it as the c-NCA approach. We also presented the c-NCA algorithm, which uses signal-dependent semidefinite operators (bilinear mappings) as signatures for operator design in the c-NCA approach. Theoretically, we showed that the source estimation of the c-NCA algorithm converges, with a convergence rate dependent on the decay of the sequence obtained by applying the estimated operators to the corresponding sources. c-NCA can be formulated as a deterministic constrained optimization method, and thus it can take advantage of solvers developed in the optimization community for solving the BSS problem. As examples, we demonstrated that electroencephalogram interference rejection problems can be solved by c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case when reference signals are available.

  4. Nash evolutionary algorithms : Testing problem size in reconstruction problems in frame structures

    OpenAIRE

    Greiner, D.; Periaux, Jacques; Emperador, J.M.; Galván, B.; Winter, G.

    2016-01-01

    The use of evolutionary algorithms has been enhanced in recent years for solving real engineering problems, where the requirements of intense computational calculations are needed, especially when computational engineering simulations are involved (use of finite element method, boundary element method, etc). The coupling of game-theory concepts in evolutionary algorithms has been a recent line of research which could enhance the efficiency of the optimum design procedure and th...

  5. Stable methods for ill-posed problems and application to reconstruction of atmospheric temperature profile

    International Nuclear Information System (INIS)

    Son, H.H.; Luong, P.T.; Loan, N.T.

    1990-04-01

    The problems of remote sensing (passive or active) are investigated on the basis of the main principle, which consists in interpreting radiometric electromagnetic measurements in a spectral interval where the radiation is sensitive to the physical property of the medium of interest. Problems such as analysis of the composition and structure of the atmosphere using records of scattered radiation, cloud identification, investigation of the thermodynamic state and composition of the system, and reconstruction of the atmospheric temperature profile from processed infrared radiation emitted by the Earth-atmosphere system belong to the class of inverse problems of mathematical physics, which are often ill-posed. In this paper a new class of regularized solutions corresponding to the generally formulated RATP (reconstruction of the atmospheric temperature profile) problem is considered. (author). 14 refs, 3 figs, 3 tabs

  6. Iterative Reconstruction Methods for Inverse Problems in Tomography with Hybrid Data

    DEFF Research Database (Denmark)

    Sherina, Ekaterina

    The goal of these modalities is to quantify physical parameters of materials or tissues inside an object from given interior data, which is measured everywhere inside the object. The advantage of these modalities is that large variations in physical parameters can be resolved and therefore, they have...... data is precisely the reason why reconstructions with a high contrast and a high resolution can be expected. The main contributions of this thesis consist in formulating the underlying mathematical problems with interior data as nonlinear operator equations, theoretically analysing them within...... iteration and the Levenberg-Marquardt method are employed for solving the problems. The first problem considered in this thesis is a problem of conductivity estimation from interior measurements of the power density, known as Acousto-Electrical Tomography. A special case of limited angle tomography...

  7. Uncertainty principles for inverse source problems for electromagnetic and elastic waves

    Science.gov (United States)

    Griesmaier, Roland; Sylvester, John

    2018-06-01

    In isotropic homogeneous media, far fields of time-harmonic electromagnetic waves radiated by compactly supported volume currents, and elastic waves radiated by compactly supported body force densities can be modelled in very similar fashions. Both are projected restricted Fourier transforms of vector-valued source terms. In this work we generalize two types of uncertainty principles recently developed for far fields of scalar-valued time-harmonic waves in Griesmaier and Sylvester (2017 SIAM J. Appl. Math. 77 154–80) to this vector-valued setting. These uncertainty principles yield stability criteria and algorithms for splitting far fields radiated by collections of well-separated sources into the far fields radiated by individual source components, and for the restoration of missing data segments. We discuss proper regularization strategies for these inverse problems, provide stability estimates based on the new uncertainty principles, and comment on reconstruction schemes. A numerical example illustrates our theoretical findings.

  8. Numerical method in reproducing kernel space for an inverse source problem for the fractional diffusion equation

    International Nuclear Information System (INIS)

    Wang, Wenyan; Han, Bo; Yamamoto, Masahiro

    2013-01-01

    We propose a new numerical method in reproducing kernel Hilbert space to solve an inverse source problem for a two-dimensional fractional diffusion equation, where we are required to determine an x-dependent function in a source term from data at the final time. The exact solution is represented in the form of a series and the approximate solution is obtained by truncating the series. Furthermore, a technique is proposed to improve some of the existing methods. We prove that the numerical method is convergent under an a priori assumption on the regularity of solutions. The method is simple to implement. Our numerical results show that the method is effective and robust against noise in L2-space in reconstructing a source function. (paper)

  9. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject-specific MR image, a 3-layered boundary element method (BEM) template head model is currently used, including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed an FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction.

  10. Heat source reconstruction from noisy temperature fields using an optimised derivative Gaussian filter

    Science.gov (United States)

    Delpueyo, D.; Balandraud, X.; Grédiac, M.

    2013-09-01

    The aim of this paper is to present a post-processing technique based on a derivative Gaussian filter to reconstruct heat source fields from temperature fields measured by infrared thermography. Heat sources can be deduced from temperature variations thanks to the heat diffusion equation. Filtering and differentiating are closely related key issues here because the processed temperature fields are unavoidably noisy. We focus only on the diffusion term because it is the most difficult term to estimate in the procedure, as it involves spatial second derivatives (a Laplacian for isotropic materials). This quantity can be reasonably estimated using a convolution of the temperature variation fields with second derivatives of a Gaussian function. The study is first based on synthetic temperature variation fields corrupted by added noise. The filter is optimised in order to reconstruct the heat source fields at best. The influence of both the dimension and the level of a localised heat source is discussed. The results are also compared with another type of processing based on an averaging filter. The second part of the study presents an application to experimental temperature fields measured with an infrared camera on a thin aluminium-alloy plate. Heat sources are generated with an electric heating patch glued to the specimen surface. Heat source fields reconstructed from measured temperature fields are compared with the imposed heat sources. The results illustrate the relevance of the derivative Gaussian filter for reliably extracting heat sources from noisy temperature fields in the experimental thermomechanics of materials.
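
    In practice the Laplacian estimate is a convolution of the noisy temperature field with second derivatives of a Gaussian, which scipy exposes directly through the order argument of gaussian_filter. Below is a sketch on a synthetic field; the material constant, filter width and field are illustrative assumptions, and only the diffusion term of the heat equation is evaluated (derivatives are taken per pixel).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(12)

# Synthetic noisy temperature-variation field with a localized hot spot.
n = 128
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
theta = np.exp(-((x - 0.2) ** 2 + y ** 2) / 0.05)       # temperature variation
theta_noisy = theta + 0.01 * rng.normal(size=theta.shape)

# Laplacian estimated by convolution with second derivatives of a Gaussian:
# gaussian_filter with order=2 along one axis differentiates twice.
sigma = 3.0   # filter width in pixels; this is the parameter to optimise
lap = (gaussian_filter(theta_noisy, sigma, order=(2, 0)) +
       gaussian_filter(theta_noisy, sigma, order=(0, 2)))

# For an isotropic plate, s = rho*c*dtheta/dt - k*lap(theta); only the
# diffusion term, the hardest to estimate from noisy data, is shown here.
k = 1.0
diffusion_term = -k * lap
print("diffusion term range:", diffusion_term.min(), diffusion_term.max())
```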

  11. Reconstruction of source location in a network of gravitational wave interferometric detectors

    International Nuclear Information System (INIS)

    Cavalier, Fabien; Barsuglia, Matteo; Bizouard, Marie-Anne; Brisson, Violette; Clapson, Andre-Claude; Davier, Michel; Hello, Patrice; Kreckelbergh, Stephane; Leroy, Nicolas; Varvella, Monica

    2006-01-01

    This paper deals with the reconstruction of the direction of a gravitational wave source using the detections made by a network of interferometric detectors, mainly the LIGO and Virgo detectors. We suppose that an event has been seen in coincidence using a filter applied to the three detector data streams. Using the arrival time (and its associated error) of the gravitational signal in each detector, the direction of the source in the sky is computed using a χ² minimization technique. For reasonably large signals (SNR > 4.5 in all detectors), the mean angular error between the real location and the reconstructed one is about 1 degree. We also investigate the effect of the network geometry, assuming the same angular response for all interferometric detectors. It appears that the reconstruction quality is not uniform over the sky and is degraded when the source approaches the plane defined by the three detectors. Adding at least one other detector to the LIGO-Virgo network reduces the blind regions and, in the case of six detectors, a precision of less than 1 degree on the source direction can be reached for 99% of the sky.
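
    A minimal sketch of the χ² localization over a sky grid, with rough (purely illustrative) detector positions: predicted inter-detector delays for a trial direction are compared with the measured ones. With only three detectors the direction is determined up to a mirror image across the detector plane, which the sketch accounts for when reporting the error.

```python
import numpy as np

rng = np.random.default_rng(13)
c = 3.0e8

# Approximate detector positions (metres, Earth-centred; illustrative only).
det = np.array([[-2.16e6, -3.83e6,  4.60e6],    # LIGO Hanford
                [-7.43e4, -5.50e6,  3.21e6],    # LIGO Livingston
                [ 4.55e6,  8.43e5,  4.38e6]])   # Virgo

def delays(n_hat):
    """Arrival-time differences w.r.t. detector 0 for direction n_hat."""
    return (det - det[0]) @ n_hat / c

# Simulate a source and noisy timing measurements (0.1 ms timing error).
th, ph = 1.1, 2.0
n_true = np.array([np.sin(th)*np.cos(ph), np.sin(th)*np.sin(ph), np.cos(th)])
sigma_t = 1e-4
t_meas = delays(n_true) + sigma_t * rng.normal(size=3)

# Chi-square minimisation over a grid of trial sky directions.
best, best_chi2 = None, np.inf
for t_ in np.linspace(0, np.pi, 200):
    for p_ in np.linspace(0, 2*np.pi, 400):
        n = np.array([np.sin(t_)*np.cos(p_), np.sin(t_)*np.sin(p_), np.cos(t_)])
        chi2 = np.sum(((delays(n) - t_meas) / sigma_t) ** 2)
        if chi2 < best_chi2:
            best, best_chi2 = n, chi2

# Three detectors fix the direction only up to a mirror image across
# their common plane, so compare against both candidates.
u = np.cross(det[1] - det[0], det[2] - det[0]); u /= np.linalg.norm(u)
n_mirror = n_true - 2 * (n_true @ u) * u
err = min(np.degrees(np.arccos(np.clip(best @ v, -1, 1)))
          for v in (n_true, n_mirror))
print(f"angular error: {err:.2f} deg")
```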

  12. Time-stretch microscopy based on time-wavelength sequence reconstruction from wideband incoherent source

    International Nuclear Information System (INIS)

    Zhang, Chi; Xu, Yiqing; Wei, Xiaoming; Tsia, Kevin K.; Wong, Kenneth K. Y.

    2014-01-01

    Time-stretch microscopy has emerged as an ultrafast optical imaging concept offering an unprecedented combination of imaging speed and sensitivity. However, a dedicated wideband and coherent optical pulse source with high shot-to-shot stability has been mandatory for time-wavelength mapping—the enabling process for ultrahigh-speed wavelength-encoded image retrieval. From the practical point of view, exploring methods to relax the stringent requirements (e.g., temporal stability and coherence) on the source for time-stretch microscopy is thus of great value. In this paper, we demonstrated time-stretch microscopy by reconstructing the time-wavelength mapping sequence from a wideband incoherent source. Utilizing the time-lens focusing mechanism mediated by a narrow-band pulse source, this approach allows generation of a wideband incoherent source, with the spectral efficiency enhanced by a factor of 18. As a proof-of-principle demonstration, time-stretch imaging at MHz scan rates and with diffraction-limited resolution is achieved based on the wideband incoherent source. We note that the concept of time-wavelength sequence reconstruction from a wideband incoherent source can also be generalized to any high-speed optical real-time measurement, where wavelength acts as the information carrier.

  13. Source Plane Reconstruction of the Bright Lensed Galaxy RCSGA 032727-132609

    Science.gov (United States)

    Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Wuyts, Eva; Koester, Benjamin P.; Bayliss, Matthew B.; Barrientos, L. Felipe

    2011-01-01

    We present new HST/WFC3 imaging data of RCS2 032727-132609, a bright lensed galaxy at z=1.7 that is magnified and stretched by the lensing cluster RCS2 032727-132623. Using this new high-resolution imaging, we modify our previous lens model (which was based on ground-based data) to fully understand the lensing geometry, and use it to reconstruct the lensed galaxy in the source plane. This giant arc represents a unique opportunity to peer into 100-pc scale structures in a high redshift galaxy. This new source reconstruction will be crucial for a future analysis of the spatially-resolved rest-UV and rest-optical spectra of the brightest parts of the arc.

  14. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing

    OpenAIRE

    Cohen, M.X.; Ridderinkhof, K.R.

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in ...

  15. Review on solving the forward problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Vergult Anneleen

    2007-11-01

    Abstract Background The aim of electroencephalogram (EEG) source localization is to find the brain areas responsible for EEG waves of interest. It consists of solving forward and inverse problems. The forward problem is solved by starting from a given electrical source and calculating the potentials at the electrodes. These evaluations are necessary to solve the inverse problem, which is defined as finding the brain sources responsible for the measured potentials at the EEG electrodes. Methods While other reviews give an extensive summary of both the forward and inverse problems, this review article focuses on different aspects of solving the forward problem and is intended for newcomers to this research field. Results It starts by focusing on the generators of the EEG: the post-synaptic potentials in the apical dendrites of pyramidal neurons. These cells generate an extracellular current which can be modeled by Poisson's differential equation with Neumann and Dirichlet boundary conditions. The compartments in which these currents flow can be anisotropic (e.g., skull and white matter). In a three-shell spherical head model an analytical expression exists to solve the forward problem. During the last two decades researchers have tried to solve Poisson's equation in a realistically shaped head model obtained from 3D medical images, which requires numerical methods. The following methods are compared with each other: the boundary element method (BEM), the finite element method (FEM) and the finite difference method (FDM). In the last two methods anisotropically conducting compartments can conveniently be introduced. The focus is then set on the use of reciprocity in EEG source localization. It is introduced to speed up the forward calculations, which are here performed for each electrode position rather than for each dipole position. Solving Poisson's equation utilizing FEM and FDM corresponds to solving a large sparse linear system. Iterative
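
    The record breaks off at the iterative solution of the large sparse linear system produced by FEM/FDM discretization. As a minimal sketch of such a solve, with a toy 2D finite-difference Laplacian standing in for a real head model and conjugate gradients as the iterative method:

```python
# Minimal sketch: iterative solution of the sparse system from a
# finite-difference Poisson discretization (a 2D grid, not a head model).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 50                                       # grid points per side (toy size)
I = sp.identity(n)
T = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))
A = sp.kron(I, T) + sp.kron(T, I)            # 5-point Laplacian, n^2 x n^2

b = np.zeros(n * n)
b[(n // 2) * n + n // 2] = 1.0               # unit "current source" at center

phi, info = cg(A, b)                         # conjugate gradients; info == 0 -> converged
print("converged:", info == 0,
      "potential at source:", phi[(n // 2) * n + n // 2])
```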

  16. Accurate Reconstruction of the Roman Circus in Milan by Georeferencing Heterogeneous Data Sources with GIS

    Directory of Open Access Journals (Sweden)

    Gabriele Guidi

    2017-09-01

    This paper presents the methodological approach and the actual workflow for creating the 3D digital reconstruction over time of the ancient Roman Circus of Milan, which is at present completely covered by the urban fabric of the modern city. The diachronic reconstruction is based on a proper mix of quantitative data originating from current 3D surveys and historical sources, such as ancient maps, drawings, archaeological reports, restriction decrees, and old photographs. Where possible, such heterogeneous sources have been georeferenced and stored in a GIS system. In this way the sources have been analyzed in depth, allowing the deduction of geometrical information not explicitly revealed by the available material. A reliable reconstruction of the area in different historical periods has therefore been hypothesized. This research has been carried out within the framework of the project Cultural Heritage Through Time—CHT2, funded by the Joint Programming Initiative on Cultural Heritage (JPI-CH), supported by the Italian Ministry for Cultural Heritage (MiBACT), the Italian Ministry for University and Research (MIUR), and the European Commission.

  17. Blind source separation problem in GPS time series

    Science.gov (United States)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2016-04-01

    A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered data-driven methods. A widely used technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while retaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes a misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. Independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition
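
    A minimal sketch of the BSS comparison described here, using synthetic signals and scikit-learn's PCA and FastICA in place of the vbICA method (all signals and the mixing matrix are made up):

```python
# Mix two synthetic "deformation" signals into three station records, then
# try to recover the sources with PCA and with ICA.
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 2000)
s1 = np.sign(np.sin(2 * t))              # transient-like square signal
s2 = np.sin(7 * t)                       # seasonal-like sinusoid
S = np.c_[s1, s2] + 0.05 * rng.standard_normal((t.size, 2))

A = np.array([[1.0, 0.4, 0.8],           # mixing matrix: sources x stations
              [0.5, 1.0, 0.3]])
X = S @ A                                # observed station time series (2000 x 3)

S_pca = PCA(n_components=2).fit_transform(X)
S_ica = FastICA(n_components=2, random_state=0).fit_transform(X)
# ICA typically recovers s1 and s2 up to scale and permutation; PCA returns
# uncorrelated components that are in general still mixtures of the sources.
```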

  18. Radiation problems expected for the German spallation neutron source

    International Nuclear Information System (INIS)

    Goebel, K.

    1981-01-01

    The German project for the construction of a Spallation Neutron Source with high proton beam power (5.5 MW) will have to cope with a number of radiation problems. The present report describes these problems and proposes solutions for keeping exposures for the staff and release of activity and radiation into the environment as low as reasonably achievable. It is shown that the strict requirements of the German radiation protection regulations can be met. The main problem will be the exposure of maintenance personnel to remanent gamma radiation, as is the case at existing proton accelerators. Closed ventilation and cooling systems will reduce the release of (mainly short-lived) activity to acceptable levels. Shielding requirements for different sections are discussed, and it is demonstrated by calculations and extrapolations from experiments that fence-post doses well below 150 mrem/y can be obtained at distances of the order of 100 metres from the principal source points. The radiation protection system proposed for the Spallation Neutron Source is discussed, in particular the needs for monitor systems and a central radiation protection data base and alarm system. (orig.)

  19. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William

    2011-02-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  20. On rational approximation methods for inverse source problems

    KAUST Repository

    Rundell, William; Hanke, Martin

    2011-01-01

    The basis of most imaging methods is to detect hidden obstacles or inclusions within a body when one can only make measurements on an exterior surface. Such is the ubiquity of these problems, the underlying model can lead to a partial differential equation of any of the major types, but here we focus on the case of steady-state electrostatic or thermal imaging and consider boundary value problems for Laplace's equation. Our inclusions are interior forces with compact support and our data consists of a single measurement of (say) voltage/current or temperature/heat flux on the external boundary. We propose an algorithm that under certain assumptions allows for the determination of the support set of these forces by solving a simpler "equivalent point source" problem, and which uses a Newton scheme to improve the corresponding initial approximation. © 2011 American Institute of Mathematical Sciences.

  1. Bioelectromagnetic forward problem: isolated source approach revis(it)ed.

    Science.gov (United States)

    Stenroos, M; Sarvas, J

    2012-06-07

    Electro- and magnetoencephalography (EEG and MEG) are non-invasive modalities for studying the electrical activity of the brain by measuring voltages on the scalp and magnetic fields outside the head. In the forward problem of EEG and MEG, the relationship between the neural sources and resulting signals is characterized using electromagnetic field theory. This forward problem is commonly solved with the boundary-element method (BEM). The EEG forward problem is numerically challenging due to the low relative conductivity of the skull. In this work, we revise the isolated source approach (ISA) that enables the accurate, computationally efficient BEM solution of this problem. The ISA is formulated for generic basis and weight functions that enable the use of Galerkin weighting. The implementation of the ISA-formulated linear Galerkin BEM (LGISA) is first verified in spherical geometry. Then, the LGISA is compared with conventional Galerkin and symmetric BEM approaches in a realistic 3-shell EEG/MEG model. The results show that the LGISA is a state-of-the-art method for EEG/MEG forward modeling: the ISA formulation increases the accuracy and decreases the computational load. Contrary to some earlier studies, the results show that the ISA increases the accuracy also in the computation of magnetic fields.

  2. The inverse problems of reconstruction in the X-rays, gamma or positron tomographic imaging systems

    International Nuclear Information System (INIS)

    Grangeat, P.

    1999-01-01

    The revolution in imaging brought by tomographic techniques in the 1970s allows the computation of maps of local attenuation or emission activity values. Reconstruction techniques thus connect integral measurements to the distribution of the characteristic quantity by inversion of the measurement equations, and they are a principal application of solution techniques for inverse problems. In the first part, the author recalls the physical principles of measurement in X-ray, gamma, and positron imaging. He then presents the various problems together with their associated inversion techniques. The third part is devoted to the application sector and examples, and the last part concludes with an outlook. (A.L.B.)

  3. Reconstruction of far-field tsunami amplitude distributions from earthquake sources

    Science.gov (United States)

    Geist, Eric L.; Parsons, Thomas E.

    2016-01-01

    The probability distribution of far-field tsunami amplitudes is explained in relation to the distribution of seismic moment at subduction zones. Tsunami amplitude distributions at tide gauge stations follow a similar functional form, well described by a tapered Pareto distribution that is parameterized by a power-law exponent and a corner amplitude. Distribution parameters are first established for eight tide gauge stations in the Pacific, using maximum likelihood estimation. A procedure is then developed to reconstruct the tsunami amplitude distribution that consists of four steps: (1) define the distribution of seismic moment at subduction zones; (2) establish a source-station scaling relation from regression analysis; (3) transform the seismic moment distribution to a tsunami amplitude distribution for each subduction zone; and (4) mix the transformed distribution for all subduction zones to an aggregate tsunami amplitude distribution specific to the tide gauge station. The tsunami amplitude distribution is adequately reconstructed for four tide gauge stations using globally constant seismic moment distribution parameters established in previous studies. In comparisons to empirical tsunami amplitude distributions from maximum likelihood estimation, the reconstructed distributions consistently exhibit higher corner amplitude values, implying that in most cases, the empirical catalogs are too short to include the largest amplitudes. Because the reconstructed distribution is based on a catalog of earthquakes that is much larger than the tsunami catalog, it is less susceptible to the effects of record-breaking events and more indicative of the actual distribution of tsunami amplitudes.
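
    Step (1) of the procedure rests on fitting a tapered Pareto by maximum likelihood. A minimal sketch, assuming the usual survival function S(x) = (x_t/x)^β exp((x_t − x)/x_c) and a synthetic sample in place of tide gauge data:

```python
# Maximum-likelihood fit of a tapered Pareto (power-law exponent beta,
# corner amplitude x_c) to a synthetic amplitude sample.
import numpy as np
from scipy.optimize import minimize

def neg_loglik(params, x, x_t):
    """Negative log-likelihood; pdf follows from S(x) = (x_t/x)**beta
    * exp((x_t - x)/x_c), namely f(x) = S(x) * (beta/x + 1/x_c)."""
    beta, x_c = params
    if beta <= 0 or x_c <= 0:
        return np.inf
    log_pdf = (np.log(beta / x + 1.0 / x_c)
               + beta * np.log(x_t / x) + (x_t - x) / x_c)
    return -np.sum(log_pdf)

rng = np.random.default_rng(1)
x_t = 0.1                                 # threshold amplitude (made up)
x = x_t * (1 + rng.pareto(1.2, size=500)) # rough stand-in sample above x_t

fit = minimize(neg_loglik, x0=(1.0, 5.0), args=(x, x_t), method="Nelder-Mead")
beta_hat, x_c_hat = fit.x
print("power-law exponent:", beta_hat, "corner amplitude:", x_c_hat)
```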

  4. Probabilist methods applied to electric source problems in nuclear safety

    International Nuclear Information System (INIS)

    Carnino, A.; Llory, M.

    1979-01-01

    Nuclear safety analysts are frequently asked to quantify safety margins and evaluate hazards, and probabilistic methods have proved the most promising way to do so. Without completely replacing deterministic safety analysis, they are now commonly used to assess the reliability or availability of systems as well as to determine likely accident sequences. This paper describes an application concerning the problem of electric power sources, indicating the methods used: the calculation of the probability of losing all the electric power sources of a pressurized water reactor power station, the evaluation of diesel generator reliability by failure event trees, and the determination of the accident sequences that could be brought about by the 'total loss of electric power' initiator and affect the installation or the environment. [fr]

  5. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model into our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
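
    A minimal sketch of the model-based least-squares step, with a random sparse matrix standing in for the modeled CSI system response (the real system model would come from the source and mask characterization):

```python
# Recover an object x from measurements b = A x + noise by damped least
# squares, given a forward-model matrix A.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(2)
n_pix, n_meas = 256, 1024                 # object pixels, detector readings
A = sp.random(n_meas, n_pix, density=0.05, random_state=2, format="csr")

x_true = np.zeros(n_pix)
x_true[100:110] = 1.0                     # simple synthetic object
b = A @ x_true + 0.01 * rng.standard_normal(n_meas)

x_hat = lsqr(A, b, damp=0.1)[0]           # damped (Tikhonov-like) least squares
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```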

  6. Full field image reconstruction is suitable for high-pitch dual-source computed tomography.

    Science.gov (United States)

    Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas

    2012-11-01

    The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full-FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 mm and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measures analysis of variance. In phantom and patient data, full-FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full-FOV image quality was not compromised at a pitch of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited- and full-FOV images (P = 0.0097). With this new image reconstruction technique, full-FOV image reconstruction can be used up to a pitch of 2.0.

  7. WASS: An open-source pipeline for 3D stereo reconstruction of ocean waves

    Science.gov (United States)

    Bergamasco, Filippo; Torsello, Andrea; Sclavo, Mauro; Barbariol, Francesco; Benetazzo, Alvise

    2017-10-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community and industry. Indeed, recent advances in both computer vision algorithms and computer processing power now allow the study of the spatio-temporal wave field with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a sea-wave 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the reconstruction process from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS (http://www.dais.unive.it/wass), an open-source stereo processing pipeline for sea-wave 3D reconstruction. Our tool completely automates all the steps required to estimate dense point clouds from stereo images. Namely, it computes the extrinsic parameters of the stereo rig, so that no delicate calibration has to be performed in the field. It implements a fast 3D dense stereo reconstruction procedure based on the consolidated OpenCV library and, lastly, it includes a set of filtering techniques, both on the disparity map and the produced point cloud, to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface. In this paper, we describe the architecture of WASS and the internal algorithms involved. The pipeline workflow is shown step by step and demonstrated on real datasets acquired at sea.
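
    A minimal sketch of the dense-stereo core that a pipeline like WASS automates, using OpenCV's semi-global matcher; the file names, matcher parameters, and placeholder Q matrix below are assumptions, not WASS defaults:

```python
# Dense disparity from a rectified stereo pair, then reprojection to 3D.
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # assumed rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

matcher = cv2.StereoSGBM_create(
    minDisparity=0, numDisparities=128,   # must be divisible by 16
    blockSize=5, P1=8 * 5 ** 2, P2=32 * 5 ** 2,
    uniquenessRatio=10, speckleWindowSize=100, speckleRange=2,
)
# StereoSGBM returns fixed-point disparities scaled by 16.
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# With the rig's reprojection matrix Q (from calibration/rectification),
# the disparity map becomes a 3D point cloud.
Q = np.eye(4, dtype=np.float32)           # placeholder; use the real Q matrix
points = cv2.reprojectImageTo3D(disparity, Q)
```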

  8. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine whether these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform, with NASA challenges from each Center posted for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  9. The completeness condition and source orbits for exact image reconstruction in 3D cone-beam CT

    International Nuclear Information System (INIS)

    Mao Xiping; Kang Kejun

    1997-01-01

    The completeness condition for exact image reconstruction in 3D cone-beam CT is carefully analyzed in theory, followed by a discussion of some source orbits that fulfill the completeness condition.
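
    For reference, the completeness condition in question is commonly stated as Tuy's condition. In its standard textbook form (my paraphrase, not quoted from the record), with a(t) the source orbit and Ω the object support:

```latex
% Tuy completeness condition: every plane through the object must
% intersect the source orbit, transversally.
\forall x \in \Omega,\ \forall \xi \in S^2:\quad
\exists\, t \ \text{such that}\ \langle a(t) - x,\ \xi \rangle = 0
\ \text{and}\ \langle a'(t),\ \xi \rangle \neq 0 .
```

    A single circular orbit fails this condition for planes nearly parallel to the orbit plane, which is why trajectories such as circle-plus-line or dual circles are typically discussed.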

  10. Review on solving the inverse problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Fabri Simon G

    2008-11-01

    Abstract In this primer, we give a review of the inverse problem for EEG source localization. It is intended for researchers new to the field, to give insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non-parametric algorithms, and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem, namely the non-parametric and parametric methods. The main difference between the two is whether or not a fixed number of dipoles is assumed a priori. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher-resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
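
    As a minimal sketch of the simplest family above, the (weighted) minimum norm estimate, with a random lead field standing in for a real head model:

```python
# Minimum norm estimate (MNE) and a depth-weighted variant (WMN).
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 32, 500
L = rng.standard_normal((n_sensors, n_sources))  # lead-field (forward) matrix
y = rng.standard_normal(n_sensors)               # scalp potentials, one instant

lam = 0.1                                        # regularization parameter
# MNE: x = L^T (L L^T + lam I)^(-1) y  -- smallest-norm sources fitting the data.
x_mne = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# WMN: compensate the depth bias by column-norm weighting of L.
W = np.diag(1.0 / np.linalg.norm(L, axis=0))
Lw = L @ W
x_wmn = W @ (Lw.T @ np.linalg.solve(Lw @ Lw.T + lam * np.eye(n_sensors), y))
```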

  11. Progress toward the development and testing of source reconstruction methods for NIF neutron imaging.

    Science.gov (United States)

    Loomis, E N; Grim, G P; Wilde, C; Wilson, D C; Morgan, G; Wilke, M; Tregillis, I; Merrill, F; Clark, D; Finch, J; Fittinghoff, D; Bower, D

    2010-10-01

    Development of analysis techniques for neutron imaging at the National Ignition Facility is an important and difficult task for the detailed understanding of high-neutron-yield inertial confinement fusion implosions. Once developed, these methods must provide accurate images of the hot and cold fuels so that information about the implosion, such as symmetry and areal density, can be extracted. One method under development involves the numerical inversion of the pinhole image using knowledge of neutron transport through the pinhole aperture from Monte Carlo simulations. In this article we present results of source reconstructions based on simulated images that test the method's effectiveness with regard to pinhole misalignment.

  12. Point-source reconstruction with a sparse light-sensor array for optical TPC readout

    International Nuclear Information System (INIS)

    Rutter, G; Richards, M; Bennieston, A J; Ramachers, Y A

    2011-01-01

    A reconstruction technique for sparse-array optical signal readout is introduced and applied to the generic challenge of large-area readout of a large number of point light sources. This challenge finds a prominent example in future large-volume neutrino detector studies based on liquid argon. It is concluded that the sparse-array option may be ruled out on the grounds of the required number of channels when compared to a benchmark derived from charge readout on wire planes. Smaller-scale detectors, however, could benefit from this technology.

  13. EXTENDED OPERATION PROBLEMS AND RECONSTRUCTION OF LARGE-PANEL FIVE-STOREY BUILDINGS OF THE 1950s-60s

    Directory of Open Access Journals (Sweden)

    BOLSHAKOV V. I.,

    2016-01-01

    Raising of the problem. In many regions the housing stock in use is more than half a century old. According to research by the analytical center of the Association of Ukrainian Cities, the country today has 25.5 thousand houses built to the first mass-series designs of large-panel, block and brick construction, with a total area of 72 million m², that require reconstruction and modernization. In general, most of the housing stock of Ukraine is in poor technical condition due to deficient funding, and the tendency toward premature aging of the housing stock persists. One of the major problems of the modern construction industry is the extension of housing service life, in particular for the buildings of the era of mass construction of the 1950s-60s, called "Khrushchevki". According to the State Statistics Service of Ukraine, the deterioration of residential buildings in Ukraine amounts to 47.2%, which calls for immediate action. At first view, the most acceptable way seems to be the reconstruction of the "Khrushchevki". However, reconstruction is a complex problem for the construction industry: the economic component, the social factor, and the views of the residents of these houses must be reconciled to create a technologically and economically viable result. Analysis of publications. The problem of "Khrushchevki" reconstruction is the subject of continual research by leading builders of Ukraine. Researchers' attention to both the technological problems [1-3] and the economic components [4-6] gives, in general, an idea of the scale of work required to overcome the impending crisis. The purpose of the article. Defining the main problems of the exploitation of panel five-storey buildings of the 1950s-60s and their residents' views of the existing inconveniences, as well as the associated economic, technological and legal problems in the implementation of building reconstruction. Conclusions

  14. Smartphones Get Emotional: Mind Reading Images and Reconstructing the Neural Sources

    DEFF Research Database (Denmark)

    Petersen, Michael Kai; Stahlhut, Carsten; Stopczynski, Arkadiusz

    2011-01-01

    Combining a 14-channel neuroheadset with a smartphone to capture and process brain imaging data, we demonstrate the ability to distinguish among emotional responses reflected in different scalp potentials when viewing pleasant and unpleasant pictures compared to neutral content. Clustering independent components across subjects we are able to remove artifacts and identify common sources of synchronous brain activity, consistent with earlier findings based on conventional EEG equipment. Applying a Bayesian approach to reconstruct the neural sources not only facilitates differentiation of emotional responses but may also provide an intuitive interface for interacting with a 3D rendered model of brain activity. Integrating a wireless EEG set with a smartphone thus offers completely new opportunities for modeling the mental state of users as well as providing a basis for novel bio-feedback applications.

  15. EEG/MEG Source Reconstruction with Spatial-Temporal Two-Way Regularized Regression

    KAUST Repository

    Tian, Tian Siva

    2013-07-11

    In this work, we propose a spatial-temporal two-way regularized regression method for reconstructing neural source signals from EEG/MEG time course measurements. The proposed method estimates the dipole locations and amplitudes simultaneously through minimizing a single penalized least squares criterion. The novelty of our methodology is the simultaneous consideration of three desirable properties of the reconstructed source signals, that is, spatial focality, spatial smoothness, and temporal smoothness. The desirable properties are achieved by using three separate penalty functions in the penalized regression framework. Specifically, we impose a roughness penalty in the temporal domain for temporal smoothness, and a sparsity-inducing penalty and a graph Laplacian penalty in the spatial domain for spatial focality and smoothness. We develop a computationally efficient multilevel block coordinate descent algorithm to implement the method. Using a simulation study with several settings of different spatial complexity and two real MEG examples, we show that the proposed method outperforms existing methods that use only a subset of the three penalty functions. © 2013 Springer Science+Business Media New York.
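
    In schematic form (my notation under the usual linear forward model; the authors' exact criterion may differ in its details), a criterion combining the three penalties reads:

```latex
% Schematic penalized least-squares criterion: data fit plus a sparsity
% penalty (spatial focality), a graph-Laplacian penalty (spatial smoothness),
% and a roughness penalty (temporal smoothness).
\hat{S} \;=\; \arg\min_{S}\;
\| Y - L S \|_F^2
\;+\; \lambda_1 \sum_{j} \| s_{j\cdot} \|_2
\;+\; \lambda_2\, \operatorname{tr}\!\left( S^{\top} \Delta S \right)
\;+\; \lambda_3\, \| S D^{\top} \|_F^2 ,
```

    where Y holds the sensor measurements, L the lead field, s_{j·} the time course of source j, Δ a spatial graph Laplacian, and D a temporal differencing operator.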

  16. MEG/EEG source reconstruction, statistical evaluation, and visualization with NUTMEG.

    Science.gov (United States)

    Dalal, Sarang S; Zumer, Johanna M; Guggisberg, Adrian G; Trumpis, Michael; Wong, Daniel D E; Sekihara, Kensuke; Nagarajan, Srikantan S

    2011-01-01

    NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions.

  17. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    Science.gov (United States)

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  18. Sources and methods to reconstruct past masting patterns in European oak species.

    Science.gov (United States)

    Szabó, Péter

    2012-01-01

    The irregular occurrence of good seed years in forest trees is known in many parts of the world. Mast year frequency in the past few decades can be examined through field observational studies; however, masting patterns in the more distant past are equally important for gaining a better understanding of long-term forest ecology. Past masting patterns can be studied through the examination of historical written sources. These pose considerable challenges, because the data in them were usually not recorded with the aim of providing information about masting. Several studies have examined masting in the deeper past; however, the authors hardly ever considered the methodological implications of using and combining various source types. This paper provides a critical overview of the types of archival written sources that are available for the reconstruction of past masting patterns for European oak species and proposes a method to unify and evaluate different types of data. The available sources cover approximately eight centuries and fall into two basic categories: direct observations of the amount of acorns, and references to sums of money received in exchange for access to acorns. Because archival sources are highly varied in origin and quality, the optimal solution for creating databases of past masting data is a three-point scale: zero mast, moderate mast, good mast. When larger amounts of data are available in a unified three-point-scale database, they can be used to test hypotheses about past masting frequencies, the driving forces of masting or regional masting patterns.

  19. WASS: an open-source stereo processing pipeline for sea waves 3D reconstruction

    Science.gov (United States)

    Bergamasco, Filippo; Benetazzo, Alvise; Torsello, Andrea; Barbariol, Francesco; Carniel, Sandro; Sclavo, Mauro

    2017-04-01

    Stereo 3D reconstruction of ocean waves is gaining more and more popularity in the oceanographic community. In fact, recent advances in both computer vision algorithms and CPU processing power now allow the study of spatio-temporal wave fields with unprecedented accuracy, especially at small scales. Even if simple in theory, multiple details are difficult to master for a practitioner, so that the implementation of a 3D reconstruction pipeline is in general considered a complex task. For instance, camera calibration, reliable stereo feature matching and mean sea-plane estimation are all factors for which a well-designed implementation can make the difference in obtaining valuable results. For this reason, we believe that the open availability of a well-tested software package that automates the steps from stereo images to a 3D point cloud would be a valuable addition for future research in this area. We present WASS, a completely open-source stereo processing pipeline for sea-wave 3D reconstruction, available at http://www.dais.unive.it/wass/. Our tool completely automates the recovery of dense point clouds from stereo images by providing three main functionalities. First, WASS can automatically recover the extrinsic parameters of the stereo rig (up to scale), so that no delicate calibration has to be performed in the field. Second, WASS implements a fast 3D dense stereo reconstruction procedure so that an accurate 3D point cloud can be computed from each stereo pair. We rely on the well-consolidated OpenCV library both for image stereo rectification and disparity map recovery. Lastly, a set of 2D and 3D filtering techniques, both on the disparity map and the produced point cloud, is implemented to remove the vast majority of erroneous points that can naturally arise while analyzing the optically complex nature of the water surface (examples are sun glares, large white-capped areas, fog and water aerosol, etc.). Developed to be as fast as possible, WASS

  20. An artificial neural network approach to reconstruct the source term of a nuclear accident

    International Nuclear Information System (INIS)

    Giles, J.; Palma, C. R.; Weller, P.

    1997-01-01

    This work makes use of one of the main features of artificial neural networks: their ability to 'learn' from sets of known input and output data. Indeed, a trained artificial neural network can be used to make predictions about the input data when the output is known, and this feedback process enables one to reconstruct the source term from field observations. With this aim, a set of artificial neural networks has been trained using the projections of a segmented-plume atmospheric dispersion model at fixed points, simulating a set of gamma detectors located outside the perimeter of a nuclear facility. The resulting set of artificial neural networks was used to determine the release fraction and rate for each of the noble gases, iodines and particulate fission products that could originate from a nuclear accident. Model projections were made using a large data set consisting of effective release height, release fraction of noble gases, iodines and particulate fission products, atmospheric stability, wind speed and wind direction. The model computed nuclide-specific gamma dose rates. The locations of the detectors were chosen taking into account both building shine and wake effects, and varied in distance between 800 and 1200 m from the reactor. The inputs to the artificial neural networks consisted of the measurements from the detector array, atmospheric stability, wind speed and wind direction; the outputs comprised a set of release fractions and heights. Once trained, the artificial neural networks were used to reconstruct the source term from the detector responses for data sets not used in training. The preliminary results are encouraging and show that the noble gas and particulate fission product release fractions are well determined.
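
    A minimal sketch of the same train-then-invert idea, with a toy forward model standing in for the segmented-plume dispersion model and a small scikit-learn network; every function and constant here is a made-up stand-in:

```python
# Train a network on simulated (detector doses, meteorology) -> release
# fraction pairs, then use it to invert unseen detector readings.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)

def simulate_doses(release, wind):
    """Toy forward model: gamma dose rates at 8 detectors (illustrative)."""
    geometry = np.abs(np.sin(np.outer(wind, np.arange(1, 9))))
    return release[:, None] * geometry + 0.01 * rng.standard_normal((wind.size, 8))

release_frac = rng.uniform(0.0, 1.0, 5000)     # training "source terms"
wind = rng.uniform(0.0, np.pi, 5000)           # training meteorology
X = np.c_[simulate_doses(release_frac, wind), wind]
y = release_frac

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
net.fit(X[:4500], y[:4500])                    # learn the inverse mapping
print("held-out R^2:", net.score(X[4500:], y[4500:]))
```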

  1. Reconstructing the nature of the first cosmic sources from the anisotropic 21-cm signal.

    Science.gov (United States)

    Fialkov, Anastasia; Barkana, Rennan; Cohen, Aviad

    2015-03-13

    The redshifted 21-cm background is expected to be a powerful probe of the early Universe, carrying both cosmological and astrophysical information from a wide range of redshifts. In particular, the power spectrum of fluctuations in the 21-cm brightness temperature is anisotropic due to the line-of-sight velocity gradient, which in principle allows for a simple extraction of this information in the limit of linear fluctuations. However, recent numerical studies suggest that the 21-cm signal is actually rather complex, and its analysis likely depends on detailed model fitting. We present the first realistic simulation of the anisotropic 21-cm power spectrum over a wide period of early cosmic history. We show that on observable scales, the anisotropy is large and thus measurable at most redshifts, and its form tracks the evolution of 21-cm fluctuations as they are produced early on by Lyman-α radiation from stars, then switch to x-ray radiation from early heating sources, and finally to ionizing radiation from stars. In particular, we predict a redshift window during cosmic heating (at z∼15), when the anisotropy is small, during which the shape of the 21-cm power spectrum on large scales is determined directly by the average radial distribution of the flux from x-ray sources. This makes possible a model-independent reconstruction of the x-ray spectrum of the earliest sources of cosmic heating.

  2. Three-dimensional reconstruction of neutron, gamma-ray, and x-ray sources using spherical harmonic decomposition

    Science.gov (United States)

    Volegov, P. L.; Danly, C. R.; Fittinghoff, D.; Geppert-Kleinrath, V.; Grim, G.; Merrill, F. E.; Wilde, C. H.

    2017-11-01

    Neutron, gamma-ray, and x-ray imaging are important diagnostic tools at the National Ignition Facility (NIF) for measuring the two-dimensional (2D) size and shape of the neutron-producing region, for probing the remaining ablator, and for measuring the extent of the DT plasma during the stagnation phase of inertial confinement fusion implosions. Due to the difficulty and expense of building these imagers, at most only a few two-dimensional projection images will be available to reconstruct the three-dimensional (3D) sources. In this paper, we present a technique that has been developed for the 3D reconstruction of neutron, gamma-ray, and x-ray sources from a minimal number of 2D projections using spherical harmonic decomposition. We present the detailed algorithms used for this characterization and the results of reconstructed sources from experimental neutron and x-ray data collected at OMEGA and NIF.
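
    A minimal sketch of a spherical-harmonic decomposition of a direction-dependent quantity, fitting low-order coefficients by least squares; the sampled "source shape" here is synthetic, not NIF data:

```python
# Fit low-order spherical-harmonic coefficients to samples on the sphere.
import numpy as np
from scipy.special import sph_harm   # SciPy order: (m, l, azimuth, polar)

rng = np.random.default_rng(5)
az = rng.uniform(0, 2 * np.pi, 400)            # sample directions
pol = np.arccos(rng.uniform(-1, 1, 400))

# Synthetic "source radius vs. direction": sphere plus a Y_2^0 distortion.
r = 1.0 + 0.2 * np.real(sph_harm(0, 2, az, pol))

# Design matrix of real parts of Y_l^m up to l = 2 (a simplification that
# suffices for this real, axisymmetric example), solved by least squares.
lm = [(l, m) for l in range(3) for m in range(-l, l + 1)]
B = np.column_stack([np.real(sph_harm(m, l, az, pol)) for l, m in lm])
coeffs, *_ = np.linalg.lstsq(B, r, rcond=None)
for (l, m), c in zip(lm, coeffs):
    if abs(c) > 1e-3:
        print(f"l={l} m={m}: {c:+.3f}")
```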

  3. INLINING 3D RECONSTRUCTION, MULTI-SOURCE TEXTURE MAPPING AND SEMANTIC ANALYSIS USING OBLIQUE AERIAL IMAGERY

    Directory of Open Access Journals (Sweden)

    D. Frommholz

    2016-06-01

    This paper proposes an in-line method for the simplified reconstruction of city buildings from nadir and oblique aerial images that are simultaneously used for multi-source texture mapping with minimal resampling. Further, the resulting unrectified texture atlases are analyzed for façade elements like windows to be reintegrated into the original 3D models. Tests on real-world data of Heligoland, Germany, comprising more than 800 buildings exposed a median positional deviation of 0.31 m at the façades compared to the cadastral map, a correctness of 67% for the detected windows, and good visual quality when rendered with GPU-based perspective correction. As part of the process, building reconstruction takes the oriented input images and transforms them into dense point clouds by semi-global matching (SGM). The point sets undergo local RANSAC-based regression and topology analysis to detect adjacent planar surfaces and determine their semantics. Based on this information the roof, wall and ground surfaces found are intersected and limited in their extension to form a closed 3D building hull. For texture mapping, the hull polygons are projected into each possible input bitmap to find suitable color sources regarding coverage and resolution. Occlusions are detected by ray-casting a full-scale digital surface model (DSM) of the scene and stored in pixel-precise visibility maps. These maps are used to derive overlap statistics and radiometric adjustment coefficients to be applied when the visible image parts for each building polygon are copied into a compact texture atlas, without resampling whenever possible. The atlas bitmap is passed to a commercial object-based image analysis (OBIA) tool running a custom rule set to identify windows on the contained façade patches. Following multi-resolution segmentation and classification based on brightness and contrast differences, potential window objects are evaluated against geometric

  4. Beyond Open Source Software: Solving Common Library Problems Using the Open Source Hardware Arduino Platform

    Directory of Open Access Journals (Sweden)

    Jonathan Younker

    2013-06-01

    Using open-source hardware platforms like the Arduino, libraries have the ability to quickly and inexpensively prototype custom hardware solutions to common library problems. The authors present the Arduino environment: what it is, what it does, and how it was used at the James A. Gibson Library at Brock University to create a production portable barcode-scanning utility for in-house use statistics collection, as well as a prototype for a service desk statistics tabulation program’s hardware interface.

  5. 40 CFR Table 1 to Subpart Oooo of... - Emission Limits for New or Reconstructed and Existing Affected Sources in the Printing, Coating...

    Science.gov (United States)

    2010-07-01

    ... Reconstructed and Existing Affected Sources in the Printing, Coating and Dyeing of Fabrics and Other Textiles... SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants: Printing, Coating, and Dyeing...—Emission Limits for New or Reconstructed and Existing Affected Sources in the Printing, Coating and Dyeing...

  6. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems

    Directory of Open Access Journals (Sweden)

    Faridah Hani Mohamed Salleh

    2017-01-01

    Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods had been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by the past studies were not specifically geared towards proving the ability of GRN prediction methods in avoiding the occurrences of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRN from gene expression data and to avoid wrongly inferring an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations of the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting the random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure that had been proposed in this work. The experiment revealed that the number of cascade errors had been very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.

  7. Multiple Linear Regression for Reconstruction of Gene Regulatory Networks in Solving Cascade Error Problems.

    Science.gov (United States)

    Salleh, Faridah Hani Mohamed; Zainudin, Suhaila; Arif, Shereena M

    2017-01-01

    Gene regulatory network (GRN) reconstruction is the process of identifying regulatory gene interactions from experimental data through computational analysis. One of the main reasons for the reduced performance of previous GRN methods had been inaccurate prediction of cascade motifs. Cascade error is defined as the wrong prediction of cascade motifs, where an indirect interaction is misinterpreted as a direct interaction. Despite the active research on various GRN prediction methods, the discussion on specific methods to solve problems related to cascade errors is still lacking. In fact, the experiments conducted by the past studies were not specifically geared towards proving the ability of GRN prediction methods in avoiding the occurrences of cascade errors. Hence, this research aims to propose Multiple Linear Regression (MLR) to infer GRN from gene expression data and to avoid wrongly inferring of an indirect interaction (A → B → C) as a direct interaction (A → C). Since the number of observations of the real experiment datasets was far less than the number of predictors, some predictors were eliminated by extracting the random subnetworks from global interaction networks via an established extraction method. In addition, the experiment was extended to assess the effectiveness of MLR in dealing with cascade error by using a novel experimental procedure that had been proposed in this work. The experiment revealed that the number of cascade errors had been very minimal. Apart from that, the Belsley collinearity test proved that multicollinearity did affect the datasets used in this experiment greatly. All the tested subnetworks obtained satisfactory results, with AUROC values above 0.5.
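
    A minimal sketch of the MLR idea on the cascade example from the abstract (A → B → C), with synthetic expression data; regressing C on both A and B lets the fit explain away the indirect A → C link:

```python
# Regress each gene's expression on all other genes and keep the strongest
# coefficients as candidate regulators.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
n_samples = 200
A = rng.standard_normal(n_samples)
B = 0.9 * A + 0.1 * rng.standard_normal(n_samples)     # A -> B
C = 0.9 * B + 0.1 * rng.standard_normal(n_samples)     # B -> C
X = np.c_[A, B, C]
genes = ["A", "B", "C"]

for j, target in enumerate(genes):
    predictors = np.delete(X, j, axis=1)
    names = [g for k, g in enumerate(genes) if k != j]
    coef = LinearRegression().fit(predictors, X[:, j]).coef_
    print(target, "<-", dict(zip(names, np.round(coef, 2))))
# Because B is included when regressing C, the direct A -> C coefficient
# stays near zero: the regression "explains away" the cascade.
```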

  8. The Iterative Reweighted Mixed-Norm Estimate for Spatio-Temporal MEG/EEG Source Reconstruction.

    Science.gov (United States)

    Strohmeier, Daniel; Bekhti, Yousra; Haueisen, Jens; Gramfort, Alexandre

    2016-10-01

    Source imaging based on magnetoencephalography (MEG) and electroencephalography (EEG) allows for the non-invasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, constraints are required. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation is a common assumption. It is often taken into account using convex constraints based on the l1-norm. The resulting source estimates are however biased in amplitude and often suboptimal in terms of source selection due to high correlations in the forward model. In this work, we demonstrate that an inverse solver based on a block-separable penalty with a Frobenius norm per block and an l0.5-quasinorm over blocks addresses both of these issues. For solving the resulting non-convex optimization problem, we propose the iterative reweighted Mixed Norm Estimate (irMxNE), an optimization scheme based on iterative reweighted convex surrogate optimization problems, which are solved efficiently using a block coordinate descent scheme and an active set strategy. We compare the proposed sparse imaging method to the dSPM and the RAP-MUSIC approach based on two MEG data sets. We provide empirical evidence based on simulations and analysis of MEG data that the proposed method improves on the standard Mixed Norm Estimate (MxNE) in terms of amplitude bias, support recovery, and stability.
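
    A simplified sketch of the reweighting idea (not the authors' implementation): repeatedly solve a weighted L21-regularized problem by rescaling the gain matrix columns, with weights derived from the previous estimate so that the composite penalty approximates the l0.5-quasinorm over blocks:

```python
# Iterative reweighting for block-sparse source estimation; all sizes,
# penalties, and data below are toy stand-ins.
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.default_rng(7)
n_sens, n_src, n_times = 20, 60, 15
G = rng.standard_normal((n_sens, n_src))            # forward (gain) matrix
X_true = np.zeros((n_src, n_times))
X_true[5] = np.sin(np.linspace(0, np.pi, n_times))  # one active source
Y = G @ X_true + 0.01 * rng.standard_normal((n_sens, n_times))

w = np.ones(n_src)
for _ in range(5):                                  # reweighting iterations
    model = MultiTaskLasso(alpha=0.01, fit_intercept=False, max_iter=5000)
    model.fit(G / w, Y)                             # column scaling = weighting
    X_hat = model.coef_.T / w[:, None]              # undo the scaling
    norms = np.sqrt(np.linalg.norm(X_hat, axis=1))
    w = 1.0 / (2.0 * norms + 1e-12)                 # small blocks cost more
print("active sources:",
      np.flatnonzero(np.linalg.norm(X_hat, axis=1) > 1e-6))
```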

  9. Environmental problems connected to the use of renewable energy sources

    International Nuclear Information System (INIS)

    Mottana, A.; Pignotti, S.

    2000-01-01

    The development of renewable energy sources (FER) can represent a fundamental answer to growing energy needs and the demand for improved environmental quality. Renewable sources, however, also have an environmental cost, which may be of little importance in the global balance but can have a large impact at the local level. Among the FER, the author considers hydroelectric power, biomass and wind energy, since they are the most relevant to the aims of this discussion. [it]

  10. Digital Reconstruction of AN Archaeological Site Based on the Integration of 3d Data and Historical Sources

    Science.gov (United States)

    Guidi, G.; Russo, M.; Angheleddu, D.

    2013-02-01

    The methodology proposed in this paper is based on an integrated approach for creating a 3D digital reconstruction of an archaeological site, making extensive use of the 3D documentation of the site in its current state, followed by an iterative interaction between archaeologists and digital modelers that leads to a progressive refinement of the reconstructive hypotheses. The starting point of the method is the reality-based model, which, together with ancient drawings and documents, is used for generating the first reconstructive step. Such a rough approximation of a possible architectural structure can be annotated with archaeological considerations, which have to be confronted with geometrical constraints, producing a reduction of the reconstructive hypotheses to a limited set, each one to be archaeologically evaluated. This refinement loop on the reconstructive choices is iterated until the result becomes convincing from both points of view, integrating all the available sources in the best way. The proposed method has been verified on the ruins of five temples in the My Son site, a wide archaeological area located in central Vietnam. The integration of 3D survey data and historical documentation has made it possible to support a digital reconstruction of no-longer-existing architectures, developing their three-dimensional digital models step by step, from rough shapes to highly sophisticated virtual prototypes.

  11. A compressed sensing based reconstruction algorithm for synchrotron source propagation-based X-ray phase contrast computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Melli, Seyed Ali, E-mail: sem649@mail.usask.ca [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Wahid, Khan A. [Department of Electrical and Computer Engineering, University of Saskatchewan, Saskatoon, SK (Canada); Babyn, Paul [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada); Montgomery, James [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Snead, Elisabeth [Western College of Veterinary Medicine, University of Saskatchewan, Saskatoon, SK (Canada); El-Gayed, Ali [College of Medicine, University of Saskatchewan, Saskatoon, SK (Canada); Pettitt, Murray; Wolkowski, Bailey [College of Agriculture and Bioresources, University of Saskatchewan, Saskatoon, SK (Canada); Wesolowski, Michal [Department of Medical Imaging, University of Saskatchewan, Saskatoon, SK (Canada)

    2016-01-11

    Synchrotron source propagation-based X-ray phase contrast computed tomography is increasingly used in pre-clinical imaging. However, it typically requires a large number of projections, and consequently a large radiation dose, to produce high-quality images. To improve the applicability of this imaging technique, reconstruction algorithms that can reduce the radiation dose and acquisition time without degrading image quality are needed. The proposed research focused on using a novel combination of Douglas–Rachford splitting and randomized Kaczmarz algorithms to solve large-scale total-variation-based optimization in a compressed sensing framework to reconstruct 2D images from a reduced number of projections. Visual assessment and quantitative performance evaluations of a synthetic abdomen phantom and a real reconstructed image of an ex-vivo slice of canine prostate tissue demonstrate that the proposed algorithm is competitive with other well-known reconstruction algorithms. An additional potential benefit of reducing the number of projections is the shorter acquisition window during which sample motion can introduce artifacts. Use of this reconstruction algorithm to reduce the required number of projections in synchrotron source propagation-based X-ray phase contrast computed tomography is an effective form of dose reduction that may pave the way for imaging of in-vivo samples.
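
    A minimal sketch of one named ingredient, the randomized Kaczmarz iteration for a consistent system Ax = b, with rows sampled proportionally to their squared norms; the data are synthetic:

```python
# Randomized Kaczmarz: project the iterate onto one randomly chosen
# row-hyperplane per step.
import numpy as np

rng = np.random.default_rng(8)
m, n = 300, 100
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true

row_norms2 = np.einsum("ij,ij->i", A, A)
probs = row_norms2 / row_norms2.sum()

x = np.zeros(n)
for _ in range(5000):
    i = rng.choice(m, p=probs)                     # pick a row at random
    x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]  # project onto its hyperplane
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```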

  12. The Static Ladder Problem with Two Sources of Friction

    Science.gov (United States)

    Bennett, Jonathan; Mauney, Alex

    2011-01-01

    The problem of a ladder leaning against a wall in static equilibrium is a classic example encountered in introductory mechanics texts. Most discussions of this problem assume that the static frictional force between the ladder and wall can be ignored. A few authors consider the case where the static friction coefficients between ladder/wall…

  13. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.

    Directory of Open Access Journals (Sweden)

    Michael X Cohen

    Full Text Available Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.

  14. EEG source reconstruction reveals frontal-parietal dynamics of spatial conflict processing.

    Science.gov (United States)

    Cohen, Michael X; Ridderinkhof, K Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30-50 Hz), followed by a later alpha-band (8-12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4-8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions.

  15. EEG Source Reconstruction Reveals Frontal-Parietal Dynamics of Spatial Conflict Processing

    Science.gov (United States)

    Cohen, Michael X; Ridderinkhof, K. Richard

    2013-01-01

    Cognitive control requires the suppression of distracting information in order to focus on task-relevant information. We applied EEG source reconstruction via time-frequency linear constrained minimum variance beamforming to help elucidate the neural mechanisms involved in spatial conflict processing. Human subjects performed a Simon task, in which conflict was induced by incongruence between spatial location and response hand. We found an early (∼200 ms post-stimulus) conflict modulation in stimulus-contralateral parietal gamma (30–50 Hz), followed by a later alpha-band (8–12 Hz) conflict modulation, suggesting an early detection of spatial conflict and inhibition of spatial location processing. Inter-regional connectivity analyses assessed via cross-frequency coupling of theta (4–8 Hz), alpha, and gamma power revealed conflict-induced shifts in cortical network interactions: Congruent trials (relative to incongruent trials) had stronger coupling between frontal theta and stimulus-contrahemifield parietal alpha/gamma power, whereas incongruent trials had increased theta coupling between medial frontal and lateral frontal regions. These findings shed new light into the large-scale network dynamics of spatial conflict processing, and how those networks are shaped by oscillatory interactions. PMID:23451201

  16. Problems in the neutron dynamics of source-driven systems

    International Nuclear Information System (INIS)

    Ravetto, P.

    2001-01-01

    The present paper presents some neutronic features of source-driven neutron multiplying systems, with special regards to dynamics, discussing the validity and limitations of classical methods, developed for systems in the vicinity of criticality. Specific characteristics, such as source dominance and the role of delayed neutron emissions are illustrated. Some dynamic peculiarities of innovative concepts proposed for accelerator-driven systems, such as fluid-fuel, are also discussed. The second portion of the work formulates the quasi-static methods for source-driven systems, evidencing its novel features and presenting some numerical results. (author)

  17. Problem of spiral galaxies and satellite radio sources

    International Nuclear Information System (INIS)

    Arp, H.; Carpenter, R.; Gulkis, S.; Klein, M.

    1976-01-01

    A detailed comparison is made between the results of this program and the results of previous investigators. In particular, attention is called to the potentially important implications of an investigation by Tovmasyan, who searched a large number of spirals and found evidence that a small percentage of them apparently have radio satellites located up to 20' from the central galaxy. 15 sources were measured selected from Tovmasyan's list of 43 satellite sources. Results confirm his positions and relative flux densities for each of the sources

  18. Some problems in the categorization of source terms

    International Nuclear Information System (INIS)

    Abbey, F.; Dunbar, I.H.; Hayns, M.R.; Nixon, W.

    1985-01-01

    In recent years techniques for calculating source terms have been considerably improved. It would be unfortunate if the new information were to be blurred by the use of old schemes for the categorization of source terms. In the past categorization schemes have been devised without the question of the general principles of categorization and the available options being addressed explicitly. In this paper these principles are set out, providing a framework within which categorization schemes used in past probabilistic risk assessments and possible future improvements are discussed. In particular the use of input from scoping consequence calculations in deciding how to group source terms, and the question of how modelling uncertainties may be expressed as uncertainties in a final category source terms are considered

  19. Analytical reconstruction schemes for coarse-mesh spectral nodal solution of slab-geometry SN transport problems

    International Nuclear Information System (INIS)

    Barros, R. C.; Filho, H. A.; Platt, G. M.; Oliveira, F. B. S.; Militao, D. S.

    2009-01-01

    Coarse-mesh numerical methods are very efficient in the sense that they generate accurate results in short computational time, as the number of floating point operations generally decrease, as a result of the reduced number of mesh points. On the other hand, they generate numerical solutions that do not give detailed information on the problem solution profile, as the grid points can be located considerably away from each other. In this paper we describe two analytical reconstruction schemes for the coarse-mesh solution generated by the spectral nodal method for neutral particle discrete ordinates (S N ) transport model in slab geometry. The first scheme we describe is based on the analytical reconstruction of the coarse-mesh solution within each discretization cell of the spatial grid set up on the slab. The second scheme is based on the angular reconstruction of the discrete ordinates solution between two contiguous ordinates of the angular quadrature set used in the S N model. Numerical results are given so we can illustrate the accuracy of the two reconstruction schemes, as described in this paper. (authors)

  20. On the finite line source problem in diffusion theory

    International Nuclear Information System (INIS)

    Mikkelsen, T.; Troen, I.; Larsen, S.E.

    1981-09-01

    A simple formula for calculating dispersion from a continuous finite line source, placed at right angles to the mean wind direction, is derived on the basis of statistical theory. Comparison is made with the virtual source concept usually used and this is shown to be correct only in the limit where the virtual time lag Tsub(v) is small compared to the timescale of the turbulence tsub(l). (Auth.)

  1. Iterative image reconstruction for positron emission tomography based on a detector response function estimated from point source measurements

    International Nuclear Information System (INIS)

    Tohme, Michel S; Qi Jinyi

    2009-01-01

    The accuracy of the system model in an iterative reconstruction algorithm greatly affects the quality of reconstructed positron emission tomography (PET) images. For efficient computation in reconstruction, the system model in PET can be factored into a product of a geometric projection matrix and sinogram blurring matrix, where the former is often computed based on analytical calculation, and the latter is estimated using Monte Carlo simulations. Direct measurement of a sinogram blurring matrix is difficult in practice because of the requirement of a collimated source. In this work, we propose a method to estimate the 2D blurring kernels from uncollimated point source measurements. Since the resulting sinogram blurring matrix stems from actual measurements, it can take into account the physical effects in the photon detection process that are difficult or impossible to model in a Monte Carlo (MC) simulation, and hence provide a more accurate system model. Another advantage of the proposed method over MC simulation is that it can easily be applied to data that have undergone a transformation to reduce the data size (e.g., Fourier rebinning). Point source measurements were acquired with high count statistics in a relatively fine grid inside the microPET II scanner using a high-precision 2D motion stage. A monotonically convergent iterative algorithm has been derived to estimate the detector blurring matrix from the point source measurements. The algorithm takes advantage of the rotational symmetry of the PET scanner and explicitly models the detector block structure. The resulting sinogram blurring matrix is incorporated into a maximum a posteriori (MAP) image reconstruction algorithm. The proposed method has been validated using a 3 x 3 line phantom, an ultra-micro resolution phantom and a 22 Na point source superimposed on a warm background. The results of the proposed method show improvements in both resolution and contrast ratio when compared with the MAP

  2. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: By numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R(2), and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides information on whether a specific risk haplotype can be expected to be reconstructed with essentially no or with high misclassification, and thus on the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
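
    The sensitivity and specificity introduced above reduce, per haplotype, to a standard 2x2 misclassification table over chromosomes. A minimal sketch, assuming true and statistically reconstructed haplotype labels are available for each chromosome (the arrays and labels are hypothetical):

    ```python
    import numpy as np

    def haplotype_error_measures(true_h, recon_h, haplotype):
        """Per-haplotype sensitivity and specificity, treating reconstruction
        as a binary classification: 'carries this haplotype' vs 'does not'."""
        truth = np.asarray(true_h) == haplotype
        recon = np.asarray(recon_h) == haplotype
        tp = np.sum(truth & recon)      # correctly assigned carriers
        tn = np.sum(~truth & ~recon)    # correctly assigned non-carriers
        fn = np.sum(truth & ~recon)     # missed carriers
        fp = np.sum(~truth & recon)     # falsely assigned carriers
        sensitivity = tp / (tp + fn) if tp + fn else float("nan")
        specificity = tn / (tn + fp) if tn + fp else float("nan")
        return sensitivity, specificity

    true_h  = ["AC", "AG", "AC", "GC", "AG", "AC"]
    recon_h = ["AC", "AC", "AC", "GC", "AG", "AG"]
    print(haplotype_error_measures(true_h, recon_h, "AC"))  # (0.667, 0.667)
    ```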

  3. [Current problems in the reconstructive surgery of the locomotor apparatus in children].

    Science.gov (United States)

    Kupatadze, D D; Nabokov, V V; Malikov, S A; Polozov, R N; Kanina, L Ia; Veselov, A G

    1997-01-01

    The authors analyze the results of treatment of 778 children with malignant and benign tumors of the bones, pseudoarthroses, amputations of the lower extremities and fingers, injuries of the tendons and vessels, and contused-lacerated wounds of the distal phalanges of the fingers. The possibility of using a precision technique for reconstructive operations on the vessels in children is shown.

  4. Direct and inverse source problems for a space fractional advection dispersion equation

    KAUST Repository

    Aldoghaither, Abeer; Laleg-Kirati, Taous-Meriem; Liu, Da Yan

    2016-01-01

    In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists in determining the source term from final observations. We first derive the analytic solution to the direct problem, which is then used to prove the uniqueness and the instability of the inverse source problem from final measurements.

  5. New and renewable energy sources and the ecological problem. Developments from the Republic of Argentina

    International Nuclear Information System (INIS)

    Moragues, Jaime A.

    1992-01-01

    This paper focuses on renewable energy source developments in Argentina. Each of the sources is described in detail, including environmental aspects. The problems of energy demand, mainly in rural areas, are also presented. 9 figs., 3 tabs

  6. Direct and inverse source problems for a space fractional advection dispersion equation

    KAUST Repository

    Aldoghaither, Abeer

    2016-05-15

    In this paper, direct and inverse problems for a space fractional advection dispersion equation on a finite domain are studied. The inverse problem consists in determining the source term from final observations. We first derive the analytic solution to the direct problem, which we use to prove the uniqueness and the instability of the inverse source problem using final measurements. Finally, we illustrate the results with a numerical example.

  7. Noise reduction by sparse representation in learned dictionaries for application to blind tip reconstruction problem

    International Nuclear Information System (INIS)

    Jóźwiak, Grzegorz

    2017-01-01

    Scanning probe microscopy (SPM) is a well known tool used for the investigation of phenomena in objects in the nanometer size range. However, quantitative results are limited by the size and the shape of the nanoprobe used in experiments. Blind tip reconstruction (BTR) is a very popular method used to reconstruct an upper bound on the shape of the probe. This method is known to be very sensitive to all kinds of interference in the atomic force microscopy (AFM) image. Due to the mathematical morphology calculus involved, the interference makes the BTR results biased rather than randomly disrupted. For this reason, the careful choice of the methods used for image enhancement and denoising, as well as of the shape of the calibration sample, is very important. In the paper, the results of thorough investigations on the shape of a calibration standard are shown. A novel shape is proposed and a tool for the simulation of AFM images of this calibration standard was designed. It was shown that a careful choice of the initial tip allows us to use images of hole structures to blindly reconstruct the shape of a probe. The simulator was used to test the impact of modern filtration algorithms on the BTR process. These techniques are based on sparse approximation with function dictionaries learned on the basis of the image itself. Various learning algorithms and parameters were tested to determine the optimal combination for sparse representation. It was observed that a strong reduction of noise does not guarantee a strong reduction in reconstruction errors. It seems that further improvements will be possible by the combination of BTR and a noise reduction procedure. (paper)
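
    To illustrate the sparse-approximation step on which such dictionary-based filtration rests, the sketch below implements plain orthogonal matching pursuit over a fixed random dictionary. It is a generic illustration under assumed sizes and atom counts, not the learned-dictionary AFM pipeline of the paper.

    ```python
    import numpy as np

    def omp(D, y, n_nonzero):
        """Orthogonal matching pursuit: greedy sparse code of y in a
        dictionary D whose columns are unit-norm atoms."""
        residual = y.copy()
        support, coeffs = [], np.array([])
        for _ in range(n_nonzero):
            # Pick the atom most correlated with the current residual.
            support.append(int(np.argmax(np.abs(D.T @ residual))))
            # Re-fit the coefficients on the active set by least squares.
            coeffs, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
            residual = y - D[:, support] @ coeffs
        x = np.zeros(D.shape[1])
        x[support] = coeffs
        return x

    # Denoising one signal: keep a few atoms, discard the rest as noise.
    rng = np.random.default_rng(1)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    clean = D[:, [3, 77]] @ np.array([2.0, -1.0])
    noisy = clean + 0.05 * rng.standard_normal(64)
    denoised = D @ omp(D, noisy, n_nonzero=2)
    print(np.linalg.norm(denoised - clean))
    ```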

  8. Transport of radioactive sources-an environmental problem

    International Nuclear Information System (INIS)

    Merckaert, G.

    1996-01-01

    Full text: The transport of dangerous goods is subject to various regulations. These can be international, national or regional, and they can differ from country to country. The basis for the regulations on dangerous goods can be found in the Recommendations on the Transport of Dangerous Goods issued by the United Nations Committee of Experts on the Transport of Dangerous Goods (the 'orange book'). For radioactive material, the Regulations for the Safe Transport of Radioactive Material issued by the International Atomic Energy Agency (IAEA) are applied. The UN recommendations provide for 9 classes of dangerous goods; with regard to class 7, which specifically relates to the transport of radioactive material, they refer to the IAEA regulations. These IAEA regulations for their part provide for 13 schedules, varying between weakly and highly radioactive. The radioactive sources used for non-destructive testing or for medical purposes are mostly sealed sources, i.e. the radioactive material is contained in a metallic shell. According to the nature of the isotope and their activity, the sources are transported either in industrial packagings or in type A or type B packagings. According to the mode of transport, whether air, sea, rail or road, various specific rules apply, which are, however, fortunately almost completely harmonized. Special attention is paid to radiation protection, heat removal, and the testing and fabrication of packagings. As a general rule, the safety of transport is based on the safety of the packagings, i.e. their ability to maintain, even in accident conditions, their tightness, their shielding against radiation and their capacity to remove the heat generated by the transported material.

  9. General problems associated with the control and safe use of radiation sources (199)

    International Nuclear Information System (INIS)

    Ahmed, J.U.

    1993-01-01

    There are problems at various levels in ensuring safety in the use of radiation sources. A relatively new problem that warrants international action is the smuggling of radioactive material across international borders. An international convention on the control and safe use of radiation sources is essential to provide a universally harmonized mechanism for ensuring safety

  10. On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs

    KAUST Repository

    Karve, Pranav M.; Kucukcoban, Sezgin; Kallivokas, Loukas F.

    2014-01-01

    to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem

  11. Vision problems are a leading source of modifiable health expenditures.

    Science.gov (United States)

    Rein, David B

    2013-12-13

    According to recent studies, visual problems represent one of the top contributors to economic health burden in the United States. This burden is divided nearly equally between direct expenditures for the care and treatment of visual problems, and the indirect costs of outcomes caused by low vision, including productivity losses, the cost of care, and incremental nursing home placements. A large amount of academic research is devoted to visual science, the biology of the visual system, and the medical treatment of visual disorders. Compared to the burden, a disproportionate share of this research is devoted to the study of retinal disorders and glaucoma. This is understandable, as research into the retina and optic nerve has the potential to unlock fundamental insights into the nature of sight and visual cognition. However, population visual health and the functionality that depends upon it also may benefit greatly from additional research into areas of prevention, rehabilitation, and adaptation. In addition, comparative research into the benefits of resource allocation across prevention, treatment, and rehabilitative resources could lead to improvements in population health.

  12. Cogenerational sources of energies and their allocating problem

    Directory of Open Access Journals (Sweden)

    Badida Miroslav

    1997-12-01

    Full Text Available Energy production in industrial communities consumes a major part of primary raw materials and is one of the sources of ecological impact. Electric power plants and heat-producing plants are among the most investment-intensive facilities, with long payback periods, which stresses the economic, predictive, logistical and environmental aspects of decisions on their allocation. These aspects, together with the price movements that followed the energy crisis, motivate a new conception of combined (cogeneration) units, which are able to use the energy potential of fuels with a higher overall efficiency and, at the same time, can reduce the ecological impact of fossil fuel combustion.

  13. A global health problem caused by arsenic from natural sources

    Energy Technology Data Exchange (ETDEWEB)

    Ng, J.C.; Wang, J.P.; Shraim, A. [University of Queensland, Brisbane, Qld. (Australia). National Research Center for Environmental Toxicology

    2003-09-01

    Arsenic is a carcinogen to both humans and animals. Arsenicals have been associated with cancers of the skin, lung, and bladder. Clinical manifestations of chronic arsenic poisoning include the non-cancer endpoints of hyper- and hypo-pigmentation, keratosis, hypertension, cardiovascular diseases and diabetes. Epidemiological evidence indicates that an arsenic concentration exceeding 50 µg l⁻¹ in drinking water is not protective of public health. The current WHO recommended guideline value for arsenic in drinking water is 10 µg l⁻¹, whereas many developing countries still use a value of 50 µg l⁻¹. It has been estimated that tens of millions of people are at risk of exposure to excessive levels of arsenic from both contaminated water and arsenic-bearing coal from natural sources. The global health implications and possible intervention strategies are also discussed in this review article.

  14. Sourcing human embryos for embryonic stem cell lines: Problems & perspectives

    Directory of Open Access Journals (Sweden)

    Rajvi H Mehta

    2014-01-01

    Full Text Available The ability to successfully derive human embryonic stem cell (hESC) lines from human embryos following in vitro fertilization (IVF) opened up a plethora of potential applications of this technique. These cell lines could be successfully used to increase our understanding of human developmental biology, transplantation medicine and the emerging science of regenerative medicine. The main source for human embryos has been 'discarded' or 'spare' fresh or frozen human embryos following IVF. It is a common practice to stimulate the ovaries of women undergoing any of the assisted reproductive technologies (ART) and retrieve multiple oocytes, which subsequently lead to multiple embryos. Of these, only two or at most three embryos are transferred, while the rest are cryopreserved according to the decision of the couple. In case a couple does not desire to cryopreserve their embryos, all the embryos remaining after embryo transfer can be considered 'spare'; if a couple is no longer in need of their cryopreserved embryos, these too can be considered 'spare'. But the question raised by ethicists is: what about 'slightly' over-stimulating a woman to get a few extra eggs and embryos? The decision becomes more difficult when it comes to 'discarded' embryos. As of today, the quality of embryos is primarily assessed based on morphology and the rate of development, mainly judged by single-point assessment. Despite the many criteria described in the literature, the quality assessment is purely subjective. The questions that arise concern the decision of 'discarding' embryos: what would be the criteria for discarding embryos, and what is the potential 'use' of ESC derived from 'abnormal appearing' embryos? This paper discusses some of the newer methods to procure embryos for the derivation of embryonic stem cell lines which respect the ethical concerns but still provide the source material.

  15. Potential utilization of renewable energy sources and the related problems

    International Nuclear Information System (INIS)

    Roos, I.; Selg, V.

    1996-01-01

    Estonia's most promising resource of renewable energy is natural biomass. In 1994 the use of wood and waste wood formed about 4.9% of the primary energy supply; the available resource will provide for a much higher share of biomass in the future primary energy supply, reaching 9-14%. Along with biomass, wind energy can be considered the largest resource. On the western and northern coast of Estonia, and in particular on the islands, the average wind speed over several years has been 5 m/s. Based on the assumption that the wind speed exceeds 6 m/s in an area that forms ca 1.5% of the Estonian territory (the total area of Estonia is about 45,000 km²) and is 5-6 m/s on about 15% of the total area, and using 0.5 MW/km² for the installation density, very approximate estimates indicate that the maximum hypothetical installed capacity could be 3750 MW. It might be useful to make use of a current maximum of 50 MW, which could enable the generation of approximately 70-100 GWh of energy per year. Although solar energy currently has no practical use in Estonia and the resource of hydro power is also insignificant (only ca 1% of electricity consumption), these two resources of renewable energy hold future promise in view of the use of local resources and of environmental protection. It is not reasonable to regard renewable energy sources as a substitute for the traditional oil shale-based power engineering in Estonia, but, to some extent, local energy demand can be covered by renewable energy sources. Thus, they can contribute to the reduction of greenhouse gas emissions in Estonia.

  16. Theoretical stability in coefficient inverse problems for general hyperbolic equations with numerical reconstruction

    Science.gov (United States)

    Yu, Jie; Liu, Yikan; Yamamoto, Masahiro

    2018-04-01

    In this article, we investigate the determination of the spatial component in the time-dependent second order coefficient of a hyperbolic equation from both theoretical and numerical aspects. By the Carleman estimates for general hyperbolic operators and an auxiliary Carleman estimate, we establish local Hölder stability with either partial boundary or interior measurements under certain geometrical conditions. For numerical reconstruction, we minimize a Tikhonov functional which penalizes the gradient of the unknown function. Based on the resulting variational equation, we design an iteration method which is updated by solving a Poisson equation at each step. One-dimensional prototype examples illustrate the numerical performance of the proposed iteration.
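
    As a schematic analogue of the iteration described above, the sketch below preconditions each descent step by solving a Poisson equation (a 1D finite-difference version with zero Dirichlet boundary conditions). It illustrates the Sobolev-type gradient update only; the step size, grid and objective are assumptions, not the authors' scheme.

    ```python
    import numpy as np

    def poisson_solve_1d(g, h):
        """Solve -u'' = g on a uniform interior grid, zero Dirichlet BCs."""
        n = len(g)
        A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
        return np.linalg.solve(A, g)

    def descent_with_poisson_smoothing(grad_f, x0, h, step=0.1, iters=500):
        """Gradient descent in which each raw gradient is replaced by the
        solution of a Poisson equation, i.e. a smoothed descent direction."""
        x = x0.copy()
        for _ in range(iters):
            x -= step * poisson_solve_1d(grad_f(x), h)
        return x

    # Toy usage: recover a smooth profile from a quadratic misfit.
    n = 99
    h = 1.0 / (n + 1)
    target = np.sin(np.pi * np.linspace(h, 1.0 - h, n))
    x = descent_with_poisson_smoothing(lambda x: 2.0 * (x - target),
                                       np.zeros(n), h)
    print(np.linalg.norm(x - target))
    ```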

  17. The severe accidents and source term problems approach in France

    International Nuclear Information System (INIS)

    Bussac, J.; Cogne, F.; Pelce, J.

    1986-01-01

    The French methodology described in this report aims at providing operators with a comprehensive framework which should enable them to tackle any situation or type of accident whose occurrence does not appear to be physically inconceivable. Early failure of the containment is no longer considered to result from a conceivable accident sequence and is therefore no longer taken into consideration, except when examining and selecting the site criteria. Prevention measures and relatively inexpensive remedies allow a substantial reduction in the probability of core melting and, if melting occurs, in the probability of loss of leaktightness and in the level of possible releases out of the containment. This implies very special attention to accident management and operators' actions, to which great importance is attached. Research must not be stopped because of the lack of experimental data concerning problems which have not yet been completely settled, and there is a real need for an integral verification program for the code systems existing or in development.

  18. Nitrogen in the environment: sources, problems, and management.

    Science.gov (United States)

    Follett, R F; Hatfield, J L

    2001-10-30

    Nitrogen (N) is applied worldwide to produce food. It is in the atmosphere, soil, and water and is essential to all life. N for agriculture includes fertilizer, biologically fixed, manure, recycled crop residue, and soil-mineralized N. Presently, fertilizer N is a major source of N, and animal manure N is inefficiently used. Potential environmental impacts of N excreted by humans are increasing rapidly with increasing world populations. Where needed, N must be efficiently used because N can be transported immense distances and transformed into soluble and/or gaseous forms that pollute water resources and cause greenhouse effects. Unfortunately, increased amounts of gaseous N enter the environment as N2O to cause greenhouse warming and as NH3 to shift ecological balances of natural ecosystems. Large amounts of N are displaced with eroding sediments in surface waters. Soluble N in runoff or leachate water enters streams, rivers, and groundwater. High-nitrate drinking water can cause methemoglobinemia, while nitrosamines are associated with various human cancers. We describe the benefits, but also how N in the wrong form or place results in harmful effects on humans and animals, as well as to ecological and environmental systems.

  19. Nitrogen in the Environment: Sources, Problems, and Management

    Directory of Open Access Journals (Sweden)

    R.F. Follett

    2001-01-01

    Full Text Available Nitrogen (N) is applied worldwide to produce food. It is in the atmosphere, soil, and water and is essential to all life. N for agriculture includes fertilizer, biologically fixed, manure, recycled crop residue, and soil-mineralized N. Presently, fertilizer N is a major source of N, and animal manure N is inefficiently used. Potential environmental impacts of N excreted by humans are increasing rapidly with increasing world populations. Where needed, N must be efficiently used because N can be transported immense distances and transformed into soluble and/or gaseous forms that pollute water resources and cause greenhouse effects. Unfortunately, increased amounts of gaseous N enter the environment as N2O to cause greenhouse warming and as NH3 to shift ecological balances of natural ecosystems. Large amounts of N are displaced with eroding sediments in surface waters. Soluble N in runoff or leachate water enters streams, rivers, and groundwater. High-nitrate drinking water can cause methemoglobinemia, while nitrosamines are associated with various human cancers. We describe the benefits, but also how N in the wrong form or place results in harmful effects on humans and animals, as well as to ecological and environmental systems.

  20. Comparison of 3D reconstruction of mandible for pre-operative planning using commercial and open-source software

    Science.gov (United States)

    Abdullah, Johari Yap; Omar, Marzuki; Pritam, Helmi Mohd Hadi; Husein, Adam; Rajion, Zainul Ahmad

    2016-12-01

    3D printing of the mandible is important for pre-operative planning, diagnostic purposes, as well as for education and training. Currently, the processing of CT data is routinely performed with commercial software, which increases the cost of operation and patient management for a small clinical setting. Usage of open-source software as an alternative to commercial software for 3D reconstruction of the mandible from CT data is scarce. The aim of this study is to compare two methods of 3D reconstruction of the mandible using the commercial Materialise Mimics software and the open-source Medical Imaging Interaction Toolkit (MITK) software. Head CT images with a slice thickness of 1 mm and a matrix of 512x512 pixels each were retrieved from the server located at the Radiology Department of Hospital Universiti Sains Malaysia. The CT data were analysed and the 3D models of the mandible were reconstructed using both the commercial Materialise Mimics and the open-source MITK software. Both virtual 3D models were saved in STL format and exported to 3matic and MeshLab software for morphometric and image analyses. Both models were compared using the Wilcoxon signed rank test and the Hausdorff distance. No significant differences were obtained between the 3D models of the mandible produced using the Mimics and MITK software. The 3D model of the mandible produced using the open-source MITK software is comparable to that produced by the commercial Mimics software. Therefore, open-source software could be used in a clinical setting for pre-operative planning to minimise the operational cost.
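
    For the mesh comparison step, the symmetric Hausdorff distance is the larger of the two directed distances and can be computed with SciPy. In the sketch below, random point clouds stand in for the vertex arrays of the two exported STL models so that the example stays self-contained:

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    # In practice these would be the (n x 3) vertex arrays of the two STL
    # models, e.g. exported via MeshLab; random stand-ins are used here.
    rng = np.random.default_rng(0)
    mimics_pts = rng.random((5000, 3))
    mitk_pts = mimics_pts + 0.001 * rng.standard_normal((5000, 3))

    # Symmetric Hausdorff distance = max of the two directed distances.
    d_fwd = directed_hausdorff(mimics_pts, mitk_pts)[0]
    d_bwd = directed_hausdorff(mitk_pts, mimics_pts)[0]
    print("Hausdorff distance:", max(d_fwd, d_bwd))
    ```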

  1. Simulating variable source problems via post processing of individual particle tallies

    International Nuclear Information System (INIS)

    Bleuel, D.L.; Donahue, R.J.; Ludewigt, B.A.; Vujic, J.

    2000-01-01

    Monte Carlo is an extremely powerful method of simulating complex, three dimensional environments without excessive problem simplification. However, it is often time consuming to simulate models in which the source can be highly varied. Similarly difficult are optimization studies involving sources in which many input parameters are variable, such as particle energy, angle, and spatial distribution. Such studies are often approached using brute force methods or intelligent guesswork. One field in which these problems are often encountered is accelerator-driven Boron Neutron Capture Therapy (BNCT) for the treatment of cancers. Solving the reverse problem of determining the best neutron source for optimal BNCT treatment can be accomplished by separating the time-consuming particle-tracking process of a full Monte Carlo simulation from the calculation of the source weighting factors which is typically performed at the beginning of a Monte Carlo simulation. By post-processing these weighting factors on a recorded file of individual particle tally information, the effect of changing source variables can be realized in a matter of seconds, instead of requiring hours or days for additional complete simulations. By intelligent source biasing, any number of different source distributions can be calculated quickly from a single Monte Carlo simulation. The source description can be treated as variable and the effect of changing multiple interdependent source variables on the problem's solution can be determined. Though the focus of this study is on BNCT applications, this procedure may be applicable to any problem that involves a variable source
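
    The core of the described post-processing is importance reweighting: each recorded particle's tally contribution is multiplied by the ratio of the new source distribution to the distribution actually sampled in the run. A minimal sketch with synthetic stand-ins for the recorded per-particle file (all names and spectra are hypothetical):

    ```python
    import numpy as np

    # Stand-ins for per-particle records from one Monte Carlo run: the
    # sampled source energy and the tally contribution of each particle.
    rng = np.random.default_rng(0)
    source_energies = rng.uniform(0.0, 10.0, 100000)   # sampled uniformly
    tally_scores = np.exp(-0.3 * source_energies)      # fake per-particle tallies

    def retally(pdf_new, pdf_sim, energies, scores):
        """Re-estimate the tally for a new source spectrum by weighting each
        recorded particle with the ratio of new to simulated source pdf."""
        weights = pdf_new(energies) / pdf_sim(energies)
        return np.mean(weights * scores)

    # Evaluate an exponential source spectrum without re-running transport.
    pdf_sim = lambda E: np.full_like(E, 1.0 / 10.0)
    pdf_new = lambda E: np.exp(-E) / (1.0 - np.exp(-10.0))
    print(retally(pdf_new, pdf_sim, source_energies, tally_scores))
    ```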

  2. A Direct Numerical Reconstruction Algorithm for the 3D Calderón Problem

    DEFF Research Database (Denmark)

    Delbary, Fabrice; Hansen, Per Christian; Knudsen, Kim

    2011-01-01

    In three dimensions Calderón's problem was addressed and solved in theory in the 1980s in a series of papers, but only recently was the numerical implementation of the algorithm initiated. The main ingredients in the solution of the problem are complex geometrical optics solutions to the conductivity equation.

  3. (Re)Constructing the Wicked Problem Through the Visual and the Verbal

    DEFF Research Database (Denmark)

    Holm Jacobsen, Peter; Harty, Chris; Tryggestad, Kjell

    2016-01-01

    Wicked problems are open-ended and complex societal problems. There is a lack of empirical research into the dynamics and mechanisms that (re)construct problems to become wicked. This paper builds on an ethnographic study of a dialogue-based architect competition to do just that. The competition processes create new knowledge and insights, but at the same time present new problems related to the ongoing verbal feedback. The design problem being (re)constructed appears as Heracles' fight with the Hydra: every time Heracles cut off a head, two new heads grew back. The paper contributes to understanding the relationship between the visual and the verbal (dialogue) in complex design processes in the early phases of large construction projects, and how the dynamic interplay between design visualization and verbal dialogue develops before the competition produces, or negotiates, a 'winning design'.

  4. The coordination of problem solving strategies: when low competence sources exert more influence on task processing than high competence sources.

    Science.gov (United States)

    Quiamzade, Alain; Mugny, Gabriel; Darnon, Céline

    2009-03-01

    Previous research has shown that low-competence sources, compared to highly competent sources, can exert influence in aptitude tasks inasmuch as they induce people to focus on the task and to solve it more deeply. Two experiments aimed at testing the coordination between one's own and the source's problem solving strategies as a main explanation of such a difference in influence. The influence of a low versus high competence source was examined in an anagram task that allows for distinguishing between three response strategies, including one that corresponds to the coordination between the source's strategy and the participants' own strategy. In Study 1 the strategy suggested by the source was either relevant and useful or irrelevant and useless for solving the task. Results indicated that participants used the coordination strategy to a larger extent when they had been confronted with a low-competence rather than a highly competent source, but only when the source displayed a strategy that was useful for solving the task. In Study 2 the source's strategy was always relevant and useful, but a decentring procedure was introduced for half of the participants. This procedure induced participants to consider points of view other than their own. Results replicated the difference observed in Study 1 when no decentring was introduced. The difference, however, disappeared when decentring was induced, because of an increase in the highly competent source's influence. These results highlight the coordination of strategies as one mechanism underlying influence from low-competence sources.

  5. Reconstruction phases in the planar three- and four-vortex problems

    Science.gov (United States)

    Hernández-Garduño, Antonio; Shashikanth, Banavara N.

    2018-03-01

    Pure reconstruction phases—geometric and dynamic—are computed in the N-point-vortex model in the plane, for the cases N = 3 and N = 4. The phases are computed relative to a metric-orthogonal connection on appropriately defined principal fiber bundles. The metric is similar to the kinetic energy metric for point masses but with the masses replaced by vortex strengths. The geometric phases are shown to be proportional to areas enclosed by the closed orbit on the symmetry reduced spaces. More interestingly, simple formulae are obtained for the dynamic phases, analogous to Montgomery's result for the free rigid body, which show them to be proportional to the time period of the symmetry reduced closed orbits. For the case N = 3 a non-zero total vortex strength is assumed. For the case N = 4 the vortex strengths are assumed equal.

  6. Technical Note: FreeCT_ICD: An Open Source Implementation of a Model-Based Iterative Reconstruction Method using Coordinate Descent Optimization for CT Imaging Investigations.

    Science.gov (United States)

    Hoffman, John M; Noo, Frédéric; Young, Stefano; Hsieh, Scott S; McNitt-Gray, Michael

    2018-06-01

    To facilitate investigations into the impacts of acquisition and reconstruction parameters on quantitative imaging, radiomics and CAD using CT imaging, we previously released an open source implementation of a conventional weighted filtered backprojection reconstruction called FreeCT_wFBP. Our purpose was to extend that work by providing an open-source implementation of a model-based iterative reconstruction method using coordinate descent optimization, called FreeCT_ICD. Model-based iterative reconstruction offers the potential for substantial radiation dose reduction, but can impose substantial computational processing and storage requirements. FreeCT_ICD is an open source implementation of a model-based iterative reconstruction method that provides a reasonable tradeoff between these requirements. This was accomplished by adapting a previously proposed method that allows the system matrix to be stored with a reasonable memory requirement. The method amounts to describing the attenuation coefficient using rotating slices that follow the helical geometry. In the initially-proposed version, the rotating slices are themselves described using blobs. We have replaced this description by a unique model that relies on tri-linear interpolation together with the principles of Joseph's method. This model offers an improvement in memory requirement while still allowing highly accurate reconstruction for conventional CT geometries. The system matrix is stored column-wise and combined with an iterative coordinate descent (ICD) optimization. The result is FreeCT_ICD, which is a reconstruction program developed on the Linux platform using C++ libraries and the open source GNU GPL v2.0 license. The software is capable of reconstructing raw projection data of helical CT scans. In this work, the software has been described and evaluated by reconstructing datasets exported from a clinical scanner which consisted of an ACR accreditation phantom dataset and a clinical pediatric
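
    The coordinate-descent core of such a model-based method can be illustrated on a Tikhonov-penalized least-squares problem: each unknown is updated by its exact 1D minimizer using one stored column of the system matrix. This is a generic ICD sketch, not the FreeCT_ICD implementation.

    ```python
    import numpy as np

    def icd_least_squares(A, b, x0, sweeps=10, beta=0.0):
        """Iterative coordinate descent for min_x ||Ax - b||^2 + beta||x||^2,
        updating one unknown at a time via column-wise access to A."""
        x = x0.copy()
        residual = b - A @ x
        col_norms = np.sum(A**2, axis=0)
        for _ in range(sweeps):
            for j in range(len(x)):
                a_j = A[:, j]                        # one stored column
                # Exact 1D minimizer along coordinate j.
                delta = (a_j @ residual - beta * x[j]) / (col_norms[j] + beta)
                x[j] += delta
                residual -= delta * a_j
        return x

    # Toy usage on a random consistent system.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((300, 100))
    x_true = rng.standard_normal(100)
    x = icd_least_squares(A, A @ x_true, np.zeros(100), sweeps=20)
    print(np.linalg.norm(x - x_true))
    ```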

  7. 42 CFR 82.13 - What sources of information may be used for dose reconstructions?

    Science.gov (United States)

    2010-10-01

    ... THE ENERGY EMPLOYEES OCCUPATIONAL ILLNESS COMPENSATION PROGRAM ACT OF 2000 Dose Reconstruction Process... Atomic Weapons Employers and the former worker medical screening program; (b) NIOSH and other records...) Co-workers of covered employees, or others with information relevant to the covered employee's...

  8. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman; Laleg-Kirati, Taous-Meriem

    2017-01-01

    A state-observer based method is developed to solve the point source localization problem for the Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data estimation problems for the Laplace equation over the 3D domain.

  9. Noise-tolerance analysis for detection and reconstruction of absorbing inhomogeneities with diffuse optical tomography using single- and phase-correlated dual-source schemes

    International Nuclear Information System (INIS)

    Kanmani, B; Vasu, R M

    2007-01-01

    An iterative reconstruction procedure is used to invert intensity data from both single- and phase-correlated dual-source illuminations for absorption inhomogeneities. The Jacobian for the dual source is constructed by an algebraic addition of the Jacobians estimated for the two sources separately. By numerical simulations, it is shown that the dual-source scheme performs better than the single-source system with regard to (i) noise tolerance in the data and (ii) the ability to reconstruct smaller and lower contrast objects. The quality of reconstructions from single-source data, as indicated by the mean-square error at convergence, is markedly poorer than that of their dual-source counterparts when the noise in the data exceeds 2%. With fixed contrast and decreasing inhomogeneity diameter, our simulations showed that, for diameters below 7 mm, the dual-source scheme has a higher percentage contrast recovery than the single-source scheme. Similarly, the dual-source scheme reconstructs to a higher percentage contrast recovery from lower contrast inhomogeneities than the single-source scheme.
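
    The dual-source Jacobian construction described above, an algebraic addition of the two single-source Jacobians, enters a damped Gauss-Newton update as sketched below; the shapes, damping parameter and random data are illustrative assumptions.

    ```python
    import numpy as np

    def dual_source_update(J1, J2, residual, lam=1e-3):
        """One damped Gauss-Newton update in which the dual-source Jacobian
        is formed by algebraic addition of the two single-source Jacobians."""
        J = J1 + J2                               # dual-source sensitivity
        H = J.T @ J + lam * np.eye(J.shape[1])    # Tikhonov-style damping
        return np.linalg.solve(H, J.T @ residual)

    # Toy shapes: 64 boundary measurements, 500 absorption unknowns.
    rng = np.random.default_rng(0)
    J1, J2 = rng.random((64, 500)), rng.random((64, 500))
    delta_mu = dual_source_update(J1, J2, rng.random(64))
    ```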

  10. The impact of heart rate on image quality and reconstruction timing of dual-source CT coronary angiography

    International Nuclear Information System (INIS)

    Wang Yining; Jin Zhengyu; Kong Lingyan; Zhang Zhuhua; Song Lan; Mu Wenbin; Wang Yun; Zhao Wenmin; Zhang Shuyang; Lin Songbai

    2008-01-01

    Objective: To evaluate the impact of the patient's heart rate (HR) on coronary CT angiography (CTA) image quality (IQ) and reconstruction timing in dual-source CT (DSCT). Methods: Ninety-five patients with suspicion of coronary artery disease were examined with a DSCT scanner (Somatom Definition, Siemens) using 32 x 0.6 mm collimation. All patients were divided into three groups according to heart rate (HR): group 1, HR ≤ 70 beats per minute (bpm), n=26; group 2, HR >70 bpm to ≤90 bpm, n=37; group 3, HR > 90 bpm, n=32. No beta-blockers were taken before the CT scan. 50-60 ml of nonionic contrast agent were injected at a rate of 5 ml/s. Images were reconstructed from 10% to 100% of the R-R interval using single-segment reconstruction. Two readers independently assessed the IQ of all coronary segments using a 3-point scale from excellent (1) to non-assessable (3) and evaluated the relationship between IQ and HR. Results: The overall mean IQ score was 1.31 ± 0.55 for all patients, with 1.08 ± 0.27 for group 1, 1.32 ± 0.58 for group 2 and 1.47 ± 0.61 for group 3. The IQ was better in the LAD than in the RCA and LCX (P<0.01). Only 1.4% (19/1386) of coronary artery segments were considered non-assessable due to motion artifacts. Optimal image quality of all coronary segments in 74 patients (77.9%) could be achieved with one reconstruction data set. The best IQ was predominantly in diastole (88.5%) in group 1, while the best IQ was in systole (84.4%) in group 3. Conclusions: DSCT can achieve optimal IQ over a wide range of HRs using single-segment reconstruction. With increasing HR, the timing of data reconstruction for the best IQ shifts from mid-diastole to systole. (authors)

  11. Public opinion confronted by the safety problems associated with different energy source

    Energy Technology Data Exchange (ETDEWEB)

    Otway, H J; Thomas, K

    1978-09-01

    A model study of public opinion 'for' and 'against' the various energy sources: oil, coal, solar and nuclear power. Attitudes are examined from four psychological aspects: economic advantages, sociopolitical problems, environmental problems and safety. The investigation focuses on nuclear energy. (13 refs.) (In French)

  12. The status quo, problems and improvements pertaining to radiation source management in China

    International Nuclear Information System (INIS)

    Jin Jiaqi

    1998-01-01

    Early in the 1930s, radiation sources began to be used in medicine in China, and since then their application has been widely extended to a variety of fields. This paper presents a brief outline of the status quo and problems of radiation source management, and some relevant improvements recommended by the author are also included. (author)

  13. Possibility of using sources of vacuum ultraviolet irradiation to solve problems of space material science

    Science.gov (United States)

    Verkhoutseva, E. T.; Yaremenko, E. I.

    1974-01-01

    An urgent problem in space materials science is simulating the interaction of the vacuum ultraviolet (VUV) component of solar emission with solids under space conditions, that is, producing a light source with a distribution that approximates the distribution of solar energy. Information is presented on the distribution of the energy flux of the VUV portion of solar radiation. Requirements that must be satisfied by a VUV source used for space materials science are formulated, and a critical evaluation is given of the possibilities of using existing sources for space materials science. From this evaluation it was established that none of the existing VUV sources satisfies the specific requirements imposed on a simulator of solar radiation. A solution to the problem was found in the development of a new type of source based on exciting a supersonic gas jet flowing into vacuum with a dense electron beam. A description of this gas-jet source, along with its spectral and operational characteristics, is presented.

  14. Iterative observer based method for source localization problem for Poisson equation in 3D

    KAUST Repository

    Majeed, Muhammad Usman

    2017-07-10

    A state-observer based method is developed to solve the point source localization problem for the Poisson equation in a 3D rectangular prism with available boundary data. The technique requires a weighted sum of solutions of multiple boundary data estimation problems for the Laplace equation over the 3D domain. The solution of each of these boundary estimation problems involves writing down the mathematical problem in a state-space-like representation using one of the space variables as time-like. First, a system observability result for the 3D boundary estimation problem is recalled in an infinite dimensional setting. Then, based on the observability result, the boundary estimation problem is decomposed into a set of independent 2D sub-problems. These 2D problems are then solved using an iterative observer to obtain the solution. Theoretical results are provided. The method is implemented numerically using finite difference discretization schemes. Numerical illustrations along with simulation results are provided.

  15. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    International Nuclear Information System (INIS)

    Verdu, G.; Capilla, M.; Talavera, C. F.; Ginestar, D.

    2012-01-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)
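
    For reference, the PL approximation referred to here is the standard truncated Legendre expansion of the angular flux, written below for slab geometry; this is the textbook form rather than the paper's specific multidimensional discretization.

    ```latex
    \psi(x,\mu) \approx \sum_{l=0}^{L} \frac{2l+1}{2}\, \phi_l(x)\, P_l(\mu),
    \qquad
    \phi_l(x) = \int_{-1}^{1} \psi(x,\mu)\, P_l(\mu)\, \mathrm{d}\mu ,
    ```

    where the P_l are the Legendre polynomials and truncating at odd order L yields the PL equations for the angular moments phi_l.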

  16. Nodal collocation approximation for the multidimensional PL equations applied to transport source problems

    Energy Technology Data Exchange (ETDEWEB)

    Verdu, G. [Departamento de Ingenieria Quimica Y Nuclear, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain); Capilla, M.; Talavera, C. F.; Ginestar, D. [Dept. of Nuclear Engineering, Departamento de Matematica Aplicada, Universitat Politecnica de Valencia, Cami de Vera, 14, 46022. Valencia (Spain)

    2012-07-01

    PL equations are classical high order approximations to the transport equations which are based on the expansion of the angular dependence of the angular neutron flux and the nuclear cross sections in terms of spherical harmonics. A nodal collocation method is used to discretize the PL equations associated with a neutron source transport problem. The performance of the method is tested solving two 1D problems with analytical solution for the transport equation and a classical 2D problem. (authors)

  17. Source convergence problems in the application of burnup credit for WWER-440 fuel

    International Nuclear Information System (INIS)

    Hordosy, Gabor

    2003-01-01

    The problems in Monte Carlo criticality calculations caused by the slow convergence of the fission source are examined using an example. A spent fuel storage cask designed for WWER-440 fuel is used as the sample case. The influence of the main parameters of the calculations is investigated, including the initial fission source. A possible strategy is proposed to overcome the difficulties associated with slow source convergence. The advantage of the proposed strategy is that it can be implemented using standard MCNP features. (author)

  18. Systolic reconstruction in patients with low heart rate using coronary dual-source CT angiography

    International Nuclear Information System (INIS)

    Okada, Munemasa; Nakashima, Yoshiteru; Shigemoto, Youko; Matsunaga, Naofumi; Miura, Toshiro; Nao, Tomoko; Sano, Yuichi; Narazaki, Akiko; Kido, Shoji

    2011-01-01

    Objectives: The purpose of our study was to determine the relationship between predictive factors and systolic reconstruction (SR) as an optimal reconstruction window in patients with a low heart rate (LHR; less than 65 bpm). Methods: 391 patients (262 male and 129 female; mean age 67.1 ± 10.1 years) underwent coronary CTA without the additional administration of a beta-blocker. Factors affecting SR were analyzed: age, gender, body weight (BW), diabetes mellitus (DM), coronary arterial disease (CAD), ejection fraction (EF), systolic and diastolic blood pressure (BP), and heart rate variability (HRV) during coronary CTA. Results: In 29 (7.4%) of the 391 patients, SR was needed, but there was no apparent characteristic difference between the systolic and diastolic reconstruction groups in terms of gender, age, BW, DM, CAD and EF. In a multivariate analysis, the co-existence of DM [P < 0.05; OR, 0.27; 95% CI, 0.092-0.80], diastolic BP [P < 0.01; OR, 0.95; 95% CI, 0.92-0.98] and HRV [P < 0.01; OR, 0.98; 95% CI, 0.96-0.99] were found to be the factors for SR. In a gender-related analysis, HRV was an important factor regardless of sex, but the co-existence of DM was especially influential for females and BP for males. Conclusion: Especially in patients with LHR who had DM, high HRV or high BP, SR, in addition to DR, was needed to obtain high-quality coronary CTA images.

  19. Systolic reconstruction in patients with low heart rate using coronary dual-source CT angiography

    Energy Technology Data Exchange (ETDEWEB)

    Okada, Munemasa, E-mail: radokada@yamaguchi-u.ac.jp [Department of Radiology, Yamaguchi University Graduate School of Medicine, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Nakashima, Yoshiteru; Shigemoto, Youko; Matsunaga, Naofumi [Department of Radiology, Yamaguchi University Graduate School of Medicine, 1-1-1 Minamikogushi, Ube, Yamaguchi 755-8505 (Japan); Miura, Toshiro; Nao, Tomoko [Department of Cardiology, Yamaguchi University Graduate School of Medicine (Japan); Sano, Yuichi; Narazaki, Akiko [Department of Radiology, Yamaguchi University Hospital (Japan); Kido, Shoji [Computer-aided Diagnosis and Biomedical Imaging Research Biomedical Engineering, Applied Medical Engineering Science Graduate School of Medicine, Yamaguchi University (Japan)

    2011-11-15

    Objectives: The purpose of our study was to determine the relationship between predictive factors and systolic reconstruction (SR) as an optimal reconstruction window in patients with a low heart rate (LHR; less than 65 bpm). Methods: 391 patients (262 male and 129 female; mean age 67.1 ± 10.1 years) underwent coronary CTA without the additional administration of a beta-blocker. Factors affecting SR were analyzed: age, gender, body weight (BW), diabetes mellitus (DM), coronary arterial disease (CAD), ejection fraction (EF), systolic and diastolic blood pressure (BP), and heart rate variability (HRV) during coronary CTA. Results: In 29 (7.4%) of the 391 patients, SR was needed, but there was no apparent characteristic difference between the systolic and diastolic reconstruction groups in terms of gender, age, BW, DM, CAD and EF. In a multivariate analysis, the co-existence of DM [P < 0.05; OR, 0.27; 95% CI, 0.092-0.80], diastolic BP [P < 0.01; OR, 0.95; 95% CI, 0.92-0.98] and HRV [P < 0.01; OR, 0.98; 95% CI, 0.96-0.99] were found to be the factors for SR. In a gender-related analysis, HRV was an important factor regardless of sex, but the co-existence of DM was especially influential for females and BP for males. Conclusion: Especially in patients with LHR who had DM, high HRV or high BP, SR, in addition to DR, was needed to obtain high-quality coronary CTA images.

  20. Image reconstruction of mMR PET data using the open source software STIR

    Energy Technology Data Exchange (ETDEWEB)

    Markiewicz, Pawel [Centre for Medical Image Computing, University College London, London (United Kingdom); Thielemans, Kris [Institute of Nuclear Medicine, University College London, London (United Kingdom); Burgos, Ninon [Centre for Medical Image Computing, University College London, London (United Kingdom); Manber, Richard [Institute of Nuclear Medicine, University College London, London (United Kingdom); Jiao, Jieqing [Centre for Medical Image Computing, University College London, London (United Kingdom); Barnes, Anna [Institute of Nuclear Medicine, University College London, London (United Kingdom); Atkinson, David [Centre for Medical Imaging, University College London, London (United Kingdom); Arridge, Simon R [Centre for Medical Image Computing, University College London, London (United Kingdom); Hutton, Brian F [Institute of Nuclear Medicine, University College London, London (United Kingdom); Ourselin, Sébastien [Centre for Medical Image Computing, University College London, London (United Kingdom); Dementia Research Centre, University College London, London (United Kingdom)

    2014-07-29

    Simultaneous PET and MR acquisitions have now become possible with the new hybrid Biograph Molecular MR (mMR) scanner from Siemens. The purpose of this work is to create a platform for mMR 3D and 4D PET image reconstruction that is freely accessible to the community as well as fully adjustable, in order to obtain optimal images for a given research task in PET imaging. The proposed platform is envisaged to prove useful in developing novel and robust imaging biomarkers which could then be adapted for use on the mMR scanner.

  1. Image reconstruction of mMR PET data using the open source software STIR

    International Nuclear Information System (INIS)

    Markiewicz, Pawel; Thielemans, Kris; Burgos, Ninon; Manber, Richard; Jiao, Jieqing; Barnes, Anna; Atkinson, David; Arridge, Simon R; Hutton, Brian F; Ourselin, Sébastien

    2014-01-01

    Simultaneous PET and MR acquisitions have now become possible with the new hybrid Biograph Molecular MR (mMR) scanner from Siemens. The purpose of this work is to create a platform for mMR 3D and 4D PET image reconstruction that is freely accessible to the community as well as fully adjustable, in order to obtain optimal images for a given research task in PET imaging. The proposed platform is envisaged to prove useful in developing novel and robust imaging biomarkers which could then be adapted for use on the mMR scanner.

  2. Single Phase Current-Source Active Rectifier for Traction: Control System Design and Practical Problems

    Directory of Open Access Journals (Sweden)

    Jan Michalik

    2006-01-01

    This research has been motivated by industrial demand for a single-phase current-source active rectifier dedicated to the reconstruction of older types of DC-machine locomotives. This paper presents the converter's control structure design and simulations. The proposed converter control is based on a mathematical model and, owing to possible interaction with railway signalling and the required low switching frequency, employs synchronous PWM. The simulation results are verified by experimental tests performed on a laboratory prototype with a power rating of 7 kVA.

  3. Bat calls while preying: A method for reconstructing the signal emitted by a directional sound source

    DEFF Research Database (Denmark)

    Guarato, Francesco; Hallam, John

    2010-01-01

    Understanding and modeling bat biosonar behavior should take into account what the bat actually emitted while exploring the surrounding environment. Recording of the bat calls could be performed by means of a telemetry system small enough to sit on the bat head, though filtering due to bat...... directivity affects recordings and not all bat species are able to carry such a device. Instead, remote microphone recordings of the bat calls could be processed by means of a mathematical method that estimates bat head orientation as a first step before calculating the amplitudes of each call for each...... and discussed. A further improvement of the method is necessary as its performance for call reconstruction strongly depends on correct choice of the sample at which the recorded call is thought to start in each microphone data set....

  4. 3D reconstruction of the source and scale of buried young flood channels on Mars.

    Science.gov (United States)

    Morgan, Gareth A; Campbell, Bruce A; Carter, Lynn M; Plaut, Jeffrey J; Phillips, Roger J

    2013-05-03

    Outflow channels on Mars are interpreted as the product of gigantic floods due to the catastrophic eruption of groundwater that may also have initiated episodes of climate change. Marte Vallis, the largest of the young martian outflow channels, is now buried beneath younger lava flows, obscuring evidence of Mars hydrologic activity during a period otherwise considered to be cold and dry. Using data from the Shallow Radar sounder on the Mars Reconnaissance Orbiter, we present a three-dimensional (3D) reconstruction of buried channels on Mars and provide estimates of paleohydrologic parameters. Our work shows that Cerberus Fossae provided the waters that carved Marte Vallis, and that the source extended an additional 180 kilometers to the east before the emplacement of the younger lava flows. We identified two stages of channel incision and determined that channel depths were more than twice those of previous estimates.

  5. A new open-source pin power reconstruction capability in DRAGON5 and DONJON5 neutronic codes

    Energy Technology Data Exchange (ETDEWEB)

    Chambon, R., E-mail: richard-pierre.chambon@polymtl.ca; Hébert, A., E-mail: alain.hebert@polymtl.ca

    2015-08-15

    In order to better optimize fuel energy efficiency in PWRs, the burnup distribution has to be known as accurately as possible, ideally in each pin. However, this level of detail is lost when core calculations are performed with homogenized cross-sections. The pin power reconstruction (PPR) method can be used to recover this level of detail, as accurately as possible, at a small additional computing cost compared to classical core calculations. Such a de-homogenization technique for core calculations using arbitrarily homogenized fuel assembly geometries was presented originally by Fliscounakis et al. In our work, the same methodology was implemented in the open-source neutronic codes DRAGON5 and DONJON5. The new type of Selengut homogenization, called macro-calculation water gap, also proposed by Fliscounakis et al., was implemented. Some important details of the methodology were emphasized in order to obtain precise results. Validation tests were performed on 12 configurations of 3×3 clusters, comparing simulations in transport theory against diffusion theory followed by pin-power reconstruction. The results show that the pin power reconstruction and the Selengut macro-calculation water gap methods were correctly implemented. The accuracy of the simulations depends on the SPH method and on the homogenization geometry choices. Results show that heterogeneous homogenization is highly recommended. SPH techniques were investigated with flux-volume and Selengut normalization, but the former leads to inaccurate results. Even though the new Selengut macro-calculation water gap method gives promising results regarding flux continuity at assembly interfaces, the classical Selengut approach is more reliable in terms of maximum and average errors over the whole range of configurations.
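
    For orientation, pin power reconstruction is usually written as a product of the smooth homogeneous (nodal) solution and a heterogeneous lattice form function, with SPH factors correcting the homogenized cross sections so that heterogeneous reaction rates are preserved. A minimal sketch in LaTeX notation, with assumed symbols that are not taken from the paper itself:

        % Pin-power factorization and SPH correction (generic form)
        p_{\mathrm{pin}}(x,y) \approx \phi_{\mathrm{hom}}(x,y)\, f_{\mathrm{het}}(x,y),
        \qquad
        \mu_i = \frac{\bar{\phi}_i^{\mathrm{het}}}{\bar{\phi}_i^{\mathrm{hom}}},
        \qquad
        \tilde{\Sigma}_i = \mu_i\, \Sigma_i

    Here \phi_{\mathrm{hom}} is the homogeneous core flux, f_{\mathrm{het}} the form function from the lattice calculation, and \mu_i the SPH factor of region i; the paper's Selengut variants differ in how the normalization of \mu_i is chosen.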

  6. A sparsity-based iterative algorithm for reconstruction of micro-CT images from highly undersampled projection datasets obtained with a synchrotron X-ray source

    Science.gov (United States)

    Melli, S. Ali; Wahid, Khan A.; Babyn, Paul; Cooper, David M. L.; Gopi, Varun P.

    2016-12-01

    Synchrotron X-ray Micro Computed Tomography (Micro-CT) is an imaging technique which is increasingly used for non-invasive in vivo preclinical imaging. However, it often requires a large number of projections from many different angles to reconstruct high-quality images, leading to significantly higher radiation doses and long scan times. To utilize this imaging technique further for in vivo imaging, we need to design reconstruction algorithms that reduce the radiation dose and scan time without degrading reconstructed image quality. This research is focused on combining gradient-based Douglas-Rachford splitting with discrete wavelet packet shrinkage denoising to design an algorithm for reconstruction of large-scale reduced-view synchrotron Micro-CT images with acceptable quality metrics. These quality metrics are computed by comparing the reconstructed images with a high-dose reference image reconstructed from 1800 equally spaced projections spanning 180°. Visual and quantitative performance assessment of a synthetic head phantom and a femoral cortical bone sample, imaged in the biomedical imaging and therapy bending magnet beamline at the Canadian Light Source, demonstrates that the proposed algorithm is superior to existing reconstruction algorithms. Using the proposed reconstruction algorithm to reduce the number of projections in synchrotron Micro-CT is an effective way to reduce the overall radiation dose and scan time, which improves in vivo imaging protocols.
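
    As a concrete illustration of the splitting idea named above, the following is a minimal sketch of Douglas-Rachford iterations for an l1-regularized least-squares problem, with soft-thresholding standing in for the wavelet packet shrinkage step. The dense matrix A is a toy stand-in for the synchrotron projection operator; the paper's actual algorithm, operators and parameters differ.

        import numpy as np

        def soft_threshold(x, t):
            # Proximal operator of the l1 norm: shrink coefficients toward zero.
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        def douglas_rachford(A, b, tau=1.0, lam=0.1, n_iter=200):
            # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by Douglas-Rachford splitting.
            n = A.shape[1]
            # Resolvent of the quadratic term: (I + tau A^T A)^{-1} (y + tau A^T b).
            P = np.linalg.inv(np.eye(n) + tau * (A.T @ A))
            Atb = A.T @ b
            y = np.zeros(n)
            for _ in range(n_iter):
                x = P @ (y + tau * Atb)                     # prox of data fidelity
                z = soft_threshold(2.0 * x - y, tau * lam)  # prox of l1 at reflection
                y = y + z - x                               # Douglas-Rachford update
            return x

    In a real reduced-view CT setting the resolvent would be applied matrix-free (e.g. by conjugate gradients), since the projection operator is never formed explicitly.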

  7. Analytic reconstruction algorithms for triple-source CT with horizontal data truncation

    International Nuclear Information System (INIS)

    Chen, Ming; Yu, Hengyong

    2015-01-01

    Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended to horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. Using this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphics processing units

  8. A hybrid algorithm for stochastic single-source capacitated facility location problem with service level requirements

    Directory of Open Access Journals (Sweden)

    Hosseinali Salemi

    2016-04-01

    Facility location models arise in many diverse areas such as communication networks, transportation, and distribution systems planning. They play a significant role in supply chain and operations management and are one of the main well-known topics on the strategic agenda of contemporary manufacturing and service companies, with long-lasting effects. We define a new approach for solving the stochastic single-source capacitated facility location problem (SSSCFLP). Customers with stochastic demand are assigned to a set of capacitated facilities that are selected to serve them. It is demonstrated that, for Poisson demand distributions, the problem can be transformed into a deterministic Single Source Capacitated Facility Location Problem (SSCFLP); a standard formulation is sketched below. A hybrid algorithm which combines a Lagrangian heuristic with an adjusted mixture of ant colony and genetic optimization is proposed to find lower and upper bounds for this problem. Computational results on various instances with distinct properties indicate that the proposed approach is efficient.
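
    For reference, the deterministic SSCFLP that the stochastic model reduces to has the standard mixed-integer form below (LaTeX notation; the symbols f_j for fixed opening costs, c_{ij} for assignment costs, d_i for demands and s_j for capacities are assumptions, not the paper's notation):

        \min \sum_{j} f_j\, y_j + \sum_{i}\sum_{j} c_{ij}\, x_{ij}
        \quad \text{s.t.} \quad
        \sum_{j} x_{ij} = 1 \;\;\forall i,
        \qquad
        \sum_{i} d_i\, x_{ij} \le s_j\, y_j \;\;\forall j,
        \qquad
        x_{ij},\, y_j \in \{0,1\}

    The single-source requirement is the binary assignment x_{ij}: each customer i is served entirely by one open facility j.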

  9. The present problems of hygienic supervising of workplaces with ionizing radiation sources

    International Nuclear Information System (INIS)

    Husar, J.

    1995-01-01

    This paper deals with the problems of hygienic supervision of workplaces with ionizing radiation sources in Bratislava. Most problems stem from the current economic transformation of state corporations. The abolition of former state corporations and the emergence of new organizations bring new fields of activity. It often happens that previously used ionizing radiation sources, X-ray tubes or radioactive sources are no longer in use, and the corresponding workplaces must be decommissioned. One big state corporation with a series of subsidiaries throughout Slovakia was divided into many new, smaller joint-stock corporations. One subsidiary possessed a workplace with an X-ray tube and a sealed radioactive source of medium activity. During a routine hygienic inspection it was found that the original establishment had been abolished, all personnel dismissed, and another organization was about to move into the premises. The new organization's personnel did not know that the previous workplace had contained radiation sources. The situation was complicated by the fact that the new management had no connection to the previous personnel and had insufficient information about the abolished establishment. The problem of supervising workplaces with ionizing radiation sources is described. (J.K.)

  10. The present problems of hygienic supervising of workplaces with ionizing radiation sources

    Energy Technology Data Exchange (ETDEWEB)

    Husar, J [State Health Institute of Slovak Republic, Bratislava (Slovakia)

    1996-12-31

    This paper deals with the problems of hygienic supervision of workplaces with ionizing radiation sources in Bratislava. Most problems stem from the current economic transformation of state corporations. The abolition of former state corporations and the emergence of new organizations bring new fields of activity. It often happens that previously used ionizing radiation sources, X-ray tubes or radioactive sources are no longer in use, and the corresponding workplaces must be decommissioned. One big state corporation with a series of subsidiaries throughout Slovakia was divided into many new, smaller joint-stock corporations. One subsidiary possessed a workplace with an X-ray tube and a sealed radioactive source of medium activity. During a routine hygienic inspection it was found that the original establishment had been abolished, all personnel dismissed, and another organization was about to move into the premises. The new organization's personnel did not know that the previous workplace had contained radiation sources. The situation was complicated by the fact that the new management had no connection to the previous personnel and had insufficient information about the abolished establishment. The problem of supervising workplaces with ionizing radiation sources is described. (J.K.)

  11. EEG source reconstruction evidence for the noun-verb neural dissociation along semantic dimensions.

    Science.gov (United States)

    Zhao, Bin; Dang, Jianwu; Zhang, Gaoyan

    2017-09-17

    One of the long-standing issues in neurolinguistic research concerns the neural basis of word representation: whether grammatical classification or semantic difference causes the dissociation of brain activity patterns when processing different word categories, especially nouns and verbs. To disentangle this puzzle, four orthogonalized word categories in Chinese were adopted in an auditory task: unambiguous nouns (UN), unambiguous verbs (UV), ambiguous words with noun-biased semantics (AN), and ambiguous words with verb-biased semantics (AV). Electroencephalographic (EEG) signals were recorded from 128 electrodes on the scalps of twenty-two subjects. Using an advanced current density reconstruction (CDR) algorithm constrained by standardized low-resolution electromagnetic tomography, the spatiotemporal brain dynamics of word processing were explored. In multiple time periods, including P1 (60-90 ms), N1 (100-140 ms), P200 (150-250 ms) and N400 (350-450 ms), noun-verb dissociation over the parietal-occipital and frontal-central cortices appeared not only between the UN-UV grammatical classes but also between the grammatically identical but semantically different AN-AV pairs. The apparent semantic dissociation within one grammatical class strongly suggests that semantic difference, rather than grammatical classification, is the origin of the noun-verb neural dissociation. Our results also revealed that semantic dissociation occurs from an early stage and repeats in multiple phases, supporting a functionally hierarchical word-processing mechanism.

  12. Iterative and range test methods for an inverse source problem for acoustic waves

    International Nuclear Information System (INIS)

    Alves, Carlos; Kress, Rainer; Serranho, Pedro

    2009-01-01

    We propose two methods for solving an inverse source problem for time-harmonic acoustic waves. Based on the reciprocity gap principle, a nonlinear equation is presented for the locations and intensities of the point sources, which can be solved via Newton iterations. To provide an initial guess for this iteration, we suggest a range test algorithm for approximating the source locations. We give a mathematical foundation for the range test and exhibit its feasibility in connection with the iteration method through numerical examples
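
    To make the reciprocity gap principle concrete: for a source composed of point sources f = \sum_j \lambda_j \delta_{z_j} and boundary (Cauchy) data of the field u, testing against any solution v of the homogeneous Helmholtz equation gives, up to a sign convention (notation assumed here, not taken from the paper),

        \mathcal{R}(v) = \int_{\partial\Omega} \left( u\,\frac{\partial v}{\partial n} - v\,\frac{\partial u}{\partial n} \right) \mathrm{d}s
        \;=\; \sum_{j=1}^{M} \lambda_j\, v(z_j)

    Choosing suitable families of test functions v turns this identity into the nonlinear equations for the locations z_j and intensities \lambda_j solved by the Newton iteration above.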

  13. An investigation of the closure problem applied to reactor accident source terms

    International Nuclear Information System (INIS)

    Brearley, I.R.; Nixon, W.; Hayns, M.R.

    1987-01-01

    The closure problem, as considered here, focuses attention on the question of when enough has been learned in current research programmes about the source terms for reactor accident releases. Noting that current research tends to reduce the estimated magnitude of the aerosol component of accidental atmospheric releases, several possible criteria for closure are suggested. Moreover, using the reactor accident consequence model CRACUK, the effect of gradually reducing the aerosol release fractions of a pressurized water reactor (PWR2) source term (as defined in the WASH-1400 study) is investigated, and the implications of applying the suggested criteria to current source term research are discussed. (author)

  14. Is mammary reconstruction with the anatomical Becker expander a simple procedure? Complications and hidden problems leading to secondary surgical procedures: a follow-up study.

    Science.gov (United States)

    Farace, Francesco; Faenza, Mario; Bulla, Antonio; Rubino, Corrado; Campus, Gian Vittorio

    2013-06-01

    Debate over the role of Becker expander implants (BEIs) in breast reconstruction is still ongoing, and there are no clear indications for BEI use. The main indications are one-stage breast reconstruction and the correction of congenital breast deformities, owing to the postoperative ability to vary BEI volume. Recent studies showed that BEIs were removed within 5 years of mammary reconstruction in 68% of operated patients, which entails a further surgical procedure. BEIs should not, therefore, be regarded as one-stage prostheses. We performed a case-series study of breast reconstructions with anatomically shaped Becker-35™ implants in order to highlight complications and to flag hidden problems that might entail a second surgical procedure. A total of 229 patients, reconstructed from 2005 to 2010, were enrolled in this study. Data relating to implant type, volume, mean operative time and complications were recorded. All the patients underwent the same surgical procedure. The minimum follow-up period was 18 months. During a 5-year follow-up, 99 patients required secondary surgery to correct complications or sequelae; 46 of them underwent BEI removal within 2 years of implantation, 56 within 3 years, 65 within 4 years and 74 within 5 years. Our findings show that two different sorts of complications can arise with these devices and lead to premature implant removal: one common to any breast implant and one peculiar to BEIs. The Becker implant is a permanent expander. Surgeons must, therefore, be aware that, once positioned, the Becker expander cannot be adjusted at a later date, as it can in two-stage expander/prosthesis reconstructions. Surgeons must have a clear understanding of possible BEI complications in order to discuss them with their patients. Therefore, only surgeons experienced in breast reconstruction should use BEIs.

  15. Reframing a Problem: Identifying the Sources of Conflict in a Teacher Education Course

    Science.gov (United States)

    Quebec Fuentes, Sarah; Bloom, Mark

    2017-01-01

    This article exemplifies the critical initial phase of action research, problem identification, in the context of a teacher education course. After frustration arose between preservice elementary teachers (PSTs) and their instructor over classwork quality, the instructor employed reflective journaling and discussions to examine the source of the…

  16. A multi-supplier sourcing problem with a preference ordering of suppliers

    NARCIS (Netherlands)

    Honhon, D.B.L.P.; Gaur, V.; Seshadri, S.

    2012-01-01

    We study a sourcing problem faced by a firm that seeks to procure a product or a component from a pool of alternative suppliers. The firm has a preference ordering of the suppliers based on factors such as their past performance, quality, service, geographical location, and financial strength, which

  17. The Use of Original Sources and Its Potential Relation to the Recruitment Problem

    Science.gov (United States)

    Jankvist, Uffe Thomas

    2014-01-01

    Based on a study about using original sources with Danish upper secondary students, the paper addresses the potential outcome of such an approach in regard to the so-called recruitment problem to the mathematical sciences. 24 students were exposed to questionnaire questions and 16 of these to follow-up interviews, which form the basis for both a…

  18. A Numerical Simulation Of The Pulse Sequence Reconstruction in AC Biased TESs With a β Source

    International Nuclear Information System (INIS)

    Ferrari, Lorenza; Vaccarone, Renzo

    2009-01-01

    We study the response of micro-calorimeters based on Ir/Au TESs, biased by an AC voltage in the MHz range, to the power input generated by beta emission in a Re source thermally connected to the calorimeter itself. The micro-calorimeter is assumed to operate at about 80 mK, and the energy pulses corresponding to the beta emission have energies distributed between zero and 2.58 keV. In this numerical simulation the TES is inserted in an RLC resonating circuit with a low quality factor. The thermal conductivities between the source and the calorimeter, and from the calorimeter to the heat sink, are non-linear. The superconducting-to-normal transition of the TES is described by a realistic non-linear model. The AC current at the carrier frequency, modulated by the changing resistance of the TES, is demodulated and the output is filtered. The resulting signal is analyzed to deduce the attainable time resolution and the linearity of the response.

  19. On Inverse Coefficient Heat-Conduction Problems on Reconstruction of Nonlinear Components of the Thermal-Conductivity Tensor of Anisotropic Bodies

    Science.gov (United States)

    Formalev, V. F.; Kolesnik, S. A.

    2017-11-01

    The authors are the first to present a closed procedure for the numerical solution of inverse coefficient problems of heat conduction in anisotropic materials used for heat shielding in rocket and space equipment. The reconstructed components of the thermal-conductivity tensor depend on temperature (are nonlinear). The procedure includes the formation of experimental data, the implicit gradient-descent method, an economical, absolutely stable method for the numerical solution of parabolic problems containing mixed derivatives, parametric identification, the construction and numerical solution of the problem for the elements of the sensitivity matrices, the development of a quadratic residual functional and regularizing functionals, and the development of algorithms and software systems. The implicit gradient-descent method permits expanding the quadratic functional in a Taylor series with retention of the linear terms in the increments of the sought functions, which substantially improves the accuracy and stability of the solution of the inverse problems. Software systems were developed both with and without account taken of the errors in the experimental data. On the basis of a priori assumptions about the qualitative behavior of the functional dependences of the components of the thermal-conductivity tensor on temperature, regularizing functionals are constructed by means of which one can reconstruct the components of the thermal-conductivity tensor with an error no higher than that of the experimental data. Results of the numerical solution of the inverse coefficient problems of reconstructing the nonlinear components of the thermal-conductivity tensor have been obtained and are discussed.

  20. Observation of γ-sources using a new reconstruction technique in the CLUE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bartoli, B.; Bastieri, D.; Bigongiari, C.; Biral, R.; Ciocci, M.A.; Cresti, M.; Dokoutchaeva, V.; Kartashov, D.; Liello, F.; Malakhov, N.; Mariotti, M.; Marsella, G.; Menzione, A.; Paoletti, R.; Peruzzo, L.; Piccioli, A.; Pegna, R.; Rosso, F.; Saggion, A.; Sartori, G.; Sbarra, C.; Scribano, A.; Smogailov, E.; Stamerra, A.; Turini, N

    2001-04-01

    The CLUE experiment, located on the island of La Palma at 2200 m a.s.l., is a 3×3 array of telescopes detecting the UV (190-230 nm) Cherenkov light produced by atmospheric showers. Since atmospheric absorption in the UV range is higher than in the visible range, CLUE cannot apply the existing algorithms normally used in IACT experiments to determine the primary cosmic ray direction. In this paper we present a new method developed by CLUE. The algorithm's performance was evaluated using simulated showers. Using the new technique, preliminary results from the last two years' observational campaigns on the Crab Nebula and on Markarian 421 are presented, showing a clear signal from both sources. The CLUE experiment also collected data with the telescopes aimed directly at the Moon; improvements are also expected in the Moon shadow measurement when the new method is adopted.

  1. Observation of γ-sources using a new reconstruction technique in the CLUE experiment

    International Nuclear Information System (INIS)

    Bartoli, B.; Bastieri, D.; Bigongiari, C.; Biral, R.; Ciocci, M.A.; Cresti, M.; Dokoutchaeva, V.; Kartashov, D.; Liello, F.; Malakhov, N.; Mariotti, M.; Marsella, G.; Menzione, A.; Paoletti, R.; Peruzzo, L.; Piccioli, A.; Pegna, R.; Rosso, F.; Saggion, A.; Sartori, G.; Sbarra, C.; Scribano, A.; Smogailov, E.; Stamerra, A.; Turini, N.

    2001-01-01

    The CLUE experiment, located on the island of La Palma at 2200 m a.s.l., is a 3×3 array of telescopes detecting the UV (190-230 nm) Cherenkov light produced by atmospheric showers. Since atmospheric absorption in the UV range is higher than in the visible range, CLUE cannot apply the existing algorithms normally used in IACT experiments to determine the primary cosmic ray direction. In this paper we present a new method developed by CLUE. The algorithm's performance was evaluated using simulated showers. Using the new technique, preliminary results from the last two years' observational campaigns on the Crab Nebula and on Markarian 421 are presented, showing a clear signal from both sources. The CLUE experiment also collected data with the telescopes aimed directly at the Moon; improvements are also expected in the Moon shadow measurement when the new method is adopted.

  2. Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool.

    Science.gov (United States)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-07

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called Simulation of External Source Accident with MEdical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry with the MCNP(X) Monte Carlo code for radiation-matter interaction. This note presents a new functionality of the software that enables modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of the new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  3. A comparative study of electrocardiogram multi-segment reconstruction and dual source computed tomography using a computer controlled coronary phantom

    International Nuclear Information System (INIS)

    Ohashi, Kazuya; Higashide, Ryo; Kunitomo, Hirosi; Ichikawa, Katsuhiro

    2011-01-01

    Currently, there are two main methods for improving the temporal resolution of coronary computed tomography (CT): electrocardiogram-gated multi-segment reconstruction (EMR) and dual-source scanning using dual-source CT (DSCT). We developed a motion phantom system for image quality assessment of cardiac CT to evaluate these two methods. The phantom system was designed to move an object at arbitrary speeds during a desired phase range of cyclic motion. Using this system, we obtained coronary CT images of moving objects resembling coronary arteries. We investigated the difference in motion artifacts between EMR and DSCT using a 3-mm-diameter acrylic rod resembling a coronary artery; EMR was evaluated on a 16-row multi-slice CT (16MSCT). To evaluate image quality, we examined the degree of motion artifacts by analyzing the profiles around the rod and the displacement of the peak pixel in the rod image. On the 16MSCT, EMR caused marked increases in artifacts and displacement. In contrast, DSCT produced excellent images with fewer artifacts. The results confirm the validity of DSCT for improving true temporal resolution. (author)

  4. On an inverse source problem for enhanced oil recovery by wave motion maximization in reservoirs

    KAUST Repository

    Karve, Pranav M.

    2014-12-28

    We discuss an optimization methodology for focusing wave energy on subterranean formations using strong-motion actuators placed on the ground surface. The motivation stems from the desire to increase the mobility of otherwise entrapped oil. The goal is to arrive at the spatial and temporal description of surface sources that are capable of maximizing mobility in the target reservoir. The focusing problem is posed as an inverse source problem. The underlying wave propagation problems are abstracted in two spatial dimensions, and the semi-infinite extent of the physical domain is negotiated by a buffer of perfectly-matched-layers (PMLs) placed at the domain's truncation boundary. We discuss two possible numerical implementations; their utility for deciding the tempo-spatial characteristics of optimal wave sources is shown via numerical experiments. Overall, the simulations demonstrate the inverse source method's ability to simultaneously optimize load locations and time signals, leading to the maximization of energy delivery to a target formation.

  5. SESAME: a software tool for the numerical dosimetric reconstruction of radiological accidents involving external sources and its application to the accident in Chile in December 2005.

    Science.gov (United States)

    Huet, C; Lemosquet, A; Clairand, I; Rioual, J B; Franck, D; de Carlan, L; Aubineau-Lanièce, I; Bottollier-Depois, J F

    2009-01-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. This dose distribution can be assessed by physical dosimetric reconstruction methods, using experimental or numerical techniques. This article presents the laboratory-developed SESAME (Simulation of External Source Accident with MEdical images) tool, dedicated to the dosimetric reconstruction of radiological accidents through numerical simulations which combine voxel geometry with the MCNP(X) Monte Carlo radiation-matter interaction code. The experimental validation of the tool using a photon field and its application to a radiological accident in Chile in December 2005 are also described.

  6. Comparison between correlated sampling and the perturbation technique of MCNP5 for fixed-source problems

    International Nuclear Information System (INIS)

    He Tao; Su Bingjing

    2011-01-01

    Highlights: (1) The performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. (2) In terms of precision, the MCNP perturbation technique outperforms correlated sampling for one type of problem but performs comparably with, or even underperforms, correlated sampling for the other two types. (3) In terms of accuracy, the MCNP perturbation calculations may predict inaccurate results for some of the test problems; however, the accuracy can be improved if the midpoint correction technique is used. Abstract: Correlated sampling and the differential operator perturbation technique are two methods that enable MCNP (Monte Carlo N-Particle) to simulate small response changes between an original system and a perturbed system. In this work the performance of the MCNP differential operator perturbation technique is compared with that of the MCNP correlated sampling method for three types of fixed-source problems. In terms of precision of predicted response changes, the MCNP perturbation technique outperforms correlated sampling for the problem involving variation of nuclide concentrations in the same direction, but performs comparably with or even underperforms correlated sampling for the other two types of problems, which involve void or variation of nuclide concentrations in opposite directions. In terms of accuracy, the MCNP differential operator perturbation calculations may predict inaccurate results that deviate from the benchmarks well beyond their uncertainty ranges for some of the test problems. However, the accuracy can be improved if the midpoint correction technique is used.
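
    Schematically, the differential operator technique estimates the response change by a truncated Taylor expansion in the perturbed parameter v, with the derivative coefficients scored along the unperturbed random walk (notation assumed, not the paper's):

        \Delta c \;\approx\; \sum_{n=1}^{N} \frac{1}{n!} \left. \frac{\partial^{n} c}{\partial v^{n}} \right|_{v_0} (\Delta v)^{n}

    Roughly speaking, the midpoint correction mentioned above improves accuracy by re-centering the expansion between the original and perturbed configurations, while correlated sampling instead follows both systems with common random numbers and differences the tallies.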

  7. [Mandibular reconstruction with fibula free flap. Experience of virtual reconstruction using OsiriX®, a free and open-source software for medical imagery].

    Science.gov (United States)

    Albert, S; Cristofari, J-P; Cox, A; Bensimon, J-L; Guedon, C; Barry, B

    2011-12-01

    Free tissue transfer techniques are mainly used for mandibular reconstruction by specialized surgical teams. This type of reconstruction is mostly performed for head and neck cancers affecting the mandibular bone and requiring wide surgical resection with interruption of the mandible. To shorten the operation, the surgical procedure generally involves two teams, one devoted to cancer resection and the other to raising the fibular flap and performing the reconstruction. For better preparation of this procedure, we propose the use of medical imaging software enabling three-dimensional mandibular reconstruction from the CT scan performed during the initial disease-staging checkup. The software used is OsiriX®, developed since 2004 by a team of radiologists from Geneva and UCLA, running on Apple® computers and downloadable free of charge in its basic version. We report our experience with this software in 17 patients, with preoperative three-dimensional modelling of the mandible and of the segment of mandible to be removed; the approach also predicts the number of fibula fragments needed and the locations of the osteotomies.

  8. Typology of historical sources and the reconstruction of long-term historical changes of riverine fish: a case study of the Austrian Danube and northern Russian rivers

    Science.gov (United States)

    Haidvogl, Gertrud; Lajus, Dmitry; Pont, Didier; Schmid, Martin; Jungwirth, Mathias; Lajus, Julia

    2014-01-01

    Historical data are widely used in river ecology to define reference conditions or to investigate the evolution of aquatic systems. Most studies rely on printed documents from the 19th century, thus missing pre-industrial states and human impacts. This article discusses historical sources that can be used to reconstruct the development of riverine fish communities from the Late Middle Ages until the mid-20th century. Based on the studies of the Austrian Danube and northern Russian rivers, we propose a classification scheme of printed and archival sources and describe their fish ecological contents. Five types of sources were identified using the origin of sources as the first criterion: (i) early scientific surveys, (ii) fishery sources, (iii) fish trading sources, (iv) fish consumption sources and (v) cultural representations of fish. Except for early scientific surveys, all these sources were produced within economic and administrative contexts. They did not aim to report about historical fish communities, but do contain information about commercial fish and their exploitation. All historical data need further analysis for a fish ecological interpretation. Three case studies from the investigated Austrian and Russian rivers demonstrate the use of different source types and underline the necessity for a combination of different sources and a methodology combining different disciplinary approaches. Using a large variety of historical sources to reconstruct the development of past fish ecological conditions can support future river management by going beyond the usual approach of static historical reference conditions. PMID:25284959

  9. Pylogeny: an open-source Python framework for phylogenetic tree reconstruction and search space heuristics

    Directory of Open Access Journals (Sweden)

    Alexander Safatli

    2015-06-01

    Summary: Pylogeny is a cross-platform library for the Python programming language that provides an object-oriented application programming interface for phylogenetic heuristic searches. Its primary function is to permit both heuristic search and analysis of the phylogenetic tree search space, as well as to enable the design of novel algorithms to search this space. To this end, the framework supports the structural manipulation of phylogenetic trees, in particular using rearrangement operators such as NNI, SPR, and TBR; the scoring of trees using parsimony and likelihood methods; the construction of a tree search space graph; and the programmatic execution of a few existing heuristic programs. The library supports a range of common phylogenetic file formats and can be used for both nucleotide and protein data. Furthermore, it is also capable of supporting GPU likelihood calculation on nucleotide character data through the BEAGLE library. Availability: Existing development and source code are available for contribution and for download by the public from GitHub (http://github.com/AlexSafatli/Pylogeny). A stable release of this framework is available for download through PyPi (Python Package Index) at http://pypi.python.org/pypi/pylogeny.
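
    To illustrate the kind of tree rearrangement named above, here is a self-contained toy implementation of the NNI (nearest-neighbor interchange) operator on a small binary-tree structure. This is only a sketch of the operator itself; it does not use Pylogeny's actual classes or method names.

        import copy

        class Node:
            # A rooted binary tree node; leaves carry a taxon name.
            def __init__(self, name=None, left=None, right=None):
                self.name, self.left, self.right = name, left, right

            def is_leaf(self):
                return self.left is None and self.right is None

        def nni_neighbors(node):
            # Yield the two NNI variants around the internal edge between
            # `node` and its right child: swap node's left subtree with
            # either grandchild across that edge.
            if node.is_leaf() or node.right is None or node.right.is_leaf():
                return
            for side in ("left", "right"):
                variant = copy.deepcopy(node)
                child = variant.right
                variant.left, swapped = getattr(child, side), variant.left
                setattr(child, side, swapped)
                yield variant

    A heuristic search then scores each neighbor (by parsimony or likelihood) and keeps the best, repeating until no rearrangement improves the score.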

  10. Spanish historical sources to reconstruct climate in the Americas during the XIXth Century

    Science.gov (United States)

    García-Herrera, R.; Rubio, F.; Prieto, M.; Hernández, E.; Gimeno, L.

    2001-12-01

    The Spanish colonization of the Americas lasted from the beginning of the XVIth century until the beginning of the XIXth century, when most of the colonies became independent. During this period, a large amount of documentary information was produced, because the Spanish Empire was highly centralized and bureaucracy was one of its core elements. Most of these documents are well preserved, either in local archives in the Americas or in the Archivo General de Indias in Sevilla, which keeps thousands of bundles relating to every relevant aspect of ordinary life in the colonies. Several projects are now searching for climatic information in this archive, with very encouraging results. During the XIXth century Spain kept two colonies in the Americas, Cuba and Puerto Rico, which became independent in 1898; consequently, much information from this period survives in Spanish archives. After a preliminary inspection of several Spanish archives (Archivo General de Indias, Archivo del Museo Naval and Archivo Histórico Nacional, i.e. the General Archive of the Indies, the Archive of the Naval Museum and the National Historical Archive), it has been possible to identify two main areas of climatic interest: 1) information from ship logbooks connecting Spain with Cuba and Puerto Rico, and 2) reports about hurricanes. The information contained in the ship logbooks is very rich and could help to better characterize elements of the large-scale circulation in the Atlantic; the reports on hurricanes can be very detailed and were prepared by highly skilled personnel. The presentation will provide examples of the potential of these sources and describe the Spanish projects involved in abstracting this type of data.

  11. Definition of the form of coal spontaneous combustion source as the inverse problem of geoelectrics

    Directory of Open Access Journals (Sweden)

    Sirota Dmitry

    2017-01-01

    The paper reviews a method for determining the shape and size of coal self-heating sources on open-pit benches and in coal piles during open-cast coal mining. The method is based on a regularity, established in the 1970s, relating the distribution of the natural electrical field potential to the temperature in the vicinity of a self-heating center. The problem reduces to the solution of an ill-posed inverse problem of mathematical physics. The study presents the developed solution algorithm and the results of numerical simulation.

  12. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of unknowns in systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and an insight into the impact of the measurements' size and location is provided.

  13. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.

    2015-08-31

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of unknowns in systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and an insight into the impact of the measurements' size and location is provided.
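
    As a minimal sketch of the observer idea (the state-estimation part only; the paper's adaptive observer additionally updates a source estimate), the following runs a semi-discretized 1-D wave equation and a Luenberger observer side by side. All names, gains and discretization choices here are illustrative assumptions.

        import numpy as np

        def wave_operator(n, c=1.0, h=0.01):
            # First-order form x = [u; u_t] of u_tt = c^2 u_xx (Dirichlet ends).
            lap = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
                   + np.diag(np.ones(n - 1), -1)) * (c / h) ** 2
            return np.block([[np.zeros((n, n)), np.eye(n)],
                             [lap, np.zeros((n, n))]])

        def run_observer(A, C, L, x0, forcing, dt=1e-4, steps=2000):
            # Plant:    x' = A x + forcing(t),         y = C x
            # Observer: xhat' = A xhat + forcing(t) + L (y - C xhat)
            # Explicit Euler is kept for brevity; stabler schemes are preferable.
            x, xhat = x0.copy(), np.zeros_like(x0)
            for k in range(steps):
                y = C @ x
                f = forcing(k * dt)
                x = x + dt * (A @ x + f)
                xhat = xhat + dt * (A @ xhat + f + L @ (y - C @ xhat))
            return x, xhat

    When the source is unknown, the adaptive observer replaces forcing(t) in the observer equation by an estimate driven by the same output error, which is what the paper analyzes.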

  14. On increasing stability in the two dimensional inverse source scattering problem with many frequencies

    Science.gov (United States)

    Entekhabi, Mozhgan Nora; Isakov, Victor

    2018-05-01

    In this paper, we study the increasing stability in the inverse source problem for the Helmholtz equation in the plane when the source term is assumed to be compactly supported in a bounded domain Ω with a sufficiently smooth boundary. The Fourier transform in the frequency domain, bounds for the Hankel functions and for scattering solutions in the complex plane, improved bounds for analytic continuation, and the exact observability of the wave equation lead to our goals: a sharp uniqueness result and an increasing stability estimate as the wave number interval grows.

  15. On the comparison of the Spherical Wave Expansion-to-Plane Wave Expansion and the Sources Reconstruction Method for Antenna Diagnostics

    DEFF Research Database (Denmark)

    Alvarez, Yuri; Cappellin, Cecilia; Las-Heras, Fernando

    2008-01-01

    A comparison between two recently developed methods for antenna diagnostics is presented: on the one hand, the Spherical Wave Expansion-to-Plane Wave Expansion (SWE-PWE), based on the relationship between spherical and planar wave modes; on the other hand, the Sources Reconstruction Method (SRM), based...

  16. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
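
    A minimal sketch of the second-order dissipative dynamics, with a damped symplectic (semi-implicit Euler) step, applied to a generic differentiable objective J; the dynamically selected regularization parameter of the paper is omitted, and all names are assumptions:

        import numpy as np

        def damped_flow(grad_J, x0, eta=1.0, dt=0.01, steps=5000):
            # Integrate x'' + eta x' = -grad J(x): the velocity update treats
            # the damping term implicitly, the position update is explicit.
            x, v = x0.copy(), np.zeros_like(x0)
            for _ in range(steps):
                v = (v - dt * grad_J(x)) / (1.0 + dt * eta)
                x = x + dt * v
            return x

    Compared with a first-order (steepest-descent) flow, the inertial term lets the trajectory traverse flat regions of the regularized functional faster, which is the motivation given above.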

  17. Problems of accuracy and sources of error in trace analysis of elements

    International Nuclear Information System (INIS)

    Porat, Ze'ev.

    1995-07-01

    Technological developments in analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and can be divided into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  18. Problems of accuracy and sources of error in trace analysis of elements

    Energy Technology Data Exchange (ETDEWEB)

    Porat, Ze'ev

    1995-07-01

    Technological developments in analytical chemistry in recent years facilitate trace analysis of materials at sub-ppb levels. This provides important information regarding the presence of various trace elements in the human body, in drinking water and in the environment. However, it also exposes the measurements to more severe problems of contamination and inaccuracy due to the high sensitivity of the analytical methods. The sources of error are numerous and can be divided into three main groups: (a) impurities from various sources; (b) loss of material during sample processing; (c) problems of calibration and interference. These difficulties are discussed here in detail, together with some practical solutions and examples. (authors) 8 figs., 2 tabs., 18 refs.

  19. An Inverse Source Problem for a One-dimensional Wave Equation: An Observer-Based Approach

    KAUST Repository

    Asiri, Sharefa M.

    2013-05-25

    Observers are well known in the theory of dynamical systems. They are used to estimate the states of a system from some measurements. Recently, however, observers have also been developed to estimate unknowns in systems governed by partial differential equations. Our aim is to design an observer to solve the inverse source problem for a one-dimensional wave equation. Firstly, the problem is discretized in both space and time, and then an adaptive observer based on partial field measurements (i.e., measurements taken from the solution of the wave equation) is applied to estimate both the states and the source. We examine the effectiveness of this observer in both noise-free and noisy cases. In each case, numerical simulations are provided to illustrate the effectiveness of this approach. Finally, we compare the performance of the observer approach with the Tikhonov regularization approach.
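
    For the comparison mentioned last, a minimal Tikhonov baseline for a discretized linear source-to-measurement map A is simply a regularized least-squares solve; this toy sketch (assumed names, dense matrices) shows the standard form, not the paper's implementation:

        import numpy as np

        def tikhonov(A, y, alpha):
            # Minimize ||A f - y||^2 + alpha ||f||^2 via the normal equations.
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    The observer approach, by contrast, filters the measurements recursively in time instead of solving one batch system, which is what makes the comparison interesting.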

  20. Topical problems connected with the German act on electricity from renewable energy sources (StrEG)

    International Nuclear Information System (INIS)

    Pohlmann, M.

    1998-01-01

    The German act on electricity from renewable energy sources (StrEG), intended to enhance the use of renewable energy sources for electricity generation and to promote the relevant technologies, raises some problems of constitutional law that still await judicial review by the German Federal Constitutional Court. In addition, doubts about the lawfulness of provisions of the act have been emerging in connection with EC laws governing subsidies and state aid. The article summarizes the current situation. (orig./CB)

  1. The solar neutrino problem after the GALLEX artificial neutrino source experiment

    International Nuclear Information System (INIS)

    Vignaud, D.

    1995-01-01

    Using an intense 51Cr artificial neutrino source (more than 60 PBq), the GALLEX solar neutrino collaboration has recently verified that its radiochemical detector is fully efficient for the detection of solar neutrinos. Following this crucial result, the status of the solar neutrino problem is reviewed, with emphasis on how neutrino oscillations may explain (through the MSW effect) the different deficits observed in the four existing experiments. (author). 25 refs., 5 figs., 1 tab

  2. Social problems as sources of opportunity – antecedents of social entrepreneurship opportunities

    Directory of Open Access Journals (Sweden)

    Agnieszka Żur

    2016-02-01

    Objective: Based on an extensive literature review, this paper aims to establish whether, why and how, in given environmental and market contexts, social entrepreneurship (SE) opportunities are discovered and exploited. It positions social problems as sources of entrepreneurial opportunity. The article presents an integrated, process-based view of SE opportunity antecedents and concludes with a dynamic model of SE opportunity. Research Design & Methods: To fulfil its goal, the paper establishes opportunity as the unit of research and explores the dynamics of opportunity recognition. To identify the components of SE opportunity through a process-based view, the study follows the steps of the critical literature review method. The literature review is followed by logical reasoning and inference, which results in the formulation of a proposed model of social entrepreneurship opportunity. Findings: The paper presents a holistic perspective on opportunity antecedents in the SE context and introduces social problems, information, social awareness and the entrepreneurial mindset as fundamental components of the social entrepreneurship opportunity equation. Implications & Recommendations: Policy makers, investors and partners involved in the social sector should remember that social problems can be a source of entrepreneurial opportunity; training, assisting and engaging socially aware entrepreneurs is a promising line of development for all communities. Contribution & Value Added: The major contribution of this study lies in extending the existing body of social entrepreneurship research by providing a new perspective, placing the social problem as opportunity at the centre of the discussion.

  3. An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations

    KAUST Repository

    Jeong, C.; Kallivokas, L. F.

    2016-01-01

    This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.

  4. An inverse-source problem for maximization of pore-fluid oscillation within poroelastic formations

    KAUST Repository

    Jeong, C.

    2016-07-04

    This paper discusses a mathematical and numerical modeling approach for identification of an unknown optimal loading time signal of a wave source, atop the ground surface, that can maximize the relative wave motion of a single-phase pore fluid within fluid-saturated porous permeable (poroelastic) rock formations, surrounded by non-permeable semi-infinite elastic solid rock formations, in a one-dimensional setting. The motivation stems from a set of field observations, following seismic events and vibrational tests, suggesting that shaking an oil reservoir is likely to improve oil production rates. This maximization problem is cast into an inverse-source problem, seeking an optimal loading signal that minimizes an objective functional – the reciprocal of kinetic energy in terms of relative pore-fluid wave motion within target poroelastic layers. We use the finite element method to obtain the solution of the governing wave physics of a multi-layered system, where the wave equations for the target poroelastic layers and the elastic wave equation for the surrounding non-permeable layers are coupled with each other. We use a partial-differential-equation-constrained-optimization framework (a state-adjoint-control problem approach) to tackle the minimization problem. The numerical results show that the numerical optimizer recovers optimal loading signals, whose dominant frequencies correspond to amplification frequencies, which can also be obtained by a frequency sweep, leading to larger amplitudes of relative pore-fluid wave motion within the target hydrocarbon formation than other signals.
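
    In outline, the inverse source problem above minimizes the reciprocal of the kinetic energy of the relative pore-fluid motion over the target layers; with assumed symbols (w the relative fluid displacement, \rho_f the fluid density, \Omega_t the target formation, T the time window, s the surface load), the objective reads in LaTeX notation:

        J(s) = \left[ \int_{0}^{T}\!\! \int_{\Omega_t} \tfrac{1}{2}\, \rho_f \left| \partial_t w(x,t;s) \right|^{2} \mathrm{d}x\, \mathrm{d}t \right]^{-1} \longrightarrow \min_{s}

    subject to the coupled poroelastic/elastic wave equations, which is handled by the state-adjoint-control formulation described above.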

  5. An advanced boundary element method (BEM) implementation for the forward problem of electromagnetic source imaging

    International Nuclear Information System (INIS)

    Akalin-Acar, Zeynep; Gencer, Nevzat G

    2004-01-01

    The forward problem of electromagnetic source imaging has two components: a numerical model to solve the related integral equations and a model of the head geometry. This study is on the boundary element method (BEM) implementation for numerical solutions and realistic head modelling. The use of second-order (quadratic) isoparametric elements and the recursive integration technique increases the accuracy of the solutions. Two new formulations are developed for the calculation of the transfer matrices to obtain the potential and magnetic field patterns using realistic head models. The formulations incorporate the use of the isolated problem approach for increased accuracy in solutions. If a personal computer is used for computations, each transfer matrix is calculated in 2.2 h. After this pre-computation period, solutions for arbitrary source configurations can be obtained in milliseconds for a realistic head model. A hybrid algorithm that uses snakes, morphological operations, region growing and thresholding is used for segmentation. The scalp, skull, grey matter, white matter and eyes are segmented from the multimodal magnetic resonance images, and meshes for the corresponding surfaces are created. A mesh generation algorithm is developed for modelling the intersecting tissue compartments, such as the eyes. To obtain more accurate results, quadratic elements are used in the realistic meshes. The resultant BEM implementation provides more accurate forward problem solutions and more efficient calculations. Thus it can be a firm basis for future inverse problem solutions

  6. Sources of spurious force oscillations from an immersed boundary method for moving-body problems

    Science.gov (United States)

    Lee, Jongho; Kim, Jungwoo; Choi, Haecheon; Yang, Kyung-Soo

    2011-04-01

    When a discrete-forcing immersed boundary method is applied to moving-body problems, it produces spurious force oscillations on a solid body. In the present study, we identify two sources of these force oscillations. One source is the spatial discontinuity in the pressure across the immersed boundary when a grid point located inside a solid body becomes a fluid point due to the body motion. The addition of a mass source/sink together with the momentum forcing proposed by Kim et al. [J. Kim, D. Kim, H. Choi, An immersed-boundary finite volume method for simulations of flow in complex geometries, Journal of Computational Physics 171 (2001) 132-150] reduces the spurious force oscillations by alleviating this pressure discontinuity. The other source is the temporal discontinuity in the velocity at grid points where fluid becomes solid due to the body motion. The magnitude of the velocity discontinuity decreases with decreasing grid spacing near the immersed boundary. Four moving-body problems are simulated by varying the grid spacing at a fixed computational time step and at a constant CFL number, respectively. It is found that the spurious force oscillations decrease with decreasing grid spacing and increasing computational time step size, but they depend more on the grid spacing than on the computational time step size.

  7. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Background: The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with different detector designs. Methods: We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D) positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROIs) for each reconstruction were quantified. Results: The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV in all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions: The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV using different reconstruction methods and scanners. PSF-based reconstruction reduces the dependence of the spatial resolution on the position in the FOV.
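
    For readers who want to reproduce the NEMA-style resolution measurement, a minimal sketch of a FWHM estimate from a 1-D point-source profile is given below; it uses linear interpolation at the half-maximum crossings, which is the NEMA prescription. The function name and toy data are illustrative.

```python
import numpy as np

def fwhm(profile, pixel_mm):
    """FWHM of a 1-D point-source profile: linear interpolation at the
    half-maximum crossings (the NEMA prescription for PET resolution)."""
    y = np.asarray(profile, dtype=float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i, j = above[0], above[-1]             # first/last samples above half max
    left = (i - 1) + (half - y[i - 1]) / (y[i] - y[i - 1])
    right = j + (y[j] - half) / (y[j] - y[j + 1])
    return (right - left) * pixel_mm

# Toy Gaussian profile: expected FWHM = 2.355 * sigma * pixel size = 4.71 mm
xs = np.arange(-20, 21)
sigma = 2.0
print(fwhm(np.exp(-xs**2 / (2 * sigma**2)), pixel_mm=1.0))
```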

  8. Performance bounds for sparse signal reconstruction with multiple side information [arXiv]

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Seiler, Jurgen; Kaup, Andre

    2016-01-01

    In the context of compressive sensing (CS), this paper considers the problem of reconstructing sparse signals with the aid of other given correlated sources as multiple side information (SI). To address this problem, we propose a reconstruction algorithm with multiple SI (RAMSI) that solves...
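
    The record is truncated, but the general idea of reconstruction with side information can be sketched: below is a generic weighted-l1 ISTA iteration in which a correlated side-information signal reduces the shrinkage where it suggests activity. This is a sketch in the spirit of such methods, not the RAMSI update rules from the paper; all data are synthetic.

```python
import numpy as np

# Generic weighted-l1 ISTA with one side-information (SI) signal z: the
# weights shrink less where the SI suggests a large coefficient.
rng = np.random.default_rng(2)
n, m, k = 200, 60, 8
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
z = x + 0.05 * rng.standard_normal(n)      # correlated side information
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x                                  # noiseless measurements

wts = 1.0 / (np.abs(z) + 1e-2)             # small weight where |z| is large
lam = 1e-2
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of gradient
xh = np.zeros(n)
for _ in range(1000):
    g = xh - step * (A.T @ (A @ xh - y))   # gradient step on the data term
    xh = np.sign(g) * np.maximum(np.abs(g) - step * lam * wts, 0.0)
print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))
```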

  9. DEEP WIDEBAND SINGLE POINTINGS AND MOSAICS IN RADIO INTERFEROMETRY: HOW ACCURATELY DO WE RECONSTRUCT INTENSITIES AND SPECTRAL INDICES OF FAINT SOURCES?

    Energy Technology Data Exchange (ETDEWEB)

    Rau, U.; Bhatnagar, S.; Owen, F. N., E-mail: rurvashi@nrao.edu [National Radio Astronomy Observatory, Socorro, NM-87801 (United States)

    2016-11-01

    Many deep wideband wide-field radio interferometric surveys are being designed to accurately measure intensities, spectral indices, and polarization properties of faint source populations. In this paper, we compare various wideband imaging methods to evaluate the accuracy with which intensities and spectral indices of sources close to the confusion limit can be reconstructed. We simulated a wideband single-pointing (C-array, L-Band (1–2 GHz)) and 46-pointing mosaic (D-array, C-Band (4–8 GHz)) JVLA observation using a realistic brightness distribution ranging from 1 μJy to 100 mJy and time-, frequency-, polarization-, and direction-dependent instrumental effects. The main results from these comparisons are (a) errors in the reconstructed intensities and spectral indices are larger for weaker sources even in the absence of simulated noise, (b) errors are systematically lower for joint reconstruction methods (such as Multi-Term Multi-Frequency-Synthesis (MT-MFS)) along with A-Projection for accurate primary beam correction, and (c) use of MT-MFS for image reconstruction eliminates Clean-bias (which is present otherwise). Auxiliary tests include solutions for deficiencies of data partitioning methods (e.g., the use of masks to remove clean bias and hybrid methods to remove sidelobes from sources left un-deconvolved), the effect of sources not at pixel centers, and the consequences of various other numerical approximations within software implementations. This paper also demonstrates the level of detail at which such simulations must be done in order to reflect reality, enable one to systematically identify specific reasons for every trend that is observed, and to estimate scientifically defensible imaging performance metrics and the associated computational complexity of the algorithms/analysis procedures.
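
    As a reminder of the quantity being tested, a two-point spectral index (with S(f) proportional to f to the power alpha) can be estimated as below; the flux densities and frequencies are invented values loosely matching the simulated L-band setup, not results from the paper.

```python
import numpy as np

def spectral_index(s1, s2, f1, f2):
    """Two-point spectral index alpha, defined by S(f) ~ f**alpha."""
    return np.log(s1 / s2) / np.log(f1 / f2)

# Invented flux densities at the sub-band centers of a 1-2 GHz observation:
print(spectral_index(s1=110e-6, s2=95e-6, f1=1.25e9, f2=1.75e9))  # about -0.44
```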

  10. A coarse-mesh diffusion synthetic acceleration of the scattering source iteration scheme for one-speed slab-geometry discrete ordinates problems

    International Nuclear Information System (INIS)

    Santos, Frederico P.; Alves Filho, Hermes; Barros, Ricardo C.; Xavier, Vinicius S.

    2011-01-01

    The scattering source iterative (SI) scheme is traditionally applied to converge fine-mesh numerical solutions of fixed-source discrete ordinates (S_N) neutron transport problems. The SI scheme is very simple to implement from a computational viewpoint. However, it may show a very slow convergence rate, mainly for diffusive media (low absorption) that are several mean free paths in extent. In this work we describe an acceleration technique based on an improved initial guess for the scattering source distribution within the slab. In other words, we use as the initial guess for the fine-mesh scattering source the coarse-mesh solution of the neutron diffusion equation with special boundary conditions that account for the classical S_N prescribed boundary conditions, including vacuum boundary conditions. Therefore, we first implement a spectral nodal method that generates a coarse-mesh diffusion solution that is completely free from spatial truncation errors; then we reconstruct this coarse-mesh solution within each spatial cell of the discretization grid to yield the initial guess for the fine-mesh scattering source in the first S_N transport sweep (μ_m > 0 and μ_m < 0, m = 1:N) across the spatial grid. We consider a number of numerical experiments to illustrate the efficiency of the proposed diffusion synthetic acceleration (DSA) technique. (author)
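
    A minimal sketch of the central idea, warm-starting source iteration with a coarse diffusion solve, is given below for a one-speed slab with diamond differencing and vacuum boundaries. The discretization and boundary treatment are simplified placeholders (for instance, no special diffusion boundary conditions as used in the paper), so the printed sweep counts are only indicative.

```python
import numpy as np

# One-speed slab, isotropic scattering, vacuum boundaries; diamond-difference
# S_N with scattering source iteration (SI). All parameters are placeholders.
I, N, L = 100, 8, 10.0                    # cells, ordinates, slab width (cm)
h = L / I
sig_t, sig_s = 1.0, 0.9                   # diffusive medium (c = 0.9)
q = np.ones(I)                            # flat fixed source
mu, w = np.polynomial.legendre.leggauss(N)

def transport_sweep(phi):
    """One diamond-difference S_N sweep given the current scalar flux."""
    S = 0.5 * (sig_s * phi + q)           # isotropic emission density per angle
    phi_new = np.zeros(I)
    for m in range(N):
        a = abs(mu[m]) / h
        psi_in = 0.0                      # vacuum inflow
        cells = range(I) if mu[m] > 0 else reversed(range(I))
        for i in cells:
            psi_out = (S[i] + (a - 0.5 * sig_t) * psi_in) / (a + 0.5 * sig_t)
            phi_new[i] += w[m] * 0.5 * (psi_in + psi_out)
            psi_in = psi_out
    return phi_new

def diffusion_guess():
    """Coarse diffusion solve (zero-flux boundaries) used only as SI guess."""
    D, sig_a = 1.0 / (3.0 * sig_t), sig_t - sig_s
    main = np.full(I, 2.0 * D / h**2 + sig_a)
    off = np.full(I - 1, -D / h**2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.solve(A, q)

for guess, label in [(np.zeros(I), "zero"), (diffusion_guess(), "diffusion")]:
    phi, sweeps = guess, 0
    while True:
        phi_next = transport_sweep(phi)
        sweeps += 1
        if np.max(np.abs(phi_next - phi)) < 1e-6 * np.max(phi_next):
            break
        phi = phi_next
    print("%s initial guess: %d sweeps" % (label, sweeps))
```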

  11. Prograph Based Analysis of Single Source Shortest Path Problem with Few Distinct Positive Lengths

    Directory of Open Access Journals (Sweden)

    B. Bhowmik

    2011-08-01

    In this paper we propose an experimental study model, S3P2, of a fast fully dynamic programming algorithm design technique for finite directed graphs with few distinct nonnegative real edge weights. The Bellman-Ford approach to shortest path problems has appeared in various implementations. Here the approach is re-investigated with an adjacency-matrix representation chosen to minimize running time. The model tests the proposed algorithm against arbitrary, positively weighted digraphs, introducing the notion of a Prograph that speeds up finding the shortest path over previous implementations. Our experiments establish results indicating that the proposed algorithm can consistently dominate other existing algorithms for single-source shortest path problems. A comparison study among Dijkstra's algorithm, the Bellman-Ford algorithm, and our algorithm is also presented.
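
    For reference, the Bellman-Ford baseline that the paper re-investigates can be stated in a few lines; the early-exit flag below is a common optimization, not the paper's Prograph technique.

```python
import math

def bellman_ford(n, edges, src):
    """Single-source shortest paths on a digraph given as (u, v, w) triples.
    O(V*E); handles any real weights as long as no negative cycle exists."""
    dist = [math.inf] * n
    dist[src] = 0.0
    for _ in range(n - 1):                # at most n-1 relaxation rounds
        changed = False
        for u, v, wt in edges:
            if dist[u] + wt < dist[v]:
                dist[v] = dist[u] + wt
                changed = True
        if not changed:                   # early exit speeds up sparse cases
            break
    return dist

edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0), (2, 3, 5.0)]
print(bellman_ford(4, edges, 0))          # [0.0, 3.0, 1.0, 4.0]
```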

  12. Blahut-Arimoto algorithm and code design for action-dependent source coding problems

    DEFF Research Database (Denmark)

    Trillingsgaard, Kasper Fløe; Simeone, Osvaldo; Popovski, Petar

    2013-01-01

    The source coding problem with action-dependent side information at the decoder has recently been introduced to model data acquisition in resource-constrained systems. In this paper, an efficient Blahut-Arimoto-type algorithm for the numerical computation of the rate-distortion-cost function for this problem is proposed. Moreover, a simplified two-stage code structure based on multiplexing is put forth, whereby the first stage encodes the actions and the second stage is composed of an array of classical Wyner-Ziv codes, one for each action. Leveraging this structure, specific coding/decoding strategies are designed based on LDGM codes and message passing. Through numerical examples, the proposed code design is shown to achieve performance close to the rate-distortion-cost function.
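
    The classical Blahut-Arimoto iteration that the paper extends to the action-dependent setting looks as follows for a plain rate-distortion function; this is the textbook version, not the action-dependent variant proposed in the paper.

```python
import numpy as np

def blahut_arimoto_rd(p_x, d, beta, iters=500):
    """Classical Blahut-Arimoto iteration for the rate-distortion function
    R(D) of a discrete source p_x with distortion matrix d[x, x_hat];
    beta is the Lagrange multiplier trading rate against distortion."""
    q = np.full(d.shape[1], 1.0 / d.shape[1])     # output marginal
    A = np.exp(-beta * d)
    for _ in range(iters):
        cond = A * q                              # unnormalized q(x_hat | x)
        cond /= cond.sum(axis=1, keepdims=True)
        q = p_x @ cond                            # updated output marginal
    R = np.sum(p_x[:, None] * cond * np.log2(cond / q))   # bits per symbol
    D = np.sum(p_x[:, None] * cond * d)                   # expected distortion
    return R, D

# Binary source with Hamming distortion: R(D) = 1 - h2(D) for D <= 1/2.
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0], [1.0, 0.0]])
print(blahut_arimoto_rd(p_x, d, beta=3.0))
```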

  13. ONETRAN, 1-D Transport in Planar, Cylindrical, Spherical Geometry for Homogeneous, Inhomogeneous Problem, Anisotropic Source

    International Nuclear Information System (INIS)

    1982-01-01

    1 - Description of problem or function: ONETRAN solves the one-dimensional multigroup transport equation in plane, cylindrical, spherical, and two-angle plane geometries. Both regular and adjoint, inhomogeneous and homogeneous (K-eff and eigenvalue searches) problems subject to vacuum, reflective, periodic, white, albedo or inhomogeneous boundary flux conditions are solved. General anisotropic scattering is allowed and anisotropic inhomogeneous sources are permitted. 2 - Method of solution: The discrete ordinates approximation for the angular variable is used with the diamond (central) difference approximation for the angular extrapolation in curved geometries. A linear discontinuous finite element representation for the angular flux in each spatial mesh cell is used. Negative fluxes are eliminated by a local set-to-zero-and-correct algorithm. Standard inner (within-group) iteration cycles are accelerated by system re-balance, coarse-mesh re-balance, or Chebyshev acceleration. Outer iteration cycles are accelerated by coarse-mesh re-balance. 3 - Restrictions on the complexity of the problem: Variable dimensioning is used so that any combination of problem parameters leading to a container array less than MAXCOR can be accommodated. On CDC machines MAXCOR can be about 25 000 words and peripheral storage is used for most group-dependent data

  14. Variance analysis of the Monte-Carlo perturbation source method in inhomogeneous linear particle transport problems

    International Nuclear Information System (INIS)

    Noack, K.

    1982-01-01

    The perturbation source method may be a powerful Monte-Carlo means to calculate small effects in a particle field. In a preceding paper we have formulated this method for inhomogeneous linear particle transport problems, describing the particle fields by solutions of Fredholm integral equations, and have derived formulae for the second moment of the difference event point estimator. In the present paper we analyse the general structure of its variance, point out the variance peculiarities, discuss the dependence on certain transport games and on generation procedures of the auxiliary particles, and draw conclusions on how to improve this method

  15. An improved cut-and-solve algorithm for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth; Klose, Andreas; Nielsen, Lars Relund

    2018-01-01

    In this paper, we present an improved cut-and-solve algorithm for the single-source capacitated facility location problem. The algorithm consists of three phases. The first phase strengthens the integer program by a cutting plane algorithm to obtain a tight lower bound. The second phase uses a two-level local branching heuristic to find an upper bound, and if optimality has not yet been established, the third phase uses the cut-and-solve framework to close the optimality gap. Extensive computational results are reported, showing that the proposed algorithm runs 10–80 times faster on average compared

  16. Measuring impairment when diagnosing adolescent ADHD: Differentiating problems due to ADHD versus other sources.

    Science.gov (United States)

    Vazquez, Alejandro L; H Sibley, Margaret; Campez, Mileini

    2018-04-13

    The DSM-5 requires clinicians to link ADHD symptoms to clinically meaningful impairments in daily life functioning. Measuring impairment during ADHD assessments may be particularly challenging in adolescence, when ADHD is often not the sole source of a youth's difficulties. Existing impairment rating scales are criticized for not specifying ADHD as the source of impairment in their instructions, leading to potential problems with rating scale specificity. The current study utilized a within-subjects design (N = 107) to compare parent reports of impairment on two versions of a global impairment measure: one that specified ADHD as the source of impairment (Impairment Rating Scale-ADHD) and a standard version that did not (Impairment Rating Scale). On the standard family impairment item, parents endorsed greater impairment as compared to the IRS-ADHD. This finding was particularly pronounced when parents reported high levels of parenting stress. More severe ADHD symptoms were associated with greater concordance between the two versions. Findings indicate that adolescent family-related impairments reported during ADHD assessments may be due to sources other than ADHD symptoms, such as developmental maladjustment. To prevent false positive diagnoses, symptom-specific wording may optimize impairment measures when assessing family functioning in diagnostic assessments for adolescents with ADHD. Copyright © 2018 Elsevier B.V. All rights reserved.

  17. THE ALL-SOURCE GREEN’S FUNCTION AND ITS APPLICATIONS TO TSUNAMI PROBLEMS

    Directory of Open Access Journals (Sweden)

    ZHIGANG XU

    2007-01-01

    The classical Green's function provides the global linear response to impulse forcing at a particular source location. It is a type of one-source-all-receiver Green's function. This paper presents a new type of Green's function, referred to as the all-source-one-receiver Green's function, or for short the all-source Green's function (ASGF), in which the solution at a point of interest (POI) can be written in terms of global forcing without requiring the solution at other locations. The ASGF is particularly applicable to tsunami problems. The response to forcing anywhere in the global ocean can be determined within a few seconds on an ordinary personal computer or on a web server. The ASGF also brings in two new types of tsunami charts, one for the arrival time and the second for the gain, without assuming the location of the epicenter or reversibility of the tsunami travel path. Thus it provides a useful tool for tsunami hazard preparedness and to rapidly calculate the real-time responses at selected POIs for a tsunami generated anywhere in the world's oceans.
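
    The all-source viewpoint amounts to precomputing, for one point of interest, an impulse-response time series per global source cell, after which the response to arbitrary forcing is a sum of per-source convolutions. A toy sketch with randomly generated (fake) Green's functions, not real ocean-model output:

```python
import numpy as np

# Hypothetical ASGF usage for one point of interest (POI): g[s, :] holds one
# impulse-response time series per global source cell s; the POI response to
# ANY global forcing f[s, :] is a sum of per-source convolutions, with no
# new run of the global model.
rng = np.random.default_rng(0)
n_sources, n_t = 1000, 512
decay = np.exp(-np.arange(n_t) / 50.0)
g = rng.standard_normal((n_sources, n_t)) * decay      # fake Green's functions

f = np.zeros((n_sources, n_t))
f[123, 10] = 1.0                       # an impulse at one arbitrary source cell

active = np.nonzero(f.any(axis=1))[0]  # only forced cells contribute
response = sum(np.convolve(f[s], g[s])[:n_t] for s in active)
print(np.allclose(response[10:], g[123, : n_t - 10]))  # shifted copy -> True
```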

  18. Comparison of algebraic and analytical approaches to the formulation of the statistical model-based reconstruction problem for X-ray computed tomography.

    Science.gov (United States)

    Cierniak, Robert; Lorent, Anna

    2016-09-01

    The main aim of this paper is to investigate the conditioning-related properties of our originally formulated statistical model-based iterative approach to the image reconstruction from projections problem, and in this manner to demonstrate the superiority of this approach over those recently used by other authors. The reconstruction algorithm based on this conception uses a maximum likelihood estimation with an objective adjusted to the probability distribution of measured signals obtained from an X-ray computed tomography system with parallel beam geometry. The analysis and experimental results presented here show that our analytical approach outperforms the referential algebraic methodology which is explored widely in the literature and exploited in various commercial implementations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Reconstructive schemes for variational iteration method within Yang-Laplace transform with application to fractal heat conduction problem

    Directory of Open Access Journals (Sweden)

    Liu Chun-Feng

    2013-01-01

    A reconstructive scheme for the variational iteration method within the Yang-Laplace transform is proposed and developed. The identification of the fractal Lagrange multiplier is investigated by means of the Yang-Laplace transform. The method is exemplified by a fractal heat conduction equation with a local fractional derivative. The results obtained are valid for a compact solution domain with high accuracy.

  20. Solving seismological problems using the SGRAPH program: I - source parameters and hypocentral location

    International Nuclear Information System (INIS)

    Abdelwahed, Mohamed F.

    2012-01-01

    SGRAPH is one of the seismological programs that maintain seismic data. It is unique in being able to read a wide range of data formats and in combining complementary tools for different seismological subjects in a stand-alone Windows-based application. SGRAPH efficiently performs basic waveform analysis and solves advanced seismological problems. The graphical user interface (GUI) utilities and Windows facilities, such as dialog boxes, menus, and toolbars, simplify the user's interaction with the data. SGRAPH supports common data formats such as SAC, SEED, GSE, ASCII, and Nanometrics Y-format, among others. It provides the facilities to solve many seismological problems with its built-in inversion and modeling tools. In this paper, I discuss some of the inversion tools built into SGRAPH related to source parameters and hypocentral location estimation. Firstly, a description of the SGRAPH program is given, discussing some of its features. Secondly, the inversion tools are applied to some selected events of the Dahshour earthquakes as an example of estimating the spectral and source parameters of local earthquakes. In addition, the hypocentral locations of these events are estimated using the Hypoinverse 2000 program operated by SGRAPH.

  1. A Reconstruction Method for the Estimation of Temperatures of Multiple Sources Applied for Nanoparticle-Mediated Hyperthermia

    Directory of Open Access Journals (Sweden)

    Idan Steinberg

    2018-03-01

    Solid malignant tumors are one of the leading causes of death worldwide. Many times complete removal is not possible, and alternative methods such as focused hyperthermia are used. Precise control of the hyperthermia process is imperative for the successful application of such treatment. To that end, this research presents a fast method that enables the estimation of deep tissue heat distribution by capturing and processing the transient temperature at the boundary based on a bio-heat transfer model. The theoretical model is rigorously developed and thoroughly validated by a series of experiments. A 10-fold improvement is demonstrated in resolution and visibility on tissue-mimicking phantoms. The inverse problem is demonstrated as well, with a successful application of the model for imaging deep-tissue embedded heat sources, thereby allowing the physician the ability to dynamically evaluate the hyperthermia treatment efficiency in real time.

  3. Use of the 3D surgical modelling technique with open-source software for mandibular fibula free flap reconstruction and its surgical guides.

    Science.gov (United States)

    Ganry, L; Hersant, B; Quilichini, J; Leyder, P; Meningaud, J P

    2017-06-01

    Tridimensional (3D) surgical modelling is a necessary step in creating 3D-printed surgical tools, and expensive professional software is generally needed. Open-source software packages are functional, reliable, regularly updated, may be downloaded for free, and can be used to produce 3D models. Few surgical teams have used free solutions to master 3D surgical modelling for reconstructive surgery with osseous free flaps. We describe an open-source-software 3D surgical modelling protocol to perform a fast and nearly free mandibular reconstruction with a microvascular fibula free flap and its surgical guides, with no need for engineering support. Four successive specialised open-source software packages were used to perform our 3D modelling: OsiriX®, Meshlab®, Netfabb® and Blender®. Digital Imaging and Communications in Medicine (DICOM) data on the patient's skull and fibula, obtained with a computerised tomography (CT) scan, were needed. The 3D models of the reconstructed mandible and its surgical guides were created. This new strategy may improve surgical management in oral and craniomaxillofacial surgery. Further clinical studies are needed to demonstrate the feasibility, reproducibility, transfer of know-how and benefits of this technique. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  4. Conjugate Gradient like methods and their application to fixed source neutron diffusion problems

    International Nuclear Information System (INIS)

    Suetomi, Eiichi; Sekimoto, Hiroshi

    1989-01-01

    This paper presents a number of fast iterative methods for solving systems of linear equations appearing in fixed-source problems for neutron diffusion. We employed the conjugate gradient and conjugate residual methods. In order to accelerate the conjugate residual method, we proposed the conjugate residual squared method by transforming the residual polynomial of the conjugate residual method. Since the convergence of these methods depends on the spectrum of the coefficient matrix, we employed the incomplete Choleski (IC) factorization and the modified IC (MIC) factorization as preconditioners. These methods were applied to some neutron diffusion problems and compared with the successive overrelaxation (SOR) method. The results of these numerical experiments showed superior convergence characteristics of the conjugate-gradient-like methods with MIC factorization compared to the SOR method, especially for a problem involving a void region. The CPU times of the MICCG, MICCR and MICCRS methods showed no great difference. In order to vectorize the conjugate-gradient-like methods based on (M)IC factorization, the hyperplane method was used and implemented on the vector computers, the HITAC S-820/80 and ETA10-E (one processor mode). A significant decrease of the CPU times was observed on the S-820/80. Since the scaled conjugate gradient (SCG) method can be vectorized with no manipulation, it was also compared with the above methods. It turned out that the SCG method was the fastest with respect to the CPU times on the ETA10-E. These results suggest that one should implement a suitable algorithm for different vector computers. (author)
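
    A present-day analogue of these preconditioning experiments can be run with SciPy; the sketch below compares plain CG against CG preconditioned with an incomplete LU factorization (standing in for the incomplete Cholesky used in the paper, which SciPy does not expose directly) on a 2-D five-point diffusion matrix. The problem and its parameters are illustrative.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, spilu, LinearOperator

# 2-D five-point diffusion stencil as a stand-in for the fixed-source neutron
# diffusion matrix (symmetric positive definite, sparse).
n = 64
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(sp.identity(n), T) + sp.kron(T, sp.identity(n))).tocsc()
b = np.ones(n * n)

# Incomplete LU as a stand-in for incomplete Cholesky (IC) preconditioning.
ilu = spilu(A, drop_tol=1e-4, fill_factor=10)
M = LinearOperator(A.shape, matvec=ilu.solve)

for prec, label in [(None, "plain CG"), (M, "ILU-preconditioned CG")]:
    iters = [0]
    x, info = cg(A, b, M=prec, callback=lambda xk: iters.__setitem__(0, iters[0] + 1))
    print("%s: converged=%s after %d iterations" % (label, info == 0, iters[0]))
```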

  5. Patterns of source monitoring bias in incarcerated youths with and without conduct problems.

    Science.gov (United States)

    Morosan, Larisa; Badoud, Deborah; Salaminios, George; Eliez, Stephan; Van der Linden, Martial; Heller, Patrick; Debbané, Martin

    2018-01-01

    Antisocial individuals present behaviours that violate the social norms and the rights of others. In the present study, we examine whether biases in monitoring the self-generated cognitive material might be linked to antisocial manifestations during adolescence. We further examine the association with psychopathic traits and conduct problems (CPs). Sixty-five incarcerated adolescents (IAs; M age = 15.85, SD = 1.30) and 88 community adolescents (CAs; M age = 15.78, SD = 1.60) participated in our study. In the IA group, 28 adolescents presented CPs (M age = 16.06, SD = 1.41) and 19 did not meet the diagnostic criteria for CPs (M age = 15.97, SD = 1.20). Source monitoring was assessed through a speech-monitoring task, using items requiring different levels of cognitive effort; recognition and source-monitoring bias scores (internalising and externalising biases) were calculated. Between-group comparisons indicate greater overall biases and different patterns of biases in the source monitoring. IA participants manifest a greater externalising bias, whereas CA participants present a greater internalising bias. In addition, IA with CPs present different patterns of item recognition. These results indicate that the two groups of adolescents present different types of source-monitoring bias for self-generated speech. In addition, the IAs with CPs present impairments in item recognition. Future studies may examine the developmental implications of self-monitoring biases in the perseverance of antisocial behaviours from adolescence to adulthood.

  6. Escript: Open Source Environment For Solving Large-Scale Geophysical Joint Inversion Problems in Python

    Science.gov (United States)

    Gross, Lutz; Altinay, Cihan; Fenwick, Joel; Smith, Troy

    2014-05-01

    The program package escript has been designed for solving mathematical modeling problems using Python, see Gross et al. (2013). Its development and maintenance have been funded by the Australian Commonwealth to provide open source software infrastructure for the Australian Earth Science community (recent funding by the Australian Geophysical Observing System EIF (AGOS) and the AuScope Collaborative Research Infrastructure Scheme (CRIS)). The key concepts of escript are based on the terminology of spatial functions and partial differential equations (PDEs), an approach providing abstraction from the underlying spatial discretization method (i.e. the finite element method (FEM)). This feature presents the user with a programming environment that is easy to use even for complex models. Because implementations are independent of the underlying data structures, simulations are easily portable across desktop computers and scalable compute clusters without modifications to the program code. escript has been successfully applied in a variety of applications including modeling mantle convection, melting processes, volcanic flow, earthquakes, faulting, multi-phase flow, block caving and mineralization (see Poulet et al. 2013). The recent escript release (see Gross et al. (2013)) provides an open framework for solving joint inversion problems for geophysical data sets (potential field, seismic and electro-magnetic). The strategy is based on the idea of formulating the inversion problem as an optimization problem with PDE constraints, where the cost function is defined by the data defect and the regularization term for the rock properties, see Gross & Kemp (2013). This first-optimize-then-discretize approach avoids the assemblage of the, in general, dense sensitivity matrix as used in conventional approaches where discrete programming techniques are applied to the discretized problem (first-discretize-then-optimize). In this paper we will discuss the mathematical framework for

  7. Studies of Sensitivity in the Dictionary Learning Approach to Computed Tomography: Simplifying the Reconstruction Problem, Rotation, and Scale

    DEFF Research Database (Denmark)

    Soltani, Sara

    The formulation in [22] enforces that the solution is an exact representation by the dictionary; in this report, we investigate this requirement. Furthermore, the underlying assumption that the scale and orientation of the training images are consistent with the unknown image of interest may not be realistic. We investigate the sensitivity and robustness of the reconstruction to variations of the scale and orientation in the training images, and we suggest algorithms to estimate the correct relative scale and orientation of the unknown image with respect to the training images from the data.

  8. Performance of popular open source databases for HEP related computing problems

    International Nuclear Information System (INIS)

    Kovalskyi, D; Sfiligoi, I; Wuerthwein, F; Yagil, A

    2014-01-01

    Databases are used in many software components of HEP computing, from monitoring and job scheduling to data storage and processing. It is not always clear at the beginning of a project if a problem can be handled by a single server, or if one needs to plan for a multi-server solution. Before a scalable solution is adopted, it helps to know how well it performs in a single server case to avoid situations when a multi-server solution is adopted mostly due to sub-optimal performance per node. This paper presents comparison benchmarks of popular open source database management systems. As a test application we use a user job monitoring system based on the Glidein workflow management system used in the CMS Collaboration.

  9. Inverse source problem and null controllability for multidimensional parabolic operators of Grushin type

    International Nuclear Information System (INIS)

    Beauchard, K; Cannarsa, P; Yamamoto, M

    2014-01-01

    The approach to Lipschitz stability for uniformly parabolic equations introduced by Imanuvilov and Yamamoto in 1998 based on Carleman estimates, seems hard to apply to the case of Grushin-type operators of interest to this paper. Indeed, such estimates are still missing for parabolic operators degenerating in the interior of the space domain. Nevertheless, we are able to prove Lipschitz stability results for inverse source problems for such operators, with locally distributed measurements in an arbitrary space dimension. For this purpose, we follow a mixed strategy which combines the approach due to Lebeau and Robbiano, relying on Fourier decomposition and Carleman inequalities for heat equations with non-smooth coefficients (solved by the Fourier modes). As a corollary, we obtain a direct proof of the observability of multidimensional Grushin-type parabolic equations, with locally distributed observations—which is equivalent to null controllability with locally distributed controls. (paper)

  10. An Adaptive B-Spline Method for Low-order Image Reconstruction Problems - Final Report - 09/24/1997 - 09/24/2000

    Energy Technology Data Exchange (ETDEWEB)

    Li, Xin; Miller, Eric L.; Rappaport, Carey; Silevich, Michael

    2000-04-11

    A common problem in signal processing is to estimate the structure of an object from noisy measurements linearly related to the desired image. These problems are broadly known as inverse problems. A key feature which complicates the solution to such problems is their ill-posedness. That is, small perturbations in the data arising e.g. from noise can and do lead to severe, non-physical artifacts in the recovered image. The process of stabilizing these problems is known as regularization, of which Tikhonov regularization is one of the most common. While this approach leads to a simple linear least squares problem to solve for generating the reconstruction, it has the unfortunate side effect of producing smooth images, thereby obscuring important features such as edges. Therefore, over the past decade there has been much work in the development of edge-preserving regularizers. This technique leads to image estimates in which the important features are retained, but computationally they require the solution of a nonlinear least squares problem, a daunting task in many practical multi-dimensional applications. In this thesis we explore low-order models for reducing the complexity of the reconstruction process. Specifically, B-Splines are used to approximate the object. If a 'proper' collection of B-Splines is chosen such that the object can be efficiently represented using a few basis functions, the dimensionality of the underlying problem will be significantly decreased. Consequently, an optimum distribution of splines needs to be determined. Here, an adaptive refining and pruning algorithm is developed to solve the problem. The refining part is based on curvature information, in which the intuition is that a relatively dense set of fine scale basis elements should cluster near regions of high curvature while a sparse collection of basis vectors is required to adequately represent the object over spatially smooth areas. The pruning part is a greedy
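
    The curvature-driven refinement intuition can be sketched directly: place interior knots with density proportional to the magnitude of the second derivative and fit a least-squares spline. The example below is a hypothetical 1-D illustration using SciPy, not the thesis's adaptive refine-and-prune algorithm.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# Place interior knots with density proportional to |f''|: invert the CDF of
# the (normalized) curvature so knots cluster near the sharp edge. Toy data.
x = np.linspace(0.0, 1.0, 400)
y = np.tanh(40.0 * (x - 0.5))              # smooth except near x = 0.5

curv = np.abs(np.gradient(np.gradient(y, x), x))
cdf = np.cumsum(curv)
cdf /= cdf[-1]
knots = np.interp(np.linspace(0.05, 0.95, 8), cdf, x)
knots = np.unique(np.round(knots, 2))      # keep knots distinct and separated

spline = LSQUnivariateSpline(x, y, knots)  # least-squares cubic spline fit
print("max abs fit error:", float(np.abs(spline(x) - y).max()))
```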

  11. Three-dimensional surgical modelling with an open-source software protocol: study of precision and reproducibility in mandibular reconstruction with the fibula free flap.

    Science.gov (United States)

    Ganry, L; Quilichini, J; Bandini, C M; Leyder, P; Hersant, B; Meningaud, J P

    2017-08-01

    Very few surgical teams currently use totally independent and free solutions to perform three-dimensional (3D) surgical modelling for osseous free flaps in reconstructive surgery. This study assessed the precision and technical reproducibility of a 3D surgical modelling protocol using free open-source software in mandibular reconstruction with fibula free flaps and surgical guides. Precision was assessed through comparisons of the 3D surgical guide to the sterilized 3D-printed guide, determining accuracy to the millimetre level. Reproducibility was assessed in three surgical cases by volumetric comparison to the millimetre level. For the 3D surgical modelling, a difference of less than 0.1 mm was observed. Almost no deformations were observed; the precision of the free flap modelling was between 0.1 mm and 0.4 mm, and the average precision of the complete reconstructed mandible was less than 1 mm. The open-source software protocol demonstrated high accuracy without complications. However, the precision of the surgical case depends on the surgeon's 3D surgical modelling. Therefore, surgeons need training on the use of this protocol before applying it to surgical cases; this constitutes a limitation. Further studies should address the transfer of expertise. Copyright © 2017 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  12. Source selection problem of competitive power plants under government intervention: a game theory approach

    Science.gov (United States)

    Mahmoudi, Reza; Hafezalkotob, Ashkan; Makui, Ahmad

    2014-06-01

    Pollution and environmental protection are extremely significant global problems in the present century. Power plants, as the largest pollution-emitting industry, have been the subject of a great deal of scientific research. The fuel or source type used by power plants to generate electricity plays an important role in the amount of pollution produced. Governments should take visible actions to promote green fuel. These actions are often called governmental financial interventions and include legislation such as green subsidies and taxes. In this paper, by considering the government's role in the competition between two power plants, we propose a game-theoretical model that will help the government to determine the optimal taxes and subsidies. The numerical examples demonstrate how the government could intervene in a competitive electricity market to achieve its environmental objectives and how power plants maximize their utilities for each energy source. The results also reveal that the government's taxes and subsidies effectively influence the fuel types selected by power plants in the competitive market.

  13. Source coding for transmission of reconstructed dynamic geometry: a rate-distortion-complexity analysis of different approaches

    NARCIS (Netherlands)

    R.N. Mekuria (Rufael); P.S. Cesar Garcia (Pablo Santiago); D.C.A. Bulterman (Dick)

    2014-01-01

    Live 3D reconstruction of a human as a 3D mesh with commodity electronics is becoming a reality. Immersive applications (i.e. cloud gaming, tele-presence) benefit from effective transmission of such content over a bandwidth limited link. In this paper we outline different approaches for

  14. Dose reduction in abdominal computed tomography: intraindividual comparison of image quality of full-dose standard and half-dose iterative reconstructions with dual-source computed tomography.

    Science.gov (United States)

    May, Matthias S; Wüst, Wolfgang; Brand, Michael; Stahl, Christian; Allmendinger, Thomas; Schmidt, Bernhard; Uder, Michael; Lell, Michael M

    2011-07-01

    We sought to evaluate the image quality of iterative reconstruction in image space (IRIS) in half-dose (HD) datasets compared with full-dose (FD) and HD filtered back projection (FBP) reconstructions in abdominal computed tomography (CT). To acquire data with FD and HD simultaneously, contrast-enhanced abdominal CT was performed with a dual-source CT system, both tubes operating at 120 kV, 100 ref.mAs, and pitch 0.8. Three different image datasets were reconstructed from the raw data: standard FD images applying FBP, which served as reference; HD images applying FBP; and HD images applying IRIS. For the HD datasets, only data from one tube-detector system were used. Quantitative image quality analysis was performed by measuring image noise in tissue and air. Qualitative image quality was evaluated according to the European Guidelines on Quality Criteria for CT. Additional assessment of artifacts, lesion conspicuity, and edge sharpness was performed. Results: Image noise in soft tissue was substantially decreased in HD-IRIS (-3.4 HU, -22%) and increased in HD-FBP (+6.2 HU, +39%) images when compared with the reference (mean noise, 15.9 HU). No significant differences between the FD-FBP and HD-IRIS images were found for the visually sharp anatomic reproduction, overall diagnostic acceptability (P = 0.923), lesion conspicuity (P = 0.592), and edge sharpness (P = 0.589), while HD-FBP was rated inferior. Streak artifacts and beam hardening were significantly more prominent in HD-FBP, while HD-IRIS images exhibited a slightly different noise pattern. Direct intrapatient comparison of standard FD body protocols and HD-IRIS reconstruction suggests that the latest iterative reconstruction algorithms allow for approximately 50% dose reduction without deterioration of the high image quality necessary for confident diagnosis.

  15. Network reconstruction via graph blending

    Science.gov (United States)

    Estrada, Rolando

    2016-05-01

    Graphs estimated from empirical data are often noisy and incomplete due to the difficulty of faithfully observing all the components (nodes and edges) of the true graph. This problem is particularly acute for large networks where the number of components may far exceed available surveillance capabilities. Errors in the observed graph can render subsequent analyses invalid, so it is vital to develop robust methods that can minimize these observational errors. Errors in the observed graph may include missing and spurious components, as well as fused (multiple nodes are merged into one) and split (a single node is misinterpreted as many) nodes. Traditional graph reconstruction methods are only able to identify missing or spurious components (primarily edges, and to a lesser degree nodes), so we developed a novel graph blending framework that allows us to cast the full estimation problem as a simple edge addition/deletion problem. Armed with this framework, we systematically investigate the viability of various topological graph features, such as the degree distribution or the clustering coefficients, and existing graph reconstruction methods for tackling the full estimation problem. Our experimental results suggest that incorporating any topological feature as a source of information actually hinders reconstruction accuracy. We provide a theoretical analysis of this phenomenon and suggest several avenues for improving this estimation problem.
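
    For context, one feature-based baseline of the kind the paper evaluates (and finds can actually hinder accuracy) is topological link prediction, for example ranking candidate edge additions by the Jaccard coefficient. A minimal sketch with networkx on a toy graph; this is a generic baseline, not the paper's graph blending method.

```python
import networkx as nx

# Rank candidate edge additions by the Jaccard coefficient; accept the top k.
G = nx.karate_club_graph()
observed = G.copy()
observed.remove_edges_from(list(G.edges())[:5])      # simulate missing edges

scores = nx.jaccard_coefficient(observed, nx.non_edges(observed))
top = sorted(scores, key=lambda t: t[2], reverse=True)[:5]
observed.add_edges_from((u, v) for u, v, _ in top)   # proposed edge additions
print(top)
```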

  16. A response matrix method for slab-geometry discrete ordinates adjoint calculations in energy-dependent source-detector problems

    Energy Technology Data Exchange (ETDEWEB)

    Mansur, Ralph S.; Moura, Carlos A., E-mail: ralph@ime.uerj.br, E-mail: demoura@ime.uerj.br [Universidade do Estado do Rio de Janeiro (UERJ), RJ (Brazil). Departamento de Engenharia Mecanica; Barros, Ricardo C., E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Departamento de Modelagem Computacional

    2017-07-01

    Presented here is an application of the Response Matrix (RM) method for adjoint discrete ordinates (S_N) problems in slab geometry applied to energy-dependent source-detector problems. The adjoint RM method is free from spatial truncation errors, as it generates numerical results for the adjoint angular fluxes in multilayer slabs that agree with the numerical values obtained from the analytical solution of the energy multigroup adjoint S_N equations. Numerical results are given for two typical source-detector problems to illustrate the accuracy and the efficiency of the proposed RM computer code. (author)

  17. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    Science.gov (United States)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.

  18. Safety Problems of Disposal of Disused Sealed Sources in the Baldone Near Surface Repository

    International Nuclear Information System (INIS)

    Dreimanis, A.

    2003-01-01

    long-term storage; 3. To dispose of only those sources capable of decaying during the functioning of the disposal site; 4. To revise and update the Waste Acceptance Criteria (WAC), especially for disused sealed sources; 5. To build a 5 m thick cap over the vaults. The activity-related criteria and predisposal packaging of DSS are presented. The problems and solutions for the construction of new storage and disposal spaces are discussed

  19. Versatility of the CFR algorithm for limited angle reconstruction

    International Nuclear Information System (INIS)

    Fujieda, I.; Heiskanen, K.; Perez-Mendez, V.

    1990-01-01

    The constrained Fourier reconstruction (CFR) algorithm and the iterative reconstruction-reprojection (IRR) algorithm are evaluated based on their accuracy for three types of limited angle reconstruction problems. The CFR algorithm performs better for problems such as X-ray CT imaging of a nuclear reactor core with one large data gap due to structural blocking of the source and detector pair. For gated heart imaging by X-ray CT, or radioisotope distribution imaging by PET or SPECT using a polygonal array of gamma cameras with insensitive gaps between camera boundaries, the IRR algorithm has a slight advantage over the CFR algorithm, but the difference is not significant

  20. High-Resolution Source Parameter and Site Characteristics Using Near-Field Recordings - Decoding the Trade-off Problems Between Site and Source

    Science.gov (United States)

    Chen, X.; Abercrombie, R. E.; Pennington, C.

    2017-12-01

    Recorded seismic waveforms include contributions from earthquake source properties and propagation effects, leading to long-standing trade-off problems between site/path effects and source effects. With near-field recordings, the path effect is relatively small, so the trade-off problem can be simplified to one between source and site effects (commonly referred to as the "kappa value"). This problem is especially significant for small earthquakes, where the corner frequencies fall within ranges similar to those of kappa values, so direct spectrum fitting often leads to systematic biases due to corner frequency and magnitude. In response to the significantly increased seismicity rate in Oklahoma, several local networks have been deployed following major earthquakes: the Prague, Pawnee and Fairview earthquakes. Each network provides dense observations within 20 km surrounding the fault zone, recording tens of thousands of aftershocks between M1 and M3. Using near-field recordings in the Prague area, we apply a stacking approach to separate path/site and source effects. The resulting source parameters are consistent with parameters derived from ground motion and spectral ratio methods in other studies; they exhibit spatial coherence within the fault zone for different fault patches. We apply these source parameter constraints in an analysis of kappa values for stations within 20 km of the fault zone. The resulting kappa values show significantly reduced variability compared to those from direct spectral fitting without constraints on the source spectrum; they are not biased by earthquake magnitudes. With these improvements, we plan to apply the stacking analysis to other local arrays to analyze source properties and site characteristics. For selected individual earthquakes, we will also use individual-pair empirical Green's function (EGF) analysis to validate the source parameter estimations.
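
    The source-site trade-off described here is easy to see in the standard spectral model, where a Brune omega-square source with corner frequency fc is attenuated by a site kappa; for band-limited data, fc and kappa can trade off against each other in a direct fit. A minimal curve-fit sketch with synthetic data (all values invented, not from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def log_spectrum(f, log_omega0, fc, kappa):
    """Log of a Brune omega-square source spectrum times site attenuation:
    A(f) = Omega0 / (1 + (f/fc)**2) * exp(-pi * kappa * f)."""
    return log_omega0 - np.log(1.0 + (f / fc) ** 2) - np.pi * kappa * f

f = np.linspace(1.0, 40.0, 200)                     # Hz, synthetic band
rng = np.random.default_rng(1)
obs = log_spectrum(f, 2.0, 6.0, 0.04) + 0.05 * rng.standard_normal(f.size)

popt, _ = curve_fit(log_spectrum, f, obs, p0=[1.0, 3.0, 0.02])
print("fc = %.2f Hz, kappa = %.3f s" % (popt[1], popt[2]))
```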

  1. Source-plane reconstruction of the giant gravitational arc in A2667: A candidate Wolf-Rayet galaxy at z ∼ 1

    International Nuclear Information System (INIS)

    Cao, Shuo; Zhu, Zong-Hong; Covone, Giovanni; Jullo, Eric; Richard, Johan; Izzo, Luca

    2015-01-01

    We present a new analysis of Hubble Space Telescope, Spitzer Space Telescope, and Very Large Telescope imaging and spectroscopic data of a bright lensed galaxy at z = 1.0334 in the lensing cluster A2667. Using this high-resolution imaging, we present an updated lens model that allows us to fully understand the lensing geometry and reconstruct the lensed galaxy in the source plane. This giant arc gives a unique opportunity to view the structure of a high-redshift disk galaxy. We find that the lensed galaxy of A2667 is a typical spiral galaxy with a morphology similar to the structure of its counterparts at higher redshift, z ∼ 2. The surface brightness of the reconstructed source galaxy in the z_850 band reveals a central surface brightness I(0) = 20.28 ± 0.22 mag arcsec^-2 and a characteristic radius r_s = 2.01 ± 0.16 kpc at redshift z ∼ 1. The morphological reconstruction in different bands shows obvious negative radial color gradients for this galaxy. Moreover, the redder central bulge tends to contain a metal-rich stellar population, rather than being heavily reddened by dust due to high and patchy obscuration. We analyze the VIMOS/integral field unit spectroscopic data and find that, in the given wavelength range (∼1800-3200 Å), the combined arc spectrum of the source galaxy is characterized by a strong continuum emission with strong UV absorption lines (Fe II and Mg II) and shows the features of a typical starburst Wolf-Rayet galaxy, NGC 5253. More specifically, we have measured the equivalent widths of the Fe II and Mg II lines in the A2667 spectrum, and obtained similar values for the same wavelength interval of the NGC 5253 spectrum. Marginal evidence for [C III] 1909 emission at the edge of the grism range further confirms our expectation.
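
    The reported structural parameters correspond to the usual exponential-disk characterization, I(r) = I(0) exp(-r/r_s), which is a straight line in magnitude units with slope 1.0857/r_s. A small sketch recovering r_s from a noiseless synthetic profile built from the quoted values (assuming the exponential-disk form; the grid is arbitrary):

```python
import numpy as np

# Exponential disk in magnitude units: mu(r) = mu0 + 1.0857 * r / rs, where
# 1.0857 = 2.5 / ln(10). mu0 and rs below are the values quoted for A2667.
mu0, rs = 20.28, 2.01                      # mag arcsec^-2, kpc
r = np.linspace(0.0, 8.0, 50)              # kpc
mu = mu0 + 1.0857 * r / rs                 # noiseless synthetic profile

slope, intercept = np.polyfit(r, mu, 1)    # straight-line fit in (r, mu)
print("r_s = %.2f kpc, mu(0) = %.2f" % (1.0857 / slope, intercept))
```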

  2. Problems of heat sources modeling on stage of isolated power systems expansion planning

    International Nuclear Information System (INIS)

    Malenkov, A.V.; Reshetnikova, L.N.; Sergeev, Yu.A.

    1998-01-01

    It is necessary to use computer codes to evaluate the possible applications and role of nuclear district heating plants in the local self-balancing power and heating systems to be located in the remote, isolated and hardly accessible regions of the Far North of Russia. Key factors determining system configurations and performance are: (1) the interdependency of electricity, heat and fuel supply; (2) the long distances between energy consumer centres (from several tens up to some hundreds of kilometers); and (3) the difficulty of exporting and importing electricity, and especially fuel, to and from neighbouring and remote regions. The challenge is to work out an optimum expansion plan for the local electricity and heat supply system. The ENPEP (ENergy and Power Evaluation Program) software package, which was developed by the IAEA together with the USA's Argonne National Laboratory, was chosen for this purpose. The Chaun-Bilibino power system (CBPS), an isolated power system in the far North-East region of Russia, was selected as the first case of the ENPEP study. ENPEP allows a complex approach to system expansion optimization planning over a planning period of up to 30 years. The key ENPEP module, ELECTRIC, considers electricity as the only product. The cogeneration part (heat production) must be considered outside the ELECTRIC model and the results then transferred to ELECTRIC. The ENPEP study on the Chaun-Bilibino isolated power system has shown that the modelling of heat supply sources in ENPEP is not a trivial problem. It is very important and difficult to correctly represent the specific features of the cogeneration process at the same time. (author)

  3. Experimental results and first {sup 22}Na source image reconstruction by two prototype modules in coincidence of a liquid xenon positron emission tomograph for small animal imaging

    Energy Technology Data Exchange (ETDEWEB)

    Gallin-Martel, M.-L., E-mail: mlgallin@lpsc.in2p3.fr [Laboratoire de Physique Subatomique et de Cosmologie, Universite Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut National Polytechnique de Grenoble, 53 avenue des Martyrs 38026 Grenoble Cedex (France); Grondin, Y. [Laboratoire TIMC/IMAG, CNRS et Universite Joseph Fourier, Pavillon Taillefer 38706 La Tronche Cedex (France); Gac, N. [Laboratoire L2S, UMR 8506 CNRS - SUPELEC - Univ Paris-Sud, Gif sur Yvette F-91192 (France); Carcagno, Y.; Gallin-Martel, L.; Grondin, D.; Marton, M.; Muraz, J.-F; Rossetto, O.; Vezzu, F. [Laboratoire de Physique Subatomique et de Cosmologie, Universite Joseph Fourier Grenoble 1, CNRS/IN2P3, Institut National Polytechnique de Grenoble, 53 avenue des Martyrs 38026 Grenoble Cedex (France)

    2012-08-01

    A detector with a very specific design using liquid Xenon (LXe) in the scintillation mode is studied for Positron Emission Tomography (PET) of small animals. Two prototype modules equipped with Position Sensitive Photo Multiplier Tubes (PSPMTs) operating in the VUV range (178 nm) and at 165 K were built and studied in coincidence. This paper reports on energy, time and spatial resolution capabilities of this experimental test bench. Furthermore, these experimental results were used to perform the first image reconstruction of a {sup 22}Na source placed in the experimental setup.

  5. Reconstruction of atmospheric trace metals pollution in Southwest China using sediments from a large and deep alpine lake: Historical trends, sources and sediment focusing.

    Science.gov (United States)

    Lin, Qi; Liu, Enfeng; Zhang, Enlou; Nath, Bibhash; Shen, Ji; Yuan, Hezhong; Wang, Rong

    2018-02-01

    Atmospheric pollution is one of the leading environmental problems in South and East Asia, yet its impact on terrestrial environmental quality remains poorly understood, particularly in alpine areas where both historical and present-day mining and smelting operations might leave an imprint. Here, we reconstructed atmospheric trace metals pollution during the past century using core sediments from a large and deep alpine lake in Southwest China. The implication of in-lake and/or in-watershed sediment focusing for pollution quantification is discussed by analyzing 15 sediment cores. Factor analysis and enrichment factors indicated Cd, Pb and Sb as the typical pollutants. Distinct peaks of Pb and Sb pollution were observed around the 1920s, but little Pb pollution was detected in recent decades, in contrast with other studies in similar regions. Cadmium pollution was observed until the mid-1980s, synchronized with Sb. The distinctive variations in the atmospheric trace metal pollution process in Southwest China highlight the regional and sub-regional sources of metal pollutants, which should be primarily attributed to non-ferrous metal smelting emissions. Both natural and anthropogenic metals showed wide concentration ranges yet exhibited similar temporal trends in the 15 cores. Spatial variations of anthropogenic metals were influenced by in-watershed pollutant remobilization, whereas natural metals were regulated by the detrital materials in the sub-basin. In-lake sediment focusing had little influence on the spatial distributions of all metals, differing from the traditional sediment focusing pattern observed in small lakes. Anthropogenic Cd accumulation in sediments ranged from 1.5 to 10.1 mg m⁻² in a specific core, with an average of 6.5 mg m⁻² for the entire lake, highlighting that a reliable whole-lake pollutant budget requires an analysis of multiple cores. Our study suggests that the management of aquatic ecosystem health should take the remobilization of in

  6. A Multi-Data Source and Multi-Sensor Approach for the 3D Reconstruction and Web Visualization of a Complex Archaeological Site: The Case Study of “Tolmo De Minateda”

    Directory of Open Access Journals (Sweden)

    Jose Alberto Torres-Martínez

    2016-06-01

    Full Text Available The complexity of archaeological sites hinders the creation of an integral model using current Geomatic techniques (i.e., aerial, close-range photogrammetry and terrestrial laser scanning) individually. A multi-sensor approach is therefore proposed as the optimal solution to provide a 3D reconstruction and visualization of these complex sites. Sensor registration represents a critical milestone when automation is required and when aerial and terrestrial datasets must be integrated. To this end, several problems must be solved: coordinate system definition, geo-referencing, co-registration of point clouds, geometric and radiometric homogeneity, etc. The proposed multi-data source and multi-sensor approach is applied to the case study of the “Tolmo de Minateda” archaeological site. A total extension of 9 ha is reconstructed, with an adapted level of detail, by an ultralight aerial platform (paratrike), an unmanned aerial vehicle, a terrestrial laser scanner and terrestrial photogrammetry. Finally, a mobile device (e.g., tablet or smartphone) has been used to integrate, optimize and visualize all this information, providing added value to archaeologists and heritage managers who want an efficient tool for their work at the site, and even to non-expert users who just want to know more about the archaeological settlement.
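
    The co-registration step mentioned above is commonly handled with variants of the iterative closest point (ICP) algorithm. The following minimal Python sketch (assuming numpy and scipy; a generic rigid ICP, not the paper's actual registration pipeline) alternates between nearest-neighbour correspondence search and a closed-form least-squares rigid transform:

        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            # Closed-form (Kabsch/SVD) rotation R and translation t mapping src onto dst.
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:      # guard against reflections
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        def icp(source, target, n_iter=30):
            # Iteratively align 'source' (N x 3) to 'target' (M x 3).
            src = source.copy()
            tree = cKDTree(target)
            for _ in range(n_iter):
                _, idx = tree.query(src)   # nearest-neighbour correspondences
                R, t = best_rigid_transform(src, target[idx])
                src = src @ R.T + t
            return src

    In practice the clouds are first brought into a common coordinate system via targets or geo-referencing, and an ICP-style refinement then tightens the alignment.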

  7. Mathematical methods in the problem of reconstruction of hadron interaction characteristics and primary cosmic ray spectra at superhigh energies

    International Nuclear Information System (INIS)

    Astaf'ev, V.A.

    1985-01-01

    The paper reviews the mathematical methods used for analyzing the experimental data obtained in investigations of cosmic rays of superhigh energies (10¹⁴-10¹⁹ eV). The analysis is carried out on the basis of the direct problem solution, i.e. calculation of the characteristics of nuclear-electromagnetic cascade showers developing in the atmosphere, with regard to the specific features of the experimental devices. The analytical and numerical methods for solving the equations describing shower development, as well as the simulation of cascade processes by the Monte Carlo method, are applied herein.

  8. THE ROLE OF RADIATION ACCIDENTS AND INDUSTRIAL APPLICATIONS OF IONIZING RADIATION SOURCES IN THE PROBLEM OF RADIATION DAMAGE

    OpenAIRE

    Кіхтенко, Ігор Миколайович

    2016-01-01

    Subject of research – the relevance of radiation damage given the modern development of industry and medicine. Radiation sources are used worldwide in different fields of practice, and their application will increase in the future, which greatly increases the likelihood of injury among a significant contingent of people. Research topic – the definition of the role of nuclear energy and the industrial use of ionizing radiation sources in the problem of radiation damage. The purpose of research – identif...

  9. Maxillary reconstruction

    Directory of Open Access Journals (Sweden)

    Brown James

    2007-12-01

    Full Text Available This article aims to discuss the various defects that occur with maxillectomy, with a full review of the literature and a discussion of the advantages and disadvantages of the various techniques described. Reconstruction of the maxilla can be relatively simple for the standard low maxillectomy that does not involve the orbital floor (Class 2). In this situation the structure of the face is less damaged and there are multiple reconstructive options for the restoration of the maxilla and dental alveolus. If the maxillectomy includes the orbit (Class 4), then problems involving the eye (enophthalmos, orbital dystopia, ectropion and diplopia) are avoided, which simplifies the reconstruction. Most controversy is associated with the maxillectomy that involves the orbital floor and dental alveolus (Class 3). A case is made for the use of the iliac crest with internal oblique as an ideal option, but there are other methods which may provide a similar result. A multidisciplinary approach to these patients is emphasised, which should include a prosthodontist with special expertise for these defects.

  10. Problems in the fingerprints based polycyclic aromatic hydrocarbons source apportionment analysis and a practical solution.

    Science.gov (United States)

    Zou, Yonghong; Wang, Lixia; Christensen, Erik R

    2015-10-01

    This work intended to explain the challenges of the fingerprints based source apportionment method for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. The PAH data detected in the sediment cores from the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns; but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable due to the variability of source fingerprints. However, they constitute useful suggestions for inputs for a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). Copyright © 2015. Published by Elsevier Ltd.

  11. Problems in the fingerprints based polycyclic aromatic hydrocarbons source apportionment analysis and a practical solution

    International Nuclear Information System (INIS)

    Zou, Yonghong; Wang, Lixia; Christensen, Erik R.

    2015-01-01

    This work intended to explain the challenges of the fingerprints based source apportionment method for polycyclic aromatic hydrocarbons (PAH) in the aquatic environment, and to illustrate a practical and robust solution. The PAH data detected in the sediment cores from the Illinois River provide the basis of this study. Principal component analysis (PCA) separates PAH compounds into two groups reflecting their possible airborne transport patterns; but it is not able to suggest specific sources. Not all positive matrix factorization (PMF) determined sources are distinguishable due to the variability of source fingerprints. However, they constitute useful suggestions for inputs for a Bayesian chemical mass balance (CMB) analysis. The Bayesian CMB analysis takes into account the measurement errors as well as the variations of source fingerprints, and provides a credible source apportionment. Major PAH sources for Illinois River sediments are traffic (35%), coke oven (24%), coal combustion (18%), and wood combustion (14%). - Highlights: • Fingerprint variability poses challenges in PAH source apportionment analysis. • PCA can be used to group compounds or cluster measurements. • PMF requires results validation but is useful for source suggestion. • Bayesian CMB provides a practical and credible solution. - A Bayesian CMB model combined with PMF is a practical and credible fingerprint-based PAH source apportionment method
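
    The PCA grouping step described in both records can be illustrated with a short Python sketch (scikit-learn; the concentration matrix below is synthetic, not the Illinois River dataset). Compounds with similar loadings on the leading components group together, but naming the sources still requires PMF/CMB-type modelling:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        # Rows = sediment samples, columns = PAH compound concentrations (synthetic).
        rng = np.random.default_rng(0)
        compounds = ["Phe", "Ant", "Flt", "Pyr", "BaA", "Chr", "BbF", "BaP"]
        X = rng.lognormal(mean=2.0, sigma=0.5, size=(40, len(compounds)))

        Z = StandardScaler().fit_transform(X)   # centre and scale each compound
        pca = PCA(n_components=2).fit(Z)

        # Compounds with similar loadings on the leading components group together.
        for comp, load in zip(compounds, pca.components_.T):
            print(f"{comp}: PC1={load[0]:+.2f}  PC2={load[1]:+.2f}")
        print("explained variance ratios:", pca.explained_variance_ratio_)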

  12. Radiation Dose Reduction of Chest CT with Iterative Reconstruction in Image Space - Part I: Studies on Image Quality Using Dual Source CT

    International Nuclear Information System (INIS)

    Hwang, Hye Jeon; Seo, Joon Beom; Lee, Jin Seong; Song, Jae Woo; Lee, Hyun Joo; Lim, Chae Hun; Kim, Song Soo

    2012-01-01

    To determine whether image quality (IQ) is improved with iterative reconstruction in image space (IRIS), and whether IRIS can be used for radiation reduction in chest CT. Standard-dose chest CT (SDCT) in 50 patients and low-dose chest CT (LDCT) in another 50 patients were performed using a dual-source CT at 120 kVp, with the same reference mAs applied to both tubes (50 mAs for SDCT and 25 mAs for LDCT) by modifying a dual-energy scan mode. Full-dose data were obtained by combining the data from both tubes, and half-dose data were separated from a single tube. These were reconstructed using filtered back projection (FBP) and IRIS: full-dose FBP (F-FBP), full-dose IRIS (F-IRIS), half-dose FBP (H-FBP) and half-dose IRIS (H-IRIS). Objective noise was measured. Subjective IQ was evaluated by radiologists for the following: noise, contrast and sharpness of mediastinum and lung. Objective noise was significantly lower in H-IRIS than in F-FBP (p < 0.01). In both SDCT and LDCT, the IQ scores were highest in F-IRIS, followed by F-FBP, H-IRIS and H-FBP, except those for sharpness of the mediastinum, which tended to be higher with FBP. When comparing CT images with the same dose but different reconstruction algorithms (F-IRIS/F-FBP and H-IRIS/H-FBP), scores tended to be higher with IRIS than with FBP, more distinctly so in half-dose images. However, despite the use of IRIS, the scores were lower in H-IRIS than in F-FBP. IRIS generally helps improve IQ, more distinctly so at reduced radiation dose. However, halving the radiation dose results in an IQ decrease even when using IRIS in chest CT.
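
    The "objective noise" referred to in such IQ studies is conventionally the standard deviation of CT numbers inside a homogeneous region of interest; a minimal Python sketch follows (the ROI size and placement here are assumptions, not the paper's protocol):

        import numpy as np

        def objective_noise(ct_slice, row, col, half=10):
            # Standard deviation of HU values in a square ROI centred on (row, col).
            roi = ct_slice[row - half:row + half, col - half:col + half]
            return float(np.std(roi))

        # Usage (hypothetical arrays): compare reconstructions of the same slice.
        # noise_f_iris = objective_noise(f_iris_slice, 256, 256)
        # noise_h_fbp  = objective_noise(h_fbp_slice, 256, 256)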

  13. Diagnostic accuracy of second-generation dual-source computed tomography coronary angiography with iterative reconstructions: a real-world experience.

    Science.gov (United States)

    Maffei, E; Martini, C; Rossi, A; Mollet, N; Lario, C; Castiglione Morelli, M; Clemente, A; Gentile, G; Arcadi, T; Seitun, S; Catalano, O; Aldrovandi, A; Cademartiri, F

    2012-08-01

    The authors evaluated the diagnostic accuracy of second-generation dual-source computed tomography (DSCT) coronary angiography (CTCA) with iterative reconstructions for detecting obstructive coronary artery disease (CAD). Between June 2010 and February 2011, we enrolled 160 patients (85 men; mean age 61.2±11.6 years) with suspected CAD. All patients underwent CTCA and conventional coronary angiography (CCA). For the CTCA scan (Definition Flash, Siemens), we used prospective tube current modulation and 70-100 ml of iodinated contrast material (Iomeprol 400 mgI/ml, Bracco). Data sets were reconstructed with an iterative reconstruction algorithm (IRIS, Siemens). CTCA and CCA reports were used to evaluate accuracy, with the thresholds for significant stenosis at ≥50% and ≥70%, respectively. No patient was excluded from the analysis. Heart rate was 64.3±11.9 bpm and radiation dose was 7.2±2.1 mSv. Disease prevalence was 30% (48/160). Sensitivity, specificity and positive and negative predictive values of CTCA in detecting significant stenosis were 90.1%, 93.3%, 53.2% and 99.1% (per segment), 97.5%, 91.2%, 61.4% and 99.6% (per vessel) and 100%, 83%, 71.6% and 100% (per patient), respectively. Positive and negative likelihood ratios at the per-patient level were 5.89 and 0.0, respectively. CTCA with second-generation DSCT in the real clinical world shows a diagnostic performance comparable with previously reported validation studies. The excellent negative predictive value and likelihood ratio make CTCA a first-line noninvasive method for diagnosing obstructive CAD.

  14. The regulatory action in the problem of radioactive sources processed as scrap

    International Nuclear Information System (INIS)

    Truppa, Walter Adrian; Cateriano, Miguel Angel

    2005-01-01

    The loss of control of a radioactive source can result in a radiological emergency, especially if that source is treated as scrap. This paper presents a case registered in Argentina concerning the discovery of a radioactive source of Kr-85, 9.25 GBq, used in a device for the industrial measurement of thickness. The radioactive source, without registration or identification, was detected by a radioactive-material portal monitor amid the scrap that entered daily into the furnace of an important steel company. From there, the Nuclear Regulatory Authority (RNA) conducted an investigation to determine the origin of the radioactive source and, in parallel, identified in its measurement laboratories the radioactive material inside the source. This led to a company in financial and judicial bankruptcy, which had not notified the RNA of this situation and which, according to records, also possessed eleven other sources with similar characteristics. Finally, the regulatory actions and effort allowed the localization of all the radioactive sources of this company, and their storage in an authorised repository

  15. The problem of electric sources in Einstein's Hermite-symmetric field theory

    International Nuclear Information System (INIS)

    Kreisel, E.

    1986-01-01

    The possibility of introducing a geometric source without breaking the A-invariance and Hermite symmetry of Einstein's Hermitian relativity is investigated. It would be very meaningful to interpret a source of this kind as an electric current. With this extension, Einstein's unitary field theory contains Einstein's gravitation, electromagnetism and the gluonic vacuum of chromodynamics. (author)

  16. Catch-up validation study of an in vitro skin irritation test method based on an open source reconstructed epidermis (phase II).

    Science.gov (United States)

    Groeber, F; Schober, L; Schmid, F F; Traube, A; Kolbus-Hernandez, S; Daton, K; Hoffmann, S; Petersohn, D; Schäfer-Korting, M; Walles, H; Mewes, K R

    2016-10-01

    To replace the Draize skin irritation assay (OECD guideline 404), several test methods based on reconstructed human epidermis (RHE) have been developed and adopted in OECD test guideline 439. However, all validated test methods in the guideline are linked to RHE provided by only three companies. Thus, the availability of these test models depends on the commercial interest of the producer. To overcome this limitation, and thus to increase the accessibility of in vitro skin irritation testing, an open source reconstructed epidermis (OS-REp) was introduced. To demonstrate the capacity of the OS-REp in regulatory risk assessment, a catch-up validation study was performed. The participating laboratories used in-house generated OS-REp to assess the set of 20 reference substances according to the performance standards amending OECD test guideline 439. Testing was performed under blinded conditions. The within-laboratory reproducibility of 87% and the inter-laboratory reproducibility of 85% prove a high reliability of irritancy testing using the OS-REp protocol. In addition, the prediction capacity, with an accuracy of 80%, was comparable to previously published RHE-based test protocols. Taken together, the results indicate that the OS-REp test method can be used as a standalone alternative skin irritation test replacing OECD test guideline 404. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Application of historical, topographic maps and remote sensing data for reconstruction of gully network development as source of information for gully erosion modeling

    Science.gov (United States)

    Belyaev, Vladimir; Kuznetsova, Yulia

    2017-04-01

    The central parts of European Russia are characterized by a relatively shorter history of intensive agriculture in comparison with Western Europe. As a result, a significant part of the period of large-scale cultivation is covered by different types of historical documents. For the last roughly 150 years, reasonably good-quality maps are available. The gully erosion history of European Russia is more or less well established, with known peaks of activity associated with initial cultivation (400-200 years ago for the territory of the Central Russian Upland) and with the change of land ownership in 1861, which caused the splitting of large landlord-owned fields into numerous small parcels owned by individual peasant families. The latter was the most important trigger for the dramatic growth of gully erosion intensity, as most of such parcels were oriented downslope. It is believed that detailed reconstructions of gully network development, using all the available information sources, can provide information suitable for testing gully erosion models. Such models can later be applied to predicting the further development of the existing gully networks under several different land use and climate change scenarios. Reconstructions for two case study areas located in different geographic and historical settings will be presented.

  18. On the question of 3D seed reconstruction in prostate brachytherapy: the determination of x-ray source and film locations

    International Nuclear Information System (INIS)

    Zhang Mutian; Zaider, Marco; Worman, Michael; Cohen, Gilad

    2004-01-01

    Inaccuracy in seed placement during permanent prostate implants may lead to significant dosimetric deviations from the intended plan. In two recent publications (Todor et al 2002 Phys. Med. Biol. 47 2031-48, Todor et al 2003 Phys. Med. Biol. 48 1153-71), methodology was described for identifying intraoperatively the positions of seeds already implanted, thus allowing re-optimization of the treatment plan and correcting for such seed misplacement. Seed reconstruction is performed using fluoroscopic images and an important (and non-trivial) component of this approach is the ability to accurately determine the position of the gantry relative to the treatment volume. We describe the methodology for acquiring this information, based on the known geometry of six markers attached to the ultrasound probe. This method does not require the C-arm unit to be isocentric and films can be taken with the gantry set at any arbitrary position. This is significant because the patient positioning on the operating table (in the lithotomy position) restricts the range of angles at which films can be taken to a quite narrow (typically ±10°) interval and, as a general rule, the closer the angles the larger the uncertainty in the seed location reconstruction along the direction from the x-ray source to the film. (note)

  19. 3D computed tomography using a microfocus X-ray source: Analysis of artifact formation in the reconstructed images using simulated as well as experimental projection data

    International Nuclear Information System (INIS)

    Krimmel, S.; Stephan, J.; Baumann, J.

    2005-01-01

    The scope of this contribution is to identify and quantify the influence of different parameters on the formation of image artifacts in X-ray computed tomography (CT), resulting, for example, from beam hardening or from a partial lack of information in 3D cone beam CT. In general, the reconstructed image quality depends on a number of acquisition parameters concerning the X-ray source (e.g. X-ray spectrum), the geometrical setup (e.g. cone beam angle), the sample properties (e.g. absorption characteristics) and the detector properties. While it is difficult to clearly distinguish the influence of different effects in experimental projection data, they can be selected individually with the help of simulated projection data by varying the parameter set. The reconstruction of the 3D data set is performed with the filtered back projection algorithm according to Feldkamp, Davis and Kress for experimental as well as simulated projection data. The experimental data are recorded with an industrial microfocus CT system which features a focal spot size of a few micrometers and uses a digital flat panel detector for data acquisition
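
    The filtering/back projection core of the Feldkamp, Davis and Kress (FDK) algorithm can be illustrated in two dimensions; the Python sketch below (numpy/scipy; parallel-beam FBP rather than the full cone-beam FDK, which additionally cosine-weights the projections and back projects along divergent rays) shows the ramp filtering and back projection steps:

        import numpy as np
        from scipy.ndimage import rotate

        def ramp_filter(sinogram):
            # Filter each projection (row) with a ramp filter in the Fourier domain.
            freqs = np.fft.fftfreq(sinogram.shape[1])
            return np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * np.abs(freqs), axis=1))

        def fbp(sinogram, angles_deg):
            # Back project each filtered projection by smearing it across the image
            # and rotating; the angle convention must match the forward projector.
            filtered = ramp_filter(sinogram)
            n = sinogram.shape[1]
            recon = np.zeros((n, n))
            for proj, ang in zip(filtered, angles_deg):
                recon += rotate(np.tile(proj, (n, 1)), ang, reshape=False, order=1)
            return recon * np.pi / (2 * len(angles_deg))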

  20. Parallelization of the AliRoot event reconstruction by performing a semi- automatic source-code transformation

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    side bus or processor interconnections. Parallelism can only result in a performance gain if memory usage is optimized, memory locality is improved and the communication between threads is minimized. But the domain of concurrent programming has become a field for highly skilled experts, as the implementation of multithreading is difficult, error-prone and labor-intensive. A full re-implementation for parallel execution of existing offline frameworks, like AliRoot in ALICE, is thus unaffordable. An alternative method is to use a semi-automatic source-to-source transformation to obtain a simple parallel design with almost no interference between threads. This reduces the need of rewriting the develop...

  1. Reconstructing the plinian and co-ignimbrite sources of large volcanic eruptions: A novel approach for the Campanian Ignimbrite.

    Science.gov (United States)

    Marti, Alejandro; Folch, Arnau; Costa, Antonio; Engwell, Samantha

    2016-02-17

    The 39 ka Campanian Ignimbrite (CI) super-eruption was the largest volcanic eruption of the past 200 ka in Europe. Tephra deposits indicate two distinct plume forming phases, Plinian and co-ignimbrite, characteristic of many caldera-forming eruptions. Previous numerical studies have characterized the eruption as a single-phase event, potentially leading to inaccurate assessment of eruption dynamics. To reconstruct the volume, intensity, and duration of the tephra dispersal, we applied a computational inversion method that explicitly accounts for the Plinian and co-ignimbrite phases and for gravitational spreading of the umbrella cloud. To verify the consistency of our results, we performed an additional single-phase inversion using an independent thickness dataset. Our better-fitting two-phase model suggests a higher mass eruption rate than previous studies, and estimates that 3/4 of the total fallout volume is co-ignimbrite in origin. Gravitational spreading of the umbrella cloud dominates tephra transport only within the first hundred kilometres due to strong stratospheric winds in our best-fit wind model. Finally, tephra fallout impacts would have interrupted the westward migration of modern hominid groups in Europe, possibly supporting the hypothesis of prolonged Neanderthal survival in South-Western Europe during the Middle to Upper Palaeolithic transition.

  2. Practical adjoint Monte Carlo technique for fixed-source and eigenfunction neutron transport problems

    International Nuclear Information System (INIS)

    Hoogenboom, J.E.

    1981-01-01

    An adjoint Monte Carlo technique is described for the solution of neutron transport problems. The optimum biasing function for a zero-variance collision estimator is derived. The optimum treatment of an analog of a non-velocity thermal group has also been derived. The method is extended to multiplying systems, especially for eigenfunction problems to enable the estimate of averages over the unknown fundamental neutron flux distribution. A versatile computer code, FOCUS, has been written, based on the described theory. Numerical examples are given for a shielding problem and a critical assembly, illustrating the performance of the FOCUS code. 19 refs
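
    The zero-variance biasing derivation is specific to the paper, but the general flavour of non-analog Monte Carlo can be conveyed with a one-dimensional transmission estimator using implicit capture (a generic variance-reduction sketch in Python, not the FOCUS adjoint scheme):

        import numpy as np

        rng = np.random.default_rng(0)
        sigma_t, sigma_s, thickness, n_hist = 1.0, 0.6, 3.0, 20_000

        def transmission(implicit_capture):
            score = 0.0
            for _ in range(n_hist):
                x, mu, w = 0.0, 1.0, 1.0
                while True:
                    x += mu * rng.exponential(1.0 / sigma_t)  # distance to next collision
                    if x >= thickness:
                        score += w                # particle leaks: tally its weight
                        break
                    if x < 0.0:
                        break                     # escaped backwards
                    if implicit_capture:
                        w *= sigma_s / sigma_t    # absorb a weight fraction, always scatter
                        if w < 1e-3:              # Russian roulette on low weights
                            if rng.random() < 0.5:
                                break
                            w *= 2.0
                    elif rng.random() > sigma_s / sigma_t:
                        break                     # analog absorption
                    mu = rng.uniform(-1.0, 1.0)   # isotropic re-direction (slab geometry)
            return score / n_hist

        print(transmission(False), transmission(True))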

  3. Certification of model spectrometric alpha sources (MSAS) and problems of the MSAS system improvement

    International Nuclear Information System (INIS)

    Belyatskij, A.F.; Gejdel'man, A.M.; Egorov, Yu.S.; Nedovesov, V.G.; Chechev, V.P.

    1984-01-01

    Results of the certification of standard spectrometric alpha sources (SSAS) of industrial production are presented. The methods for certification by the main radiation-physical parameters (the intrinsic half-width of the α-lines, the activity of radionuclides in the source, the energies of the α-particles emitted by the source, and the relative intensity of the different-energy α-particle groups) are analysed. As an avenue for improvement of the SSAS system, a set of model measures of α-radiation is considered: a collection of interconnected data units on the physical, engineering and design characteristics of SSAS, the methods for obtaining and determining them, and the instruments used

  4. Experience and problems of the automated measuring and sorting of sealed radiation sources

    International Nuclear Information System (INIS)

    Shmidt, G.

    1979-01-01

    It has been shown that, with the help of a serial sample-changing device and a mini-computer with suitable software, it is possible to organize the radioactivity measurement and sorting of sealed gamma sources with activities in the microcurie region. Application of the computer permits raising the accuracy of the data on the radioactivity of the radiation sources, sorted according to preset activity-level groups, and, when necessary, performing the activity measurements with lower error. The method described gives a working-time saving of nearly 4 hours in the measuring and sorting of some 500 sealed radiation sources [ru]

  5. Solar energy and nuclear power. Energy sources, environmental pollution and the CO₂ problem; Solarenergie und Atomstrom. Energiequellen, Umweltbelastung und das CO₂-Problem

    Energy Technology Data Exchange (ETDEWEB)

    Metzner, H.

    1999-07-01

    In this volume the energy sources used today and possible alternatives such as solar, wind and hydro power, geothermal energy and renewable fuels are presented. The environmental pollution due to fossil fuel use (e.g. sulfur and nitrogen oxides) as well as the use of nuclear power are discussed in detail. A separate chapter covers the CO₂ problem (greenhouse effect, ice cover of the earth, sea level, influence on plant growth and agricultural yields) as well as climate forecasting.

  6. Medio-Frontal and Anterior Temporal abnormalities in children with attention deficit hyperactivity disorder (ADHD during an acoustic antisaccade task as revealed by electro-cortical source reconstruction

    Directory of Open Access Journals (Sweden)

    Rockstroh Brigitte

    2011-01-01

    Full Text Available Abstract Background Attention Deficit Hyperactivity Disorder (ADHD) is one of the most prevalent disorders in childhood and adolescence. Impulsivity is one of its three core symptoms and is likely associated with inhibition difficulties. To date the neural correlate of the antisaccade task, a test of response inhibition, has not been studied in children with (or without) ADHD. Methods Antisaccade responses to visual and acoustic cues were examined in nine unmedicated boys with ADHD (mean age 122.44 ± 20.81 months) and 14 healthy control children (mean age 115.64 ± 22.87 months; three girls) while an electroencephalogram (EEG) was recorded. Brain activity before saccade onset was reconstructed using a 23-source montage. Results When cues were acoustic, children with ADHD had higher source activity than control children in the Medio-Frontal Cortex (MFC) between -230 and -120 ms and in the left-hemispheric Temporal Anterior Cortex (TAC) between -112 and 0 ms before saccade onset, despite both groups performing similarly behaviourally (antisaccade errors and saccade latency). When visual cues were used, EEG activity preceding antisaccades did not differ between groups. Conclusion Children with ADHD exhibit altered functioning of the TAC and MFC during an antisaccade task elicited by acoustic cues. Children with ADHD need more source activation to reach the same behavioural level as control children.

  7. Some problems of neutron source multiplication method for site measurement technology in nuclear critical safety

    International Nuclear Information System (INIS)

    Shi Yongqian; Zhu Qingfu; Hu Dingsheng; He Tao; Yao Shigui; Lin Shenghuo

    2004-01-01

    The paper gives the experimental theory and method of the neutron source multiplication method for site measurement technology in nuclear criticality safety. The parameter actually measured by the source multiplication method is the subcritical, with-source neutron effective multiplication factor k_s, not the neutron effective multiplication factor k_eff. The experimental research has been done on a uranium solution nuclear criticality safety experiment assembly. The k_s of different subcriticalities is measured by the neutron source multiplication method; the k_eff of different subcriticalities is obtained by first measuring the reactivity coefficient per unit solution level by the period method, then multiplying it by the difference between the critical and subcritical solution levels to obtain the reactivity of the subcritical solution level. Finally, k_eff can be extracted from the reactivity formula. The effect on nuclear criticality safety and the difference between k_eff and k_s are discussed
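
    The textbook relations behind the method can be sketched as follows (a generic Python illustration with hypothetical numbers, not the paper's site procedure): the measured count-rate multiplication M relates to the with-source factor through M = 1/(1 - k_s), while reactivity in the k_eff sense is rho = (k_eff - 1)/k_eff.

        def k_s_from_multiplication(count_rate_with_fuel, count_rate_source_only):
            # M = 1 / (1 - k_s)  =>  k_s = 1 - 1/M
            M = count_rate_with_fuel / count_rate_source_only
            return 1.0 - 1.0 / M

        def k_eff_from_reactivity(rho):
            # rho = (k_eff - 1) / k_eff  =>  k_eff = 1 / (1 - rho)
            return 1.0 / (1.0 - rho)

        # Hypothetical numbers: reactivity coefficient per unit solution level times
        # the difference between critical and subcritical levels gives rho.
        rho = -2.0e-4 * 25.0
        print(k_s_from_multiplication(1200.0, 300.0), k_eff_from_reactivity(rho))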

  8. Plasma focus as an heavy ion source in the problem of heavy ion fusion

    International Nuclear Information System (INIS)

    Gribkov, V.A.; Dubrovskij, A.V.; Kalachev, N.V.; Krokhin, O.N.; Silin, P.V.; Nikulin, V.Ya.; Cheblukov, Yu.N.

    1984-01-01

    Results of experiments on ion flux formation in a plasma focus (PF), aimed at developing a multicharged ion source for a thermonuclear facility driver, are presented. Copper ions were injected in the plasma focus accelerating section. Advantages of the suggested method of ion beam formation are demonstrated. A beam emittance of < 0.1 cm×mrad is obtained, and the plasma focus ion energy exceeds 1 MeV. The plasma focus in combination with a neodymium laser is considered a promising ion source for heavy ion fusion

  9. Direct reconstruction and associated uncertainties of ¹⁹²Ir source dwell positions in ring applicators using gafchromic film in the treatment planning of HDR brachytherapy cervix patients

    Science.gov (United States)

    Awunor, O. A.; Dixon, B.; Walker, C.

    2013-05-01

    This paper details a practical method for the direct reconstruction of high dose rate ¹⁹²Ir source dwell positions in ring applicators using gafchromic film in the treatment planning of brachytherapy cervix patients. It also details the uncertainties associated with such a process. Eight Nucletron interstitial ring applicators (Ø26 mm ×4, Ø30 mm ×3 and Ø34 mm ×1) and one 60 mm intrauterine tube were used in this study. RTQA2 and XRQA2 gafchromic films were irradiated at pre-programmed dwell positions with three successive ¹⁹²Ir sources and used to derive the coordinates of the source dwell positions. The source was observed to deviate significantly from its expected position, by up to 6.1 mm, in all ring sizes. Significant inter-applicator differences of up to 2.6 mm were observed between a subset of ring applicators. The measured data also differed significantly from the commercially available source path models provided by Nucletron, with differences of up to 3.7 mm across all ring applicator sizes. The total expanded uncertainty (k = 2), averaged over all measured dwell positions in the rings, was 1.1 ± 0.1 mm (Ø26 mm and Ø30 mm rings) and 1.0 ± 0.3 mm (Ø34 mm ring) respectively, and, when transferred to the treatment planning system, equated to maximum percentage dose changes of 1.9%, 13.2% and 1.5% at regions representative of the parametrium, lateral fornix and organs at risk respectively.
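
    The quoted expanded uncertainties follow the usual GUM recipe: combine independent standard uncertainties in quadrature and multiply by the coverage factor k = 2. A minimal Python sketch with placeholder component values (not those of the paper):

        import math

        # Placeholder component standard uncertainties, in mm (not the paper's values).
        components_mm = {
            "film digitisation": 0.30,
            "marker/applicator alignment": 0.35,
            "source dwell reproducibility": 0.25,
        }

        u_c = math.sqrt(sum(u ** 2 for u in components_mm.values()))  # combined (quadrature)
        U = 2.0 * u_c                                                 # expanded, k = 2
        print(f"u_c = {u_c:.2f} mm, U(k=2) = {U:.2f} mm")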

  10. Vertex reconstruction in CMS

    International Nuclear Information System (INIS)

    Chabanat, E.; D'Hondt, J.; Estre, N.; Fruehwirth, R.; Prokofiev, K.; Speer, T.; Vanlaer, P.; Waltenberger, W.

    2005-01-01

    Due to the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ('vertex finding') and an estimation problem ('vertex fitting'). Starting from least-squares methods, robustifications of the classical algorithms are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels

  11. Vertex Reconstruction in CMS

    CERN Document Server

    Chabanat, E; D'Hondt, J; Vanlaer, P; Prokofiev, K; Speer, T; Frühwirth, R; Waltenberger, W

    2005-01-01

    Because of the high track multiplicity in the final states expected in proton collisions at the LHC experiments, novel vertex reconstruction algorithms are required. The vertex reconstruction problem can be decomposed into a pattern recognition problem ("vertex finding") and an estimation problem ("vertex fitting"). Starting from least-squares methods, ways to render the classical algorithms more robust are discussed and the statistical properties of the novel methods are shown. A whole set of different approaches for the vertex finding problem is presented and compared in relevant physics channels.
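
    One standard way to robustify a least-squares vertex fit is iterative reweighting that down-weights outlier tracks. The one-dimensional Python sketch below (Tukey biweight on track z-impact points; a generic illustration, not the actual CMS fitters) shows the idea:

        import numpy as np

        def robust_vertex_z(z_tracks, sigma_z, n_iter=10, cutoff=3.0):
            # Weighted mean of track z-impact points, iteratively down-weighting
            # tracks with large pulls (Tukey biweight).
            z = np.average(z_tracks, weights=1.0 / sigma_z**2)   # plain least squares
            for _ in range(n_iter):
                pulls = (z_tracks - z) / sigma_z
                w = np.where(np.abs(pulls) < cutoff,
                             (1.0 - (pulls / cutoff) ** 2) ** 2, 0.0) / sigma_z**2
                if w.sum() == 0.0:
                    break
                z = np.average(z_tracks, weights=w)
            return z

        z_tracks = np.array([0.10, 0.12, 0.08, 0.11, 2.50])  # one outlier track (cm)
        sigma_z = np.full(5, 0.05)
        print(robust_vertex_z(z_tracks, sigma_z))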

  12. On the Problem Related to Reconstructing the Social Structure of the Population that Had Founded Seliksa-Trofimovka (Ancient Mordovian Burial Ground in 4th—5th Centuries

    Directory of Open Access Journals (Sweden)

    Grishakov Valeriy V.

    2013-12-01

    Full Text Available The article is devoted to the problem of reconstructing the social structure of the ancient Mordovian population that established the 4th-5th-century Seliksa-Trofimovka burial ground in the Upper Sura river region. The materials of the male burials of the necropolis have been chosen for analysis as the most socially informative. An attempt has been made to determine the relationship between the social status of the individual and its expression in burial rites. The differences in the composition and quantity of grave goods made it possible to distinguish three groups of burials, conventionally termed "the poor", "the ordinary" and "the warriors". The latter group included three graves with swords. The necropolis has a row-based layout; all the burials are at ground level, with no traces of gravestones, and have the same northeast orientation. The property-based stratification in the analyzed community was apparently insignificant, while social stratification depended primarily on professional activities.

  13. A Heuristic Algorithm for the Constrained Single-Source Problem with Constrained Customers

    Directory of Open Access Journals (Sweden)

    S. A. Raisi Dehkordi∗

    2012-09-01

    Full Text Available The Fermat-Weber location problem is to find a point in R^n that minimizes the sum of the weighted Euclidean distances from m given points in R^n. In this paper we consider the Fermat-Weber problem of one new facility with respect to n unknown customers, the aim being to minimize the sum of transportation costs between this facility and the customers. We assume that each customer is located in a nonempty convex closed bounded subset of R^n.
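
    For the unconstrained Fermat-Weber problem, the classical solver is Weiszfeld's fixed-point iteration; a minimal Python sketch follows (it does not handle the paper's constraint that customers lie in convex sets, which is the contribution of the heuristic above):

        import numpy as np

        def weiszfeld(points, weights, n_iter=100, eps=1e-9):
            # Minimise sum_i w_i * ||x - p_i|| over x in R^n.
            x = np.average(points, axis=0, weights=weights)  # start at weighted centroid
            for _ in range(n_iter):
                d = np.linalg.norm(points - x, axis=1)
                if np.any(d < eps):          # iterate landed on a data point
                    return x
                inv = weights / d
                x_new = (points * inv[:, None]).sum(axis=0) / inv.sum()
                if np.linalg.norm(x_new - x) < eps:
                    return x_new
                x = x_new
            return x

        pts = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
        print(weiszfeld(pts, np.ones(3)))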

  14. The long-term problems of contaminated land: Sources, impacts and countermeasures

    Energy Technology Data Exchange (ETDEWEB)

    Baes, C.F. III

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and nonfission nuclear weapons accidents that disperse the nuclear fuels (termed "broken arrows").

  15. The long-term problems of contaminated land: Sources, impacts and countermeasures

    International Nuclear Information System (INIS)

    Baes, C.F. III.

    1986-11-01

    This report examines the various sources of radiological land contamination; its extent; its impacts on man, agriculture, and the environment; countermeasures for mitigating exposures; radiological standards; alternatives for achieving land decontamination and cleanup; and possible alternatives for utilizing the land. The major potential sources of extensive long-term land contamination with radionuclides, in order of decreasing extent, are nuclear war, detonation of a single nuclear weapon (e.g., a terrorist act), serious reactor accidents, and nonfission nuclear weapons accidents that disperse the nuclear fuels (termed "broken arrows").

  16. Upper Palaeolithic lithic raw material sourcing in Central and Northern Portugal as an aid to reconstructing hunter-gatherer societies

    Directory of Open Access Journals (Sweden)

    Thierry Aubry

    2016-09-01

    Full Text Available We present the results of a study of the lithic raw materials used in Upper Palaeolithic occupations preserved in caves, rockshelters and open-air sites from two different geological environments in Portugal. For the sites located in the Lusitanian Basin, flint or silcrete sources are easily available in the close vicinity. The Côa Valley sites, located in the Iberian Massif, are within a geological environment where restricted fine-grained vein quartz and siliceous metamorphic rocks are available, but no flint or silcrete, even though both are present in the archaeological assemblages. Data from the two clusters of sites are compared with a third, newly located site in the Lower Vouga valley, at the limit of the Iberian Massif with the Lusitanian Basin, where quartz vein raw material types are locally available and flint is about 40 kilometres distant. This study reveals prehistoric adaptations to these different geological contexts, with shorter networks for the Lusitanian Basin sites contrasting with the long-distance ones for the Côa Valley, and the Vouga site in an intermediate position. Finally, we propose that lithic raw material supply networks, defined by a GIS least-cost algorithm, could be used as a proxy not only for territoriality in the case of local and regional lithic raw material sources, but also to infer long-distance social networks between different Palaeolithic human groups, created and maintained to promote access to asymmetrically distributed resources.

  17. Image Reconstruction. Chapter 13

    Energy Technology Data Exchange (ETDEWEB)

    Nuyts, J. [Department of Nuclear Medicine and Medical Imaging Research Center, Katholieke Universiteit Leuven, Leuven (Belgium); Matej, S. [Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA (United States)

    2014-12-15

    This chapter discusses how 2-D or 3-D images of tracer distribution can be reconstructed from a series of so-called projection images acquired with a gamma camera or a positron emission tomography (PET) system [13.1]. This is often called an ‘inverse problem’: the reconstruction is the inverse of the acquisition. It is called an inverse problem because making software to compute the true tracer distribution from the acquired data turns out to be more difficult than the ‘forward’ direction, i.e. making software to simulate the acquisition. There are basically two approaches to image reconstruction: analytical reconstruction and iterative reconstruction. The analytical approach is based on mathematical inversion, yielding efficient, non-iterative reconstruction algorithms. In the iterative approach, the reconstruction problem is reduced to computing a finite number of image values from a finite number of measurements. That simplification enables the use of iterative numerical inversion instead of mathematical inversion. Iterative inversion tends to require more computer power, but it can cope with more complex (and hopefully more accurate) models of the acquisition process.
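
    A canonical example of the iterative approach is the MLEM update for emission data y ~ Poisson(A @ x); a minimal Python sketch with a generic system matrix A (illustrative of the iterative category the chapter describes, not a complete PET reconstructor):

        import numpy as np

        def mlem(A, y, n_iter=50):
            # A: (n_bins, n_voxels) non-negative system matrix; y: counts per bin.
            x = np.ones(A.shape[1])          # uniform initial image
            sens = A.sum(axis=0)             # sensitivity image, A^T @ 1
            for _ in range(n_iter):
                forward = A @ x
                ratio = np.where(forward > 0.0, y / forward, 0.0)
                x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
            return x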

  18. Security problems arising from the use of radioactive sources in the study of the wear in refractory linings; Les problemes de securite dans l'emploi de sources radioactives pour l'etude de l'usure de revetements refractaires

    Energy Technology Data Exchange (ETDEWEB)

    Courtois, G; Hours, R; Le Clerc, P; Pons, A [Commissariat a l' Energie Atomique, Saclay (France).Centre d' Etudes Nucleaires

    1960-07-01

    The determination of the wear in refractories is a problem for which there are at present only a few solutions, and these are limited or delicate to use. That is the reason why the use of radioactive tracers embedded in the refractory has met with rapid success. Unfortunately, the development of the method has been retarded by the need to limit severely the amounts of radioelement incorporated and also by the observation that diffusion of the radioactive product occurred in the refractory. As a result, the limit of 1 mc per 500 metric tons of cast iron has been adopted in France, with the proviso that no single source exceeds 3 mc. Further, we have made special sources with a view to avoiding diffusion phenomena. The essential feature of these sources is that they use pyro-ceramic, a non-porous material having a high melting point and a high chemical inertness. In these sources, the radioelement can either be entirely encased in the pyro-ceramic or be an integral part of its composition. A comparative study of the two types of sources is currently under way. (author)

  19. Solution of multi-element LED light sources development automation problem

    Science.gov (United States)

    Chertov, Aleksandr N.; Gorbunova, Elena V.; Korotaev, Valery V.; Peretyagin, Vladimir S.

    2014-09-01

    The intensive development of LED technologies has resulted in the creation of multicomponent light sources in the form of controlled illumination devices. These light sources are used in different areas of production (for example, in the food industry for sorting products, or in the textile industry for quality control). The use of LED lighting products in devices for specialized lighting became possible due to the wide range of light colors and of LED structures (which determine the direction of radiation, the spatial distribution and intensity of the radiation, and the electrical, thermal, power and other characteristics), and, of course, the possibility of obtaining any shade over a wide dynamic range of brightness values. LED-based lighting devices are notable for the diversity of their parameters and characteristics, such as radiation color and the location and number of emitters. Although LED technologies have several advantages, they require more attention if one needs to ensure a certain character of the illumination distribution and/or of the color distribution at a predetermined distance (for example, at a flat surface, work zone, or area of analysis or observation). This paper presents software designed for the development of multicomponent LED light sources. The possibility of obtaining the desired color and energy distribution in the zone of analysis, by specifying the spatial parameters of the created multicomponent light source and using the real power, spectral and color parameters and characteristics of the LEDs, is shown as well.

  20. UV Reconstruction Algorithm And Diurnal Cycle Variability

    Science.gov (United States)

    Curylo, Aleksander; Litynska, Zenobia; Krzyscin, Janusz; Bogdanska, Barbara

    2009-03-01

    UV reconstruction is a method of estimating surface UV with the use of available actinometrical and aerological measurements. UV reconstruction is necessary for the study of long-term UV change: a typical series of UV measurements is not longer than 15 years, which is too short for trend estimation. The essential problem in the reconstruction algorithm is a good parameterization of clouds. In our previous algorithm we used an empirical relation between the Cloud Modification Factor (CMF) in global radiation and the CMF in UV. The CMF is defined as the ratio between measured and modelled irradiances. Clear-sky irradiance was calculated with a solar radiative transfer model. In the proposed algorithm, the time variability of global radiation during the diurnal cycle is used as an additional source of information. For elaborating the improved reconstruction algorithm, relevant data from Legionowo [52.4 N, 21.0 E, 96 m a.s.l.], Poland were collected with the following instruments: NILU-UV multi-channel radiometer, Kipp&Zonen pyranometer, and radiosonde profiles of ozone, humidity and temperature. The proposed algorithm has been used for the reconstruction of UV at four Polish sites: Mikolajki, Kolobrzeg, Warszawa-Bielany and Zakopane since the early 1960s. Krzyscin's reconstruction of total ozone has been used in the calculations.
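
    The CMF step can be sketched as follows in Python (the linear mapping coefficients from global-radiation CMF to UV CMF are placeholders, since the paper's empirical fit is not given here):

        def cmf(measured, modelled_clear_sky):
            # Cloud modification factor: measured over clear-sky modelled irradiance.
            return measured / modelled_clear_sky

        def cmf_uv_from_global(cmf_glob, a=0.88, b=0.10):
            # Hypothetical linear relation CMF_UV = a * CMF_glob + b.
            return a * cmf_glob + b

        def reconstructed_uv(uv_clear_sky_model, glob_measured, glob_clear_sky_model):
            return uv_clear_sky_model * cmf_uv_from_global(
                cmf(glob_measured, glob_clear_sky_model))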

  1. Solution to Two-Dimensional Steady Inverse Heat Transfer Problems with Interior Heat Source Based on the Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Shoubin Wang

    2017-01-01

    Full Text Available The compound-variable inverse problem, which comprises the boundary temperature distribution and the surface convective heat transfer coefficient of a two-dimensional steady heat transfer system with an inner heat source, is studied in this paper by applying the conjugate gradient method. The introduction of a complex variable to compute the gradient of the objective function yields more precise inversion results. The boundary element method is applied to compute the temperatures at discrete points in the forward problem. The factors of measurement error, the number of measurement points, and zero error, which impact the measurement result, are discussed and compared with the L-MM method in the inverse problem. Worked calculations and analysis prove that the method applied in this paper retains good effectiveness and accuracy even when measurement error exists and the number of boundary measurement points is reduced. The comparison indicates that the influence of error on the inversion solution can be minimized effectively using this method.
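
    The two ingredients the abstract combines, a complex-variable (complex-step) gradient and a conjugate gradient loop, can be sketched in Python on a stand-in least-squares objective (the quadratic below replaces the paper's boundary-element forward model; a second-difference line step is used where a full line search would be usual):

        import numpy as np

        def complex_step_grad(f, x, h=1e-20):
            # d f / d x_k = Im(f(x + i*h*e_k)) / h, free of subtractive cancellation.
            g = np.zeros_like(x)
            for k in range(x.size):
                xc = x.astype(complex)
                xc[k] += 1j * h
                g[k] = f(xc).imag / h
            return g

        def conjugate_gradient(f, x0, n_iter=20):
            x = x0.copy()
            g = complex_step_grad(f, x)
            d = -g
            for _ in range(n_iter):
                eps = 1e-4                    # curvature along d by second difference
                curv = (f(x + eps * d) - 2.0 * f(x) + f(x - eps * d)) / eps**2
                x = x - (g @ d) / curv * d    # exact line step for a quadratic
                g_new = complex_step_grad(f, x)
                if np.linalg.norm(g_new) < 1e-10:
                    break
                d = -g_new + (g_new @ g_new) / (g @ g) * d   # Fletcher-Reeves
                g = g_new
            return x

        # Stand-in objective: recover q from "measurements" y = M @ q_true.
        M = np.array([[2.0, 1.0], [1.0, 3.0], [0.5, 1.5]])
        y = M @ np.array([1.0, 2.0])
        print(conjugate_gradient(lambda q: np.sum((M @ q - y) ** 2), np.zeros(2)))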

  2. Explicit formulation of a nodal transport method for discrete ordinates calculations in two-dimensional fixed-source problems

    Energy Technology Data Exchange (ETDEWEB)

    Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Prolo Filho, Joao Francisco [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica, Estatistica e Fisica; Dias da Cunha, Rudnei; Basso Barichello, Liliane [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst de Matematica

    2014-04-15

    In this work a study of two-dimensional fixed-source neutron transport problems, in Cartesian geometry, is reported. The approach reduces the complexity of the multidimensional problem using a combination of nodal schemes and the Analytical Discrete Ordinates Method (ADO). The unknown leakage terms on the boundaries that arise from the derivation of the nodal scheme are incorporated into the problem source term, so as to couple the one-dimensional integrated solutions, made explicit in terms of the x and y spatial variables. The formulation leads to a considerable reduction of the order of the associated eigenvalue problems when combined with the usual symmetric quadratures, thereby providing solutions that have a higher degree of computational efficiency. Reflective-type boundary conditions are introduced to represent the domain in a simpler form than that previously considered in connection with the ADO method. Numerical results obtained with the technique are provided and compared to those present in the literature. (orig.)

  3. NOTE: Development of modified voxel phantoms for the numerical dosimetric reconstruction of radiological accidents involving external sources: implementation in SESAME tool

    Science.gov (United States)

    Courageot, Estelle; Sayah, Rima; Huet, Christelle

    2010-05-01

    Estimating the dose distribution in a victim's body is a relevant indicator in assessing biological damage from exposure in the event of a radiological accident caused by an external source. When the dose distribution is evaluated with a numerical anthropomorphic model, the posture and morphology of the victim have to be reproduced as realistically as possible. Several years ago, IRSN developed a specific software application, called the simulation of external source accident with medical images (SESAME), for the dosimetric reconstruction of radiological accidents by numerical simulation. This tool combines voxel geometry and the MCNP(X) Monte Carlo computer code for radiation-material interaction. This note presents a new functionality in this software that enables the modelling of a victim's posture and morphology based on non-uniform rational B-spline (NURBS) surfaces. The procedure for constructing the modified voxel phantoms is described, along with a numerical validation of this new functionality using a voxel phantom of the RANDO tissue-equivalent physical model.

  4. MODIS Land Surface Temperature time series reconstruction with Open Source GIS: A new quality of temperature based ecological indicators in complex terrain (Invited)

    Science.gov (United States)

    Neteler, M.

    2009-12-01

    In complex terrain like the Central European Alps, meteorological stations and ground surveys are usually sparsely and/or irregularly distributed and often favor agricultural areas. The application of traditional geospatial interpolation methods in complex terrain remains challenging and difficult to optimize. An alternative data source is remote sensing: high temporal resolution satellite data are continuously gaining interest, since these data are intrinsically spatialized: a continuous field of observations is obtained instead of point data. The increasing data availability suggests using these time series as a surrogate for certain measurements from meteorological stations, especially for temperature and related derivatives. The Terra and Aqua satellites with the Moderate Resolution Imaging Spectroradiometer (MODIS) provide four Earth coverages per day at various resolutions. We analyzed 8 years (2000 to 2008) of daily land surface temperature (LST) data from MODIS in an area located in the Southern European Alps. A method was developed to reconstruct incomplete maps (cloud coverage, invalid pixels) based on image statistics and on a model that includes additional GIS layers. The original LST map resolution of 1000 m could be improved to 200 m in this process, which renders the resulting LST maps applicable at regional scales. We propose the use of these reconstructed daily LST time series as a surrogate for meteorological observations, especially in the area of epidemiological modeling, where data are typically aggregated to decadal indicators. From these daily LST map series, derivable indicators include: 1) temperature minima, means and maxima for annual/monthly/decadal periods; 2) unusually hot summers; 3) the calculation of growing degree days; and 4) spring temperature increase or autumnal temperature decrease. Since more than 8 years of MODIS LST data are available today, even preliminary gradients can be extracted to assess multi-annual temperature trends
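
    Indicator 3, for example, reduces to a simple accumulation over the reconstructed daily series; a Python sketch (the base temperature of 10 °C is an assumption, chosen per application):

        import numpy as np

        def growing_degree_days(daily_mean_temp_c, base_c=10.0):
            # Sum of positive exceedances of the daily mean over the base temperature.
            return float(np.sum(np.clip(daily_mean_temp_c - base_c, 0.0, None)))

        # Synthetic one-year daily series (hypothetical, for illustration only):
        days = np.arange(365)
        t_daily = 10.0 + 12.0 * np.sin(2.0 * np.pi * (days - 80) / 365.0)
        print(growing_degree_days(t_daily))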

  5. Generalizable open source urban water portfolio simulation framework demonstrated using a multi-objective risk-based planning benchmark problem.

    Science.gov (United States)

    Trindade, B. C.; Reed, P. M.

    2017-12-01

    The growing access to and reduced cost of computing power in recent years has promoted rapid development and application of multi-objective water supply portfolio planning. As this trend continues, there is a pressing need for flexible risk-based simulation frameworks and improved algorithm benchmarking for emerging classes of water supply planning and management problems. This work contributes the Water Utilities Management and Planning (WUMP) model: a generalizable and open source simulation framework designed to capture how water utilities can minimize operational and financial risks by regionally coordinating planning and management choices, i.e. making more efficient and coordinated use of restrictions, water transfers and financial hedging combined with possible construction of new infrastructure. We introduce the WUMP simulation framework as part of a new multi-objective benchmark problem for the planning and management of regionally integrated water utility companies. In this problem, a group of fictitious water utilities seek to balance the use of the mentioned reliability-driven actions (e.g., restrictions, water transfers and infrastructure pathways) against their inherent financial risks. Several traits make this an ideal benchmark problem, namely the presence of (1) strong non-linearities and discontinuities in the Pareto front caused by the step-wise nature of the decision-making formulation and by the abrupt addition of storage through infrastructure construction, (2) noise due to the stochastic nature of the streamflows and water demands, and (3) non-separability resulting from the cooperative formulation of the problem, in which decisions made by one stakeholder may substantially impact others. Both the open source WUMP simulation framework and its demonstration in a challenging benchmarking example hold value for promoting broader advances in urban water supply portfolio planning for regions confronting change.

  6. 3D Reconstruction of a Fluvial Sediment Slug from Source to Sink: reach-scale modeling of the Dart River, NZ

    Science.gov (United States)

    Brasington, J.; Cook, S.; Cox, S.; James, J.; Lehane, N.; McColl, S. T.; Quincey, D. J.; Williams, R. D.

    2014-12-01

    Following heavy rainfall on 4/1/14, a debris flow at Slip Stream (44.59 S 168.34 E) introduced >10^6 m^3 of sediment to the Dart River valley floor in the NZ Southern Alps. Runout over an existing fan dammed the Dart River, causing a sudden drop in discharge downstream. This broad dam was breached quickly; however, the temporary loss of conveyance impounded a 3 km lake with a volume of 6 x 10^6 m^3 and depths that exceeded 10 m. Quantifying the impact of this large sediment pulse on the Dart River is urgently needed to assess potential sedimentation downstream and will also provide an ideal vehicle to test theories of bed wave migration in large, extensively braided rivers. Recent advances in geomatics offer the opportunity to study these impacts directly through the production of high-resolution DEMs. These 3D snapshots can then be compared through time to quantify the morphodynamic response of the channel as it adjusts to the change in sediment supply. In this study we describe the methods and results of a novel survey strategy designed to capture the complex morphology of the Dart River along a remote 40 km reach, from the upstream landslide source to its distal sediment sink in Lake Wakatipu. The scale of this system presents major logistical and methodological challenges and hitherto would conventionally have been addressed with airborne laser scanning, bringing with it significant deployment constraints and costs. By contrast, we present sub-metre 3D reconstructions of the system (Figure 1), derived from highly redundant aerial photography shot with a non-metric camera from a helicopter survey that extended over an 80 km^2 area. Structure-from-Motion photogrammetry was used to simultaneously solve for camera position and pose and to derive a 3D point cloud based on over 4000 images. Reconstructions were found to exhibit significant systematic error resulting from the implicit estimation of the internal camera orientation parameters, and we show how these effects can be minimized

  7. Mesozoic–Cenozoic Climate and Neotectonic Events as Factors in Reconstructing the Thermal History of the Source-Rock Bazhenov Formation, Arctic Region, West Siberia, by the Example of the Yamal Peninsula

    Science.gov (United States)

    Isaev, V. I.; Iskorkina, A. A.; Lobova, G. A.; Starostenko, V. I.; Tikhotskii, S. A.; Fomin, A. N.

    2018-03-01

    Schemes and criteria are developed for using the measured and modeled geotemperatures for studying the thermal regime of the source rock formations, as well as the tectonic and sedimentary history of sedimentary basins, by the example of the oil fields of the Yamal Peninsula. The method of paleotemperature modeling based on the numerical solution of the heat conduction equation for a horizontally layered solid with a movable upper boundary is used. The mathematical model directly includes the climatic secular trend of the Earth's surface temperature as the boundary condition and the paleotemperatures determined from the vitrinite reflectance as the measurement data. The method does not require a priori information about the nature and intensity of the heat flow from the Earth's interior; the flow is determined by solving the inverse problem of geothermy with a parametric description of the sedimentation history and the history of the thermophysical properties of the sedimentary stratum. The rate of sedimentation is allowed to be zero or negative, which provides the possibility to take into account gaps in sedimentation and denudation. The formation, existence, and degradation of the permafrost stratum and ice cover are taken into account as dynamical lithological-stratigraphic complexes with anomalously high thermal conductivity. It is established that disregarding the paleoclimatic factors precludes an adequate reconstruction of the thermal history of the source-rock deposits. Revealing and taking into account the Late Eocene regression provided the computationally optimal and richest thermal history of the source-rock Bazhenov Formation, which led to more correct volumetric-genetic estimates of the reserves. For estimating the hydrocarbon reserves in the land territories of the Arctic region of West Siberia by the volumetric-genetic technique, it is recommended to use the Arctic secular trend of temperatures and take into account the dynamics of the
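
    For orientation, the forward part of such paleotemperature modeling amounts to integrating the one-dimensional heat conduction equation through a layered column with the climatic trend imposed at the surface. The fragment below is a generic explicit finite-difference sketch under simplifying assumptions (uniform grid, fixed column height, prescribed basal gradient instead of the inverse heat-flow solution); it is not the authors' code.

        import numpy as np

        def step_heat_1d(T, kappa, dz, dt, T_surface, grad_base):
            """One explicit finite-difference step of dT/dt = kappa * d2T/dz2 in a
            layered column. The climatic secular trend enters as the surface value
            T_surface; grad_base is a prescribed basal temperature gradient (K/m).
            Stability requires dt <= dz**2 / (2 * kappa.max())."""
            T_new = T.copy()
            T_new[1:-1] = T[1:-1] + dt * kappa[1:-1] * (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dz**2
            T_new[0] = T_surface                    # upper boundary: surface temperature
            T_new[-1] = T_new[-2] + grad_base * dz  # lower boundary: basal heat flow
            return T_new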

  8. Three-dimensional image reconstruction with free open-source OsiriX software in video-assisted thoracoscopic lobectomy and segmentectomy.

    Science.gov (United States)

    Yao, Fei; Wang, Jian; Yao, Ju; Hang, Fangrong; Lei, Xu; Cao, Yongke

    2017-03-01

    The aim of this retrospective study was to evaluate the practice and the feasibility of OsiriX, a free and open-source medical imaging software, in performing accurate video-assisted thoracoscopic lobectomy and segmentectomy. From July 2014 to April 2016, 63 patients received anatomical video-assisted thoracoscopic surgery (VATS), either lobectomy or segmentectomy, in our department. Three-dimensional (3D) reconstruction images of 61 (96.8%) patients were preoperatively obtained with contrast-enhanced computed tomography (CT). Preoperative resection simulations were accomplished with patient-individual reconstructed 3D images. For lobectomy, pulmonary lobar veins, arteries and bronchi were identified meticulously by carefully reviewing the 3D images on the display. For segmentectomy, the intrasegmental veins in the affected segment for division and the intersegmental veins to be preserved were identified on the 3D images. Patient preoperative characteristics, surgical outcomes and postoperative data were reviewed from a prospective database. The study cohort of 63 patients included 33 (52.4%) men and 30 (47.6%) women, of whom 46 (73.0%) underwent VATS lobectomy and 17 (27.0%) underwent VATS segmentectomy. There was 1 conversion from VATS lobectomy to open thoracotomy because of fibrocalcified lymph nodes. A VATS lobectomy was performed in 1 case after completing the segmentectomy because invasive adenocarcinoma was detected by intraoperative frozen-section analysis. There were no 30-day or 90-day operative mortalities. CONCLUSIONS: The free, simple, and user-friendly software program OsiriX can provide a 3D anatomic structure of pulmonary vessels and a clear vision into the space between the lesion and adjacent tissues, which allows surgeons to make preoperative simulations and improve the accuracy and safety of actual surgery. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.

  9. Fundamental Problems of Neutron Physics at the Spallation Neutron Source at the ORNL

    International Nuclear Information System (INIS)

    Gudkov, Vladimir

    2008-01-01

    We propose to provide theoretical support for the experimental program in fundamental neutron physics at the SNS. This includes the study of neutron properties, neutron beta-decay, parity violation effects and time reversal violation effects. The main purpose of the proposed research is to work on theoretical problems related to experiments which have a high priority at the SNS. Therefore, we will make a complete analysis of the beta-decay process, including calculations of radiative corrections and recoil corrections for angular correlations in polarized neutron decay, with an accuracy better than that expected to be achieved in the planned experiments. Based on the results of the calculations, we will analyze the sensitivity of the angular correlations in searches for possible extensions of the Standard Model. We will also help to plan other experiments to address significant problems of modern physics and will work on their theoretical support.

  10. Johann Jakob Wettstein. New Sources, New Problems, and New Possibilities for Digital Research

    Directory of Open Access Journals (Sweden)

    Jan Krans

    2016-04-01

    Full Text Available Johann Jakob Wettstein (1683-1754) worked almost all his life toward the publication of his landmark 1751-52 edition of the Greek New Testament. In recent years, a large number of previously unknown sources on and by Wettstein have come to light, scattered over libraries in Europe, that provide new insights into his life and his New Testament project. This paper explores the diversity of these sources, their genres, their connections, their state of conservation and accessibility, and the like. Starting from the idea that the collection offers an excellent opportunity for mapping a single scholar’s projects and international networks over time and space, it envisages a project that brings together this wealth of material. It asks what challenges and possibilities for international digital research the collection entails and formulates the desiderata concerning the necessary digital infrastructure and collaboration across traditional scholarly boundaries.

  11. IYPT problems as an efficient source of ideas for first-year project laboratory tasks

    Energy Technology Data Exchange (ETDEWEB)

    Planinsic, Gorazd [Faculty for Mathematics and Physics, University of Ljubljana (Slovenia)], E-mail: gorazd.planinsic@fmf.uni-lj.si

    2009-11-15

    In the project laboratory, a group of students is given a well-defined task, but the path to the solution is left entirely to the students. The paper reports on some basic strategies for designing successful project tasks based on modified problems from the International Young Physicists' Tournament (IYPT). In addition, the integration of the first-year project laboratory into in-service teacher training is also briefly presented.

  12. IYPT problems as an efficient source of ideas for first-year project laboratory tasks

    International Nuclear Information System (INIS)

    Planinsic, Gorazd

    2009-01-01

    In the project laboratory, a group of students is given a well-defined task, but the path to the solution is left entirely to the students. The paper reports on some basic strategies for designing successful project tasks based on modified problems from the International Young Physicists' Tournament (IYPT). In addition, the integration of the first-year project laboratory into in-service teacher training is also briefly presented.

  13. Problems and management of radioactive sources and measures against illicit trafficking of nuclear materials in Bulgaria

    International Nuclear Information System (INIS)

    Strezov, A.

    1998-01-01

    Illicit trafficking of nuclear materials continues to pose a danger to public health and safety and to nuclear non-proliferation efforts. In Bulgaria, the majority of cases so far have involved only small amounts of fissile material or, more commonly, radioactive sources. A proper scheme for the analysis of seized nuclear materials will be developed, based on existing equipment for NDA analysis of nuclear materials and supplemented by a new system acquired through PHARE project assistance from EU experts. (author)

  14. Non ionizing radiations: Sources, fields of application, problem issues and normatives

    International Nuclear Information System (INIS)

    Raganella, L.

    1988-11-01

    The purpose of this paper is to briefly review radiation sources in work and life places, and national standards, proposed or enforced in different countries, with particular reference to ELF, RF and MW electromagnetic fields. It aims to aid the qualitative evaluation of the work we can carry out for the development of effective health protection of workers and the general public. (author)

  15. Non ionizing radiations Sources, fields of application, problem issues and normatives

    CERN Document Server

    Raganella, L

    1988-01-01

    The purpose of this paper is to briefly review radiation sources in work and life places, and national standards, proposed or enforced in different countries, with particular reference to ELF, RF and MW electromagnetic fields. It aims to aid the qualitative evaluation of the work we can carry out for the development of effective health protection of workers and the general public.

  16. Long-term Problems of Land Contaminated by Nonradioactive Hazardous Chemicals: Sources, Impacts, and Countermeasures

    Science.gov (United States)

    1987-01-01

    Chromium deficiency in animals produces symptoms similar to those for diabetes. Deficiencies of chromium have also been associated with heart disease... widespread mercury poisonings from environmental sources. In the early 1960s several instances of methyl mercury poisoning occurred in Iraq, Guatemala, and... carbon dioxide, water vapor, and ash; minor effluents include sulfur-, nitrogen-, and mercury-containing products that may be of significant environmental...

  17. An Efficient SF-ISF Approach for the Slepian-Wolf Source Coding Problem

    Directory of Open Access Journals (Sweden)

    Tu Zhenyu

    2005-01-01

    Full Text Available A simple but powerful scheme exploiting the binning concept for asymmetric lossless distributed source coding is proposed. The novelty in the proposed scheme is the introduction of a syndrome former (SF) in the source encoder and an inverse syndrome former (ISF) in the source decoder to efficiently exploit an existing linear channel code without the need to modify the code structure or the decoding strategy. For most channel codes, the construction of SF-ISF pairs is a light task. For parallel and serially concatenated codes, and particularly parallel and serial turbo codes where this appears less obvious, an efficient way of constructing linear-complexity SF-ISF pairs is demonstrated. It is shown that the proposed SF-ISF approach is simple, provenly optimal, and generally applicable to any linear channel code. Simulation using conventional and asymmetric turbo codes demonstrates a compression rate that is only 0.06 bit/symbol from the theoretical limit, which is among the best results reported so far.
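
    The syndrome-former idea is easy to demonstrate with a short linear-algebra sketch. The example below uses a (7,4) Hamming code in place of the turbo codes treated in the paper and assumes the decoder's side information differs from the source block in at most one bit; all function names are illustrative.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code over GF(2).
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def sf_encode(x):
            """Syndrome former: compress a 7-bit source block to its 3-bit syndrome."""
            return H @ x % 2

        def isf_decode(s, y):
            """Inverse syndrome former: recover x from its syndrome s and side
            information y, assuming x and y differ in at most one position."""
            s_e = (s + H @ y) % 2          # syndrome of the error pattern e = x XOR y
            e = np.zeros(7, dtype=int)
            if s_e.any():                  # locate the single flipped bit
                idx = np.where((H.T == s_e).all(axis=1))[0][0]
                e[idx] = 1
            return (y + e) % 2

        x = np.array([1, 0, 1, 1, 0, 0, 1])  # source block at the encoder
        y = x.copy(); y[4] ^= 1              # correlated side information (1 bit flipped)
        assert (isf_decode(sf_encode(x), y) == x).all()  # 7 bits recovered from 3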

  18. Characterization of Fetal Keratinocytes, Showing Enhanced Stem Cell-Like Properties: A Potential Source of Cells for Skin Reconstruction

    Directory of Open Access Journals (Sweden)

    Kenneth K.B. Tan

    2014-08-01

    Full Text Available Epidermal stem cells have been in clinical application as a source of culture-generated grafts. Although applications for such cells are increasing due to aging populations and the greater incidence of diabetes, current keratinocyte grafting technology is limited by immunological barriers and the time needed for culture amplification. We studied the feasibility of using human fetal skin cells for allogeneic transplantation and showed that fetal keratinocytes have faster expansion times, longer telomeres, lower immunogenicity indicators, and greater clonogenicity with more stem cell indicators than adult keratinocytes. The fetal cells did not induce proliferation of T cells in coculture and were able to suppress the proliferation of stimulated T cells. Nevertheless, fetal keratinocytes could stratify normally in vitro. Experimental transplantation of fetal keratinocytes in vivo seeded on an engineered plasma scaffold yielded a well-stratified epidermal architecture and showed stable skin regeneration. These results support the possibility of using fetal skin cells for cell-based therapeutic grafting.

  19. Source Reconstruction of Brain Potentials Using Bayesian Model Averaging to Analyze Face Intra-Domain vs. Face-Occupation Cross-Domain Processing.

    Science.gov (United States)

    Olivares, Ela I; Lage-Castellanos, Agustín; Bobes, María A; Iglesias, Jaime

    2018-01-01

    We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called "Fusiform Face Area", "FFA" and "Occipital Face Area", "OFA", respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.

  20. Source Reconstruction of Brain Potentials Using Bayesian Model Averaging to Analyze Face Intra-Domain vs. Face-Occupation Cross-Domain Processing

    Directory of Open Access Journals (Sweden)

    Ela I. Olivares

    2018-03-01

    Full Text Available We investigated the neural correlates of the access to and retrieval of face structure information in contrast to those concerning the access to and retrieval of person-related verbal information, triggered by faces. We experimentally induced stimulus familiarity via a systematic learning procedure including faces with and without associated verbal information. Then, we recorded event-related potentials (ERPs) in both intra-domain (face-feature) and cross-domain (face-occupation) matching tasks while N400-like responses were elicited by incorrect eyes-eyebrows completions and occupations, respectively. A novel Bayesian source reconstruction approach plus conjunction analysis of group effects revealed that in both cases the generated N170s were of similar amplitude but had different neural origin. Thus, whereas the N170 of faces was associated predominantly to right fusiform and occipital regions (the so-called “Fusiform Face Area”, “FFA” and “Occipital Face Area”, “OFA”, respectively), the N170 of occupations was associated to a bilateral very posterior activity, suggestive of basic perceptual processes. Importantly, the right-sided perceptual P200 and the face-related N250 were evoked exclusively in the intra-domain task, with sources in OFA and extensively in the fusiform region, respectively. Regarding later latencies, the intra-domain N400 seemed to be generated in right posterior brain regions encompassing mainly OFA and, to some extent, the FFA, likely reflecting neural operations triggered by structural incongruities. In turn, the cross-domain N400 was related to more anterior left-sided fusiform and temporal inferior sources, paralleling those described previously for the classic verbal N400. These results support the existence of differentiated neural streams for face structure and person-related verbal processing triggered by faces, which can be activated differentially according to specific task demands.

  1. Permutationally invariant state reconstruction

    DEFF Research Database (Denmark)

    Moroder, Tobias; Hyllus, Philipp; Tóth, Géza

    2012-01-01

    Feasible tomography schemes for large particle numbers must possess, besides an appropriate data acquisition protocol, an efficient way to reconstruct the density operator from the observed finite data set. Since state reconstruction typically requires the solution of a nonlinear large-scale optimization problem, this is a major challenge in the design of scalable tomography schemes. Here we present an efficient state reconstruction scheme for permutationally invariant quantum state tomography. It works for all common state-of-the-art reconstruction principles, including, in particular, maximum likelihood, and relies on convex optimization, which has clear advantages regarding speed, control and accuracy in comparison to commonly employed numerical routines. First prototype implementations easily allow reconstruction of a state of 20 qubits in a few minutes on a standard computer.

  2. Atmospheric inverse modeling via sparse reconstruction

    Science.gov (United States)

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
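
    As a generic illustration of sparsity-promoting regularization for an ill-posed inversion (not the authors' dictionary-based implementation, which additionally supports parameter bounds), the sketch below minimizes 0.5*||Kx - y||^2 + lam*||x||_1 by iterative soft-thresholding and recovers point-like sources observed through a smoothing forward operator; the toy Gaussian kernel and all names are assumptions.

        import numpy as np

        def ista(K, y, lam, n_iter=500):
            """Iterative soft-thresholding (ISTA) for the sparsity-regularized
            least-squares problem min_x 0.5*||K x - y||^2 + lam*||x||_1."""
            L = np.linalg.norm(K, 2) ** 2        # Lipschitz constant of the gradient
            x = np.zeros(K.shape[1])
            for _ in range(n_iter):
                g = x - K.T @ (K @ x - y) / L    # gradient step on the data fidelity
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x

        # Toy ill-posed setup: a smoothing forward operator observing two point sources.
        rng = np.random.default_rng(0)
        cols = np.arange(100)
        K = np.exp(-0.5 * ((np.arange(40)[:, None] * 2.5 - cols[None, :]) / 3.0) ** 2)
        x_true = np.zeros(100); x_true[[20, 70]] = [5.0, 3.0]
        y = K @ x_true + 0.01 * rng.standard_normal(40)
        x_hat = ista(K, y, lam=0.1)              # sparse estimate of the sources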

  3. Security problems arising from the use of radioactive sources in the study of the wear in refractory linings

    International Nuclear Information System (INIS)

    Courtois, G.; Hours, R.; Le Clerc, P.; Pons, A.

    1960-01-01

    The determination of the wear in refractories is a problem for which there are at present only a few solutions, and these are limited or delicate to use. That is the reason why the use of radioactive tracers contained in the refractory has met with rapid success. Unfortunately, the development of the method has been retarded by the need to limit severely the amounts of radioelement incorporated and also by the observation that diffusion of the radioactive product occurred in the refractory. As a result, the limit of 1 mc per 500 metric tons of cast-iron has been adopted in France, with the proviso that no single source exceeds 3 mc. Further, we have made special sources with a view to avoiding diffusion phenomena. The essential feature of these sources is that they use pyro-ceramic, a non-porous material having a high melting point and being very inert chemically. In these sources, the radioelement can either be entirely encased in the pyro-ceramic or be an integral part of its composition. A comparative study of the two types of sources is currently under way. (author) [fr

  4. An optimal control problem by controlling heat source of the surface of tissue

    OpenAIRE

    Dhar, Rikhiya; Dhar, Ranajit; Dhar, Piyanka

    2013-01-01

    A distributed optimal control problem for a system described by the bio-heat equation for a homogeneous plane tissue is analytically investigated, such that a desired temperature of the tissue at a particular point, at the location of a tumour treated by hyperthermia, is attained at the end of the total operation time of the process by means of microwave heating induced on the surface of the tissue, which is taken as the control. Here the temperature of the tissue along the length of the tissue at different times of operation...

  5. SHIFTING TO ALTERNATIVE FOOD SOURCE: POTENTIAL TO OVERCOME ETHIOPIA'S MALNUTRITION AND POVERTY PROBLEMS

    OpenAIRE

    Gelmesa , Dandena

    2010-01-01

    N° ISBN - 978-2-7380-1284-5; International audience; The current population of more than 70 million people in Ethiopia is expected to double within the next 30 years. Almost 80% of the population lives in the countryside, while the rest is situated in urban areas. An estimated five million people are suffering from a lack of vitamins and essential minerals, of which 80% are children of the next generation. Every year, on average, about five million people have problems securing enough f...

  6. Problems and progress in the preparation of sources for the alpha spectrometry of plutonium

    International Nuclear Information System (INIS)

    Miguel, M.; Deron, S.; Swietly, H.; Heinonen, O.J.

    1981-01-01

    The interpretation of non-destructive measurements of plutonium materials requires more accurate determinations of the isotopic abundance of Pu-238 than conventional chemical assays. The requirements of calorimetry, passive neutron and conventional chemical assays are presented and compared. When Pu-238 is measured by alpha spectrometry, these requirements define how well the plutonium must be separated from americium and what the accuracy of the spectrometry should be. The latter can strongly depend upon the resolution of the alpha spectrum. The authors describe a procedure to produce sources by drop deposition which ensures a resolution of 17 keV with commercial instrumentation.

  7. Magnetic Resonance Elastography: Measurement of Hepatic Stiffness Using Different Direct Inverse Problem Reconstruction Methods in Healthy Volunteers and Patients with Liver Disease.

    Science.gov (United States)

    Saito, Shigeyoshi; Tanaka, Keiko; Hashido, Takashi

    2016-02-01

    The purpose of this study was to compare the mean hepatic stiffness values obtained by the application of two different direct inverse problem reconstruction methods to magnetic resonance elastography (MRE). Thirteen healthy men (23.2±2.1 years) and 16 patients with liver disease (78.9±4.3 years; 12 men and 4 women) were examined for this study using a 3.0 T MRI. The healthy volunteers underwent three consecutive scans: two 70-Hz waveform scans and a 50-Hz waveform scan. The patients with liver disease, on the other hand, underwent scanning using the 70-Hz waveform only. The MRE data for each subject were processed twice for the calculation of the mean hepatic stiffness (Pa), once using multiscale direct inversion (MSDI) and once using multimodel direct inversion (MMDI). There were no significant differences in the mean stiffness values between the two 70-Hz scans or between the different waveforms. However, the mean stiffness values obtained with the MSDI technique (with mask: 2895.3±255.8 Pa, without mask: 2940.6±265.4 Pa) were larger than those obtained with the MMDI technique (with mask: 2614.0±242.1 Pa, without mask: 2699.2±273.5 Pa). The reproducibility of measurements obtained using the two techniques was high for both the healthy volunteers [intraclass correlation coefficients (ICCs): 0.840-0.953] and the patients (ICC: 0.830-0.995). These results suggest that knowledge of the characteristics of different direct inversion algorithms is important for longitudinal liver stiffness assessments, such as the comparison of different scanners and the evaluation of the response to fibrosis therapy.
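
    MSDI and MMDI are vendor-specific elaborations of the same basic idea: solving the wave equation pointwise for the stiffness. A bare-bones algebraic Helmholtz inversion of a single-frequency displacement field is sketched below for orientation; the density, grid spacing, and function name are illustrative assumptions, not the scanner's algorithm.

        import numpy as np

        def helmholtz_inversion(u, freq, rho=1000.0, dx=1e-3):
            """Algebraic Helmholtz (direct) inversion: assuming a locally homogeneous,
            isotropic medium, mu * laplacian(u) = -rho * omega^2 * u, so the shear
            stiffness is estimated pointwise as mu = -rho * omega^2 * u / laplacian(u).
            u is a complex 2D displacement field at a single vibration frequency."""
            omega = 2.0 * np.pi * freq
            lap = (np.gradient(np.gradient(u, dx, axis=0), dx, axis=0)
                   + np.gradient(np.gradient(u, dx, axis=1), dx, axis=1))
            return np.real(-rho * omega ** 2 * u / lap)  # stiffness map in Pa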

  8. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and in the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman exists to speak without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. There are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State health agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community

  9. Application of wireless sensor network to problems of detection and tracking of radioactive sources

    International Nuclear Information System (INIS)

    Dupuy, P

    2006-01-01

    International efforts are being conducted to guarantee continuous control of radioactive sources. A theoretical and practical study has been carried out on the feasibility of installing wireless sensor networks in nuclear installations or plants that use radioactive material. The study is approached through the implementation of a system designed on the relatively new platform of motes, which gives great flexibility for distributing sensors, taking advantage of new wireless technologies and high-level programming. The work presents an analysis of the state of the art of sensors, detectors, antennas and power supplies, including nuclear power supplies. It also presents contributions to these fields through experimentation and proposed designs. Three applications that justify the technology are shown and a demonstration project is proposed. The social benefits of the system are essentially a technical approach to the continuous control of radioactive sources during their life cycle and the online monitoring of staff, with the possibility of identifying and optimizing the procedures that produce the highest exposures in practice, or of detecting potential exposures [es

  10. Methods and problems of determination of paleoearthquake magnitudes from fault source parameters

    International Nuclear Information System (INIS)

    Chang, C. J.; Choi, W. H.; Yeon, K. H.; Park, D. H.; Im, C. B.

    2004-01-01

    It has been debated whether some of the Quaternary faults discovered near the nuclear power plant site in the SE Korean peninsula are capable or not; it was therefore necessary to estimate the maximum earthquake potential from the fault source parameters. In this study, we reviewed and analyzed the methods for evaluating the maximum earthquake potential and evaluated the maximum credible earthquake from the fault source parameters, excluding the factor of faulting time. We obtained a paleomagnitude range of M 6.82∼7.21 with a mean of M 6.98 from a fault with 1.5 m displacement, one of the Quaternary faults surveyed along the coastline of the East Sea. We also obtained mean values of M 5.36, M 7.47 and M 6.46 from another fault, based on its fault surface length of 1.5 km, its displacement of 4 m, and its rate of seismic moment release, respectively. We consider that the differences among these paleomagnitudes are due to factors of over- and underestimation entering the earthquake potential estimates, and to the fact that the detailed geometry and dynamics of the faults may not be fully identified

  11. Overview of image reconstruction

    International Nuclear Information System (INIS)

    Marr, R.B.

    1980-04-01

    Image reconstruction (or computerized tomography, etc.) is any process whereby a function, f, on R^n is estimated from empirical data pertaining to its integrals, ∫f(x) dx, for some collection of hyperplanes of dimension k < n. The paper begins with background information on how image reconstruction problems have arisen in practice, and describes some of the application areas of past or current interest; these include radioastronomy, optics, radiology and nuclear medicine, electron microscopy, acoustical imaging, geophysical tomography, nondestructive testing, and NMR zeugmatography. Then the various reconstruction algorithms are discussed in five classes: summation, or simple back-projection; convolution, or filtered back-projection; Fourier and other functional transforms; orthogonal function series expansion; and iterative methods. Certain more technical mathematical aspects of image reconstruction are considered from the standpoint of uniqueness, consistency, and stability of solution. The paper concludes by presenting certain open problems. 73 references
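
    Of the five classes, the convolution (filtered back-projection) family is compact enough to sketch directly. The following schematic numpy implementation assumes a parallel-beam sinogram of shape (n_angles, n_detectors) with angles in degrees; scipy.ndimage.rotate stands in for the back-projection geometry, adequate for illustration rather than production accuracy.

        import numpy as np
        from scipy.ndimage import rotate

        def filtered_back_projection(sinogram, angles_deg):
            """Convolution-class reconstruction: ramp-filter each parallel-beam
            projection in the Fourier domain, then smear ('back-project') each
            filtered projection across the image plane at its acquisition angle."""
            n = sinogram.shape[1]
            ramp = np.abs(np.fft.fftfreq(n))                 # Ram-Lak filter
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
            recon = np.zeros((n, n))
            for proj, theta in zip(filtered, angles_deg):
                smear = np.tile(proj, (n, 1))                # constant along each ray
                recon += rotate(smear, theta, reshape=False, order=1)
            return recon * np.pi / (2 * len(angles_deg))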

  12. Dual-source spiral CT with pitch up to 3.2 and 75 ms temporal resolution: Image reconstruction and assessment of image quality

    International Nuclear Information System (INIS)

    Flohr, Thomas G.; Leng Shuai; Yu Lifeng; Allmendinger, Thomas; Bruder, Herbert; Petersilka, Martin; Eusemann, Christian D.; Stierstorfer, Karl; Schmidt, Bernhard; McCollough, Cynthia H.

    2009-01-01

    Purpose: To present the theory for image reconstruction of a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and evaluate its image quality and dose. Methods: With the use of two x-ray sources and two data acquisition systems, spiral CT exams having a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited to the SFOV of the second detector as a maximum, depending on the pitch. Spatial and low contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. Results: No significant differences in quantitative measures of image quality were found between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2 for spatial and low contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6≤pitch≤3.2) had only a minor impact on radiation dose and image noise when the effective tube current time product (mA s/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch=3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. Images of the moving

  13. Dual-source spiral CT with pitch up to 3.2 and 75 ms temporal resolution: image reconstruction and assessment of image quality.

    Science.gov (United States)

    Flohr, Thomas G; Leng, Shuai; Yu, Lifeng; Allmendinger, Thomas; Bruder, Herbert; Petersilka, Martin; Eusemann, Christian D; Stierstorfer, Karl; Schmidt, Bernhard; McCollough, Cynthia H

    2009-12-01

    To present the theory for image reconstruction of a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and evaluate its image quality and dose. With the use of two x-ray sources and two data acquisition systems, spiral CT exams having a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited to the SFOV of the second detector as a maximum, depending on the pitch. Spatial and low contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. No significant differences in quantitative measures of image quality were found between single-source scans at pitch = 1.0 and dual-source scans at pitch = 3.2 for spatial and low contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6 ≤ pitch ≤ 3.2) had only a minor impact on radiation dose and image noise when the effective tube current time product (mA s/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch = 3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. Images of the moving coronary artery phantom

  14. Dual-source spiral CT with pitch up to 3.2 and 75 ms temporal resolution: Image reconstruction and assessment of image quality

    Energy Technology Data Exchange (ETDEWEB)

    Flohr, Thomas G.; Leng Shuai; Yu Lifeng; Allmendinger, Thomas; Bruder, Herbert; Petersilka, Martin; Eusemann, Christian D.; Stierstorfer, Karl; Schmidt, Bernhard; McCollough, Cynthia H. [Siemens Healthcare, Computed Tomography, 91301 Forchheim, Germany and Department of Diagnostic Radiology, Eberhard-Karls-Universitaet, 72076 Tuebingen (Germany); Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Siemens Healthcare, Computed Tomography, 91301 Forchheim (Germany); Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States)

    2009-12-15

    Purpose: To present the theory for image reconstruction of a high-pitch, high-temporal-resolution spiral scan mode for dual-source CT (DSCT) and evaluate its image quality and dose. Methods: With the use of two x-ray sources and two data acquisition systems, spiral CT exams having a nominal temporal resolution per image of up to one-quarter of the gantry rotation time can be acquired using pitch values up to 3.2. The scan field of view (SFOV) for this mode, however, is limited to the SFOV of the second detector as a maximum, depending on the pitch. Spatial and low contrast resolution, image uniformity and noise, CT number accuracy and linearity, and radiation dose were assessed using the ACR CT accreditation phantom, a 30 cm diameter cylindrical water phantom or a 32 cm diameter cylindrical PMMA CTDI phantom. Slice sensitivity profiles (SSPs) were measured for different nominal slice thicknesses, and an anthropomorphic phantom was used to assess image artifacts. Results were compared between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2. In addition, image quality and temporal resolution of an ECG-triggered version of the DSCT high-pitch spiral scan mode were evaluated with a moving coronary artery phantom, and radiation dose was assessed in comparison with other existing cardiac scan techniques. Results: No significant differences in quantitative measures of image quality were found between single-source scans at pitch=1.0 and dual-source scans at pitch=3.2 for spatial and low contrast resolution, CT number accuracy and linearity, SSPs, image uniformity, and noise. The pitch value (1.6 ≤ pitch ≤ 3.2) had only a minor impact on radiation dose and image noise when the effective tube current time product (mA s/pitch) was kept constant. However, while not severe, artifacts were found to be more prevalent for the dual-source pitch=3.2 scan mode when structures varied markedly along the z axis, particularly for head scans. Images of the moving

  15. Problems with the sources of the observed gravitational waves and their resolution

    Directory of Open Access Journals (Sweden)

    Dolgov A.D.

    2017-01-01

    Full Text Available Recent direct registration of gravitational waves by LIGO and astronomical observations of the universe at redshifts 5-10 demonstrate that standard astrophysics and cosmology are in tension with the data. The origin of the source of the GW150914 event, which presumably is a binary of coalescing black holes with masses of about 30 solar masses each, with zero spin, as well as the universe at z = 5-10 being densely populated by superheavy black holes, bright galaxies, supernovae, and dust, does not fit the standard astrophysical picture. It is shown here that the model of primordial black hole (PBH) formation, suggested in 1993, nicely explains all these and more puzzles, including those in the contemporary universe, such as MACHOs and the mass spectrum of the observed solar-mass black holes. The mass spectrum and density of PBHs are predicted. The scenario may possibly lead to abundant antimatter in the universe and even in the Galaxy.

  16. A One-Dimensional Thermoelastic Problem due to a Moving Heat Source under Fractional Order Theory of Thermoelasticity

    Directory of Open Access Journals (Sweden)

    Tianhu He

    2014-01-01

    Full Text Available The dynamic response of a one-dimensional problem for a thermoelastic rod with finite length is investigated in the context of the fractional order theory of thermoelasticity in the present work. The rod is fixed at both ends and subjected to a moving heat source. The fractional order thermoelastic coupled governing equations for the rod are formulated. The Laplace transform as well as its numerical inversion is applied to solve the governing equations. The variations of the considered temperature, displacement, and stress in the rod are obtained and demonstrated graphically. The effects of time, velocity of the moving heat source, and the fractional order parameter on the distributions of the considered variables are examined and discussed in detail.
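
    The numerical inversion step can be illustrated generically. The sketch below implements the Gaver-Stehfest algorithm, one common choice for inverting Laplace-domain solutions back to the time domain; the paper's specific inversion scheme is not stated in this record, so this is a stand-in under that assumption.

        import numpy as np
        from math import factorial

        def stehfest_invert(F, t, N=12):
            """Gaver-Stehfest numerical inversion of a Laplace transform F(s) at
            time t > 0. N must be even; accuracy improves with N until floating-point
            round-off dominates (N = 10-16 is typical in double precision)."""
            ln2 = np.log(2.0)
            total = 0.0
            for k in range(1, N + 1):
                Vk = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    Vk += (j ** (N // 2) * factorial(2 * j)) / (
                        factorial(N // 2 - j) * factorial(j) * factorial(j - 1)
                        * factorial(k - j) * factorial(2 * j - k))
                total += (-1) ** (k + N // 2) * Vk * F(k * ln2 / t)
            return total * ln2 / t

        # Sanity check: F(s) = 1/(s + 1) inverts to exp(-t).
        assert abs(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0) - np.exp(-1.0)) < 1e-4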

  17. Computational science and re-discovery: open-source implementation of ellipsoidal harmonics for problems in potential theory

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P; Knepley, Matthew G

    2012-01-01

    We present two open-source (BSD) implementations of ellipsoidal harmonic expansions for solving problems of potential theory using separation of variables. Ellipsoidal harmonics are used surprisingly infrequently, considering their substantial value for problems ranging in scale from molecules to the entire solar system. In this paper, we suggest two possible reasons for the paucity relative to spherical harmonics. The first is essentially historical—ellipsoidal harmonics developed during the late 19th century and early 20th, when it was found that only the lowest-order harmonics are expressible in closed form. Each higher-order term requires the solution of an eigenvalue problem, and tedious manual computation seems to have discouraged applications and theoretical studies. The second explanation is practical: even with modern computers and accurate eigenvalue algorithms, expansions in ellipsoidal harmonics are significantly more challenging to compute than those in Cartesian or spherical coordinates. The present implementations reduce the 'barrier to entry' by providing an easy and free way for the community to begin using ellipsoidal harmonics in actual research. We demonstrate our implementation using the specific and physiologically crucial problem of how charged proteins interact with their environment, and ask: what other analytical tools await re-discovery in an era of inexpensive computation?

  18. Segmentation-Driven Tomographic Reconstruction

    DEFF Research Database (Denmark)

    Kongskov, Rasmus Dalgas

    The tomographic reconstruction problem is concerned with creating a model of the interior of an object from some measured data, typically projections of the object. After reconstructing an object it is often desired to segment it, either automatically or manually. For computed tomography (CT), the reconstruction can be regularized such that the segmentation subsequently can be carried out by use of a simple segmentation method, for instance just a thresholding method. We tested the advantages of going from a two-stage reconstruction method to a one-stage segmentation-driven reconstruction method for the phase contrast tomography reconstruction

  19. FRED (a Framework for Reconstructing Epidemic Dynamics): an open-source software system for modeling infectious diseases and control strategies using census-based populations.

    Science.gov (United States)

    Grefenstette, John J; Brown, Shawn T; Rosenfeld, Roni; DePasse, Jay; Stone, Nathan T B; Cooley, Phillip C; Wheaton, William D; Fyshe, Alona; Galloway, David D; Sriram, Anuroop; Guclu, Hasan; Abraham, Thomas; Burke, Donald S

    2013-10-08

    Mathematical and computational models provide valuable tools that help public health planners to evaluate competing health interventions, especially for novel circumstances that cannot be examined through observational or controlled studies, such as pandemic influenza. The spread of diseases like influenza depends on the mixing patterns within the population, and these mixing patterns depend in part on local factors including the spatial distribution and age structure of the population, the distribution of size and composition of households, employment status and commuting patterns of adults, and the size and age structure of schools. Finally, public health planners must take into account the health behavior patterns of the population, patterns that often vary according to socioeconomic factors such as race, household income, and education levels. FRED (a Framework for Reconstructing Epidemic Dynamics) is a freely available open-source agent-based modeling system based closely on models used in previously published studies of pandemic influenza. This version of FRED uses open-access census-based synthetic populations that capture the demographic and geographic heterogeneities of the population, including realistic household, school, and workplace social networks. FRED epidemic models are currently available for every state and county in the United States, and for selected international locations. State and county public health planners can use FRED to explore the effects of possible influenza epidemics in specific geographic regions of interest and to help evaluate the effect of interventions such as vaccination programs and school closure policies. FRED is available under a free open source license in order to contribute to the development of better modeling tools and to encourage open discussion of modeling tools being used to evaluate public health policies. We also welcome participation by other researchers in the further development of FRED.
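
    FRED itself is a full-featured open-source system; the core daily loop of any agent-based epidemic model can nevertheless be caricatured in a few lines. The toy SIR step below over an explicit contact graph is purely illustrative (the parameter values, dict-based state store, and flattened contact network are all assumptions) and stands in for FRED's far richer household/school/workplace mixing structure.

        import random

        def sir_day(state, contacts, beta=0.05, gamma=0.25):
            """Advance a toy agent-based SIR model by one day: each infectious agent
            independently infects each susceptible contact with probability beta,
            then recovers with probability gamma. 'state' maps agent id -> 'S'/'I'/'R';
            'contacts' maps agent id -> iterable of neighbour ids."""
            newly_infected = set()
            for agent, s in state.items():
                if s == "I":
                    for other in contacts[agent]:
                        if state[other] == "S" and random.random() < beta:
                            newly_infected.add(other)
            for agent, s in state.items():
                if s == "I" and random.random() < gamma:
                    state[agent] = "R"
            for agent in newly_infected:
                state[agent] = "I"
            return state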

  20. Star Formation at z = 2.481 in the Lensed Galaxy SDSS J1110+6459. I. Lens Modeling and Source Reconstruction

    Science.gov (United States)

    Johnson, Traci L.; Sharon, Keren; Gladders, Michael D.; Rigby, Jane R.; Bayliss, Matthew B.; Wuyts, Eva; Whitaker, Katherine E.; Florian, Michael; Murray, Katherine T.

    2017-07-01

    Using the combined resolving power of the Hubble Space Telescope and gravitational lensing, we resolve star-forming structures in a z ~ 2.5 galaxy on scales much smaller than the usual kiloparsec diffraction limit of HST. SGAS J111020.0+645950.8 is a clumpy, star-forming galaxy lensed by the galaxy cluster SDSS J1110+6459 at z=0.659, with a total magnification ~30× across the entire arc. We use a hybrid parametric/non-parametric strong lensing mass model to compute the deflection and magnification of this giant arc, reconstruct the light distribution of the lensed galaxy in the source plane, and resolve the star formation into two dozen clumps. We develop a forward-modeling technique to model each clump in the source plane. We ray-trace the model to the image plane, convolve with the instrumental point-spread function (PSF), and compare with the GALFIT model of the clumps in the image plane, which decomposes clump structure from more extended emission. This technique has the advantage, over ray-tracing, of accounting for the asymmetric lensing shear of the galaxy in the image plane and the instrument PSF. At this resolution, we can begin to study star formation on a clump-by-clump basis, toward the goal of understanding feedback mechanisms and the buildup of exponential disks at high redshift. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program # 13003.
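
    The forward-modeling loop described above (source-plane clump model, ray-tracing via the lens equation, PSF convolution, comparison in the image plane) can be summarized schematically. In the sketch below, the deflection maps and PSF width are taken as given by a separately computed lens model, and a single Gaussian clump stands in for the GALFIT decomposition; all names are illustrative rather than the authors' pipeline.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def forward_model(src_params, alpha_x, alpha_y, psf_sigma, grid):
            """Toy lensing forward model: evaluate a Gaussian clump in the source
            plane at ray-traced positions (lens equation: beta = theta - alpha),
            then convolve the resulting image-plane light with the instrument PSF.
            alpha_x/alpha_y are deflection maps from a separately fitted lens model."""
            x0, y0, flux, sigma = src_params
            theta_x, theta_y = grid               # image-plane coordinate grids
            beta_x = theta_x - alpha_x            # source-plane position of each pixel
            beta_y = theta_y - alpha_y
            r2 = (beta_x - x0) ** 2 + (beta_y - y0) ** 2
            img = flux * np.exp(-0.5 * r2 / sigma ** 2)
            return gaussian_filter(img, psf_sigma)  # apply the instrument PSF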

  1. Problem of medical follow-up and assessment of occupational disease in personnel handling radiation sources

    International Nuclear Information System (INIS)

    Klener, V.

    1983-01-01

    The long-term change in the health condition of 120 recorded cases of occupational disease due to ionizing radiation in the years 1961 to 1981 was evaluated on the basis of an analysis of out-patient records in three regions of the Czech Socialist Republic. In the group, the prevalent conditions were carcinoma of the skin (86), alterations in blood formation (19), cataract (4), leukemia (2) and changes due to single exposure, usually with acute skin manifestations (9). Owing to the inadequate development of radiobiological knowledge and the lack of objective data on exposure, cases of transient leukopenia used to be put in direct relation with occupational exposure to ionizing radiation; this disorder always had a good long-term prognosis. At the present level of protection, the determination of the peripheral blood count made within preventive medical check-ups of personnel handling radiation sources has only partial significance and should be considered as complementary to the overall complex examination. (author)

  2. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    Full Text Available In this paper, we study simplified models of the ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed under different output service schemes.

  3. Detecting Intracranial Hemorrhage Using Automatic Tube Current Modulation With Advanced Modeled Iterative Reconstruction in Unenhanced Head Single- and Dual-Energy Dual-Source CT.

    Science.gov (United States)

    Scholtz, Jan-Erik; Wichmann, Julian L; Bennett, Dennis W; Leithner, Doris; Bauer, Ralf W; Vogl, Thomas J; Bodelle, Boris

    2017-05-01

    The purpose of our study was to determine the diagnostic accuracy, image quality, and radiation dose of low-dose single- and dual-energy unenhanced third-generation dual-source head CT for detection of intracranial hemorrhage (ICH). A total of 123 patients with suspected ICH were examined using a dual-source 192-MDCT scanner. Standard-dose 120-kVp single-energy CT (SECT; n = 36) and 80-kVp and 150-kVp dual-energy CT (DECT; n = 30) images were compared with low-dose SECT (n = 32) and DECT (n = 25) images obtained using automated tube current modulation (ATCM). Advanced modeled iterative reconstruction (ADMIRE) was used for all protocols. Detection of ICH was performed by three readers who were blinded to the image acquisition parameters of each image series. Image quality was assessed both quantitatively and qualitatively. Interobserver agreement was calculated using the Fleiss kappa. Radiation dose was measured as dose-length product (DLP). Detection of ICH was excellent (sensitivity, 94.9-100%; specificity, 94.7-100%) in all protocols (p = 1.00) with perfect interobserver agreement (0.83-0.96). Qualitative assessment showed significantly better ratings for both standard-dose protocols regarding gray matter-to-white matter contrast (p ≤ 0.014), whereas the highest gray matter-to-white matter contrast-to-noise ratio was observed with low-dose DECT images (p ≥ 0.057). The lowest posterior fossa artifact index was measured for standard-dose DECT, which showed significantly lower values compared with low-dose protocols (p ≤ 0.034). Delineation of ventricular margins and sharpness of subarachnoidal spaces were rated excellent in all protocols (p ≥ 0.096). Low-dose techniques lowered radiation dose by 26% for SECT images (DLP, 575.0 ± 72.3 mGy · cm vs 771.5 ± 146.8 mGy · cm; p dual-source CT while allowing significant radiation dose reduction.

  4. Reconstruction of War Damaged Buildings - A Problem that Still Stands. The Case of the National Economy Bank in Warsaw Restored During the Second World War

    Science.gov (United States)

    Łotysz, Sławomir

    2016-12-01

    The Polish national historiography remains silent on the reconstruction of damaged towns and cities that was undertaken by the German administration after the capture of Poland in September 1939. This paper, on the war-time restoration of the National Economy Bank's headquarters in Warsaw, is an attempt to at least partially fill the gap. Designed by the celebrated architect Rudolf Świerczyński in the late 1920s in accordance with contemporary air raid defence regulations, it was bombed and nevertheless seriously damaged during the September Campaign. Under the German management of the bank, the building was reconstructed and even modernized by commissioned Polish engineers.

  5. Reconstruction of War Damaged Buildings - A Problem that Still Stands. The Case of the National Economy Bank in Warsaw Restored During the Second World War

    Directory of Open Access Journals (Sweden)

    Łotysz Sławomir

    2016-12-01

    Full Text Available The Polish national historiography remains silent on the reconstruction of damaged towns and cities that was undertaken by the German administration after the capture of Poland in September 1939. This paper, on the war-time restoration of the National Economy Bank’s headquarters in Warsaw, is an attempt to at least partially fill the gap. Designed by the celebrated architect Rudolf Świerczyński in the late 1920s in accordance with contemporary air raid defence regulations, it was bombed and nevertheless seriously damaged during the September Campaign. Under the German management of the bank, the building was reconstructed and even modernized by commissioned Polish engineers.

  6. Developing milk industry estimates for dose reconstruction projects

    International Nuclear Information System (INIS)

    Beck, D.M.; Darwin, R.F.

    1991-01-01

    One of the most important contributors to radiation doses from Hanford during the 1944-1947 period was radioactive iodine. Consumption of milk from cows that ate vegetation contaminated with iodine is likely the dominant pathway of human exposure. To estimate the doses people could have received from this pathway, it is necessary to reconstruct the amount of milk consumed by people living near Hanford, the source of the milk, and the type of feed that the milk cows ate. This task is challenging because the dairy industry has undergone radical changes since the end of World War II, and records that document the impact of these changes on the study area are scarce. Similar problems are faced by researchers on most dose reconstruction efforts. The purpose of this work is to document and evaluate the methods used on the Hanford Environmental Dose Reconstruction (HEDR) Project to reconstruct the milk industry and to present preliminary results

  7. Direct and inverse problems of infrared tomography

    DEFF Research Database (Denmark)

    Sizikov, Valery S.; Evseev, Vadim; Fateev, Alexander

    2016-01-01

    The problems of infrared tomography, direct (the modeling of measured functions) and inverse (the reconstruction of gaseous medium parameters), are considered with a laboratory burner flame as an example application. Two measurement modes are used: active (ON) with an external IR source...

  8. Performance analysis of a cellular automation algorithm for the solution of the track reconstruction problem on a manycore server at LIT, JINR

    International Nuclear Information System (INIS)

    Kulakov, I.S.; Baginyan, S.A.; Ivanov, V.V.; Kisel', P.I.

    2013-01-01

    Results of tests of the track reconstruction efficiency, the speed of the algorithm, and its scalability with respect to the number of cores of a server with two Intel Xeon E5640 CPUs (8 physical or 16 logical cores in total) are presented and discussed.

  9. Fact Sheet on Sources and Uses of U.S. Funding Provided in Fiscal Year 2006 for Iraq Relief and Reconstruction

    National Research Council Canada - National Science Library

    2007-01-01

    ... about $5.4 billion, in additional non-IRRF funding made available in FY 2006, to four separate funds, for various relief and reconstruction projects in Iraq, including training of Iraqi security...

  10. Hybrid spectral CT reconstruction.

    Directory of Open Access Journals (Sweden)

    Darin P Clark

    Full Text Available Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with

  11. Hybrid spectral CT reconstruction

    Science.gov (United States)

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral

  12. On a second order of accuracy stable difference scheme for the solution of a source identification problem for hyperbolic-parabolic equations

    Science.gov (United States)

    Ashyralyyeva, Maral; Ashyraliyev, Maksat

    2016-08-01

    In the present paper, a second-order accurate difference scheme for the approximate solution of a source identification problem for hyperbolic-parabolic equations is constructed. A theorem on stability estimates for the solution of this difference scheme and for its first- and second-order difference derivatives is presented. In applications, this abstract result permits us to obtain stability estimates for the solutions of difference schemes for the approximate solution of two source identification problems for hyperbolic-parabolic equations.

  13. Understanding enabling capacities for managing the 'wicked problem' of nonpoint source water pollution in catchments: a conceptual framework.

    Science.gov (United States)

    Patterson, James J; Smith, Carl; Bellamy, Jennifer

    2013-10-15

    Nonpoint source (NPS) water pollution in catchments is a 'wicked' problem that threatens water quality, water security, ecosystem health and biodiversity, and thus the provision of ecosystem services that support human livelihoods and wellbeing from local to global scales. However, it is a difficult problem to manage because water catchments are linked human and natural systems that are complex, dynamic, multi-actor, and multi-scalar in nature. This in turn raises questions about understanding and influencing change across multiple levels of planning, decision-making and action. A key challenge in practice is enabling implementation of local management action, which can be influenced by a range of factors across multiple levels. This paper reviews and synthesises important 'enabling' capacities that can influence implementation of local management action, and develops a conceptual framework for understanding and analysing these in practice. Important enabling capacities identified include: history and contingency; institutional arrangements; collaboration; engagement; vision and strategy; knowledge building and brokerage; resourcing; entrepreneurship and leadership; and reflection and adaptation. Furthermore, local action is embedded within multi-scalar contexts and therefore, is highly contextual. The findings highlight the need for: (1) a systemic and integrative perspective for understanding and influencing change for managing the wicked problem of NPS water pollution; and (2) 'enabling' social and institutional arenas that support emergent and adaptive management structures, processes and innovations for addressing NPS water pollution in practice. These findings also have wider relevance to other 'wicked' natural resource management issues facing similar implementation challenges.

  14. Reconstructing Neutrino Mass Spectrum

    OpenAIRE

    Smirnov, A. Yu.

    1999-01-01

    Reconstruction of the neutrino mass spectrum and lepton mixing is one of the fundamental problems of particle physics. In this connection we consider two central topics: (i) the origin of large lepton mixing, and (ii) the possible existence of new (sterile) neutrino states. We also discuss a possible relation between large mixing and the existence of sterile neutrinos.

  15. Climate Reconstructions

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA Paleoclimatology Program archives reconstructions of past climatic conditions derived from paleoclimate proxies, in addition to the Program's large holdings...

  16. Intra nodal reconstruction of the numerical solution generated by the spectro nodal constant for Sn problems of eigenvalues in two-dimensional rectangular geometry

    International Nuclear Information System (INIS)

    Menezes, Welton Alves de

    2009-01-01

    In this dissertation the spectral nodal method SD-SGF-CN, cf. spectral diamond - spectral Green's function - constant nodal, is used to determine the angular fluxes averaged along the edges of the homogenized nodes in heterogeneous domains. Using these results, we developed an algorithm for the reconstruction of the node-edge average angular fluxes within the nodes of the spatial grid set up on the domain, since more localized numerical solutions are not generated by coarse-mesh numerical methods. Numerical results are presented to illustrate the accuracy of the algorithm we offer. (author)

  17. A variational study on BRDF reconstruction in a structured light scanner

    DEFF Research Database (Denmark)

    Nielsen, Jannik Boll; Stets, Jonathan Dyssel; Lyngby, Rasmus Ahrenkiel

    2017-01-01

    Time-efficient acquisition of reflectance behavior together with surface geometry is a challenging problem. In this study, we investigate the impact of system parameter uncertainties when incorporating a data-driven BRDF reconstruction approach into the standard pipeline of a structured light setup. Results show that while uncertainties in vertex positions and normals have a high impact on the quality of reconstructed BRDFs, object geometry and light source properties have very little influence on the reconstructed BRDFs. With this analysis, practitioners now have insight into the tolerances required for accurate BRDF acquisition.

  18. Potentials and Problems of Internet as a Source of Purchasing Information – Experiences and Attitudes of University Students in Croatia

    Directory of Open Access Journals (Sweden)

    Blaženka Knežević

    2014-06-01

    Full Text Available Gathering information online prior to offline purchase has become the common way of using the Internet within the student population. On the other hand, there are more and more Internet users and online shoppers in all Central European countries. In the CEE region, companies are searching for ways to approach students as a target group via their web sites. The purpose of this research was to explain (1) how the student population in Croatia uses the Internet as a tool for gathering information on products and services and (2) to assess perceived problems and the potential of the Internet as a retail information source. The paper is based on primary research: a survey on attitudes of Croatian students towards the Internet and online shopping. Results are analyzed using descriptive and inferential statistical methods. Discussion of the results leads to the conclusion that there are statistically different attitudes among groups according to gender and according to previous experience with online shopping. For illustration: (a) males and females differ in the assortment they choose and buy online, (b) male students have a more positive attitude towards online shopping benefits than female students, and (c) online shoppers have more positive attitudes towards security issues than non-online shoppers.

  19. Sources of Stress and Coping Strategies among Undergraduate Medical Students Enrolled in a Problem-Based Learning Curriculum

    Directory of Open Access Journals (Sweden)

    Samira S. Bamuhair

    2015-01-01

    Full Text Available Background. Medical education is rated as one of the most difficult trainings to endure. Throughout their undergraduate years, medical students face numerous stressors. Coping with these stressors requires access to a variety of resources, varying from personal strengths to social support. We aimed to explore the perceived stress, stressors, and coping strategies employed by medical students studying in a problem-based learning curriculum. Methodology. This cross-sectional study of randomly selected medical students explored demographics, the perceived stress scale, sources of stress, and coping strategies. Results. Of the 378 medical students who participated in the study, 59.3% were male and 40.7% female. Nearly 53% of the students often felt stressed, and a third felt that they could not cope with stress. Over 82% found studying stressful and 64.3% were not sleeping well. Half of the students reported low self-esteem. Perceived stress scores were statistically significantly high for the specific stressors of studying in general, worrying about the future, interpersonal conflict, and having low self-esteem. Coping strategies that were statistically significantly applied more often were blaming oneself and being self-critical, seeking advice and help from others, and finding comfort in religion. Female students were more stressed than males, but they also employed more coping strategies. Conclusions. Stress is very common among medical students. Most of the stressors are from coursework and interpersonal relationships. Low self-esteem coupled with self-blame and self-criticism is quite common.

  20. Diagnostic Performance of an Advanced Modeled Iterative Reconstruction Algorithm for Low-Contrast Detectability with a Third-Generation Dual-Source Multidetector CT Scanner: Potential for Radiation Dose Reduction in a Multireader Study.

    Science.gov (United States)

    Solomon, Justin; Mileto, Achille; Ramirez-Giraldo, Juan Carlos; Samei, Ehsan

    2015-06-01

    To assess the effect of radiation dose reduction on low-contrast detectability by using an advanced modeled iterative reconstruction (ADMIRE; Siemens Healthcare, Forchheim, Germany) algorithm in a contrast-detail phantom with a third-generation dual-source multidetector computed tomography (CT) scanner. A proprietary phantom with a range of low-contrast cylindrical objects, representing five contrast levels (range, 5-20 HU) and three sizes (range, 2-6 mm), was fabricated with a three-dimensional printer and imaged with a third-generation dual-source CT scanner at various radiation dose index levels (range, 0.74-5.8 mGy). Image data sets were reconstructed by using different section thicknesses (range, 0.6-5.0 mm) and reconstruction algorithms (filtered back projection [FBP] and ADMIRE with a strength range of three to five). Eleven independent readers blinded to technique and reconstruction method assessed all data sets in two reading sessions by measuring detection accuracy with a two-alternative forced choice approach (first session) and by scoring the total number of visible object groups (second session). Dose reduction potentials based on both reading sessions were estimated. Results between FBP and ADMIRE were compared by using both paired t tests and analysis of variance tests at the 95% significance level. During the first session, detection accuracy increased with increasing contrast, size, and dose index (diagnostic accuracy range, 50%-87%; interobserver variability, ±7%). When compared with FBP, ADMIRE improved detection accuracy by 5.2% on average across the investigated variables.

  1. Colonial legacy as a source and a problem in South-South Cooperation: The Case of Brazil

    Directory of Open Access Journals (Sweden)

    Elena Borisovna Pavlova

    2017-12-01

    Full Text Available The aim of this article is to identify the key narratives of colonial legacy as nodal points in the articulation of the Brazilian agenda in South-South cooperation. Brazil’s aspirations for regional leadership and attempts to increase its international prestige are analyzed in a constructivist framework. Most importantly, our theoretical reasoning departs from the constructivist take on the agent-structure debate proposed by A. Wendt in 1987. This approach, firstly, helps to clarify the main difficulties which Brazil faced in Latin America and to examine Brazilian efforts to overcome colonial legacy in order to secure regional leadership, one of the principal goals of its foreign policy. Secondly, colonial legacy can be seen as an unconditional source for the geographical expansion of Brazil’s influence and for increasing the number of its allies by means of South-South cooperation. Thirdly, we demonstrate the very complex relationship on the structural level between the BRICS project and the problem of colonial legacy. Using the example of Brazil as a state which, on the one hand, has not been able to overcome the postcolonial complex, and on the other, is actively performing political rituals aimed at demonstrating its increasing power, we can more clearly outline the possibilities and limitations inherent in the structure of the contemporary international system. Finally, this research suggests that BRICS, a group that claims to articulate the principles of a new world order more beneficial for the states of the South, has limited chances to succeed with that mission.

  2. GRASP. Development of an event reconstruction method using a gamma ray air shower parameterisation and applications to γ-ray sources with H.E.S.S

    Energy Technology Data Exchange (ETDEWEB)

    Hillert, Andreas

    2014-07-24

    The H.E.S.S. experiment, with its high sensitivity and large field of view, is an ideal instrument to survey the Milky Way in VHE γ-rays. An accurate reconstruction of the γ-ray direction as well as a strong reduction of the hadronic background is essential for the analysis of the data. In this work a reconstruction algorithm is developed that applies a fit of pixel amplitudes to an expected image obtained from a Gamma Ray Air Shower Parameterisation (GRASP). This parameterisation was obtained from Monte Carlo air shower simulations by parameterising the angular Cherenkov photon distribution with suitable analytical functions. Furthermore, it provides new classifying variables to differentiate γ-ray-induced air showers from hadronic ones. The reconstruction of air shower parameters is achieved by a maximum likelihood fit and improves the angular resolution by 20-30% with respect to traditional image-moment analysis methods. In combination with an MVA-based background rejection method using these new classifying variables, the sensitivity can be improved by about 70%. An analysis of the pulsar wind nebula MSH 15-52 and an investigation of its morphology and spectral properties show an indication of energy-dependent morphology in VHE γ-rays.
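
    As a rough illustration of the fitting idea described above (not the GRASP analysis itself), the sketch below fits a parameterised expected image to observed pixel amplitudes by minimizing a negative log-likelihood. The 2D Gaussian template and the Gaussian pixel likelihood are placeholder assumptions standing in for the GRASP shower parameterisation and the proper per-pixel photoelectron likelihood:

      import numpy as np
      from scipy.optimize import minimize

      def expected_image(params, xs, ys):
          # Placeholder template: a 2D Gaussian standing in for the GRASP
          # parameterisation of the angular Cherenkov photon distribution.
          x0, y0, amp, width = params
          return amp * np.exp(-((xs - x0) ** 2 + (ys - y0) ** 2) / (2 * width ** 2))

      def neg_log_likelihood(params, xs, ys, amplitudes, sigma=1.0):
          # Gaussian pixel noise assumed; the real analysis uses a proper
          # photoelectron likelihood per camera pixel.
          mu = expected_image(params, xs, ys)
          return 0.5 * np.sum((amplitudes - mu) ** 2) / sigma ** 2

      # Toy camera: recover shower-image parameters from noisy pixel amplitudes.
      rng = np.random.default_rng(0)
      xs, ys = np.meshgrid(np.linspace(-1, 1, 20), np.linspace(-1, 1, 20))
      truth = (0.1, -0.2, 50.0, 0.3)
      amplitudes = expected_image(truth, xs, ys) + rng.normal(0.0, 1.0, xs.shape)
      fit = minimize(neg_log_likelihood, x0=(0.0, 0.0, 40.0, 0.5),
                     args=(xs, ys, amplitudes))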

  3. Solving the forward problem in EEG source analysis by spherical and FDM head modeling: a comparative analysis - biomed 2009

    NARCIS (Netherlands)

    Vatta, F.; Meneghini, F.; Esposito, F.; Mininel, S.; Di Salle, F.

    2009-01-01

    Neural source localization techniques based on electroencephalography (EEG) use scalp potential data to infer the location of underlying neural activity. This procedure entails modeling the sources of EEG activity and modeling the head volume conduction process to link the modeled sources to the

  4. Vaginal reconstruction

    International Nuclear Information System (INIS)

    Lesavoy, M.A.

    1985-01-01

    Vaginal reconstruction can be an uncomplicated and straightforward procedure when attention to detail is maintained. The Abbe-McIndoe procedure of lining the neovaginal canal with split-thickness skin grafts has become standard. The use of the inflatable Heyer-Schulte vaginal stent provides comfort to the patient and ease to the surgeon in maintaining approximation of the skin graft. For large vaginal and perineal defects, myocutaneous flaps such as the gracilis island have been extremely useful for correction of radiation-damaged tissue of the perineum or for the reconstruction of large ablative defects. Minimal morbidity and scarring ensue because the donor site can be closed primarily. With all vaginal reconstruction, a compliant patient is a necessity. The patient must wear a vaginal obturator for a minimum of 3 to 6 months postoperatively and is encouraged to use intercourse as an excellent obturator. In general, vaginal reconstruction can be an extremely gratifying procedure for both the functional and emotional well-being of patients

  5. ACL Reconstruction

    Science.gov (United States)

    ... in moderate exercise and recreational activities, or play sports that put less stress on the knees. ACL reconstruction is generally recommended if: You're an athlete and want to continue in your sport, especially if the sport involves jumping, cutting or ...

  6. Industrial dynamic tomographic reconstruction

    International Nuclear Information System (INIS)

    Oliveira, Eric Ferreira de

    2016-01-01

    The state-of-the-art methods applied to industrial processes are currently based on the principles of classical tomographic reconstruction developed for tomographic patterns of static distributions, or are limited to cases of low variability of the density distribution function of the tomographed object. Noise and motion artifacts are the main problems caused by a mismatch in the data from views acquired at different instants. All of these add to the known fact that using a limited amount of data can result in the presence of noise, artifacts and some inconsistencies with the distribution under study. One of the objectives of the present work is to discuss the difficulties that arise from applying reconstruction algorithms in dynamic tomography that were originally developed for static distributions. Another objective is to propose solutions that aim at reducing a temporal type of information loss caused by employing regular acquisition systems to dynamic processes. With respect to dynamic image reconstruction, a comparison was conducted between different static reconstruction methods, such as MART and FBP, when used for dynamic scenarios. This comparison was based on an MCNPx simulation as well as an analytical setup of an aluminum cylinder that moves along the section of a riser during the process of acquisition, and also on cross-section images from CFD techniques. As for the adaptation of current tomographic acquisition systems to dynamic processes, this work established a sequence of tomographic views in a just-in-time fashion for visualization purposes, a form of visually displaying density information as soon as it becomes amenable to image reconstruction. A third contribution was to take advantage of the triple color channel necessary to display colored images in most displays, so that, by appropriately scaling the acquired values of each view in the linear system of the reconstruction, it was possible to imprint a temporal trace into the regularly
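
    The third contribution above, packing consecutive time frames into the red, green and blue display channels, can be sketched in a few lines. This is a minimal illustration under the assumption of grayscale frames normalized to [0, 1], not the thesis implementation:

      import numpy as np

      def temporal_rgb(frame_r, frame_g, frame_b):
          # Map three consecutive reconstructions to the R, G and B channels:
          # static regions come out gray, moving structures as color fringes.
          return np.stack([frame_r, frame_g, frame_b], axis=-1)

      def disk(cx, cy, n=64, rad=8):
          # Toy frame: a bright disk at (cx, cy) on an n-by-n cross section.
          yy, xx = np.mgrid[:n, :n]
          return ((xx - cx) ** 2 + (yy - cy) ** 2 < rad ** 2).astype(float)

      # A disk drifting across the section leaves a red-to-blue trail.
      rgb = temporal_rgb(disk(20, 32), disk(26, 32), disk(32, 32))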

  7. An analytical spatial reconstruction algorithm for the SD-SGF-CN hybrid nodal method for one-speed X,Y-geometry SN eigenvalue problems

    International Nuclear Information System (INIS)

    Menezes, Welton Alves; Alves Filho, Hermes; Barros, Ricardo C.

    2009-01-01

    In this paper the X,Y-geometry SD-SGF-CN spectral nodal method, cf. spectral diamond-spectral Green's function-constant nodal, is used to determine the one-speed node-edge average angular fluxes in heterogeneous domains. This hybrid spectral nodal method uses the spectral diamond (SD) auxiliary equation for the multiplying regions and the spectral Green's function (SGF) auxiliary equation for the non-multiplying regions of the domain. Moreover, we consider constant approximations for the transverse-leakage terms in the transverse integrated S N nodal equations. We solve the SD-SGF-CN equations using the one-node block inversion (NBI) iterative scheme, which uses the most recent estimates available for the node-entering fluxes to evaluate the node-exiting fluxes in the directions that constitute the incoming fluxes for the adjacent node. Using these results, we offer an algorithm for analytical reconstruction of the coarse-mesh nodal solution within each spatial node, as localized numerical solutions are not generated by usual accurate nodal methods. Numerical results are presented to illustrate the accuracy of the present algorithm. (author)

  8. NESTLE: Few-group neutron diffusion equation solver utilizing the nodal expansion method for eigenvalue, adjoint, fixed-source steady-state and transient problems

    International Nuclear Information System (INIS)

    Turinsky, P.J.; Al-Chalabi, R.M.K.; Engrand, P.; Sarsour, H.N.; Faure, F.X.; Guo, W.

    1994-06-01

    NESTLE is a FORTRAN77 code that solves the few-group neutron diffusion equation utilizing the Nodal Expansion Method (NEM). NESTLE can solve eigenvalue (criticality), eigenvalue adjoint, external fixed-source steady-state, or external fixed-source- or eigenvalue-initiated transient problems. The code name NESTLE originates from the multi-problem solution capability, abbreviating Nodal Eigenvalue, Steady-state, Transient, Le core Evaluator. The eigenvalue problem allows criticality searches to be completed, and the external fixed-source steady-state problem can search to achieve a specified power level. Transient problems model delayed neutrons via precursor groups. Several core properties can be input as time dependent. Two or four energy groups can be utilized, with all energy groups being thermal groups (i.e., upscatter exists) if desired. Core geometries modelled include Cartesian and hexagonal. Three-, two- and one-dimensional models can be utilized with various symmetries. The non-linear iterative strategy associated with the NEM method is employed. An advantage of the non-linear iterative strategy is that NESTLE can be utilized to solve either the nodal or the Finite Difference Method representation of the few-group neutron diffusion equation.
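
    For orientation, the eigenvalue (criticality) problem such codes solve can be written as M phi = (1/k) F phi and attacked with an outer power iteration. The sketch below is a minimal one-group, finite-difference slab illustration with made-up cross sections; NESTLE itself uses the nodal expansion method and a non-linear iterative strategy, not this scheme:

      import numpy as np

      def k_eff_power_iteration(M, F, tol=1e-8, max_it=500):
          # Outer (power) iteration for M phi = (1/k) F phi:
          # returns the largest k (k-effective) and its flux mode.
          phi = np.ones(M.shape[0])
          k = 1.0
          for _ in range(max_it):
              phi_new = np.linalg.solve(M, F @ phi / k)
              k_new = k * (F @ phi_new).sum() / (F @ phi).sum()
              k, phi = k_new, phi_new
              if abs(k_new - k) < tol:
                  break
          return k, phi / np.linalg.norm(phi)

      # One-group finite-difference slab, zero-flux boundaries (illustrative data).
      n, h = 50, 0.5                         # mesh cells, cell width [cm]
      D, sig_a, nu_sig_f = 1.0, 0.02, 0.025  # diffusion coefficient, cross sections
      M = (np.diag(np.full(n, 2 * D / h**2 + sig_a))
           + np.diag(np.full(n - 1, -D / h**2), 1)
           + np.diag(np.full(n - 1, -D / h**2), -1))
      F = nu_sig_f * np.eye(n)
      k_eff, phi = k_eff_power_iteration(M, F)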

  9. Use of the isolated problem approach for multi-compartment BEM models of electro-magnetic source imaging

    International Nuclear Information System (INIS)

    Gencer, Nevzat G; Akalin-Acar, Zeynep

    2005-01-01

    The isolated problem approach (IPA) is a method used in the boundary element method (BEM) to overcome numerical inaccuracies caused by the high-conductivity difference in the skull and the brain tissues in the head. Hämäläinen and Sarvas (1989 IEEE Trans. Biomed. Eng. 36 165-71) described how the source terms can be updated to overcome these inaccuracies for a three-layer head model. Meijs et al (1989 IEEE Trans. Biomed. Eng. 36 1038-49) derived the integral equations for the general case where there are an arbitrary number of layers inside the skull. However, the IPA is used in the literature only for three-layer head models. Studies that use complex boundary element head models that investigate the inhomogeneities in the brain or model the cerebrospinal fluid (CSF) do not make use of the IPA. In this study, the generalized formulation of the IPA for multi-layer models is presented in terms of integral equations. The discretized version of these equations is presented in two different forms. In a previous study (Akalin-Acar and Gencer 2004 Phys. Med. Biol. 49 5011-28), we derived formulations to calculate the electroencephalography and magnetoencephalography transfer matrices assuming a single layer in the skull. In this study, the transfer matrix formulations are updated to incorporate the generalized IPA. The effects of the IPA are investigated on the accuracy of spherical and realistic models when the CSF layer and a tumour tissue are included in the model. It is observed that, in the spherical model, for a radial dipole 1 mm close to the brain surface, the relative difference measure (RDM*) drops from 1.88 to 0.03 when IPA is used. For the realistic model, the inclusion of the CSF layer does not change the field pattern significantly. However, the inclusion of an inhomogeneity changes the field pattern by 25% for a dipole oriented towards the inhomogeneity. The effect of the IPA is also investigated when there is an inhomogeneity in the brain. In addition

  10. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.
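
    The core idea, before any compression, is that for a linear forward model with Gaussian statistics the MAP estimate is itself linear, so the estimator matrix can be computed once offline and each reconstruction then costs a single matrix-vector product. A minimal sketch with illustrative sizes, omitting the matrix source coding and sparse-matrix transform steps that make the stored operator compact:

      import numpy as np

      def precompute_map_inverse(A, sigma2, R):
          # Offline step: for a Gaussian noise/prior model the MAP estimate is
          # linear, x = F y with F = (A^T A + sigma2 * R)^(-1) A^T, so F can
          # be computed and stored once.
          H = A.T @ A + sigma2 * R
          return np.linalg.solve(H, A.T)

      rng = np.random.default_rng(0)
      n_meas, n_vox = 200, 100
      A = rng.standard_normal((n_meas, n_vox))   # forward model (illustrative)
      R = np.eye(n_vox)                          # quadratic prior (illustrative)
      F = precompute_map_inverse(A, sigma2=0.1, R=R)

      # Online step: every new reconstruction is one matrix-vector product.
      x_true = rng.standard_normal(n_vox)
      y = A @ x_true + 0.01 * rng.standard_normal(n_meas)
      x_map = F @ y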

  11. Brachytherapy reconstruction using orthogonal scout views from the CT

    International Nuclear Information System (INIS)

    Perez, J.; Lliso, F.; Carmona, V.; Bea, J.; Tormo, A.; Petschen, I.

    1996-01-01

    Introduction: CT-assisted brachytherapy planning is demonstrating great advantages, as external RT planning does. One of the problems we have found in this approach with the conventional gynecological Fletcher applicators is the high number of artefacts (ovoids with rectal and vesical protections) in the CT slice. We have introduced a reconstruction method based on scout views in order to avoid this problem, allowing us to perform brachytherapy reconstruction completely CT-assisted. We use a virtual simulation chain by General Electric Medical Systems. Method and discussion: Two orthogonal scout views (0° and 90° tube positions) are performed. The reconstruction method takes into account the virtual position of the focus and the fact that there is divergence only in the transverse plane. Algorithms developed for source localisation as well as for reference point localisation (A, B, lymphatic Fletcher trapezoid, pelvic wall, etc.) are presented. This method has the following practical advantages: the porte-cassette is not necessary, the image quality can be improved (very helpful in pelvic lateral views, which are critical in conventional radiographs), the total time to get the data is smaller than for conventional radiographs (reduction of patient motion effects), and problems that appear in CT-slice-based reconstruction in the case of strongly curved intrauterine applicators are avoided. Even though the resolution is smaller than in conventional radiographs, it is good enough for brachytherapy. Regarding CT planning, this method presents the interesting feature that the co-ordinate system is the same for the reconstruction process as for the CT-slice set. As the application can be reconstructed from scout views and the doses can be evaluated on CT slices, it is easier to correlate the dose values obtained for the traditional points with those provided by the CT information
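
    The geometric point made above (fan divergence in the transverse plane only, none longitudinally) means a point can be recovered from two orthogonal scout views by solving two coupled magnification equations, with z read off directly. A hypothetical, simplified sketch assuming an isocentric geometry with focus-to-axis distance D, not the clinical implementation described in the record:

      def reconstruct_point(u_ap, u_lat, z, D, iters=25):
          # Scout-view model: transverse coordinates are magnified by the fan,
          # the longitudinal coordinate z is not. For an AP focus on the +y
          # axis and a lateral focus on the +x axis, both at distance D:
          #   u_ap  = x * D / (D - y),   u_lat = y * D / (D - x).
          # Solve the coupled pair by fixed-point iteration (converges for
          # points well inside the gantry, |x|, |y| << D).
          x, y = u_ap, u_lat
          for _ in range(iters):
              x = u_ap * (D - y) / D
              y = u_lat * (D - x) / D
          return x, y, z

      # Example: a dwell position seen at 32.0 mm (AP) and -11.5 mm (lateral).
      print(reconstruct_point(u_ap=32.0, u_lat=-11.5, z=140.0, D=700.0))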

  12. Implementation of a cone-beam reconstruction algorithm for the single-circle source orbit with embedded misalignment correction using homogeneous coordinates

    International Nuclear Information System (INIS)

    Karolczak, Marek; Schaller, Stefan; Engelke, Klaus; Lutz, Andreas; Taubenreuther, Ulrike; Wiesent, Karl; Kalender, Willi

    2001-01-01

    We present an efficient implementation of an approximate cone-beam image reconstruction algorithm for application in tomography, which accounts for scanner mechanical misalignment. The implementation is based on the algorithm proposed by Feldkamp et al. [J. Opt. Soc. Am. A 1, 612-619 (1984)] and is directed at circular scan paths. The algorithm has been developed for the purpose of reconstructing volume data from projections acquired in an experimental x-ray microtomography (μCT) scanner [Engelke et al., Der Radiologe 39, 203-212 (1999)]. To mathematically model misalignment we use matrix notation with homogeneous coordinates to describe the scanner geometry, its misalignment, and the acquisition process. For convenience the analysis is carried out for x-ray CT scanners, but it is applicable to any tomographic modality where two-dimensional projection acquisition in cone-beam geometry takes place, e.g., single photon emission computerized tomography. We derive an algorithm assuming misalignment errors to be small enough to weight and filter the original projections and to embed compensation for misalignment in the backprojection. We verify the algorithm on simulations of virtual phantoms and scans of a physical multidisk (Defrise) phantom
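
    The homogeneous-coordinate formulation can be illustrated compactly: the whole acquisition at one gantry angle is a 3x4 projection matrix, and small misalignments (here only in-plane detector shifts) are folded into its factors. A minimal sketch with an illustrative parametrization, not the authors' full misalignment model:

      import numpy as np

      def projection_matrix(src_dist, det_dist, beta, du=0.0, dv=0.0):
          # 3x4 homogeneous projection matrix for a circular cone-beam scan at
          # gantry angle beta; detector offsets (du, dv) model misalignment.
          c, s = np.cos(beta), np.sin(beta)
          # World -> source frame: rotation about z, then translation so the
          # source sits at the origin looking down the new z axis.
          Rt = np.array([[  c,   s, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [ -s,   c, 0.0, src_dist]])
          # Perspective part: detector at distance src_dist + det_dist from
          # the source, with in-plane shifts folded into the intrinsic matrix.
          K = np.array([[src_dist + det_dist, 0.0, du],
                        [0.0, src_dist + det_dist, dv],
                        [0.0, 0.0, 1.0]])
          return K @ Rt

      P = projection_matrix(src_dist=600.0, det_dist=400.0,
                            beta=np.deg2rad(30.0), du=1.2, dv=-0.8)
      X = np.array([10.0, -5.0, 2.0, 1.0])   # homogeneous world point [mm]
      u, v, w = P @ X
      print(u / w, v / w)                    # detector coordinates [mm]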

  13. Low concentration contrast medium for dual-source computed tomography coronary angiography by a combination of iterative reconstruction and low-tube-voltage technique: Feasibility study

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Minwen, E-mail: zhengmw2007@163.com; Liu, Ying, E-mail: yingyinglyly@126.com; Wei, Mengqi, E-mail: weimengqi2008@163.com; Wu, Yongjie, E-mail: wu18291988526@163.com; Zhao, Hongliang, E-mail: zhaohl1980@163.com; Li, Jian, E-mail: xjyylj@yeah.net

    2014-02-15

    Objectives: To assess the impact of low-concentration contrast medium on vascular enhancement, image quality and radiation dose of coronary CT angiography (cCTA) by using a combination of iterative reconstruction (IR) and the low-tube-voltage technique. Materials and methods: One hundred patients were prospectively randomized to two types of contrast medium and underwent prospective electrocardiogram-triggered cCTA (Definition Flash, Siemens Healthcare; collimation: 128 × 0.6 mm; tube current: 300 mA s). Fifty patients who received Iopromide 370 were scanned using the conventional tube setting (100 kVp, or 120 kVp if BMI ≥ 25 kg/m²) and reconstructed with filtered back projection (FBP). Fifty patients who received Iodixanol 270 were scanned using the low-tube-voltage technique (80 kVp, or 100 kVp if BMI ≥ 25 kg/m²) and reconstructed with IR. CT attenuation was measured in the coronary arteries and other anatomical regions. Noise, image quality and radiation dose were compared. Results: Compared with the two Iopromide 370 subgroups, the Iodixanol 270 subgroups showed no significant difference in CT attenuation (576.63 ± 95.50 vs. 569.51 ± 118.93 for BMI < 25 kg/m², p = 0.647, and 394.19 ± 68.09 vs. 383.72 ± 63.11 for BMI ≥ 25 kg/m², p = 0.212), noise (in various anatomical regions of interest) or image quality (3.5 vs. 4.0, p = 0.13), but significantly lower radiation dose (0.41 ± 0.17 vs. 0.94 ± 0.45 for BMI < 25 kg/m², p < 0.001, and 1.14 ± 0.24 vs. 2.37 ± 0.69 for BMI ≥ 25 kg/m², p < 0.001), reflecting dose savings of 56.4% and 51.9%, respectively. Conclusions: Combined with IR and the low-tube-voltage technique, a low-concentration contrast medium of 270 mg I/ml can still maintain contrast enhancement without impairing image quality, while significantly lowering the radiation dose.

  14. Very low-dose (0.15 mGy) chest CT protocols using the COPDGene 2 test object and a third-generation dual-source CT scanner with corresponding third-generation iterative reconstruction software.

    Science.gov (United States)

    Newell, John D; Fuld, Matthew K; Allmendinger, Thomas; Sieren, Jered P; Chan, Kung-Sik; Guo, Junfeng; Hoffman, Eric A

    2015-01-01

    The purpose of this study was to evaluate the impact of ultralow radiation dose single-energy computed tomographic (CT) acquisitions with Sn prefiltration and third-generation iterative reconstruction on density-based quantitative measures of growing interest in phenotyping pulmonary disease. The effects of both decreasing dose and different body habitus on the accuracy of the mean CT attenuation measurements and the level of image noise (SD) were evaluated using the COPDGene 2 test object, containing 8 different materials of interest ranging from air to acrylic and including various density foams. A third-generation dual-source multidetector CT scanner (Siemens SOMATOM FORCE; Siemens Healthcare AG, Erlangen, Germany) running advanced modeled iterative reconstruction (ADMIRE) software (Siemens Healthcare AG) was used. We used normal and very large body habitus rings at dose levels varying from 1.5 to 0.15 mGy using a spectral-shaped (0.6-mm Sn) tube output of 100 kV(p). Three CT scans were obtained at each dose level using both rings. Regions of interest for each material in the test object scans were automatically extracted. The Hounsfield unit value of each material using weighted filtered back projection (WFBP) at 1.5 mGy was used as the reference value to evaluate shifts in CT attenuation at lower dose levels using either WFBP or ADMIRE. Statistical analysis included basic statistics, Welch t tests, and a multivariable covariate model using the F test to assess the significance of the explanatory (independent) variables on the response (dependent) variable, mean CT attenuation, with reconstruction method included in the model. Multivariable regression analysis of the mean CT attenuation values showed a significant difference with decreasing dose between ADMIRE and WFBP. ADMIRE showed reduced noise and more stable CT attenuation compared with WFBP. There was a strong effect on the mean CT attenuation values of the scanned materials for ring

  15. Test of the $\\tau$-Model of Bose-Einstein Correlations and Reconstruction of the Source Function in Hadronic Z-boson Decay at LEP

    CERN Document Server

    Achard, P; Aguilar-Benitez, M; Alcaraz, J; Alemanni, G; Allaby, J; Aloisio, A; Alviggi, M G; Anderhub, H; Andreev, V P; Anselmo, F; Arefiev, A; Azemoon, T; Aziz, T; Bagnaia, P; Bajo, A; Baksay, G; Baksay, L; Baldew, S V; Banerjee, S; Banerjee, Sw; Barczyk, A; Barillère, R; Bartalini, P; Basile, M; Batalova, N; Battiston, R; Bay, A; Becker, U; Behner, F; Bellucci, L; Berbeco, R; Berdugo, J; Berges, P; Bertucci, B; Betev, B L; Biasini, M; Biglietti, M; Biland, A; Blaising, J J; Blyth, S C; Bobbink, G J; Böhm, A; Boldizsar, L; Borgia, B; Bottai, S; Bourilkov, D; Bourquin, M; Braccini, S; Branson, J G; Brochu, F; Burger, J D; Burger, W J; Cai, X D; Capell, M; Cara Romeo, G; Carlino, G; Cartacci, A; Casaus, J; Cavallari, F; Cavallo, N; Cecchi, C; Cerrada, M; Chamizo, M; Chang, Y H; Chemarin, M; Chen, A; Chen, G; Chen, G M; Chen, H F; Chen, H S; Chiefari, G; Cifarelli, L; Cindolo, F; Clare, I; Clare, R; Coignet, G; Colino, N; Costantini, S; de la Cruz, B; Cucciarelli, S; Csörgö, T; de Asmundis, R; Déglon, P; Debreczeni, J; Degré, A; Dehmelt, K; Deiters, K; della Volpe, D; Delmeire, E; Denes, P; De Notaristefani, F; De Salvo, A; Diemoz, M; Dierckxsens, M; Dionisi, C; Dittmar, M; Doria, A; Dova, M T; Duchesneau, D; Duda, M; Echenard, B; Eline, A; El Hage, A; El Mamouni, H; Engler, A; Eppling, F J; Extermann, P; Falagan, M A; Falciano, S; Favara, A; Fay, J; Fedin, O; Felcini, M; Ferguson, T; Fesefeldt, H; Fiandrini, E; Field, J H; Filthaut, F; Fisher, P H; Fisher, W; Forconi, G; Freudenreich, K; Furetta, C; Galaktionov, Yu; Ganguli, S N; Garcia-Abia, P; Gataullin, M; Gentile, S; Giagu, S; Gong, Z F; Grenier, G; Grimm, O; Gruenewald, M W; Gupta, V K; Gurtu, A; Gutay, L J; Haas, D; Hakobyan, R; Hatzifotiadou, D; Hebbeker, T; Hervé, A; Hirschfelder, J; Hofer, H; Hohlmann, M; Holzner, G; Hou, S R; Jin, B N; Jindal, P; Jones, L W; de Jong, P; Josa-Mutuberría, I; Kaur, M; Kienzle-Focacci, M N; Kim, J K; Kirkby, J; Kittel, W; Klimentov, A; König, A C; Kopal, M; Koutsenko, V; Kräber, M; Kraemer, R W; Krüger, A; Kunin, A; Ladron de Guevara, P; Laktineh, I; Landi, G; Lebeau, M; Lebedev, A; Lebrun, P; Lecomte, P; Lecoq, P; Le Coultre, P; Le Goff, J M; Leiste, R; Levtchenko, M; Levtchenko, P; Li, C; Likhoded, S; Lin, C H; Lin, W T; Linde, F L; Lista, L; Liu, Z A; Lohmann, W; Longo, E; Lu, Y S; Luci, C; Luminari, L; Lustermann, W; Ma, W G; Malgeri, L; Malinin, A; Maña, C; Mans, J; Martin, J P; Marzano, F; Mazumdar, K; McNeil, R R; Mele, S; Merola, L; Meschini, M; Metzger, W J; Mihul, A; Milcent, H; Mirabelli, G; Mnich, J; Mohanty, G B; Muanza, G S; Muijs, A J M; Musy, M; Nagy, S; Natale, S; Napolitano, M; Nessi-Tedaldi, F; Newman, H; Nisati, A; Novak, T; Nowak, H; Ofierzynski, R; Organtini, G; Pal, I; Palomares, C; Paolucci, P; Paramatti, R; Passaleva, G; Patricelli, S; Paul, T; Pauluzzi, M; Paus, C; Pauss, F; Pedace, M; Pensotti, S; Perret-Gallix, D; Piccolo, D; Pierella, F; Pieri, M; Pioppi, M; Piroué, P A; Pistolesi, E; Plyaskin, V; Pohl, M; Pojidaev, V; Pothier, J; Prokofiev, D; Rahal-Callot, G; Rahaman, M A; Raics, P; Raja, N; Ramelli, R; Rancoita, P G; Ranieri, R; Raspereza, A; Razis, P; Rembeczki, S; Ren, D; Rescigno, M; Reucroft, S; Riemann, S; Riles, K; Roe, B P; Romero, L; Rosca, A; Rosemann, C; Rosenbleck, C; Rosier-Lees, S; Roth, S; Rubio, J A; Ruggiero, G; Rykaczewski, H; Sakharov, A; Saremi, S; Sarkar, S; Salicio, J; Sanchez, E; Schäfer, C; Schegelsky, V; Schopper, H; Schotanus, D J; Sciacca, C; Servoli, L; Shevchenko, S; Shivarov, N; Shoutko, V; Shumilov, E; Shvorob, A; Son, D; Souga, C; 
Spillantini, P; Steuer, M; Stickland, D P; Stoyanov, B; Straessner, A; Sudhakar, K; Sultanov, G; Sun, L Z; Sushkov, S; Suter, H; Swain, J D; Szillasi, Z; Tang, X W; Tarjan, P; Tauscher, L; Taylor, L; Tellili, B; Teyssier, D; Timmermans, C; Ting, Samuel C C; Ting, S M; Tonwar, S C; Tóth, J; Tully, C; Tung, K L; Ulbricht, J; Valente, E; Van de Walle, R T; Vasquez, R; Vesztergombi, G; Vetlitsky, I; Viertel, G; Vivargent, M; Vlachos, S; Vodopianov, I; Vogel, H; Vogt, H; Vorobiev, I; Vorobyov, A A; Wadhwa, M; Wang, Q; Wang, X L; Wang, Z M; Weber, M; Wynhoff, S; Xia, L; Xu, Z Z; Yamamoto, J; Yang, B Z; Yang, C G; Yang, H J; Yang, M; Yeh, S C; Zalite, An; Zalite, Yu; Zhang, Z P; Zhao, J; Zhu, G Y; Zhu, R Y; Zhuang, H L; Zichichi, A; Zimmermann, B; Zöller, M

    2011-01-01

    Bose-Einstein correlations of pairs of identical charged pions produced in hadronic Z decays are analyzed in terms of various parametrizations. A good description is achieved using a Lévy stable distribution in conjunction with a model where a particle's momentum is correlated with its space-time point of production, the τ-model. Using this description and the measured rapidity and transverse momentum distributions, the space-time evolution of particle emission in two-jet events is reconstructed. However, the elongation of the particle emission region previously observed is not accommodated in the τ-model, and this is investigated using an ad hoc modification.

  16. Generalization of Spectral Green's Function nodal method for slab-geometry fixed-source adjoint transport problems in SN formulation

    Energy Technology Data Exchange (ETDEWEB)

    Curbelo, Jesus P.; Silva, Odair P. da; Barros, Ricardo C. [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-graduacao em Modelagem Computacional; Garcia, Carlos R., E-mail: cgh@instec.cu [Departamento de Ingenieria Nuclear, Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)

    2017-07-01

    Presented here is the application of the adjoint technique for solving source-detector discrete ordinates (SN) transport problems by using a spectral nodal method. For the slab-geometry adjoint SN model, the adjoint spectral Green's function method (SGF†) is extended to multigroup problems considering arbitrary L'th-order scattering anisotropy and the possibility of non-zero prescribed boundary conditions for the forward SN transport problems. The SGF† method converges numerical solutions that are completely free from spatial truncation errors. In order to generate numerical solutions of the SGF† equations, we use the partial adjoint one-node block inversion (NBI) iterative scheme. The partial adjoint NBI scheme uses the most recent estimates for the node-edge adjoint angular fluxes in the outgoing directions of a given discretization node to solve the resulting adjoint SN problem in that node for all the adjoint angular fluxes in the incoming directions, which constitute the outgoing adjoint angular fluxes for the adjacent node in the sweeping directions. Numerical results are given to illustrate the present spectral nodal method's features and some advantages of using the adjoint technique in source-detector problems. (author)

  17. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    Directory of Open Access Journals (Sweden)

    O. Tichý

    2016-11-01

    Full Text Available Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
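
    For concreteness, the linear model reads y = Mx, with M the SRS matrix and x the unknown source term. Below is a minimal sketch of the conventional regularized baseline the abstract contrasts with; in LS-APC the regularization weight (and the other uncertainty parameters) would instead be estimated from the data by variational Bayes. All sizes and values are illustrative:

      import numpy as np

      def tikhonov_source_term(M, y, alpha):
          # Conventional baseline: x = argmin ||M x - y||^2 + alpha * ||x||^2,
          # clipped to nonnegative releases. In LS-APC, alpha and the other
          # uncertainty parameters are estimated from the data instead.
          n = M.shape[1]
          x = np.linalg.solve(M.T @ M + alpha * np.eye(n), M.T @ y)
          return np.clip(x, 0.0, None)

      rng = np.random.default_rng(1)
      M = np.abs(rng.standard_normal((50, 24)))      # SRS matrix (illustrative)
      x_true = np.zeros(24)
      x_true[8:12] = [1.0, 3.0, 2.0, 0.5]            # a short release pulse
      y = M @ x_true + 0.05 * rng.standard_normal(50)
      x_hat = tikhonov_source_term(M, y, alpha=1e-2)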

  18. Generalization of Spectral Green's Function nodal method for slab-geometry fixed-source adjoint transport problems in SN formulation

    International Nuclear Information System (INIS)

    Curbelo, Jesus P.; Silva, Odair P. da; Barros, Ricardo C.

    2017-01-01

    Presented here is the application of the adjoint technique for solving source-detector discrete ordinates (SN) transport problems by using a spectral nodal method. For the slab-geometry adjoint SN model, the adjoint spectral Green's function method (SGF†) is extended to multigroup problems considering arbitrary L'th-order scattering anisotropy and the possibility of non-zero prescribed boundary conditions for the forward SN transport problems. The SGF† method converges numerical solutions that are completely free from spatial truncation errors. In order to generate numerical solutions of the SGF† equations, we use the partial adjoint one-node block inversion (NBI) iterative scheme. The partial adjoint NBI scheme uses the most recent estimates for the node-edge adjoint angular fluxes in the outgoing directions of a given discretization node to solve the resulting adjoint SN problem in that node for all the adjoint angular fluxes in the incoming directions, which constitute the outgoing adjoint angular fluxes for the adjacent node in the sweeping directions. Numerical results are given to illustrate the present spectral nodal method's features and some advantages of using the adjoint technique in source-detector problems. (author)

  19. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    Science.gov (United States)

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas

    2016-11-01

    Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation, which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.

  20. The Letters of Maximos Planudes to Alexios Philanthropenos and Melchisedek Akropolites: the Problems of Source Studies in the Context of the Politico-Military Situation in Byzantium in the Late 13th C.

    Directory of Open Access Journals (Sweden)

    Pavel I. Lysikov

    2017-11-01

    Full Text Available This research work is dedicated to the problem of dating the letters of the Byzantine scholar and monk Maximos Planudes to the general, pinkernes Alexios Philanthropenos, and his companion, the monk Melchisedek Akropolites. Our goal is to date these letters on the basis of their content and data from other sources and to reconstruct the chronological sequence of their writing. The period of time when Alexios Philanthropenos was in office as dux of Thrakision (1293–1295), during which he conducted military operations against the Turks of the beyliks Germiyan and Menteşe, has a special place in the history of the Byzantine-Turkish wars in the early Palaiologan era. At this time the Byzantine state made some of its last successful military efforts in this struggle. By studying this theme, the present article contributes to research on the Byzantine wars against the Turks and on the military art and military organization of the empire in the late 13th century. Following explicit consideration of some disputed items in the dating of Maximos Planudes’ letters to the persons mentioned above (42 letters), the author specifies the chronological sequence of their writing and clarifies the stages of Alexios Philanthropenos’ military activity in Asia Minor. This article also contributes to the use of epistolographic data in historical study.

  1. Image reconstruction from multiple fan-beam projections

    International Nuclear Information System (INIS)

    Jelinek, J.; Overton, T.R.

    1984-01-01

    Special-purpose third-generation fan-beam CT systems can be greatly simplified by limiting the number of detectors, but this requires a different mode of data collection to provide a set of projections appropriate to the required spatial resolution in the reconstructed image. Repeated rotation of the source-detector fan, combined with a shift of the detector array and perhaps an offset of the source with respect to the fan's axis after each 360° rotation (cycle), provides a fairly general pattern of projection-space filling. The authors investigated the problem of optimal data-collection geometry for a multiple-rotation fan-beam scanner and of the corresponding reconstruction algorithm

  2. The Use of Source-Sink and Doublet Distributions Extended to the Solution of Boundary-Value Problems in Supersonic Flow

    Science.gov (United States)

    Heaslet, Max A; Lomax, Harvard

    1948-01-01

    A direct analogy is established between the use of source-sink and doublet distributions in the solution of specific boundary-value problems in subsonic wing theory and the corresponding problems in supersonic theory. The correct concept of the "finite part" of an integral is introduced and used in the calculation of the improper integrals associated with supersonic doublet distributions. The general equations developed are shown to include several previously published results and particular examples are given for the loading on rolling and pitching triangular wings with supersonic leading edges.
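
    A standard one-line illustration of the "finite part" concept (not taken from the report itself): the divergent boundary term of an improper integral is simply discarded,

      \int_{\varepsilon}^{1} x^{-3/2}\,dx = \frac{2}{\sqrt{\varepsilon}} - 2
      \quad\Longrightarrow\quad
      \operatorname{Pf}\!\int_{0}^{1} x^{-3/2}\,dx = -2,

    and it is this finite remainder that enters integrals of the kind associated with supersonic doublet distributions.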

  3. Greedy algorithms for diffuse optical tomography reconstruction

    Science.gov (United States)

    Dileep, B. P. V.; Das, Tapan; Dutta, Pranab K.

    2018-03-01

    Diffuse optical tomography (DOT) is a noninvasive imaging modality that reconstructs the optical parameters of a highly scattering medium. However, the inverse problem of DOT is ill-posed and highly nonlinear due to the zig-zag propagation of photons that diffuse through the cross section of tissue. Conventional DOT imaging methods iteratively invoke a forward diffusion equation solver, which makes the problem computationally expensive. Also, these methods fail when the geometry is complex. Recently, the theory of compressive sensing (CS) has received considerable attention because of its efficient use in biomedical imaging applications. The objective of this paper is to solve a given DOT inverse problem by using the compressive sensing framework; various greedy algorithms such as orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP), stagewise orthogonal matching pursuit (StOMP), regularized orthogonal matching pursuit (ROMP) and simultaneous orthogonal matching pursuit (S-OMP) have been studied to reconstruct the change in the absorption parameter, i.e., Δα, from the boundary data. The greedy algorithms have also been validated experimentally on a paraffin wax rectangular phantom through a well-designed experimental setup. We have also studied conventional DOT methods such as the least squares method and truncated singular value decomposition (TSVD) for comparison. One of the main features of this work is the use of a smaller number of source-detector pairs, which can facilitate the use of DOT in routine screening applications. The performance metrics mean square error (MSE), normalized mean square error (NMSE), structural similarity index (SSIM), and peak signal-to-noise ratio (PSNR) have been used to evaluate the performance of the algorithms mentioned in this paper. Extensive simulation results confirm that CS-based DOT reconstruction outperforms the conventional DOT imaging methods in terms of
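
    As an illustration of the greedy family named above, here is a minimal orthogonal matching pursuit (OMP) sketch applied to a toy linearized DOT model; the sensitivity (Jacobian) matrix, sparsity level and noise are illustrative assumptions, not the paper's experimental setup:

      import numpy as np

      def omp(A, y, k):
          # Orthogonal matching pursuit: greedily add the column of A most
          # correlated with the residual, then re-fit by least squares on
          # the selected support; stop after k atoms.
          residual, support = y.copy(), []
          x = np.zeros(A.shape[1])
          for _ in range(k):
              j = int(np.argmax(np.abs(A.T @ residual)))
              if j not in support:
                  support.append(j)
              coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
              x[:] = 0.0
              x[support] = coef
              residual = y - A @ x
          return x

      rng = np.random.default_rng(2)
      A = rng.standard_normal((64, 256))   # linearized sensitivity (illustrative)
      A /= np.linalg.norm(A, axis=0)       # unit-norm columns
      delta_alpha = np.zeros(256)
      delta_alpha[[30, 101, 200]] = [0.8, -0.5, 1.1]   # sparse absorption change
      y = A @ delta_alpha + 0.01 * rng.standard_normal(64)
      recovered = omp(A, y, k=3)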

  4. Fan-beam filtered-backprojection reconstruction without backprojection weight

    International Nuclear Information System (INIS)

    Dennerlein, Frank; Noo, Frederic; Hornegger, Joachim; Lauritsch, Guenter

    2007-01-01

    In this paper, we address the problem of two-dimensional image reconstruction from fan-beam data acquired along a full 2π scan. Conventional approaches that follow the filtered-backprojection (FBP) structure require a weighted backprojection with the weight depending on the point to be reconstructed and also on the source position; this weight appears only in the case of divergent beam geometries. Compared to reconstruction from parallel-beam data, the backprojection weight implies an increase in computational effort and is also thought to have some negative impacts on noise properties of the reconstructed images. We demonstrate here that direct FBP reconstruction from full-scan fan-beam data is possible with no backprojection weight. Using computer-simulated, realistic fan-beam data, we compared our novel FBP formula with no backprojection weight to the use of an FBP formula based on equal weighting of all data. Comparisons in terms of signal-to-noise ratio, spatial resolution and computational efficiency are presented. These studies show that the formula we suggest yields images with a reduced noise level, at almost identical spatial resolution. This effect increases quickly with the distance from the center of the field of view, from 0% at the center to 20% less noise at 20 cm, and to 40% less noise at 25 cm. Furthermore, the suggested method is computationally less demanding and reduces computation time with a gain that was found to vary between 12% and 43% on the computers used for evaluation

  5. Generalized Fourier slice theorem for cone-beam image reconstruction.

    Science.gov (United States)

    Zhao, Shuang-Ren; Jiang, Dazong; Yang, Kevin; Yang, Kang

    2015-01-01

    Cone-beam reconstruction theory was developed by Kirillov in 1961, Tuy in 1983, Feldkamp in 1984, Smith in 1985 and Pierre Grangeat in 1990. The Fourier slice theorem, proposed by Bracewell in 1956, leads to the Fourier image reconstruction method for parallel-beam geometry and was extended to fan-beam geometry by Zhao in 1993 and 1995. By combining the above-mentioned cone-beam reconstruction theory with the Fourier slice theory for fan-beam geometry, a Fourier slice theorem for cone-beam geometry was proposed by Zhao in 1995 in a short conference publication. This article offers the details of the derivation and implementation of this Fourier slice theorem for cone-beam geometry. In particular, the problem of reconstruction from the Fourier domain has been overcome: the value at the origin of Fourier space is of the indeterminate form 0/0, and this type of limit is properly handled. As examples, implementation results for the single-circle and the two-perpendicular-circle source orbits are shown. If an interpolation process is used in the cone-beam reconstruction, the number of calculations for the generalized Fourier slice theorem algorithm is O(N^4), which is close to that of the filtered back-projection method, where N is the one-dimensional image size. The interpolation process can, however, be avoided, in which case the number of calculations is O(N^5).
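
    For orientation, the classical parallel-beam Fourier slice theorem that the cone-beam result generalizes can be checked numerically in a few lines (a generic numpy sketch, not the cone-beam algorithm of the paper):

        import numpy as np

        # Parallel-beam Fourier slice theorem: the 1-D FFT of a projection
        # equals the corresponding central line of the object's 2-D FFT.
        f = np.random.rand(64, 64)             # test image
        proj = f.sum(axis=0)                   # projection along the y-axis
        slice_from_proj = np.fft.fft(proj)
        central_line = np.fft.fft2(f)[0, :]    # k_y = 0 line of the 2-D spectrum
        assert np.allclose(slice_from_proj, central_line)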

  6. Value of 100 kVp scan with sinogram-affirmed iterative reconstruction algorithm on a single-source CT system during whole-body CT for radiation and contrast medium dose reduction: an intra-individual feasibility study.

    Science.gov (United States)

    Nagayama, Y; Nakaura, T; Oda, S; Tsuji, A; Urata, J; Furusawa, M; Tanoue, S; Utsunomiya, D; Yamashita, Y

    2018-02-01

    To perform an intra-individual investigation of the usefulness of a contrast medium (CM) and radiation dose-reduction protocol using single-source computed tomography (CT) combined with 100 kVp and sinogram-affirmed iterative reconstruction (SAFIRE) for whole-body CT (WBCT; chest-abdomen-pelvis CT) in oncology patients. Forty-three oncology patients who had undergone WBCT under both 120 and 100 kVp protocols at different time points (mean interscan interval: 98 days) were included retrospectively. The CM doses for the 120 and 100 kVp protocols were 600 and 480 mg iodine/kg, respectively; 120 kVp images were reconstructed with filtered back-projection (FBP), whereas 100 kVp images were reconstructed with FBP (100 kVp-F) and with SAFIRE (100 kVp-S). The size-specific dose estimate (SSDE), iodine load and image quality of each protocol were compared. The SSDE and iodine load of the 100 kVp protocol were 34% and 21% lower, respectively, than those of the 120 kVp protocol (SSDE: 10.6±1.1 versus 16.1±1.8 mGy; iodine load: 24.8±4 versus 31.5±5.5 g iodine, p<0.01). Contrast enhancement, objective image noise, contrast-to-noise ratio, and visual score of 100 kVp-S were similar to or better than those of the 120 kVp protocol. Compared with the 120 kVp protocol, the combined use of 100 kVp and SAFIRE in WBCT for oncology assessment with a single-source CT system facilitated a substantial reduction in CM and radiation dose while maintaining image quality. Copyright © 2017 The Royal College of Radiologists. Published by Elsevier Ltd. All rights reserved.

  7. Effect of automated tube voltage selection, integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of 3rd generation dual-source aortic CT angiography: An intra-individual comparison.

    Science.gov (United States)

    Mangold, Stefanie; De Cecco, Carlo N; Wichmann, Julian L; Canstein, Christian; Varga-Szemes, Akos; Caruso, Damiano; Fuller, Stephen R; Bamberg, Fabian; Nikolaou, Konstantin; Schoepf, U Joseph

    2016-05-01

    To compare, on an intra-individual basis, the effect of automated tube voltage selection (ATVS), an integrated circuit detector and advanced iterative reconstruction on radiation dose and image quality of aortic CTA studies using 2nd and 3rd generation dual-source CT (DSCT). We retrospectively evaluated 32 patients who had undergone CTA of the entire aorta with both 2nd generation DSCT at 120 kV using filtered back projection (FBP) (protocol 1) and 3rd generation DSCT using ATVS, an integrated circuit detector and advanced iterative reconstruction (protocol 2). Contrast-to-noise ratio (CNR) was calculated. Image quality was subjectively evaluated using a five-point scale. Radiation dose parameters were recorded. All studies were considered of diagnostic image quality. CNR was significantly higher with protocol 2 (15.0±5.2 vs 11.0±4.2; p<0.0001). Subjective image quality analysis revealed no significant differences for evaluation of attenuation (p=0.08501), but image noise was rated significantly lower with protocol 2 (p=0.0005). Mean tube voltage and effective dose were 94.7±14.1 kV and 6.7±3.9 mSv with protocol 2, and 120±0 kV and 11.5±5.2 mSv with protocol 1 (p<0.0001, respectively). Aortic CTA performed with 3rd generation DSCT, ATVS, an integrated circuit detector, and advanced iterative reconstruction allows a substantial reduction of radiation exposure while improving image quality in comparison to 120 kV imaging with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  8. The problem of space nuclear power sources collisions with artificial space objects in near-earth orbits

    International Nuclear Information System (INIS)

    Gafarov, A.A.

    1993-01-01

    Practically all space objects with onboard nuclear power sources stay in earth satellite orbits with an orbital lifetime long enough to reduce their radioactivity to levels presenting no danger for the Earth population. One of the reasons for orbit lifetime reduction can be collisions with other space objects in near-earth orbits. The possible consequence of collisions can be partial, or even complete, destruction of the spacecraft with an onboard nuclear power source; as well as delivery of additional impulse both to the spacecraft and its fragments. It is shown that collisions in orbit do not cause increase of radiation hazard for the Earth population if there is aerodynamic breakup of nuclear power sources into fragments of safe sizes during atmospheric reentry

  9. Variance analysis of the Monte Carlo perturbation source method in inhomogeneous linear particle transport problems. Derivation of formulae

    International Nuclear Information System (INIS)

    Noack, K.

    1981-01-01

    The perturbation source method is used in the Monte Carlo method for calculating small effects in a particle field. It offers promising possibilities for introducing positive correlation between subtracted estimates even in cases where other methods fail, e.g. for geometrical variations of a given arrangement. The perturbation source method is formulated on the basis of integral equations for the particle fields. The formulae for the second moment of the difference of events are derived. Explicitly, a certain class of transport games and different procedures for generating the so-called perturbation particles are considered.

  10. LS-APC v1.0: a tuning-free method for the linear inverse problem and its application to source-term determination

    Czech Academy of Sciences Publication Activity Database

    Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, A.

    2016-01-01

    Roč. 9, č. 11 (2016), s. 4297-4311 ISSN 1991-959X R&D Projects: GA MŠk(CZ) 7F14287 Institutional support: RVO:67985556 Keywords : Linear inverse problem * Bayesian regularization * Source-term determination * Variational Bayes method Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.458, year: 2016 http://library.utia.cas.cz/separaty/2016/AS/tichy-0466029.pdf

  11. A three-step reconstruction method for fluorescence molecular tomography based on compressive sensing

    DEFF Research Database (Denmark)

    Zhu, Yansong; Jha, Abhinav K.; Dreyer, Jakob K.

    2017-01-01

    Fluorescence molecular tomography (FMT) is a promising tool for real time in vivo quantification of neurotransmission (NT) as we pursue in our BRAIN initiative effort. However, the acquired image data are noisy and the reconstruction problem is ill-posed. Further, while spatial sparsity of the NT...... matrix coherence. The resultant image data are input to a homotopy-based reconstruction strategy that exploits sparsity via ℓ1 regularization. The reconstructed image is then input to a maximum-likelihood expectation maximization (MLEM) algorithm that retains the sparseness of the input estimate...... and improves upon the quantitation by accurate Poisson noise modeling. The proposed reconstruction method was evaluated in a three-dimensional simulated setup with fluorescent sources in a cuboidal scattering medium with optical properties simulating human brain cortex (reduced scattering coefficient: 9.2 cm-1...
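
    The MLEM stage mentioned in the third step has a standard multiplicative form; a generic numpy sketch (illustrative only, not the authors' FMT code; A, y and the homotopy estimate x0 are placeholders):

        import numpy as np

        def mlem(A, y, x0, n_iter=50):
            # Maximum-likelihood EM for a Poisson model y ~ Poisson(A @ x).
            # The update is multiplicative, so exact zeros in x0 stay zero,
            # which is how an MLEM stage can retain the sparseness of the
            # estimate it receives as input.
            x = x0.astype(float).copy()
            sens = A.sum(axis=0)          # sensitivity: column sums of A
            for _ in range(n_iter):
                ratio = y / np.clip(A @ x, 1e-12, None)
                x *= (A.T @ ratio) / np.clip(sens, 1e-12, None)
            return x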

  12. Laurentide Ice-Sheet Meltwater Sources to the Gulf of Mexico During the Last Deglaciation: Assessing Data Reconstructions Using Water Isotope Enabled Simulations

    Science.gov (United States)

    Vetter, L.; LeGrande, A. N.; Ullman, D. J.; Carlson, A. E.

    2017-12-01

    Sediment cores from the Gulf of Mexico show evidence of meltwater derived from the Laurentide Ice Sheet during the last deglaciation. Recent studies using geochemical measurements of individual foraminifera suggest changes in the oxygen isotopic composition of the meltwater as deglaciation proceeded. Here we use the water isotope enabled climate model simulations (NASA GISS ModelE-R) to investigate potential sources of meltwater within the ice sheet. We find that initial melting of the ice sheet from the southern margin contributed an oxygen isotope value reflecting a low-elevation, local precipitation source. As deglacial melting proceeded, meltwater delivered to the Gulf of Mexico had a more negative oxygen isotopic value, which the climate model simulates as being sourced from the high-elevation, high-latitude interior of the ice sheet. This study demonstrates the utility of combining stable isotope analyses with climate model simulations to investigate past changes in the hydrologic cycle.

  13. A Test Set for stiff Initial Value Problem Solvers in the open source software R: Package deTestSet

    NARCIS (Netherlands)

    Mazzia, F.; Cash, J.R.; Soetaert, K.

    2012-01-01

    In this paper we present the R package deTestSet, which includes challenging test problems written as ordinary differential equations (ODEs), differential algebraic equations (DAEs) of index up to 3, and implicit differential equations (IDEs). In addition, it includes 6 new codes to solve initial value problems.

  14. Fast Tomographic Reconstruction From Limited Data Using Artificial Neural Networks

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); K.J. Batenburg (Joost)

    2013-01-01

    Image reconstruction from a small number of projections is a challenging problem in tomography. Advanced algorithms that incorporate prior knowledge can sometimes produce accurate reconstructions, but they typically require long computation times. Furthermore, the required prior knowledge is not always available.

  15. Image Quality of 3rd Generation Spiral Cranial Dual-Source CT in Combination with an Advanced Model Iterative Reconstruction Technique: A Prospective Intra-Individual Comparison Study to Standard Sequential Cranial CT Using Identical Radiation Dose.

    Science.gov (United States)

    Wenz, Holger; Maros, Máté E; Meyer, Mathias; Förster, Alex; Haubenreisser, Holger; Kurth, Stefan; Schoenberg, Stefan O; Flohr, Thomas; Leidecker, Christianne; Groden, Christoph; Scharf, Johann; Henzler, Thomas

    2015-01-01

    To prospectively intra-individually compare image quality of 3rd generation Dual-Source-CT (DSCT) spiral cranial CT (cCT) to sequential 4-slice Multi-Slice-CT (MSCT) while maintaining identical intra-individual radiation dose levels. 35 patients, who had a non-contrast enhanced sequential cCT examination on a 4-slice MDCT within the past 12 months, underwent a spiral cCT scan on a 3rd generation DSCT. A CTDIvol identical to that of the initial 4-slice MDCT was applied. Data were reconstructed using filtered back projection (FBP) and a 3rd-generation iterative reconstruction (IR) algorithm at 5 different IR strength levels. Two neuroradiologists independently evaluated subjective image quality using a 4-point Likert scale, and objective image quality was assessed in white matter and nucleus caudatus, with signal-to-noise ratios (SNR) being subsequently calculated. Subjective image quality of all spiral cCT datasets was rated significantly higher than that of the 4-slice MDCT sequential acquisitions. SNR was significantly higher for the spiral than for the sequential cCT datasets, with a mean SNR improvement of 61.65% (Bonferroni-corrected). Spiral cCT with an advanced model IR technique significantly improves subjective and objective image quality compared to a standard sequential cCT acquisition acquired at identical dose levels.

  16. Reconstruction and modernization of Novi Han radioactive waste repository

    International Nuclear Information System (INIS)

    Kolev, I.; Dralchev, D.; Spasov, P.; Jordanov, M.

    2000-01-01

    This report presents briefly the most important issues of the study performed by EQE - Bulgaria. The objectives of the study are the development of conceptual solutions for construction of the following facilities in the Novi Han radioactive waste repository: an operational storage for unconditioned high-level spent sources; new temporary buildings over the existing radioactive waste storage facilities; a rain-water draining system etc. The study also includes the engineering solutions for conservation of the existing facilities, currently filled with high-level spent sources. A 'Program for reconstruction and modernization' has been created, including the analysis of some regulatory aspects concerning this program's implementation. In conclusion, the engineering problems of the Novi Han repository are clear and appropriate solutions are available. They can be implemented in both cases of 'small' or 'large' reconstruction. The reconstruction project should anyway start with the construction of a new site infrastructure. Reconstruction and modernization of the Novi Han radioactive waste repository is the only way to improve the management and safety of radioactive waste from medicine, industry and scientific research in Bulgaria.

  17. Applications of the Advanced Light Source to problems in the earth, soil, and environmental sciences report of the workshop

    International Nuclear Information System (INIS)

    1992-10-01

    This report discusses the following topics: ALS status and research opportunities; advanced light source applications to geological materials; applications in the soil and environmental sciences; x-ray microprobe analysis; potential applications of the ALS in soil and environmental sciences; and x-ray spectroscopy using soft x-rays: applications to earth materials

  18. Recent advances in the spectral green's function method for monoenergetic slab-geometry fixed-source adjoint transport problems in S{sub N} formulation

    Energy Technology Data Exchange (ETDEWEB)

    Curbelo, Jesus P.; Alves Filho, Hermes; Barros, Ricardo C., E-mail: jperez@iprj.uerj.br, E-mail: halves@iprj.uerj.br, E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Nova Friburgo, RJ (Brazil). Instituto Politecnico. Programa de Pos-Graduacao em Modelagem Computacional; Hernandez, Carlos R.G., E-mail: cgh@instec.cu [Instituto Superior de Tecnologias y Ciencias Aplicadas (InSTEC), La Habana (Cuba)

    2015-07-01

    The spectral Green's function (SGF) method is a numerical method that is free of spatial truncation errors for slab-geometry fixed-source discrete ordinates (S{sub N}) adjoint problems. The method is based on the standard spatially discretized adjoint S{sub N} balance equations and a nonstandard adjoint auxiliary equation expressing the node-average adjoint angular flux, in each discretization node, as a weighted combination of the node-edge outgoing adjoint fluxes. The auxiliary equation contains parameters which act as Green's functions for the cell-average adjoint angular flux. These parameters are determined by means of a spectral analysis which yields the local general solution of the S{sub N} equations within each node of the discretization grid. In this work a number of advances in the SGF adjoint method are presented: the method is extended to adjoint S{sub N} problems considering linearly anisotropic scattering and non-zero prescribed boundary conditions for the forward source-detector problem. Numerical results for typical model problems are presented to illustrate the efficiency and accuracy of the offered method. (author)

  19. Using local archive sources to reconstruct historical landslide occurrence in selected urban regions of the Czech Republic: examples from regions with different historical development

    Czech Academy of Sciences Publication Activity Database

    Raška, P.; Klimeš, Jan; Ďubišar, J.

    2015-01-01

    Roč. 26, č. 2 (2015), s. 142-157 ISSN 1085-3278 R&D Projects: GA ČR GP205/09/P383 Institutional support: RVO:67985891 Keywords : landslides * rockfalls * landslide inventory * documentary sources * historical geomorphology * Czech Republic Subject RIV: DE - Earth Magnetism, Geodesy, Geography Impact factor: 8.145, year: 2015

  20. Strong Maximum Principle for Multi-Term Time-Fractional Diffusion Equations and its Application to an Inverse Source Problem

    OpenAIRE

    Liu, Yikan

    2015-01-01

    In this paper, we establish a strong maximum principle for fractional diffusion equations with multiple Caputo derivatives in time, and investigate a related inverse problem of practical importance. Exploiting the solution properties and the involved multinomial Mittag-Leffler functions, we improve the weak maximum principle for the multi-term time-fractional diffusion equation to a stronger one, which is parallel to that for its single-term counterpart as expected. As a direct application, w...
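
    In the notation most commonly used for this class of problems (a sketch of the standard formulation, not necessarily the exact operator of the paper), the multi-term time-fractional diffusion equation reads

        \sum_{j=1}^{m} q_j\,\partial_t^{\alpha_j} u(x,t) = \Delta u(x,t) + F(x,t),
        \qquad 1 > \alpha_1 > \alpha_2 > \dots > \alpha_m > 0,\quad q_j > 0,

    where \partial_t^{\alpha_j} denotes the Caputo derivative of order \alpha_j, and \Delta may be replaced by a more general elliptic operator.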

  1. A multi-criteria approach to Great Barrier Reef catchment (Queensland, Australia) diffuse-source pollution problem.

    Science.gov (United States)

    Greiner, R; Herr, A; Brodie, J; Haynes, D

    2005-01-01

    This paper presents a multi-criteria based tool for assessing the relative impact of diffuse-source pollution to the Great Barrier Reef (GBR) from the river basins draining into the GBR lagoon. The assessment integrates biophysical and ecological data of water quality and pollutant concentrations with socio-economic information pertaining to non-point source pollution and (potential) pollutant impact. The tool generates scores for each river basin against four criteria, thus profiling the basins and enabling prioritization of management alternatives between and within basins. The results support policy development for pollution control through community participation, scientific data integration and expert knowledge contributed by people from across the catchment. The results specifically provided support for the Reef Water Quality Protection Plan, released in October 2003. The aim of the plan is to provide a framework for reducing discharge of sediment, nutrient and other diffuse-source loads and (potential) impact of that discharge and for prioritising management actions both between and within river basins.

  2. LEGEND, STORY AND NARRATION IN THE GENRE STRUCTURE OF IVAN SHMELEV'S SHORT NOVEL INEXHAUSTIBLE CUP: THE PROBLEM OF SOURCES

    Directory of Open Access Journals (Sweden)

    Nikolay Ivanovich Sobolev

    2013-11-01

    The article is devoted to one of the central episodes of Ivan Shmelev's short novel The Inexhaustible Chalice (Inexhaustible Cup, or Non-intoxicating Chalice). The source of this episode was a legend recorded by the priest Yakov Brilliantov. In 1912 he published the text of the legend, calling it a Story of the Miraculous Icon of the Mother of God Called the “Inexhaustible Chalice”. The legend existed in the folk tradition for a long time. The paper presents a hypothesis that Ivan Shmelev reproduced an oral version or edition of the legend in his short novel. Comparison of Shmelev's novel and the old legend reveals similarities and discrepancies between the texts, the analysis of which can serve as the basis for important observations on the linguopoetics of the short novel and the author's style. Ivan Shmelev uses the legend as a source of pious history: he connects it with his main text at all narrative levels, while leaving only functional elements in the recipient text. This type of creative editing can be defined as a form of a condensed narrative. Moreover, analysis of sources leads to a conclusion about the poetics of the chronotope and the main characters of the tale.

  3. THE PROBLEMS OF USING EXEMPTION ACTIVITY VALUES FOR REGULATING THE MANAGEMENT OF SEALED RADIONUCLIDE SOURCES OF GAMMA-RADIATION

    Directory of Open Access Journals (Sweden)

    A. N. Barkovsky

    2017-01-01

    The article focuses on the procedure for exemption of sealed and unsealed radionuclide sources of gamma radiation from regulatory control. The contradictory nature of the existing set of exemption criteria has been noted, leading in some cases to paradoxical situations. It is shown that the exempt activity values determined in NRB-99/2009 and in the international basic safety standards of the IAEA are significantly overestimated (in comparison with the activity values of a point source creating an ambient dose equivalent rate of 1 μSv/h at a distance of 0.1 m) for a number of the most widely used gamma-emitting radionuclides, including 22Na, 54Mn, 75Se, 152Eu and 154Eu. It is proposed to revise the current values of exempt activity, bringing them in line with the dose rate criterion for the exemption of sealed radionuclide sources of gamma radiation, and to present them to one significant digit. The corrected values of exempt activity for seven selected radionuclides are proposed for further use in the revision of the national radiation safety standards.

  4. PET reconstruction

    International Nuclear Information System (INIS)

    O'Sullivan, F.; Pawitan, Y.; Harrison, R.L.; Lewellen, T.K.

    1990-01-01

    In statistical terms, filtered backprojection can be viewed as smoothed Least Squares (LS). In this paper, the authors report on improvements in LS resolution by incorporating locally adaptive smoothers, imposing positivity, and using statistical methods for optimal selection of the resolution parameter. The resulting algorithm has high computational efficiency relative to more elaborate Maximum Likelihood (ML) type techniques (e.g. EM with sieves). Practical aspects of the procedure are discussed in the context of PET, and illustrations with computer-simulated and real tomograph data are presented. The relative recovery coefficients for a 9 mm sphere in a computer-simulated hot-spot phantom range from 0.3 to 0.6 as the number of counts ranges from 10,000 to 640,000. The authors also present results illustrating the relative efficacy of ML and LS reconstruction techniques.
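
    The "smoothed LS" view can be caricatured with ridge-type regularization. A minimal generic sketch (the paper's locally adaptive smoothers and statistically optimal choice of the resolution parameter are not reproduced here):

        import numpy as np

        def smoothed_ls(A, y, lam):
            # Ridge-type smoothed least squares:
            #   minimize ||A @ x - y||^2 + lam * ||x||^2,
            # with lam playing the role of the resolution parameter;
            # positivity is imposed here by a crude projection onto x >= 0.
            n = A.shape[1]
            x = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
            return np.maximum(x, 0.0)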

  5. Source-reconstruction of event-related fields reveals hyperfunction and hypofunction of cortical circuits in antipsychotic-naive, first-episode schizophrenia patients during Mooney face processing.

    Science.gov (United States)

    Rivolta, Davide; Castellanos, Nazareth P; Stawowsky, Cerisa; Helbling, Saskia; Wibral, Michael; Grützner, Christine; Koethe, Dagmar; Birkner, Katharina; Kranaster, Laura; Enning, Frank; Singer, Wolf; Leweke, F Markus; Uhlhaas, Peter J

    2014-04-23

    Schizophrenia is characterized by dysfunctions in neural circuits that can be investigated with electrophysiological methods, such as EEG and MEG. In the present human study, we examined event-related fields (ERFs), in a sample of medication-naive, first-episode schizophrenia (FE-ScZ) patients (n = 14) and healthy control participants (n = 17) during perception of Mooney faces to investigate the integrity of neuromagnetic responses and their experience-dependent modification. ERF responses were analyzed for M100, M170, and M250 components at the sensor and source levels. In addition, we analyzed peak latency and adaptation effects due to stimulus repetition. FE-ScZ patients were characterized by significantly impaired sensory processing, as indicated by a reduced discrimination index (A'). At the sensor level, M100 and M170 responses in FE-ScZ were within the normal range, whereas the M250 response was impaired. However, source localization revealed widespread elevated activity for M100 and M170 in FE-ScZ and delayed peak latencies for the M100 and M250 responses. In addition, M170 source activity in FE-ScZ was not modulated by stimulus repetitions. The present findings suggest that neural circuits in FE-ScZ may be characterized by a disturbed balance between excitation and inhibition that could lead to a failure to gate information flow and abnormal spreading of activity, which is compatible with dysfunctional glutamatergic neurotransmission.

  6. Experimental Component Characterization, Monte-Carlo-Based Image Generation and Source Reconstruction for the Neutron Imaging System of the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barrera, C A; Moran, M J

    2007-08-21

    The Neutron Imaging System (NIS) is one of seven ignition target diagnostics under development for the National Ignition Facility. The NIS is required to record hot-spot (13-15 MeV) and downscattered (6-10 MeV) images with a resolution of 10 microns and a signal-to-noise ratio (SNR) of 10 at the 20% contour. The NIS is a valuable diagnostic since the downscattered neutrons reveal the spatial distribution of the cold fuel during an ignition attempt, providing important information in the case of a failed implosion. The present study explores the parameter space of several line-of-sight (LOS) configurations that could serve as the basis for the final design. Six commercially available organic scintillators were experimentally characterized for their light emission decay profile and neutron sensitivity. The samples showed a long lived decay component that makes direct recording of a downscattered image impossible. The two best candidates for the NIS detector material are: EJ232 (BC422) plastic fibers or capillaries filled with EJ399B. A Monte Carlo-based end-to-end model of the NIS was developed to study the imaging capabilities of several LOS configurations and verify that the recovered sources meet the design requirements. The model includes accurate neutron source distributions, aperture geometries (square pinhole, triangular wedge, mini-penumbral, annular and penumbral), their point spread functions, and a pixelated scintillator detector. The modeling results show that a useful downscattered image can be obtained by recording the primary peak and the downscattered images, and then subtracting a decayed version of the former from the latter. The difference images need to be deconvolved in order to obtain accurate source distributions. The images are processed using a frequency-space modified-regularization algorithm and low-pass filtering. The resolution and SNR of these sources are quantified by using two surrogate sources. The simulations show that all LOS

  7. A Contribution to the Problem of Initiation of a Combustion Source in an Oil-Saturated Bed

    Science.gov (United States)

    Koznacheev, I. A.; Dobrego, K. V.

    2013-11-01

    The problem on in-situ self-ignition of an oil-saturated bed under the conditions of forced filtration of an oxygen-containing gas has been solved with analytical and numerical methods with account of the burnout of a deficient gas component. The influence of the burnout of this component and of convective removal of heat from the bed on the time of its self-ignition has been determined. Recommendations for the optimum regime of initiation of the self-ignition of the bed with account of variation of the blast flow rate and the oxygen content have been given.

  8. Breast Reconstruction After Mastectomy

    Science.gov (United States)

  9. Breast reconstruction - implants

    Science.gov (United States)

    Breast implants surgery; Mastectomy - breast reconstruction with implants; Breast cancer - breast reconstruction with implants

  10. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Science.gov (United States)

    Koulouri, Alexandra; Brookes, Mike; Rimpiläinen, Ville

    2017-01-01

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field.
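
    In the 2-D setting described above, the two data types can be written as follows (generic notation, assuming a field F on the plane and an integration line L with unit tangent τ and unit normal n):

        L(\mathbf{F}) = \int_{L} \mathbf{F}\cdot\hat{\boldsymbol{\tau}}\,\mathrm{d}s
        \quad\text{(longitudinal)},\qquad
        T(\mathbf{F}) = \int_{L} \mathbf{F}\cdot\hat{\mathbf{n}}\,\mathrm{d}s
        \quad\text{(transverse)}.

    The paper's point is that the transverse data T(F), which cannot be measured physically, can be traded for sparsity constraints when reconstructing fields with non-zero divergence.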

  11. [Problems with Using Hospital Quality Reports as a Secondary Data Source for Health Services Research in Germany].

    Science.gov (United States)

    Kraska, R A; de Cruppe, W; Geraedts, M

    2017-07-01

    Background Since 2005, German hospitals have been required by law to publish structured quality reports (QRs). Because of the detailed data basis, the QRs are being increasingly used for secondary data analyses in health services research. Up until now, methodological difficulties that can cause distorted results of the analyses have essentially been overlooked. The aim of this study is to systematically list the methodological problems associated with using QRs and to suggest solution strategies. Methods The QRs from 2006-2012 form the basis of the analyses and were aggregated in a database using an individualized data linkage procedure. Thereafter, a correlation analysis between a quality indicator and the staffing of hospitals was conducted, serving as an example for both cross-sectional and longitudinal studies. The resulting methodological problems are described qualitatively and quantitatively, and potential solutions are derived from the statistical literature. Results In each reporting year, 2-15% of the hospitals delivered no QR. In 2-16% of the QRs, it is not recognizable whether a report belongs to a hospital network or a single location. In addition, 6-66% of the location reports falsely contain data from the hospital network. 10% of the hospitals changed their institution code (IC); in 5% of the cases, the same "IC-location-number combination" was used for different hospitals over the years. Therefore, 10-20% of the QRs cannot be linked with the IC as the key variable. As a remedy for the linking of QRs, the combination of the IC, the address and the number of beds represents a suitable solution. Using this solution, hospital network reports, location reports and missing reports can be identified and considered in an analysis. Conclusions Secondary data analyses with quality reports carry a high potential for error due to the inconsistent data base and the problems of the data linkage procedure. These can distort calculated parameters and limit the validity of such analyses.

  12. Compound-specific C- and H-isotope compositions of enclosed organic matter in carbonate rocks: Implications for source identification of sedimentary organic matter and paleoenvironmental reconstruction

    International Nuclear Information System (INIS)

    Xiong Yongqiang; Wang Yanmei; Wang Yongquan; Xu Shiping

    2007-01-01

    The Bohai Bay Basin is one of the most important oil-producing provinces in China. Molecular organic geochemical characteristics of Lower Paleozoic source rocks in this area have been investigated by analyzing chemical and isotopic compositions of solvent extracts and acid-released organic matter from the Lower Paleozoic carbonate rocks in the Jiyang Sub-basin of the Bohai Bay Basin. The results indicate that enclosed organic matter in carbonate rocks has not been recognizably altered by post-depositional processes. Two end-member compositions are suggested for early organic matter trapped in the Lower Paleozoic carbonate rocks: (1) a source dominated by aquatic organisms and deposited in a relatively deep marine environment and (2) a relatively high saline, evaporative marine depositional environment. In contrast, chemical and isotopic compositions of solvent extracts from these Lower Paleozoic carbonate rocks are relatively complicated, not only inheriting original characteristics of their precursors, but also overprinted by various post-depositional alterations, such as thermal maturation, biodegradation and mixing. Therefore, the integration of both organic matter characteristics can provide more useful information on the origin of organic matter present in carbonate rocks and the environments of their deposition

  13. Compound-specific C- and H-isotope compositions of enclosed organic matter in carbonate rocks: Implications for source identification of sedimentary organic matter and paleoenvironmental reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Xiong Yongqiang [State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China)], E-mail: xiongyq@gig.ac.cn; Wang Yanmei; Wang Yongquan; Xu Shiping [State Key Laboratory of Organic Geochemistry, Guangzhou Institute of Geochemistry, Chinese Academy of Sciences, Guangzhou 510640 (China)

    2007-11-15

    The Bohai Bay Basin is one of the most important oil-producing provinces in China. Molecular organic geochemical characteristics of Lower Paleozoic source rocks in this area have been investigated by analyzing chemical and isotopic compositions of solvent extracts and acid-released organic matter from the Lower Paleozoic carbonate rocks in the Jiyang Sub-basin of the Bohai Bay Basin. The results indicate that enclosed organic matter in carbonate rocks has not been recognizably altered by post-depositional processes. Two end-member compositions are suggested for early organic matter trapped in the Lower Paleozoic carbonate rocks: (1) a source dominated by aquatic organisms and deposited in a relatively deep marine environment and (2) a relatively high saline, evaporative marine depositional environment. In contrast, chemical and isotopic compositions of solvent extracts from these Lower Paleozoic carbonate rocks are relatively complicated, not only inheriting original characteristics of their precursors, but also overprinted by various post-depositional alterations, such as thermal maturation, biodegradation and mixing. Therefore, the integration of both organic matter characteristics can provide more useful information on the origin of organic matter present in carbonate rocks and the environments of their deposition.

  14. Problems related to the carcinogenic impact of radon and daughters, as a source of exposure of population and underground miners

    International Nuclear Information System (INIS)

    Todorov, A.; Vasilev, G.; Vyrbanov, P.

    1998-01-01

    The population radiation exposure to radon and its daughters is a specific problem. In real conditions, lung irradiation in some population groups exceeds the allowable limit of occupational exposure for category A personnel, but no increase in carcinogenesis is observed. At the same time, a correlation between dose and incidence of lung cancer is recorded in underground miners. There are various explanations of these effects, but a number of questions are still disputable. Thus, in ICRP Publication 60, radon exposure is evaluated in terms of exposure expressed in WLM, rather than effective dose. Epidemiological data have been published questioning the use of the linear non-threshold model for the carcinogenic impact of radon, as well as the role of some accompanying factors, such as smoking. (author)

  15. Mastectomy Skin Necrosis After Breast Reconstruction: A Comparative Analysis Between Autologous Reconstruction and Implant-Based Reconstruction.

    Science.gov (United States)

    Sue, Gloria R; Lee, Gordon K

    2018-05-01

    Mastectomy skin necrosis is a significant problem after breast reconstruction. We sought to perform a comparative analysis of this complication between patients undergoing autologous breast reconstruction and patients undergoing 2-stage expander implant breast reconstruction. A retrospective review was performed on consecutive patients undergoing autologous breast reconstruction or 2-stage expander implant breast reconstruction by the senior author from 2006 through 2015. Patient demographic factors including age, body mass index, history of diabetes, history of smoking, and history of radiation to the breast were collected. Our primary outcome measure was mastectomy skin necrosis. Fisher exact test was used for statistical analysis between the 2 patient cohorts. The treatment patterns of mastectomy skin necrosis were then analyzed. We identified 204 patients who underwent autologous breast reconstruction and 293 patients who underwent 2-stage expander implant breast reconstruction. Patients undergoing autologous breast reconstruction were older, heavier, more likely to have diabetes, and more likely to have had prior radiation to the breast compared with patients undergoing implant-based reconstruction. The incidence of mastectomy skin necrosis was 30.4% of patients in the autologous group compared with only 10.6% of patients in the tissue expander group, a significant difference. Whereas necrosis after autologous reconstruction was typically managed with local wound care, only 3.2% in the tissue expander group were treated with local wound care, again a significant difference. Mastectomy skin necrosis is significantly more likely to occur after autologous breast reconstruction compared with 2-stage expander implant-based breast reconstruction. Patients with autologous reconstructions are more readily treated with local wound care compared with patients with tissue expanders, who tended to require operative treatment of this complication. Patients considering breast reconstruction should be counseled appropriately regarding the differences in incidence and management of mastectomy skin necrosis.

  16. Source Location of Noble Gas Plumes

    International Nuclear Information System (INIS)

    Hoffman, I.; Ungar, K.; Bourgouin, P.; Yee, E.; Wotawa, G.

    2015-01-01

    In radionuclide monitoring, one of the most significant challenges from a verification or surveillance perspective is the source location problem. Modern monitoring/surveillance systems employ meteorological source reconstruction — for example, the Fukushima accident, CRL emissions analysis and even radon risk mapping. These studies usually take weeks to months to conduct, involving multidisciplinary teams representing meteorology; dispersion modelling; radionuclide sampling and metrology; and, when relevant, proper representation of source characteristics (e.g., reactor engineering expertise). Several different approaches have been tried in an attempt to determine useful techniques to apply to the source location problem and to develop rigorous methods that combine all potentially relevant observations and models to identify a most probable source location and size with uncertainties. The ultimate goal is to understand the utility and limitations of these techniques so they can transition from R&D to operational tools. (author)

  17. Integrating cut-and-solve and semi-Lagrangean based dual ascent for the single-source capacitated facility location problem

    DEFF Research Database (Denmark)

    Gadegaard, Sune Lauth

    This paper describes how the cut-and-solve framework and semi-Lagrangean based dual ascent algorithms can be integrated in two natural ways in order to solve the single source capacitated facility location problem. The first uses the cut-and-solve framework both as a heuristic and as an exact solver for the semi-Lagrangean subproblems. The other uses a semi-Lagrangean based dual ascent algorithm to solve the sparse problems arising in the cut-and-solve algorithm. Furthermore, we developed a simple way to separate a special type of cutting planes from what we denote the effective capacity polytope with generalized upper bounds. From our computational study, we show that the semi-Lagrangean relaxation approach has its merits when the instances are tightly constrained with regards to the capacity of the system, but that it is very hard to compete with a standalone implementation of the cut-and-solve framework.

  18. The problem of the architectural heritage reconstruction

    OpenAIRE

    Alfazhr M.A.; Osama E.

    2017-01-01

    The subject of this research is modern technology for the restoration of architectural monuments, which makes it possible to improve the design and performance, as well as the durability, of historical objects. Choosing the most efficient, cost-effective and durable technologies for the restoration and expansion of architectural monuments is a priority for historical cities. Adoption of faster and sound monument restoration technology is necessary because there are a lot of historical Russian cities in ...

  19. Multi-Physics Modelling of Fault Mechanics Using REDBACK: A Parallel Open-Source Simulator for Tightly Coupled Problems

    Science.gov (United States)

    Poulet, Thomas; Paesold, Martin; Veveakis, Manolis

    2017-03-01

    Faults play a major role in many economically and environmentally important geological systems, ranging from impermeable seals in petroleum reservoirs to fluid pathways in ore-forming hydrothermal systems. Their behavior is therefore widely studied and fault mechanics is particularly focused on the mechanisms explaining their transient evolution. Single faults can change in time from seals to open channels as they become seismically active and various models have recently been presented to explain the driving forces responsible for such transitions. A model of particular interest is the multi-physics oscillator of Alevizos et al. (J Geophys Res Solid Earth 119(6), 4558-4582, 2014) which extends the traditional rate and state friction approach to rate and temperature-dependent ductile rocks, and has been successfully applied to explain spatial features of exposed thrusts as well as temporal evolutions of current subduction zones. In this contribution we implement that model in REDBACK, a parallel open-source multi-physics simulator developed to solve such geological instabilities in three dimensions. The resolution of the underlying system of equations in a tightly coupled manner allows REDBACK to capture appropriately the various theoretical regimes of the system, including the periodic and non-periodic instabilities. REDBACK can then be used to simulate the drastic permeability evolution in time of such systems, where nominally impermeable faults can sporadically become fluid pathways, with permeability increases of several orders of magnitude.

  20. AIR Tools - A MATLAB Package of Algebraic Iterative Reconstruction Techniques

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    This collection of MATLAB software contains implementations of several Algebraic Iterative Reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods...... are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
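
    The package itself is MATLAB; for orientation, the basic ART (Kaczmarz) row-action update that such packages implement looks like the following generic Python sketch (the relaxation parameter relax is exactly the quantity whose selection strategies the package provides):

        import numpy as np

        def art_kaczmarz(A, b, n_sweeps=10, relax=1.0):
            # Basic ART (Kaczmarz): sweep over the rows of A, projecting the
            # iterate onto each hyperplane a_i . x = b_i.  Semi-convergence
            # means one should stop early on noisy data.
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            for _ in range(n_sweeps):
                for i in range(A.shape[0]):
                    if row_norms[i] > 0:
                        x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

    SIRT-type methods differ in that they update with all rows simultaneously per iteration rather than one row at a time.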

  1. Simulation and Track Reconstruction for Beam Telescopes

    CERN Document Server

    Maqbool, Salman

    2017-01-01

    Beam telescopes are an important tool to test new detectors under development in a particle beam. To test these novel detectors and determine their properties, the particle tracks need to be reconstructed from the known detectors in the telescope. Based on the reconstructed track, its predicted position on the Device under Test (DUT) is compared with the actual hits on the DUT. Several methods exist for track reconstruction, but most of them do not account for the effects of multiple scattering. General Broken Lines is one such algorithm which incorporates these effects during reconstruction. The aim of this project was to simulate the beam telescope and extend the track reconstruction framework for the FE-I4 telescope, which takes these effects into account. Section 1 introduces the problem, while section 2 focuses on beam telescopes. This is followed by the Allpix2 simulation framework in Section 3. And finally, Section 4 introduces the Proteus track reconstruction framework along with the General Broken ...
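
    For orientation, the simplest form of telescope track reconstruction, a straight-line least-squares fit that ignores multiple scattering (which is precisely what General Broken Lines improves on), can be sketched generically (plane positions and hit arrays are placeholders):

        import numpy as np

        def fit_track(z_planes, hits):
            # Straight-line least-squares track fit: one (slope, intercept)
            # pair per transverse coordinate (x and y), ignoring multiple
            # scattering.  hits has shape (n_planes, 2).
            return [np.polyfit(z_planes, hits[:, k], 1) for k in range(2)]

        def predict_at(track, z_dut):
            # Extrapolate the fitted track to the DUT plane.
            return np.array([np.polyval(p, z_dut) for p in track])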

  2. Study of medical education in 3D surgical modeling by surgeons with free open-source software: Example of mandibular reconstruction with fibula free flap and creation of its surgical guides.

    Science.gov (United States)

    Ganry, L; Hersant, B; Bosc, R; Leyder, P; Quilichini, J; Meningaud, J P

    2018-02-27

    Benefits of 3D printing techniques, biomodeling and surgical guides are well known in surgery, especially when the surgeon who performs the surgery has participated in the virtual surgical planning. Our objective was to evaluate the transfer of know-how of a neutral, free open-source 3D surgical modeling software protocol to surgeons of different surgical specialties. A one-day training session was organised on 3D surgical modeling applied to one mandibular reconstruction case with fibula free flap and the creation of its surgical guides. Surgeon satisfaction was analysed before and after the training. Of 22 surgeons, 59% assessed the training as excellent or very good, and 68% considered changing their daily surgical routine and would try to apply our open-source software protocol in their department after a single training day. The mean self-rated capacity in using the software improved from 4.13 out of 10 before to 6.59 out of 10 after training for OsiriX®, from 1.14 to 5.05 for Meshlab®, from 0.45 to 4.91 for Netfabb®, and from 1.05 to 4.41 for Blender®. According to the surgeons, using the Blender® software became harder as the day went on. Despite the improvement in software skills for all participants, more than a single training day is needed for the transfer of know-how on 3D modeling with open-source software. Although the know-how transfer, overall satisfaction, actual learning outcomes and relevance of this training were appropriate, a longer training covering different topics will be needed to improve training quality. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  3. Separation of radiated sound field components from waves scattered by a source under non-anechoic conditions

    DEFF Research Database (Denmark)

    Fernandez Grande, Efren; Jacobsen, Finn

    2010-01-01

    to the source. Thus the radiated free-field component is estimated simultaneously with solving the inverse problem of reconstructing the sound field near the source. The method is particularly suited to cases in which the overall contribution of reflected sound in the measurement plane is significant....

  4. Adaptive algebraic reconstruction technique

    International Nuclear Information System (INIS)

    Lu Wenkai; Yin Fangfang

    2004-01-01

    Algebraic reconstruction techniques (ART) are iterative procedures for reconstructing objects from their projections. It is proven that ART can be computationally efficient by carefully arranging the order in which the collected data are accessed during the reconstruction procedure and adaptively adjusting the relaxation parameters. In this paper, an adaptive algebraic reconstruction technique (AART), which adopts the same projection access scheme in multilevel scheme algebraic reconstruction technique (MLS-ART), is proposed. By introducing adaptive adjustment of the relaxation parameters during the reconstruction procedure, one-iteration AART can produce reconstructions with better quality, in comparison with one-iteration MLS-ART. Furthermore, AART outperforms MLS-ART with improved computational efficiency

  5. A Bootstrap-Based Probabilistic Optimization Method to Explore and Efficiently Converge in Solution Spaces of Earthquake Source Parameter Estimation Problems: Application to Volcanic and Tectonic Earthquakes

    Science.gov (United States)

    Dahm, T.; Heimann, S.; Isken, M.; Vasyura-Bathke, H.; Kühn, D.; Sudhaus, H.; Kriegerowski, M.; Daout, S.; Steinberg, A.; Cesca, S.

    2017-12-01

    Seismic source and moment tensor waveform inversion is often ill-posed or non-unique if station coverage is poor or signals are weak. Therefore, the interpretation of moment tensors can become difficult if the full model space, including all its trade-offs and uncertainties, is not explored. This is especially true for non-double-couple components of weak or shallow earthquakes, as for instance found in volcanic, geothermal or mining environments. We developed a bootstrap-based probabilistic optimization scheme (Grond), which is based on pre-calculated Green's function full-waveform databases (e.g. the fomosto tool, doi.org/10.5880/GFZ.2.1.2017.001). Grond is able to efficiently explore the full model space, the trade-offs and the uncertainties of source parameters. The program is highly flexible with respect to adaptation to specific problems, the design of objective functions, and the diversity of empirical datasets. It uses integrated, robust waveform data processing based on a newly developed Python toolbox for seismology (Pyrocko, see Heimann et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.001), and allows for visual inspection of many aspects of the optimization problem. Grond has been applied to CMT moment tensor inversion using W-phases, to nuclear explosions in Korea, to meteorite atmospheric explosions, to volcano-tectonic events during caldera collapse and to intra-plate volcanic and tectonic crustal events. Grond can be used to simultaneously optimize seismological waveforms, amplitude spectra and static displacements from geodetic data such as InSAR and GPS (e.g. KITE, Isken et al., 2017, http://doi.org/10.5880/GFZ.2.1.2017.002). We present examples of Grond optimizations to demonstrate the advantage of a full exploration of source parameter uncertainties for interpretation.
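
    The bootstrap idea can be sketched generically: random re-weightings of the per-observation misfits define an ensemble of objective functions, and the ensemble of corresponding best-fit models maps the parameter uncertainty. A schematic numpy illustration only, not the Grond implementation:

        import numpy as np

        def bootstrap_misfits(residuals, n_boot=100, seed=0):
            # Draw bootstrap weights for the per-observation residuals;
            # each row of weights defines one re-weighted objective.
            # Re-optimizing the source model once per weighting (not shown)
            # yields an ensemble of solutions whose spread maps uncertainty.
            rng = np.random.default_rng(seed)
            n = len(residuals)
            weights = rng.multinomial(n, np.full(n, 1.0 / n), size=n_boot)
            return (weights * residuals ** 2).sum(axis=1)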

  6. Clinical applications of iterative reconstruction

    International Nuclear Information System (INIS)

    Eberl, S.

    1998-01-01

    Expectation maximisation (EM) reconstruction largely eliminates the hot and cold streaking artifacts characteristic of filtered back-projection (FBP) reconstruction around localised hot areas, such as the bladder. It also substantially reduces the problem of decreased inferior wall counts in MIBI myocardial perfusion studies due to 'streaking' from high liver uptake. Non-uniform attenuation and scatter correction, resolution recovery, anatomical information (e.g. from MRI or CT) and tracer kinetic modelling can all be built into the EM reconstruction imaging model. The properties of ordered subset EM (OSEM) have also been used to correct for known patient motion as part of the reconstruction process. Currently we use OSEM routinely for: (i) studies where streaking is a problem, including all MIBI myocardial perfusion studies, to avoid the hot-liver inferior wall artifact; (ii) all whole-body FDG PET, all lung V/Q SPECT (which have a short acquisition time) and all gated 201Tl myocardial perfusion studies, due to the improved noise characteristics of OSEM in these studies; (iii) studies with measured, non-uniform attenuation correction. With the accelerated OSEM algorithm, iterative reconstruction is practical for routine clinical applications; we have found OSEM to provide clearly superior reconstructions for the areas listed above and are investigating its application to other studies. In clinical use, we have not found OSEM to introduce artifacts which would not also occur with FBP; e.g. uncorrected patient motion will cause artifacts with both OSEM and FBP.
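
    For orientation, the OSEM update referred to throughout has the following generic multiplicative form, one EM-style update per subset of projections (a sketch, not a clinical implementation):

        import numpy as np

        def osem(A, y, n_subsets=8, n_iter=4):
            # Ordered-subsets EM: cycling through subsets of the projection
            # data gives roughly n_subsets-fold acceleration over plain
            # ML-EM while keeping the multiplicative, positivity-preserving
            # update.
            x = np.ones(A.shape[1])
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iter):
                for s in subsets:
                    As = A[s]
                    ratio = y[s] / np.clip(As @ x, 1e-12, None)
                    x *= (As.T @ ratio) / np.clip(As.sum(axis=0), 1e-12, None)
            return x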

  7. Titanium template for scaphoid reconstruction.

    Science.gov (United States)

    Haefeli, M; Schaefer, D J; Schumacher, R; Müller-Gerbl, M; Honigmann, P

    2015-06-01

    Reconstruction of a non-united scaphoid with a humpback deformity involves resection of the non-union followed by bone grafting and fixation of the fragments. Intraoperative control of the reconstruction is difficult owing to the complex three-dimensional shape of the scaphoid and the other carpal bones overlying the scaphoid on lateral radiographs. We developed a titanium template that fits exactly to the surfaces of the proximal and distal scaphoid poles to define their position relative to each other after resection of the non-union. The templates were designed on three-dimensional computed tomography reconstructions and manufactured using selective laser melting technology. Ten conserved human wrists were used to simulate the reconstruction. The achieved precision measured as the deviation of the surface of the reconstructed scaphoid from its virtual counterpart was good in five cases (maximal difference 1.5 mm), moderate in one case (maximal difference 3 mm) and inadequate in four cases (difference more than 3 mm). The main problems were attributed to the template design and can be avoided by improved pre-operative planning, as shown in a clinical case. © The Author(s) 2014.

  8. Haplotyping Problem, A Clustering Approach

    International Nuclear Information System (INIS)

    Eslahchi, Changiz; Sadeghi, Mehdi; Pezeshk, Hamid; Kargar, Mehdi; Poormohammadi, Hadi

    2007-01-01

    Construction of two haplotypes from a set of Single Nucleotide Polymorphism (SNP) fragments is called the haplotype reconstruction problem. One of the most popular computational models for this problem is Minimum Error Correction (MEC). Since MEC is an NP-hard problem, we propose a novel heuristic algorithm based on clustering analysis in data mining for the haplotype reconstruction problem. Based on the Hamming distance and similarity between two fragments, our iterative algorithm produces two clusters of fragments; in each iteration, the algorithm assigns a fragment to one of the clusters. Our results suggest that the algorithm has a lower reconstruction error rate in comparison with other algorithms.
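
    The clustering heuristic described above can be sketched as follows: fragments (vectors over {0,1} with missing entries) are assigned one by one to whichever of two growing clusters has the nearer consensus, by Hamming distance over observed positions. This is a minimal reading of the approach, not the authors' code; the fragment encoding (-1 for a missing SNP call) and the seeding rule are assumptions.

```python
import numpy as np

MISSING = -1  # assumed encoding for an unobserved SNP position

def hamming(frag, cons):
    """Hamming distance over positions observed in both vectors."""
    mask = (frag != MISSING) & (cons != MISSING)
    return int(np.sum(frag[mask] != cons[mask]))

def consensus(cluster):
    """Majority vote per SNP position, ignoring missing entries."""
    arr = np.array(cluster)
    out = np.full(arr.shape[1], MISSING)
    for j in range(arr.shape[1]):
        col = arr[:, j][arr[:, j] != MISSING]
        if col.size:
            out[j] = int(round(float(col.mean())))
    return out

def two_cluster(frags):
    """Greedy two-cluster assignment of SNP fragments."""
    # Seed with the first two fragments (a simplification).
    clusters = [[frags[0]], [frags[1]]]
    for frag in frags[2:]:
        d = [hamming(frag, consensus(c)) for c in clusters]
        clusters[int(np.argmin(d))].append(frag)
    return clusters

frags = [np.array(f) for f in
         [[0, 0, 1, MISSING], [1, 1, 0, 0], [0, MISSING, 1, 1], [1, 1, MISSING, 0]]]
for c in two_cluster(frags):
    print(consensus(c))   # one consensus haplotype per cluster
```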

  9. A Survey of Urban Reconstruction

    KAUST Repository

    Musialski, P.

    2013-05-10

    This paper provides a comprehensive overview of urban reconstruction. While there exists a considerable body of literature, this topic is still under active research. The work reviewed in this survey stems from the following three research communities: computer graphics, computer vision, and photogrammetry and remote sensing. Our goal is to provide a survey that will help researchers to better position their own work in the context of existing solutions, and to help newcomers and practitioners in computer graphics to quickly gain an overview of this vast field. Further, we would like to bring the mentioned research communities to even more interdisciplinary work, since the reconstruction problem itself is by far not solved. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  10. A Survey of Urban Reconstruction

    KAUST Repository

    Musialski, P.; Wonka, Peter; Aliaga, D. G.; Wimmer, M.; van Gool, L.; Purgathofer, W.

    2013-01-01

    This paper provides a comprehensive overview of urban reconstruction. While there exists a considerable body of literature, this topic is still under active research. The work reviewed in this survey stems from the following three research communities: computer graphics, computer vision, and photogrammetry and remote sensing. Our goal is to provide a survey that will help researchers to better position their own work in the context of existing solutions, and to help newcomers and practitioners in computer graphics to quickly gain an overview of this vast field. Further, we would like to bring the mentioned research communities to even more interdisciplinary work, since the reconstruction problem itself is by far not solved. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  11. An analytical approach for a nodal formulation of a two-dimensional fixed-source neutron transport problem in heterogeneous medium

    Energy Technology Data Exchange (ETDEWEB)

    Basso Barichello, Liliane; Dias da Cunha, Rudnei [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Inst. de Matematica; Becker Picoloto, Camila [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Engenharia Mecanica; Tres, Anderson [Universidade Federal do Rio Grande do Sul, Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada

    2015-05-15

    A nodal formulation of a fixed-source two-dimensional neutron transport problem, in Cartesian geometry, defined in a heterogeneous medium, is solved by an analytical approach. Explicit expressions, in terms of the spatial variables, are derived for averaged fluxes in each region in which the domain is subdivided. The procedure is an extension of an analytical discrete ordinates method, the ADO method, for the solution of the two-dimensional homogeneous medium case. The scheme is developed from the discrete ordinates version of the two-dimensional transport equation along with the level symmetric quadrature scheme. As usual for nodal schemes, relations between the averaged fluxes and the unknown angular fluxes at the contours are introduced as auxiliary equations. Numerical results are in agreement with results available in the literature.

  12. Prosthetic breast reconstruction: indications and update

    Science.gov (United States)

    Quinn, Tam T.; Miller, George S.; Rostek, Marie; Cabalag, Miguel S.; Rozen, Warren M.

    2016-01-01

    Background: Despite 82% of patients reporting psychosocial improvement following breast reconstruction, only 33% of patients choose to undergo surgery. Implant reconstruction outnumbers autologous reconstruction in many centres. Methods: A systematic review of the literature was undertaken. Inclusion required: (I) meta-analyses or review articles; (II) adult patients aged 18 years or over undergoing alloplastic breast reconstruction; (III) studies including outcome measures; (IV) case series with more than 10 patients; (V) English language; and (VI) publication after 1st January, 2000. Results: After full-text review, analysis and data extraction were conducted for a total of 63 articles. Definitive reconstruction with an implant can be immediate or delayed. Older patients have similar or even lower complication rates than younger patients. Complications include capsular contracture, hematoma and infection. Obesity, smoking, large breasts, diabetes and higher-grade tumors are associated with an increased risk of wound problems and reconstructive failure. Silicone implant patients have higher capsular contracture rates but higher physical and psychosocial function. No associations were made between silicone implants and cancer or systemic disease. There were no differences in outcomes or complications between round and shaped implants. Textured implants have a lower risk of capsular contracture than smooth implants. Smooth implants are more likely to be displaced and have higher rates of infection. Immediate breast reconstruction (IBR) gives the best aesthetic outcome if radiotherapy is not required, but has a higher rate of capsular contracture and implant failure. Delayed-immediate reconstruction patients can achieve similar aesthetic results to IBR while preserving the breast skin if radiotherapy is required. Delayed breast reconstruction (DBR) patients have fewer complications than IBR patients. Conclusions: Implant reconstruction is a safe and popular

  13. Shredded banknotes reconstruction using AKAZE points.

    Science.gov (United States)

    Nabiyev, Vasif V; Yılmaz, Seçkin; Günay, Asuman; Muzaffer, Gül; Ulutaş, Güzin

    2017-09-01

    Shredded banknote reconstruction is a recent topic and can be viewed as solving large-scale jigsaw puzzles. Problems such as the reconstruction of fragmented documents, photographs and historical artefacts are closely related to this topic. The high computational complexity of these problems increases the need for the development of new methods. Reconstruction of shredded banknotes consists of three main stages: (1) matching fragments with a reference banknote; (2) aligning the fragments by rotating them at certain angles; (3) assembling the fragments. The existing methods can be successfully applied to synthetic banknote fragments created in a computer environment. But when the real banknote reconstruction problem is considered, different sub-problems arise and make the existing methods inadequate. In this study, a keypoint-based method, named AKAZE, was used to make the matching process effective. This is the first study that uses the AKAZE method in the reconstruction of shredded banknotes. A new method for fragment alignment has also been proposed. In this method, the convex hulls that contain all true matched AKAZE keypoints are found on the reference banknote and the fragments. The orientations of fragments are estimated accurately by comparing these convex polygons. A new criterion was also developed to reveal the success rates of reconstructed banknotes. In addition, two different data sets including real and synthetic banknote fragments from different countries were created to test the success of the proposed method. Copyright © 2017 Elsevier B.V. All rights reserved.
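
    For readers unfamiliar with AKAZE matching, the detect-describe-match step underlying stage (1) looks roughly as follows in OpenCV. This is a generic sketch of keypoint matching, not the paper's pipeline; the image file names are placeholders.

```python
import cv2

# Placeholder file names; a fragment is matched against a reference banknote.
reference = cv2.imread("reference_banknote.png", cv2.IMREAD_GRAYSCALE)
fragment = cv2.imread("fragment.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp_ref, des_ref = akaze.detectAndCompute(reference, None)
kp_frag, des_frag = akaze.detectAndCompute(fragment, None)

# AKAZE descriptors are binary, so Hamming distance is the right metric.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_frag, des_ref, k=2)

# Lowe-style ratio test to keep only distinctive matches.
good = []
for pair in matches:
    if len(pair) == 2 and pair[0].distance < 0.8 * pair[1].distance:
        good.append(pair[0])

# The matched keypoint coordinates could then feed convex-hull comparison
# for orientation estimation, as the paper proposes.
pts_frag = [kp_frag[m.queryIdx].pt for m in good]
pts_ref = [kp_ref[m.trainIdx].pt for m in good]
print(len(good), "putative matches")
```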

  14. FLICKERING OF 1.3 cm SOURCES IN SGR B2: TOWARD A SOLUTION TO THE ULTRACOMPACT H II REGION LIFETIME PROBLEM

    Energy Technology Data Exchange (ETDEWEB)

    De Pree, C. G.; Monsrud, A. [Agnes Scott College, 141 East College Avenue, Decatur, GA 30030 (United States); Peters, T. [Institut für Theoretische Physik, Universität Zürich, CH-8057 Zürich (Switzerland); Mac Low, M.-M. [American Museum of Natural History, New York, NY 10024 (United States); Wilner, D. J.; Keto, E. R. [Harvard-Smithsonian CfA, Cambridge, MA 02138 (United States); Goss, W. M. [National Radio Astronomy Observatory, Socorro, NM 87801 (United States); Galván-Madrid, R. [European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748 Garching (Germany); Klessen, R. S. [Zentrum für Astronomie, Institut für Theoretische Astrophysik, Universität Heidelberg, Albert-Ueberle-Str. 2, D-69120 Heidelberg (Germany)

    2014-02-01

    Accretion flows onto massive stars must transfer mass so quickly that they are themselves gravitationally unstable, forming dense clumps and filaments. These density perturbations interact with young massive stars, emitting ionizing radiation, alternately exposing and confining their H II regions. As a result, the H II regions are predicted to flicker in flux density over periods of decades to centuries rather than increase monotonically in size as predicted by simple Spitzer solutions. We have recently observed the Sgr B2 region at 1.3 cm with the Very Large Array in its three hybrid configurations (DnC, CnB, and BnA) at a resolution of ∼0.''25. These observations were made to compare in detail with matched continuum observations from 1989. At 0.''25 resolution, Sgr B2 contains 41 ultracompact (UC) H II regions, 6 of which are hypercompact. The new observations of Sgr B2 allow comparison of relative peak flux densities for the H II regions in Sgr B2 over a 23 year time baseline (1989-2012) in one of the most source-rich massive star forming regions in the Milky Way. The new 1.3 cm continuum images indicate that four of the 41 UC H II regions exhibit significant changes in their peak flux density, with one source (K3) dropping in peak flux density, and the other three sources (F10.303, F1, and F3) increasing in peak flux density. The results are consistent with statistical predictions from simulations of high mass star formation, suggesting that they offer a solution to the lifetime problem for UC H II regions.

  15. Neural Network for Sparse Reconstruction

    Directory of Open Access Journals (Sweden)

    Qingfa Li

    2014-01-01

    We construct a neural network based on smoothing approximation techniques and the projected gradient method to solve a class of sparse reconstruction problems. Neural networks can be implemented by circuits and can be seen as an important method for solving optimization problems, especially large-scale problems. Smoothing approximation is an efficient technique for solving nonsmooth optimization problems. We combine these two techniques to overcome the difficulties of choosing the step size in discrete algorithms and of handling the set-valued map in the differential inclusion. In theory, the proposed network converges to the optimal solution set of the given problem. Furthermore, numerical experiments show the effectiveness of the proposed network.
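
    A discrete counterpart of the network's dynamics is the classical proximal/projected gradient iteration for l1-regularized least squares (ISTA). The sketch below is this standard iteration in NumPy, given for orientation only; it is not the paper's circuit model, and the 1/L step size rule is the usual choice rather than the smoothing scheme the authors analyze.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.1, n_iter=500):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = soft_threshold(x - grad / L, lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 100))
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [1.0, -2.0, 1.5]            # sparse signal
b = A @ x_true
print(np.flatnonzero(np.abs(ista(A, b)) > 0.1))   # recovers the support
```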

  16. SU-D-210-03: Limited-View Multi-Source Quantitative Photoacoustic Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng, J; Gao, H [Shanghai Jiao Tong University, Shanghai, Shanghai (China)

    2015-06-15

    Purpose: This work investigates a novel limited-view multi-source acquisition scheme for the direct and simultaneous reconstruction of optical coefficients in quantitative photoacoustic tomography (QPAT), with potentially improved signal-to-noise ratio and reduced data acquisition time. Methods: Conventional QPAT is often considered in two steps: first the initial acoustic pressure is reconstructed from the full-view ultrasonic data after each optical illumination, and then the optical coefficients (e.g., absorption and scattering coefficients) are quantitatively reconstructed from the initial acoustic pressure, using a multi-source or multi-wavelength scheme. With the novel limited-view multi-source scheme considered here, the optical coefficients must be reconstructed directly from the ultrasonic data, since the initial acoustic pressure can no longer be reconstructed as an intermediate variable from the incomplete acoustic data. In this work, based on a coupled photoacoustic forward model combining the diffusion approximation and the wave equation, we develop a limited-memory quasi-Newton method (LBFGS) for image reconstruction that utilizes the adjoint forward problem for fast computation of gradients. Furthermore, tensor framelet sparsity is utilized to improve the image reconstruction, which is solved by the Alternating Direction Method of Multipliers (ADMM). Results: The simulation was performed on a modified Shepp-Logan phantom to validate the feasibility of the proposed limited-view scheme and its corresponding image reconstruction algorithms. Conclusion: A limited-view multi-source QPAT scheme is proposed, i.e., partial-view acoustic data acquisition accompanying each optical illumination, followed by simultaneous rotation of both optical sources and ultrasonic detectors for the next optical illumination. Moreover, LBFGS and ADMM algorithms are developed for the direct reconstruction of optical coefficients from the
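
    Gradient-based reconstruction of the kind described above typically wraps a forward model and its adjoint-computed gradient in an off-the-shelf L-BFGS solver. The toy sketch below fits coefficients of a linear forward model with SciPy's L-BFGS-B, a stand-in for the coupled photoacoustic model, which is far more involved; all problem sizes here are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
F = rng.normal(size=(200, 50))       # stand-in linear forward operator
mu_true = rng.uniform(0.1, 1.0, 50)  # "optical coefficients" to recover
data = F @ mu_true

def objective(mu):
    r = F @ mu - data
    # Gradient via the adjoint of the forward operator (here simply F^T).
    return 0.5 * float(r @ r), F.T @ r

res = minimize(objective, x0=np.full(50, 0.5), jac=True,
               method="L-BFGS-B", bounds=[(0.0, None)] * 50)
print("max abs error:", np.abs(res.x - mu_true).max())
```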

  17. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Cannon, S.D.; Finch, S.M.

    1992-10-01

    The objective of the Hanford Environmental Dose Reconstruction (HEDR) Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed from release to impact on humans (dose estimates): Source Terms; Environmental Transport; Environmental Monitoring Data; Demography, Food Consumption, and Agriculture; and Environmental Pathways and Dose Estimates.

  18. Breast reconstruction - natural tissue

    Science.gov (United States)

    ... flap; TRAM; Latissimus muscle flap with a breast implant; DIEP flap; DIEAP flap; Gluteal free flap; Transverse upper gracilis flap; TUG; Mastectomy - breast reconstruction with natural tissue; Breast cancer - breast reconstruction with natural tissue

  19. Breast reconstruction after mastectomy

    Directory of Open Access Journals (Sweden)

    Daniel eSchmauss

    2016-01-01

    Breast cancer is the leading cause of cancer death in women worldwide. Its surgical approach has become less and less mutilating in the last decades. However, the overall number of breast reconstructions has significantly increased lately. Nowadays breast reconstruction should be highly individualized, first of all taking into consideration oncological aspects of the tumor, neo-/adjuvant treatment and genetic predisposition, but also its timing (immediate versus delayed breast reconstruction), as well as the patient's condition and wishes. This article gives an overview of the various possibilities of breast reconstruction, including implant- and expander-based reconstruction, flap-based reconstruction (vascularized autologous tissue), the combination of implant and flap, reconstruction using non-vascularized autologous fat, as well as refinement surgery after breast reconstruction.

  20. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    Science.gov (United States)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems, developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson's equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
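
    For orientation, Tikhonov regularization of a discrete ill-posed source reconstruction problem amounts to a damped least-squares solve. The NumPy sketch below shows this baseline, which the paper argues its stabilized finite element method improves on; the smoothing operator and regularization weights are hypothetical.

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha * ||x||^2 via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned forward operator (smoothing kernel).
n = 80
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.005) / n
x_true = np.sin(2 * np.pi * t)
b = A @ x_true + 1e-4 * np.random.default_rng(0).normal(size=n)

for alpha in (1e-8, 1e-5, 1e-2):
    err = np.linalg.norm(tikhonov(A, b, alpha) - x_true) / np.linalg.norm(x_true)
    print(f"alpha={alpha:.0e}  relative error={err:.3f}")
```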

  1. Environmental problems due to mining in Jharia Coalfield

    Energy Technology Data Exchange (ETDEWEB)

    Banerjee, S.P.

    1985-06-01

    The Jharia coalfield to the NW of Calcutta, India is the most important source of coking coal in the country. Coal mining started in 1890; in 1971-3 the coking coal mines were nationalised, the remainder being operated by the Bharat Coking Coal Ltd., TISCO and IISCO. Intensive mining has resulted in major environmental problems - land damage, water pollution, air pollution and overpopulation - which a major reconstruction programme started in 1976 hopes to solve. 2 references.

  2. Reconstruction of driving forces through recurrence plots

    International Nuclear Information System (INIS)

    Tanio, Masaaki; Hirata, Yoshito; Suzuki, Hideyuki

    2009-01-01

    We consider the problem of reconstructing one-dimensional driving forces only from the observations of driven systems. We extend the approach presented in a seminal paper [M.C. Casdagli, Physica D 108 (1997) 12] and propose a method that is robust and has wider applicability. By reinterpreting the work of Thiel et al. [M. Thiel, M.C. Romano, J. Kurths, Phys. Lett. A 330 (2004) 343], we formulate the reconstruction problem as a combinatorial optimization problem and relax conditions by assuming that a driving force is continuous. The method is demonstrated by using a tent map driven by an external force.
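
    A recurrence plot, the object this reconstruction is built on, is just a thresholded pairwise-distance matrix of the observed trajectory. The sketch below computes one for a driven tent map in NumPy; the threshold choice, embedding and drive signal are illustrative, not the paper's settings.

```python
import numpy as np

N = 500
drive = 0.1 * np.sin(2 * np.pi * np.arange(N) / 120)   # slow external force

# Tent map driven by an additive external force.
x = np.empty(N)
x[0] = 0.3
for t in range(N - 1):
    x[t + 1] = (1.0 - 2.0 * abs(x[t] - 0.5)) + drive[t]

# Delay embedding of the observed series (dimension 2, lag 1).
emb = np.column_stack([x[:-1], x[1:]])

# Recurrence plot: R[i, j] = 1 if states i and j are closer than eps.
dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
eps = np.quantile(dists, 0.1)          # threshold at the 10% quantile
R = (dists < eps).astype(np.uint8)
print("recurrence rate:", R.mean())
```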

  3. New vertex reconstruction algorithms for CMS

    CERN Document Server

    Frühwirth, R; Prokofiev, Kirill; Speer, T.; Vanlaer, P.; Chabanat, E.; Estre, N.

    2003-01-01

    The reconstruction of interaction vertices can be decomposed into a pattern recognition problem ("vertex finding") and a statistical problem ("vertex fitting"). We briefly review classical methods. We introduce novel approaches and motivate them in the framework of high-luminosity experiments like those at the LHC. We then show comparisons with the classical methods in relevant physics channels.

  4. The transverse musculocutaneous gracilis flap for breast reconstruction: guidelines for flap and patient selection.

    Science.gov (United States)

    Schoeller, Thomas; Huemer, Georg M; Wechselberger, Gottfried

    2008-07-01

    The transverse musculocutaneous gracilis (TMG) flap has received little attention in the literature as a valuable alternative source of donor tissue in the setting of breast reconstruction. The authors give an in-depth review of their experience with breast reconstruction using the TMG flap. A retrospective review of 111 patients treated with a TMG flap for breast reconstruction in an immediate or a delayed setting between August of 2002 and July of 2007 was undertaken. Of these, 26 patients underwent bilateral reconstruction, 68 underwent unilateral reconstruction, and 17 patients underwent unilateral reconstruction with a double TMG flap. Patient age ranged between 24 and 65 years (mean, 37 years). Twelve patients had to be taken back to the operating room because of flap-related problems; nine of these underwent successful microsurgical revision, resulting in three complete flap losses in a series of 111 patients with 154 transplanted TMG flaps. Partial flap loss was encountered in two patients, whereas fat tissue necrosis was managed conservatively in six patients. Donor-site morbidity was low, an advantage of this flap, with a concealed scar and minimal contour irregularities of the thigh, even in unilateral harvest. Complications included delayed wound healing (n = 10), hematoma (n = 5), and transient sensory deficit over the posterior thigh (n = 49). The TMG flap is more than an alternative to the deep inferior epigastric perforator (DIEP) flap in microsurgical breast reconstruction in selected patients. In certain indications, such as bilateral reconstructions, it possibly surpasses the DIEP flap because of a better concealed donor scar and easier harvest.

  5. Early anterior cruciate ligament reconstruction can save meniscus without any complications

    Directory of Open Access Journals (Sweden)

    Chang-Ik Hur

    2017-01-01

    Conclusions: Early ACL reconstruction had excellent clinical results and stability as good as delayed reconstruction, without problems of knee motion, muscle power, or postural control. Moreover, early reconstruction showed a high likelihood that the meniscus could be repaired. Therefore, early ACL reconstruction should be recommended.

  6. Three-dimension reconstruction based on spatial light modulator

    International Nuclear Information System (INIS)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu

    2011-01-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems we encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, the feature points can be selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper a new three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from grayscale values at different sample points. Moreover, simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.

  7. Three-dimension reconstruction based on spatial light modulator

    Energy Technology Data Exchange (ETDEWEB)

    Deng Xuejiao; Zhang Nanyang; Zeng Yanan; Yin Shiliang; Wang Weiyu, E-mail: daisydelring@yahoo.com.cn [Huazhong University of Science and Technology (China)

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems we encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, the feature points can be selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper a new three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from grayscale values at different sample points. Moreover, simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.

  8. Three-dimension reconstruction based on spatial light modulator

    Science.gov (United States)

    Deng, Xuejiao; Zhang, Nanyang; Zeng, Yanan; Yin, Shiliang; Wang, Weiyu

    2011-02-01

    Three-dimensional reconstruction, an important research direction in computer graphics, is widely used in related fields such as industrial design and manufacturing, construction, aerospace, and biology. With this technology we can obtain a three-dimensional digital point cloud from two-dimensional images and then simulate the three-dimensional structure of the physical object for further study. At present, three-dimensional digital point cloud data are mainly obtained with adaptive optics systems using a Shack-Hartmann sensor and phase-shifting digital holography. For surface fitting, many methods are available, such as the iterated discrete Fourier transform, convolution and image interpolation, and linear phase retrieval. The main problems we encountered in three-dimensional reconstruction are the extraction of feature points and the curve-fitting arithmetic. To solve these problems we can, first of all, calculate the surface normal vector of each pixel in the light source coordinate system; these vectors are then converted to image coordinates through a coordinate transformation, which yields the expected 3D point cloud. Secondly, after de-noising and repair, the feature points can be selected and fitted by means of Zernike polynomials to obtain the fitting function of the surface topography, so as to reconstruct the object's three-dimensional topography. In this paper a new three-dimensional reconstruction algorithm is proposed, with the assistance of which the topography can be estimated from grayscale values at different sample points. Moreover, simulation and experimental results show that the new algorithm has a strong fitting capability, especially for large-scale objects.
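
    The Zernike-polynomial surface fit mentioned in the three records above reduces to a linear least-squares problem once the basis is evaluated at the sample points. Below is a minimal NumPy sketch using only the first few Zernike modes (piston, tilts, defocus) written out explicitly; the sampled surface is synthetic and the small mode set is a simplification of what a full implementation would use.

```python
import numpy as np

def zernike_basis(x, y):
    """First four Zernike modes on the unit disk, evaluated at (x, y):
    piston, x-tilt, y-tilt, defocus."""
    r2 = x**2 + y**2
    return np.column_stack([np.ones_like(x), x, y, 2.0 * r2 - 1.0])

# Synthetic height samples on the unit disk.
rng = np.random.default_rng(0)
x, y = rng.uniform(-0.7, 0.7, (2, 400))
z = (0.5 - 0.2 * x + 0.1 * y + 0.3 * (2 * (x**2 + y**2) - 1)
     + 0.01 * rng.normal(size=400))

# Least-squares fit of the Zernike coefficients.
B = zernike_basis(x, y)
coeffs, *_ = np.linalg.lstsq(B, z, rcond=None)
print("recovered coefficients:", np.round(coeffs, 3))   # ~[0.5, -0.2, 0.1, 0.3]
```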

  9. Third-generation dual-source CT of the neck using automated tube voltage adaptation in combination with advanced modeled iterative reconstruction: evaluation of image quality and radiation dose

    International Nuclear Information System (INIS)

    Scholtz, Jan-Erik; Wichmann, Julian L.; Huesers, Kristina; Albrecht, Moritz H.; Beeres, Martin; Bauer, Ralf W.; Vogl, Thomas J.; Bodelle, Boris

    2016-01-01

    To evaluate image quality and radiation dose in third-generation dual-source computed tomography (DSCT) of the neck using automated tube voltage adaptation (TVA) with the advanced modelled iterative reconstruction (ADMIRE) algorithm. One hundred and sixteen patients were retrospectively evaluated. Group A (n = 59) was examined on second-generation DSCT with automated TVA and filtered back projection. Group B (n = 57) was examined on a third-generation DSCT with automated TVA and ADMIRE. Age, body diameter, attenuation of several anatomic structures, noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), radiation dose (CTDIvol) and size-specific dose estimates (SSDE) were assessed. Diagnostic acceptability was rated by three readers. Age (p = 0.87) and body diameter (p = 0.075) did not differ significantly. Tube voltage in Group A was set automatically to 100 kV for all patients (n = 59), and to 70 kV (n = 2), 80 kV (n = 5), and 90 kV (n = 50) in Group B. Noise was reduced and CNR was increased significantly (p < 0.001). Diagnostic acceptability was rated high in both groups, with better ratings in Group B (p < 0.001). SSDE was reduced by 34 % in Group B (20.38 ± 1.63 mGy vs. 13.04 ± 1.50 mGy, p < 0.001). The combination of automated TVA and ADMIRE in neck CT using third-generation DSCT results in a substantial radiation dose reduction with low noise and increased CNR. (orig.)

  10. Vector tomography for reconstructing electric fields with non-zero divergence in bounded domains

    Energy Technology Data Exchange (ETDEWEB)

    Koulouri, Alexandra, E-mail: koulouri@uni-muenster.de [Institute for Computational and Applied Mathematics, University of Münster, Einsteinstrasse 62, D-48149 Münster (Germany); Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Brookes, Mike [Department of Electrical and Electronic Engineering, Imperial College London, Exhibition Road, London SW7 2BT (United Kingdom); Rimpiläinen, Ville [Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, D-48149 Münster (Germany); Department of Mathematics, University of Auckland, Private bag 92019, Auckland 1142 (New Zealand)

    2017-01-15

    In vector tomography (VT), the aim is to reconstruct an unknown multi-dimensional vector field using line integral data. In the case of a 2-dimensional VT, two types of line integral data are usually required. These data correspond to integration of the parallel and perpendicular projection of the vector field along the integration lines and are called the longitudinal and transverse measurements, respectively. In most cases, however, the transverse measurements cannot be physically acquired. Therefore, the VT methods are typically used to reconstruct divergence-free (or source-free) velocity and flow fields that can be reconstructed solely from the longitudinal measurements. In this paper, we show how vector fields with non-zero divergence in a bounded domain can also be reconstructed from the longitudinal measurements without the need of explicitly evaluating the transverse measurements. To the best of our knowledge, VT has not previously been used for this purpose. In particular, we study low-frequency, time-harmonic electric fields generated by dipole sources in convex bounded domains which arise, for example, in electroencephalography (EEG) source imaging. We explain in detail the theoretical background, the derivation of the electric field inverse problem and the numerical approximation of the line integrals. We show that fields with non-zero divergence can be reconstructed from the longitudinal measurements with the help of two sparsity constraints that are constructed from the transverse measurements and the vector Laplace operator. As a comparison to EEG source imaging, we note that VT does not require mathematical modeling of the sources. By numerical simulations, we show that the pattern of the electric field can be correctly estimated using VT and the location of the source activity can be determined accurately from the reconstructed magnitudes of the field. - Highlights: • Vector tomography is used to reconstruct electric fields generated by dipole

  11. Photoacoustic image reconstruction via deep learning

    Science.gov (United States)

    Antholzer, Stephan; Haltmeier, Markus; Nuster, Robert; Schwab, Johannes

    2018-02-01

    Applying standard algorithms to sparse data problems in photoacoustic tomography (PAT) yields low-quality images containing severe under-sampling artifacts. To some extent, these artifacts can be reduced by iterative image reconstruction algorithms, which allow the inclusion of prior knowledge such as smoothness, total variation (TV) or sparsity constraints. These algorithms tend to be time consuming as the forward and adjoint problems have to be solved repeatedly. Further, iterative algorithms have additional drawbacks. For example, the reconstruction quality strongly depends on a-priori model assumptions about the objects to be recovered, which are often not strictly satisfied in practical applications. To overcome these issues, in this paper we develop direct and efficient reconstruction algorithms based on deep learning. As opposed to iterative algorithms, we apply a convolutional neural network whose parameters are trained before the reconstruction process on a set of training data. For actual image reconstruction, a single evaluation of the trained network yields the desired result. Our numerical results (using two different network architectures) demonstrate that the proposed deep learning approach reconstructs images with a quality comparable to state-of-the-art iterative reconstruction methods.
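
    The post-processing flavor of such an approach, a small convolutional network mapping an artifact-laden reconstruction to a cleaned image, can be sketched in a few lines of PyTorch. This is a generic illustration, not either of the paper's two architectures; the layer widths, L2 training loss, and random stand-in data are arbitrary choices.

```python
import torch
import torch.nn as nn

# Small CNN that maps an artifact-laden reconstruction to a cleaned image.
model = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Hypothetical training pairs: sparse-data reconstructions vs. references.
noisy = torch.randn(8, 1, 64, 64)      # stand-in artifact images
clean = torch.randn(8, 1, 64, 64)      # stand-in ground-truth images

for step in range(100):                 # single-batch toy training loop
    opt.zero_grad()
    loss = loss_fn(model(noisy), clean)
    loss.backward()
    opt.step()

# At reconstruction time, one forward pass yields the output image.
with torch.no_grad():
    result = model(noisy[:1])
```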

  12. Computational modeling for the angular reconstruction of monoenergetic neutron flux in non-multiplying slabs using synthetic diffusion approximation

    International Nuclear Information System (INIS)

    Mansur, Ralph S.; Barros, Ricardo C.

    2011-01-01

    We describe a method to determine the neutron scalar flux in a slab using a monoenergetic diffusion model. To achieve this goal we used three ingredients in the computational code that we developed on the Scilab platform: a spectral nodal method that generates a numerical solution for the one-speed slab-geometry fixed-source diffusion problem with no spatial truncation errors; a spatial reconstruction scheme to yield a detailed profile of the coarse-mesh solution; and an angular reconstruction scheme to yield, approximately, the neutron angular flux profile at a given location of the slab migrating in a given direction. Numerical results are given to illustrate the efficiency of the code. (author)

  13. Labral reconstruction: when to perform and how

    Directory of Open Access Journals (Sweden)

    Brian J White

    2015-07-01

    Over the past decade, the understanding of the anatomy and function of the hip joint has continuously evolved, and surgical treatment options for the hip have significantly progressed. Originally, surgical treatment of the hip primarily involved resection of damaged tissue. Procedures that maintain and preserve proper hip anatomy, such as labral repair and femoroacetabular impingement (FAI) correction, have shown superior results in terms of pain reduction, increased function, and the ability to return to activities. Labral reconstruction is a treatment option that uses a graft to reconstruct the native labrum. The technique and outcomes of labral reconstruction have been described relatively recently, and labral reconstruction is a cutting-edge procedure that has shown promising early outcomes. The aim of this article is to review the current literature on hip labral reconstruction. We review the indications for labral reconstruction, surgical technique and graft options, and the surgical outcomes that have been described to date. Labral reconstruction provides an alternative treatment option for challenging intra-articular hip problems. It restores the original anatomy of the hip and has the potential to preserve the longevity of the hip joint. This technique is an important tool in the orthopaedic surgeon's arsenal for hip joint treatment and preservation.

  14. Compressed Sensing, Pseudodictionary-Based, Superresolution Reconstruction

    Directory of Open Access Journals (Sweden)

    Chun-mei Li

    2016-01-01

    The spatial resolution of digital images is a critical factor that affects photogrammetric precision. Single-frame superresolution image reconstruction is a typical underdetermined inverse problem. To solve this type of problem, a compressive sensing, pseudodictionary-based superresolution reconstruction method is proposed in this study. The proposed method achieves pseudodictionary learning with an available low-resolution image, using the K-SVD algorithm, which exploits the sparse characteristics of the digital image. Then, the sparse representation coefficients of the low-resolution image are obtained by solving an l0-norm minimization problem, and the sparse coefficients and the high-resolution pseudodictionary are used to reconstruct image tiles with high resolution. Finally, single-frame image superresolution reconstruction is achieved. The proposed method is applied to photogrammetric images, and the experimental results indicate that the proposed method effectively increases image resolution and information content, and achieves superresolution reconstruction. The reconstructed results are better than those obtained from traditional interpolation methods in terms of visual effects and quantitative indicators.
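
    The dictionary-learning-plus-sparse-coding core of such methods can be sketched with scikit-learn, which ships a dictionary learning estimator and orthogonal matching pursuit for the sparse coding step. This is a generic stand-in (scikit-learn does not implement K-SVD itself, and OMP is a greedy surrogate for the l0 problem); the patch data, dictionary size and sparsity level are arbitrary.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import orthogonal_mp

rng = np.random.default_rng(0)
patches = rng.normal(size=(1000, 64))     # stand-in 8x8 image patches, flattened

# Learn an overcomplete dictionary from the patches.
dico = MiniBatchDictionaryLearning(n_components=128, alpha=1.0, random_state=0)
dico.fit(patches)
D = dico.components_.T                    # (64, 128) dictionary, atoms as columns

# Sparse-code a patch with orthogonal matching pursuit (an l0 surrogate).
y = patches[0]
code = orthogonal_mp(D, y, n_nonzero_coefs=5)
reconstruction = D @ code
print("sparse coefficients used:", np.count_nonzero(code))
```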

  15. CURRENT CONCEPTS IN ACL RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Freddie H. Fu

    2008-09-01

    Current Concepts in ACL Reconstruction is a complete reference text composed of the most thorough collection of topics on the ACL and its surgical reconstruction yet compiled, with contributions from some of the world's experts and most experienced ACL surgeons. Various procedures mentioned throughout the text are also demonstrated in an accompanying video CD-ROM. PURPOSE: Composing a single, comprehensive and complete information source on the ACL, including basic sciences, clinical issues, the latest concepts and surgical techniques, from evaluation to outcome and from history to future, the editors and contributors aim to keep the audience abreast of the latest concepts and techniques for the evaluation and treatment of ACL injuries. FEATURES: The text is composed of 27 chapters in 6 sections. The first section mostly covers basic sciences, along with the history of the ACL, imaging, and the clinical approach to adolescent and pediatric patients. The second section covers graft choices and arthroscopy portals for ACL reconstruction. The third section covers the technique and outcome of single-bundle ACL reconstruction. The fourth section covers the techniques and outcome of double-bundle ACL reconstruction. The fifth section covers revision, navigation technology, rehabilitation and the evaluation of the outcome of ACL reconstruction. The sixth and last section looks at future advances: What We Have Learned and the Future of ACL Reconstruction. AUDIENCE: Orthopedic residents, sports traumatology and knee surgery fellows, orthopedic surgeons, and also scientists in basic sciences or clinicians studying or planning research on the ACL form the audience of this book. ASSESSMENT: This is the latest, most complete and comprehensive textbook of ACL reconstruction, produced by the editorial work of two pioneers and masters, "Freddie H. Fu MD and Steven B. Cohen MD", with the contribution of world

  16. Increasing efficiency of reconstruction and technological development of coking enterprises

    Energy Technology Data Exchange (ETDEWEB)

    Rozenfel' d, M.S.; Martynenko, V.M.; Tytyuk, Yu.A.; Ivanov, V.V.; Svyatogorov, A.A.; Kolomiets, A.F. (NIISP, Voroshilovgrad (USSR))

    1989-07-01

    Discusses problems associated with reconstruction of coking plants in the USSR. Planning coking plant reconstruction is analyzed. Duration of individual stages of plant reconstruction is considered. A method developed by the Giprokoks research institute for calculating reconstruction time considering duration of individual stages of coke oven battery repair is analyzed: construction of storage facilities, transport of materials and equipment, safety requirements, coke oven cooling, dismantling, construction of coke oven walls, installation of machines and equipment. Advantages of using the methods for analysis of coke oven battery reconstruction and optimization of repair time are discussed.

  17. Multi-sheet surface rebinning methods for reconstruction from asymmetrically truncated cone beam projections: I. Approximation and optimality

    International Nuclear Information System (INIS)

    Betcke, Marta M; Lionheart, William R B

    2013-01-01

    The mechanical motion of the gantry in conventional cone beam CT scanners restricts the speed of data acquisition in applications with near real time requirements. A possible resolution of this problem is to replace the moving source detector assembly with static parts that are electronically activated. An example of such a system is the Rapiscan Systems RTT80 real time tomography scanner, with a static ring of sources and axially offset static cylinder of detectors. A consequence of such a design is asymmetrical axial truncation of the cone beam projections resulting, in the sense of integral geometry, in severely incomplete data. In particular we collect data only in a fraction of the Tam–Danielsson window, hence the standard cone beam reconstruction techniques do not apply. In this work we propose a family of multi-sheet surface rebinning methods for reconstruction from such truncated projections. The proposed methods combine analytical and numerical ideas utilizing linearity of the ray transform to reconstruct data on multi-sheet surfaces, from which the volumetric image is obtained through deconvolution. In this first paper in the series, we discuss the rebinning to multi-sheet surfaces. In particular we concentrate on the underlying transforms on multi-sheet surfaces and their approximation with data collected by offset multi-source scanning geometries like the RTT. The optimal multi-sheet surface and the corresponding rebinning function are found as a solution of a variational problem. In the case of the quadratic objective, the variational problem for the optimal rebinning pair can be solved by a globally convergent iteration. Examples of optimal rebinning pairs are computed for different trajectories. We formulate the axial deconvolution problem for the recovery of the volumetric image from the reconstructions on multi-sheet surfaces. Efficient and stable solution of the deconvolution problem is the subject of the second paper in this series (Betcke and

  18. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom; Femiani, John; Wonka, Peter; Mitra, Niloy J.

    2017-01-01

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  19. BigSUR: large-scale structured urban reconstruction

    KAUST Repository

    Kelly, Tom

    2017-11-22

    The creation of high-quality semantically parsed 3D models for dense metropolitan areas is a fundamental urban modeling problem. Although recent advances in acquisition techniques and processing algorithms have resulted in large-scale imagery or 3D polygonal reconstructions, such data-sources are typically noisy, and incomplete, with no semantic structure. In this paper, we present an automatic data fusion technique that produces high-quality structured models of city blocks. From coarse polygonal meshes, street-level imagery, and GIS footprints, we formulate a binary integer program that globally balances sources of error to produce semantically parsed mass models with associated facade elements. We demonstrate our system on four city regions of varying complexity; our examples typically contain densely built urban blocks spanning hundreds of buildings. In our largest example, we produce a structured model of 37 city blocks spanning a total of 1,011 buildings at a scale and quality previously impossible to achieve automatically.

  20. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei

    2016-09-16

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  1. Manhattan-World Urban Reconstruction from Point Clouds

    KAUST Repository

    Li, Minglei; Wonka, Peter; Nan, Liangliang

    2016-01-01

    Manhattan-world urban scenes are common in the real world. We propose a fully automatic approach for reconstructing such scenes from 3D point samples. Our key idea is to represent the geometry of the buildings in the scene using a set of well-aligned boxes. We first extract plane hypotheses from the points, followed by an iterative refinement step. Then, candidate boxes are obtained by partitioning the space of the point cloud into a non-uniform grid. After that, we choose an optimal subset of the candidate boxes to approximate the geometry of the buildings. The contribution of our work is that we transform scene reconstruction into a labeling problem that is solved based on a novel Markov Random Field formulation. Unlike previous methods designed for particular types of input point clouds, our method can obtain faithful reconstructions from a variety of data sources. Experiments demonstrate that our method is superior to state-of-the-art methods. © Springer International Publishing AG 2016.

  2. Reconstruction and visualization of nanoparticle composites by transmission electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Wang, X.Y. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Lockwood, R. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Malac, M., E-mail: marek.malac@nrc-cnrc.gc.ca [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada); Department of Physics, University of Alberta, Edmonton, Canada T6G 2G7 (Canada); Furukawa, H. [SYSTEM IN FRONTIER INC., 2-8-3, Shinsuzuharu bldg. 4F, Akebono-cho, Tachikawa-shi, Tokyo 190-0012 (Japan); Li, P.; Meldrum, A. [National Institute for Nanotechnology, 11421 Saskatchewan Drive, Edmonton, Canada T6H 2M9 (Canada)

    2012-02-15

    This paper examines the limits of transmission electron tomography reconstruction methods for a nanocomposite object composed of many closely packed nanoparticles. Two commonly used reconstruction methods in TEM tomography were examined and compared, and the sources of various artefacts were explored. Common visualization methods were investigated, and the resulting 'interpretation artefacts' (i.e., deviations from 'actual' particle sizes and shapes arising from the visualization) were determined. Setting a known or estimated nanoparticle volume fraction as a criterion for thresholding does not in fact give a good visualization. Unexpected effects associated with common built-in image filtering methods were also found. Ultimately, this work set out to establish the common problems and pitfalls associated with electron beam tomographic reconstruction and visualization of samples consisting of closely spaced nanoparticles. -- Highlights: • Electron tomography limits were explored by both experiment and simulation. • Reliable quantitative volumetry using electron tomography is not presently feasible. • Volume rendering appears to be the better choice for visualization of composite samples.

  3. Cooperation of the member states of the SEV in the field of energy, fuel and raw material resources, and the problems of developing of new sources of energy

    Energy Technology Data Exchange (ETDEWEB)

    Kapol' i, L

    1980-01-01

    On the basis of the agreement concerning the creation of a combined organization for conducting geological prospecting operations for petroleum and gas in the Baltic Sea, in the area of the continental shelf and the floor of the territorial waters of the signatory states of East Germany, Poland, and the USSR, which was signed in 1975, the Petrolbaltik organization was created. A long-term program of cooperation in the areas of energy, fuel, and raw materials foresees that the SEV member states will carry out prospective scientific developments on the use of new sources of energy, including solar, wind, chemical and geothermal forms of energy. Forty-seven scientific and technical organizations of the SEV member states are working, under the leadership of the coordination center for the problem "New methods of coal use", on the industrial use of the by-products of the extraction and enrichment of coal, the methods of their coking, and their liquefaction and gasification. The technique of producing alumina and cement from the ash of energy systems working on coal, as well as from coal heaps, is being successfully applied in Hungary.

  4. Dynamic dual-tracer PET reconstruction.

    Science.gov (United States)

    Gao, Fei; Liu, Huafeng; Jian, Yiqiang; Shi, Pengcheng

    2009-01-01

    Although it has important medical implications, simultaneous dual-tracer positron emission tomography reconstruction remains a challenging problem, primarily because the photon measurements from the two tracers overlap. In this paper, we propose a simultaneous dynamic dual-tracer reconstruction of tissue activity maps guided by tracer kinetics. The dual-tracer reconstruction problem is formulated in a state-space representation, where parallel compartment models serve as the continuous-time system equation describing the tracer kinetic processes of the two tracers, and the imaging data are expressed as discrete samplings of the system states in the measurement equation. The image reconstruction problem thereby becomes a state estimation problem in a continuous-discrete hybrid paradigm, and H-infinity filtering is adopted as the estimation strategy. As H-infinity filtering makes no assumptions on the system and measurement statistics, robust reconstruction results can be obtained for the dual-tracer PET imaging system, where the statistical properties of the measurement data and the system uncertainty are not available a priori, even when there are disturbances in the kinetic parameters. Experimental results on digital phantoms, Monte Carlo simulations and physical phantoms have demonstrated superior performance.
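
    To make the state-space formulation concrete, the sketch below runs a plain Kalman filter on a toy linear-Gaussian state-space model with an overlapped measurement of two states. This is a deliberately simplified stand-in for the paper's H-infinity filter, which avoids the Gaussian assumptions made here; all matrices and noise levels are hypothetical.

```python
import numpy as np

# Toy linear state-space model: x_{k+1} = F x_k + w,  y_k = H x_k + v.
F = np.array([[0.95, 0.05], [0.02, 0.90]])   # stand-in kinetic transitions
H = np.array([[1.0, 1.0]])                    # overlapped measurement of both states
Q = 1e-4 * np.eye(2)                          # process noise covariance
R = np.array([[1e-2]])                        # measurement noise covariance

rng = np.random.default_rng(0)
x_true = np.array([1.0, 0.5])
x_est, P = np.zeros(2), np.eye(2)

for k in range(200):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(1), R)
    # Predict step.
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + K @ (y - H @ x_est)
    P = (np.eye(2) - K @ H) @ P

print("true:", np.round(x_true, 3), "estimated:", np.round(x_est, 3))
```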

  5. Genital reconstruction in exstrophy patients

    Directory of Open Access Journals (Sweden)

    R B Nerli

    2012-01-01

    Introduction: Surgery for bladder exstrophy has been evolving over the last four to five decades. Because survival has become almost universal, the focus in the exstrophy-epispadias complex has shifted to improving quality of life. The most prevalent problem in the long-term function of exstrophy patients is the sexual activity of adolescent and adult males. The penis in exstrophy patients appears short because of a marked congenital deficiency of anterior corporal tissue. Many patients present for genital reconstruction to improve cosmesis as well as to correct chordee. We report our series of male patients seeking genital reconstruction following exstrophy repair in the past. Materials and Methods: Fourteen adolescent/adult male patients attended urology services during the period January 2000-December 2009 seeking genital reconstruction following exstrophy repair in the past. Results: Three patients underwent epispadias repair, four patients had chordee correction with cosmetic excision of skin tags and seven patients underwent chordee correction with penile lengthening. All patients reported satisfaction in the answered questionnaire. Patients undergoing penile lengthening by partial corporal dissection achieved a mean increase in length of 1.614 ± 0.279 cm dorsally and 1.543 ± 0.230 cm ventrally. The satisfaction rate assessed by the Short Form-36 (SF-36) showed that, irrespective of the different genital reconstructive procedures done, the patients were satisfied with the cosmetic and functional outcome. Conclusions: Surgical procedures have transformed the management of patients with bladder exstrophy. Bladders can be safely placed within the pelvis, with most patients achieving urinary continence and cosmetically acceptable external genitalia. Genital reconstruction in the form of correction of chordee, excision of ugly skin tags and lengthening of the penis can be performed to give the patients a satisfactory cosmetic and functional

  6. Analysis of reproducibility of the single photon tomography reconstruction by the method of singular value decomposition

    International Nuclear Information System (INIS)

    Devaux, J.Y.; Mazelier, L.; Lefkopoulos, D.

    1997-01-01

    We have earlier shown that the method of singular value decomposition (SVD) allows image reconstruction in single-photon tomography with higher precision than the classical method of filtered back-projection. Establishing an elementary response matrix that incorporates the photon attenuation phenomenon, the scattering, the translation non-invariance principle and the detector response makes it possible to take into account all the physical parameters of acquisition. By a non-consecutive, optimized truncation of the singular values we obtained a significant improvement in the regularization of the bad conditioning of this problem. The present study aims at verifying the stability of this truncation under modifications of the acquisition conditions. Two series of parameters were tested: first, those modifying the geometry of acquisition (the influence of the rotation center, the asymmetric disposition of the elementary-volume sources with respect to the detector, and the precision of the rotation angle), and secondly, those affecting the correspondence between the matrix and the space to be reconstructed (the partial volume effect and noise propagation in the experimental model). For the parameters that introduce a spatial distortion, the alteration of the reconstruction was, as expected, comparable to that observed with the classical reconstruction and proportional to the amplitude of the shift from the nominal value. In contrast, for the partial volume and noise effects, the study of the truncation signature revealed a variation in the optimal choice of the retained singular values but no effect on the global precision of the reconstruction.
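
    As a rough illustration of the truncation idea, the following numpy sketch reconstructs a toy linear system by zeroing out a (possibly non-consecutive) subset of singular values; the response matrix and the selection threshold are illustrative stand-ins for the paper's optimized truncation.

```python
# Reconstruction by truncating the singular value spectrum of the system
# matrix; the boolean mask below need not keep a consecutive run of
# singular values. A, x_true and the threshold are illustrative.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((80, 60))            # toy elementary response matrix
x_true = np.zeros(60); x_true[10:20] = 1.0
b = A @ x_true + 0.01 * rng.standard_normal(80)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
keep = s > 0.05 * s.max()                    # selection mask (non-consecutive OK)
s_inv = np.where(keep, 1.0 / s, 0.0)         # discard ill-conditioned components
x_rec = Vt.T @ (s_inv * (U.T @ b))           # truncated pseudo-inverse solution
print(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```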

  7. An Approximate Cone Beam Reconstruction Algorithm for Gantry-Tilted CT Using Tangential Filtering

    Directory of Open Access Journals (Sweden)

    Ming Yan

    2006-01-01

    Full Text Available The FDK algorithm is a well-known 3D (three-dimensional) approximate algorithm for CT (computed tomography) image reconstruction and is also known to suffer from considerable artifacts when the scanning cone angle is large. Recently, it has been improved by performing the ramp filtering along the tangential direction of the X-ray source helix to deal with the large cone angle problem. In this paper, we present an FDK-type approximate reconstruction algorithm for gantry-tilted CT imaging. The proposed method improves the image reconstruction by filtering the projection data along a proper direction determined by the CT parameters and the gantry tilt angle. As a result, the proposed algorithm for gantry-tilted CT reconstruction provides more scanning flexibility in clinical CT scanning and is computationally efficient. The performance of the proposed algorithm is evaluated with the Turbell clock phantom and a thorax phantom and compared with the FDK algorithm and a popular 2D (two-dimensional) approximate algorithm. The results show that the proposed algorithm achieves better image quality for gantry-tilted CT image reconstruction.
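
    The core filtering step of FDK-type methods is a 1D ramp filter applied to each projection row before weighted backprojection. A minimal frequency-domain version along a generic detector axis is sketched below; resampling the rows along a tilted (tangential) direction, as the paper proposes, is omitted.

```python
# 1D ramp filtering of projection rows in the frequency domain, the core
# filtering step of FDK-type reconstruction. An ideal |frequency| ramp with
# no apodization window is assumed.
import numpy as np

def ramp_filter(projections):
    """Apply a ramp filter along the last axis (the detector rows)."""
    n = projections.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n))         # |frequency| in cycles per sample
    spectrum = np.fft.fft(projections, axis=-1)
    return np.real(np.fft.ifft(spectrum * ramp, axis=-1))

proj = np.random.default_rng(2).random((180, 256))   # 180 views, 256 bins
print(ramp_filter(proj).shape)
```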

  8. 3-D Reconstruction From Satellite Images

    DEFF Research Database (Denmark)

    Denver, Troelz

    1999-01-01

    of planetary surfaces, but other purposes are considered as well. The system performance is measured with respect to precision and time consumption. The reconstruction process is divided into four major areas: acquisition, calibration, matching/reconstruction and presentation. Each of these areas...... is treated individually. A detailed treatment of various lens distortions is required in order to correct for these problems. This subject is included in the acquisition part. In the calibration part, the perspective distortion is removed from the images. Most attention has been paid to the matching problem...

  9. Iterative methods for tomography problems: implementation to a cross-well tomography problem

    Science.gov (United States)

    Karadeniz, M. F.; Weber, G. W.

    2018-01-01

    The velocity distribution between two boreholes is reconstructed by cross-well tomography, which is commonly used in geology. In this paper, iterative methods, namely Kaczmarz’s algorithm, the algebraic reconstruction technique (ART), and the simultaneous iterative reconstruction technique (SIRT), are applied to a specific cross-well tomography problem. The convergence of these methods to the solution and their CPU times for the cross-well tomography problem are compared. Furthermore, the three methods are compared for different tolerance values.
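
    For reference, a minimal numpy sketch of Kaczmarz's row-action iteration, the building block of ART, is given below on synthetic data; the matrix is a random stand-in, not an actual cross-well travel-time geometry.

```python
# Kaczmarz's row-action iteration for A x = b: sweep over the equations,
# projecting the current iterate onto each row's hyperplane in turn.
# A is a random stand-in for a cross-well travel-time matrix.
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=1.0):
    x = np.zeros(A.shape[1])
    row_norms = np.einsum('ij,ij->i', A, A)  # squared row norms, precomputed
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            r = b[i] - A[i] @ x              # residual of equation i
            x += relax * (r / row_norms[i]) * A[i]
    return x

rng = np.random.default_rng(3)
A = rng.random((120, 100))
x_true = rng.random(100)
print(np.linalg.norm(kaczmarz(A, A @ x_true) - x_true))
```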

  10. Maxillary reconstruction: Current concepts and controversies

    Directory of Open Access Journals (Sweden)

    Subramania Iyer

    2014-01-01

    Full Text Available Maxillary reconstruction is still an evolving art when compared to reconstruction of the mandible. Apart from affecting the functions of speech, swallowing and mastication, defects of the maxilla also cause cosmetic disfigurement. Rehabilitation of form and function in patients with maxillary defects relies either on an obturator prosthesis or on surgical reconstruction. The literature abounds with a variety of reconstructive methods. The classification systems are also varied, with no universal acceptance of any one of them. The oncologic safety of these procedures is still debated, and conclusive evidence in this regard has not yet emerged. Management of the orbit has also not yet been properly addressed. Tissue engineering, which has been hyped as one of the possible solutions to this vexing reconstructive problem, has not yet produced reliable and reproducible results. This review article discusses the rationale and oncological safety of reconstructing maxillary defects, critically analyzes the classification systems, outlines the different reconstructive methods and touches upon the controversies in this subject. The management of the retained and exenterated orbit associated with maxillectomy is reviewed. The surgical morbidity, complications and recent advances in this field are also examined. An algorithm, based on our experience, is presented.

  11. Maxillary reconstruction: Current concepts and controversies

    Science.gov (United States)

    Iyer, Subramania; Thankappan, Krishnakumar

    2014-01-01

    Maxillary reconstruction is still an evolving art when compared to reconstruction of the mandible. Apart from affecting the functions of speech, swallowing and mastication, defects of the maxilla also cause cosmetic disfigurement. Rehabilitation of form and function in patients with maxillary defects relies either on an obturator prosthesis or on surgical reconstruction. The literature abounds with a variety of reconstructive methods. The classification systems are also varied, with no universal acceptance of any one of them. The oncologic safety of these procedures is still debated, and conclusive evidence in this regard has not yet emerged. Management of the orbit has also not yet been properly addressed. Tissue engineering, which has been hyped as one of the possible solutions to this vexing reconstructive problem, has not yet produced reliable and reproducible results. This review article discusses the rationale and oncological safety of reconstructing maxillary defects, critically analyzes the classification systems, outlines the different reconstructive methods and touches upon the controversies in this subject. The management of the retained and exenterated orbit associated with maxillectomy is reviewed. The surgical morbidity, complications and recent advances in this field are also examined. An algorithm, based on our experience, is presented. PMID:24987199

  12. Algorithms for biomagnetic source imaging with prior anatomical and physiological information

    Energy Technology Data Exchange (ETDEWEB)

    Hughett, Paul William [Univ. of California, Berkeley, CA (United States). Dept. of Electrical Engineering and Computer Sciences

    1995-12-01

    This dissertation derives a new method for estimating current source amplitudes in the brain and heart from external magnetic field measurements and prior knowledge about the probable source positions and amplitudes. The minimum mean square error estimator for the linear inverse problem with statistical prior information was derived and is called the optimal constrained linear inverse method (OCLIM). OCLIM includes as special cases the Shim-Cho weighted pseudoinverse and Wiener estimators but allows more general priors and thus reduces the reconstruction error. Efficient algorithms were developed to compute the OCLIM estimate for instantaneous or time series data. The method was tested in a simulated neuromagnetic imaging problem with five simultaneously active sources on a grid of 387 possible source locations; all five sources were resolved, even though the true sources were not exactly at the modeled source positions and the true source statistics differed from the assumed statistics.
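
    The estimator can be sketched compactly: for b = A x + n with prior mean mu, prior covariance Cx and noise covariance Cn, the Bayes linear (MMSE) estimate is x_hat = mu + Cx A^T (A Cx A^T + Cn)^{-1} (b - A mu). The toy lead field below is a random stand-in, not an actual biomagnetic forward model.

```python
# Bayes linear (MMSE) estimation for b = A x + n with prior mean mu,
# prior covariance Cx and noise covariance Cn; the lead field A is a
# random stand-in for a biomagnetic forward model.
import numpy as np

def mmse_estimate(A, b, mu, Cx, Cn):
    """x_hat = mu + Cx A^T (A Cx A^T + Cn)^{-1} (b - A mu)."""
    G = Cx @ A.T @ np.linalg.inv(A @ Cx @ A.T + Cn)
    return mu + G @ (b - A @ mu)

rng = np.random.default_rng(4)
n_sensors, n_sources = 30, 100               # underdetermined, as in MEG/MCG
A = rng.standard_normal((n_sensors, n_sources))
mu, Cx = np.zeros(n_sources), np.eye(n_sources)
Cn = 0.01 * np.eye(n_sensors)
x_true = rng.multivariate_normal(mu, Cx)
b = A @ x_true + rng.multivariate_normal(np.zeros(n_sensors), Cn)
print(mmse_estimate(A, b, mu, Cx, Cn)[:5])
```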

  13. Coordinate reconstruction using box reconstruction and projection of X-ray photo

    International Nuclear Information System (INIS)

    Achmad Suntoro

    2011-01-01

    Some mathematical formulas have been derived for a reconstruction process that determines the coordinates of any point relative to a preset coordinate system. The reconstruction uses a box of known edge lengths, with a cross marker on each of the top-bottom and left-right faces, and with the top and right faces serving as projection planes for an X-ray source in a perspective projection system. Using the data from the two X-ray projection images, any point inside the reconstruction box, as long as its projection is recorded in both photos, can have its coordinates determined relative to the midpoint of the reconstruction box as the origin of the coordinate system. (author)
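
    The underlying principle, recovering a 3D point from two perspective views, can be illustrated with generic linear triangulation (DLT); the projection matrices below are toy assumptions, not the paper's box-specific formulas.

```python
# Linear triangulation (DLT): recover a 3D point from its projections in
# two calibrated views with 3x4 projection matrices P1, P2. The matrices
# below are toy choices, not the paper's box geometry.
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Least-squares solution of uv_i ~ P_i X (homogeneous) via SVD."""
    A = np.vstack([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]              # null-space direction of A
    return X[:3] / X[3]                      # de-homogenize

def project(P, X):
    h = P @ np.append(X, 1.0)
    return h[:2] / h[2]

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])             # reference view
P2 = np.hstack([np.eye(3), [[-1.0], [0.0], [0.0]]])       # translated view
X_true = np.array([0.3, -0.2, 5.0])
print(triangulate(P1, P2, project(P1, X_true), project(P2, X_true)))
```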

  14. Special Inspector General for Iraq Reconstruction. Quarterly Report to the United States Congress

    National Research Council Canada - National Science Library

    Bowen, Jr, Stuart W

    2007-01-01

    .... relief and reconstruction program in Iraq. Two notable developments frame this Report. First, total relief and reconstruction investment for Iraq from all sources (the United States, Iraq, and other donors) passed...

  15. Low dose reconstruction algorithm for differential phase contrast imaging.

    Science.gov (United States)

    Wang, Zhentian; Huang, Zhifeng; Zhang, Li; Chen, Zhiqiang; Kang, Kejun; Yin, Hongxia; Wang, Zhenchang; Marco, Stampanoni

    2011-01-01

    Differential phase contrast imaging computed tomography (DPCI-CT) is a novel X-ray inspection method that reconstructs the distribution of the refractive index rather than the attenuation coefficient in weakly absorbing samples. In this paper, we propose an iterative reconstruction algorithm for DPCI-CT that builds on compressed sensing theory. We first realize a differential algebraic reconstruction technique (DART) by discretizing the projection process of differential phase contrast imaging into a linear partial derivative matrix. In this way, the compressed sensing reconstruction problem of DPCI is transformed into an already-solved problem of transmission CT. Our algorithm has the potential to reconstruct the refractive index distribution of the sample from highly undersampled projection data, and can thus significantly reduce the dose and inspection time. The proposed algorithm has been validated by numerical simulations and actual experiments.
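
    The modeling step can be sketched as follows: differential projections are written as D A x, with A an ordinary projection matrix and D a first-difference operator along the detector, after which a sparsity-regularized solver applies. The sizes, the toy matrix A, and the plain ISTA solver below are illustrative assumptions, not the paper's DART implementation.

```python
# Differential projections modeled as D A x: A is an ordinary projection
# matrix (random stand-in) and D a first-difference operator along the
# detector. Plain ISTA then gives a sparsity-regularized solution.
import numpy as np

rng = np.random.default_rng(5)
n_meas, n_pix = 150, 100
A = rng.random((n_meas, n_pix))               # toy CT system matrix
D = np.eye(n_meas) - np.eye(n_meas, k=-1)     # discrete derivative operator
M = D @ A                                     # differential forward operator
x_true = np.zeros(n_pix); x_true[40:50] = 1.0
y = M @ x_true + 0.01 * rng.standard_normal(n_meas)

L = np.linalg.norm(M, 2) ** 2                 # Lipschitz constant of gradient
x, lam = np.zeros(n_pix), 0.05
for _ in range(500):                          # ISTA: gradient + soft threshold
    g = x - M.T @ (M @ x - y) / L
    x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
print(np.linalg.norm(x - x_true))
```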

  16. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    Science.gov (United States)

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.
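
    The conventional MRF reconstruction that the paper analyzes reduces, per voxel, to matching the measured signal evolution against a precomputed dictionary by normalized inner product. A minimal sketch with a simulated (not Bloch-equation-derived) dictionary:

```python
# Conventional MRF matching: correlate each voxel's signal evolution with
# unit-norm dictionary atoms and read off the parameters of the best match.
# The dictionary and signals are simulated, not Bloch-equation derived.
import numpy as np

rng = np.random.default_rng(6)
n_t, n_atoms, n_vox = 200, 500, 64
D = rng.standard_normal((n_atoms, n_t))       # stand-in dictionary
t1_values = np.linspace(100, 2000, n_atoms)   # one parameter (e.g. T1) per atom
Dn = D / np.linalg.norm(D, axis=1, keepdims=True)

true_idx = rng.integers(0, n_atoms, n_vox)
signals = D[true_idx] + 0.05 * rng.standard_normal((n_vox, n_t))

best = np.argmax(np.abs(signals @ Dn.T), axis=1)   # best atom per voxel
t1_map = t1_values[best]
print((best == true_idx).mean())                    # matching accuracy
```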

  17. Parallel CT image reconstruction based on GPUs

    International Nuclear Information System (INIS)

    Flores, Liubov A.; Vidal, Vicent; Mayo, Patricia; Rodenas, Francisco; Verdú, Gumersindo

    2014-01-01

    In X-ray computed tomography (CT), iterative methods are more suitable for the reconstruction of images with high contrast and precision from a small number of projections under noisy conditions. In practice, however, these methods are not widely used due to the high computational cost of their implementation. Modern hardware now makes it possible to reduce this drawback effectively. The goal of this work is to develop a fast GPU-based algorithm to reconstruct high-quality images from undersampled and noisy projection data. - Highlights: • We developed a GPU-based iterative algorithm to reconstruct images. • Iterative algorithms can reconstruct images from an undersampled set of projections. • The computational cost of the implementation of the developed algorithm is low. • The efficiency of the algorithm increases for large-scale problems.

  18. [Development and current situation of reconstruction methods following total sacrectomy].

    Science.gov (United States)

    Huang, Siyi; Ji, Tao; Guo, Wei

    2018-05-01

    To review the development of reconstruction methods following total sacrectomy and to provide a reference for finding a better reconstruction method. Case reports and biomechanical and finite element studies of reconstruction following total sacrectomy, both domestic and international, were reviewed, and the development and current situation were summarized. After developing for nearly 30 years, great progress has been made in reconstruction concepts and fixation techniques. The fixation methods can be summarized as three strategies: spinopelvic fixation (SPF), posterior pelvic ring fixation (PPRF), and anterior spinal column fixation (ASCF). SPF has undergone technical progress from intrapelvic rod and hook constructs to pedicle and iliac screw-rod systems. PPRF and ASCF can improve the stability of the reconstruction system. Reconstruction following total sacrectomy remains a challenge. Reconstruction combining SPF, PPRF, and ASCF is the direction of development for achieving mechanical stability. How to obtain biological fixation to improve long-term stability is an urgent problem to be solved.

  19. Failed medial patellofemoral ligament reconstruction: Causes and surgical strategies

    OpenAIRE

    Sanchis-Alfonso, Vicente; Montesinos-Berry, Erik; Ramirez-Fuentes, Cristina; Leal Blanquet, Joan; Gelber, Pablo-Eduardo; Monllau García, Juan Carlos

    2017-01-01

    Patellar instability is a common clinical problem encountered by orthopedic surgeons specializing in the knee. For patients with chronic lateral patellar instability, the standard surgical approach is to stabilize the patella through a medial patellofemoral ligament (MPFL) reconstruction. Foreseeably, an increasing number of revision surgeries of the reconstructed MPFL will be seen in upcoming years. In this paper, the causes of failed MPFL reconstruction are analyzed: (1) incorrect surgical ...

  20. Profile reconstruction from neutron reflectivity data and a priori knowledge

    International Nuclear Information System (INIS)

    Leeb, H.

    2008-01-01

    The problem of incomplete and noisy information in profile reconstruction from neutron reflectometry data is considered. In particular, methods of Bayesian statistics in combination with modelling or inverse scattering techniques are considered in order to properly include the required a priori knowledge and obtain quantitatively reliable estimates of the reconstructed profiles. By applying Bayes' theorem, the results of different experiments on the same sample can be consistently included in the profile reconstruction.

  1. Self-expressive Dictionary Learning for Dynamic 3D Reconstruction.

    Science.gov (United States)

    Zheng, Enliang; Ji, Dinghuang; Dunn, Enrique; Frahm, Jan-Michael

    2017-08-22

    We target the problem of sparse 3D reconstruction of dynamic objects observed by multiple unsynchronized video cameras with unknown temporal overlap. To this end, we develop a framework to recover the unknown structure without sequencing information across video sequences. Our proposed compressed sensing framework poses the estimation of 3D structure as a dictionary learning problem, where the dictionary is defined as an aggregation of the temporally varying 3D structures. Given the smooth motion of dynamic objects, we observe that any element in the dictionary can be well approximated by a sparse linear combination of other elements in the same dictionary (i.e., self-expression). Our formulation optimizes a biconvex cost function that leverages a compressed sensing formulation and enforces both structural dependency coherence across video streams and motion smoothness across estimates from common video sources. We further analyze the reconstructability of our approach under different capture scenarios and compare and relate it to existing methods. Experimental results on large amounts of synthetic data as well as real imagery demonstrate the effectiveness of our approach.
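
    The self-expression property can be sketched in a few lines: each column of the data matrix is regressed sparsely on the remaining columns. sklearn's Lasso serves as the sparse solver here, and the data matrix is synthetic; this is an illustration of the property, not the paper's biconvex solver.

```python
# Self-expression: each column of X is sparsely regressed on the other
# columns, so X ~= X C with zero diagonal in C. sklearn's Lasso is the
# sparse solver; X is synthetic data from a low-dimensional subspace.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
X = rng.standard_normal((30, 4)) @ rng.standard_normal((4, 20))

C = np.zeros((20, 20))
for j in range(20):
    others = np.delete(X, j, axis=1)          # exclude column j itself
    fit = Lasso(alpha=0.01, fit_intercept=False).fit(others, X[:, j])
    C[np.arange(20) != j, j] = fit.coef_
print(np.linalg.norm(X - X @ C) / np.linalg.norm(X))   # small residual
```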

  2. [Reconstructive surgery of cranio-orbital injuries].

    Science.gov (United States)

    Eolchiian, S A; Potapov, A A; Serova, N K; Kataev, M G; Sergeeva, L A; Zakharova, N E; Van Damm, P

    2011-01-01

    The aim of the study was to optimize the evaluation and surgery of cranio-orbital injuries in different periods after trauma. Material and methods: We analyzed 374 patients with cranio-orbital injuries treated at the Burdenko Neurosurgery Institute in different periods after trauma from January 1998 till April 2010; 288 (77%) underwent reconstructive surgery of the skull and facial skeleton within 24 hours to 7 years after trauma. Clinical and CT examination data were used for preoperative planning and assessment of surgical results. Stereolithographic models (STLM) were applied for preoperative planning in 89 cases. The follow-up period ranged from 4 months up to 10 years. Results: In 254 (88%) of 288 patients, reconstruction of the anterior skull base, upper and/or midface with restoration of different parts of the orbit was performed. Anterior skull base CSF leak repair, calvarial vault reconstruction, and maxillary and mandibular osteosynthesis were done in 34 (12%) cases. 242 (84%) of 288 patients underwent one reconstructive operation, while 46 (16%) underwent two or more (105 operations in total). Patients with extended fronto-orbital and midface fractures commonly needed more than one operation: 27 (62.8%) cases. Different plastic materials were used for reconstruction in 233 (80.9%) patients; of those, split calvarial bone grafts were preferred in 147 (51%) cases. Good functional and cosmetic results were achieved in 261 (90.6%) of 288 patients, while acceptable results were observed in 27 (9.4%). Conclusion: Active single-stage surgical management for repair of combined cranio-orbital injury in the acute period, with primary reconstruction, optimizes functional and cosmetic outcomes and prevents the problems of delayed or secondary reconstruction. Severe extended injuries of the anterior skull base, upper face and midface requiring intracranial surgery present the greatest difficulties for adequate reconstruction. A randomized trial is required to define the extent and optimal timing of reconstructive surgery.

  3. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    Science.gov (United States)

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure, primarily because it is usually at a late stage when found. Early gastric cancer detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. The current BLT reconstruction algorithms, based on approximations to the radiative transfer equation, are therefore not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissue. Radiosity theory is integrated with the diffusion equation to form the hybrid light transport model, in which the radiosity part describes light propagation in the non-scattering region. After finite element discretization, the hybrid light transport model is converted into a minimization problem that includes an l1-norm regularization term to reflect the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse-based simulation, with a reconstruction error of less than 1 mm. An experiment on a nude mouse bearing an in situ gastric cancer is then conducted. The primary results demonstrate the ability of the novel BLT reconstruction algorithm for early gastric cancer detection.
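
    After discretization, the recovery step is an l1-regularized least-squares problem of the form min ||A s - phi||^2 + lambda*||s||_1. The sketch below uses sklearn's Lasso on a random stand-in for the finite-element sensitivity matrix; sizes and the regularization weight are illustrative.

```python
# l1-regularized source recovery min ||A s - phi||^2 + lambda*||s||_1 with a
# nonnegativity constraint on the source; A is a random stand-in for the
# finite-element sensitivity matrix.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(8)
n_surf, n_nodes = 60, 300                     # boundary readings vs mesh nodes
A = rng.standard_normal((n_surf, n_nodes))
s_true = np.zeros(n_nodes); s_true[120:123] = 5.0   # small localized source
phi = A @ s_true + 0.01 * rng.standard_normal(n_surf)

fit = Lasso(alpha=0.05, positive=True, fit_intercept=False,
            max_iter=10000).fit(A, phi)
print(np.flatnonzero(fit.coef_ > 0.5))        # indices near 120..122
```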

  4. AIR Tools - A MATLAB package of algebraic iterative reconstruction methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Saxild-Hansen, Maria

    2012-01-01

    We present a MATLAB package with implementations of several algebraic iterative reconstruction methods for discretizations of inverse problems. These so-called row action methods rely on semi-convergence for achieving the necessary regularization of the problem. Two classes of methods are implemented: Algebraic Reconstruction Techniques (ART) and Simultaneous Iterative Reconstruction Techniques (SIRT). In addition we provide a few simplified test problems from medical and seismic tomography. For each iterative method, a number of strategies are available for choosing the relaxation parameter...
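
    A minimal sketch of a SIRT-type simultaneous update of the kind the package implements (here in SART-like form, with diagonal row/column scaling and a relaxation parameter lam) on synthetic data; the Python translation below is illustrative, not the package's MATLAB code.

```python
# A SIRT-type simultaneous iteration in SART-like form: all rows enter each
# update, with diagonal row/column scaling and relaxation parameter lam.
# Synthetic nonnegative data stand in for a tomography test problem.
import numpy as np

def sirt(A, b, n_iter=200, lam=1.0):
    R = 1.0 / np.maximum(A.sum(axis=1), 1e-12)   # inverse row sums
    C = 1.0 / np.maximum(A.sum(axis=0), 1e-12)   # inverse column sums
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x += lam * C * (A.T @ (R * (b - A @ x)))  # simultaneous update
    return x

rng = np.random.default_rng(9)
A = rng.random((150, 100))
x_true = rng.random(100)
print(np.linalg.norm(sirt(A, A @ x_true) - x_true))
```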

  5. Intrinsic functional brain mapping in reconstructed 4D magnetic susceptibility (χ) data space.

    Science.gov (United States)

    Chen, Zikuan; Calhoun, Vince

    2015-02-15

    By solving an inverse problem of T2*-weighted magnetic resonance imaging for a dynamic fMRI study, we reconstruct a 4D magnetic susceptibility source (χ) data space for intrinsic functional mapping. A 4D phase dataset is calculated from a 4D complex fMRI dataset. The background field and the phase wrapping effect are removed by a Laplacian technique. A 3D χ source map is reconstructed from a 3D phase image by a computed inverse MRI (CIMRI) scheme, and a 4D χ data space is built by repeating the 3D χ source reconstruction at each time point. A functional map is calculated by temporal correlation between the voxel signals in the 4D χ space and the timecourse of the task paradigm. In a finger-tapping experiment, we obtained two 3D functional mappings: one in the 4D magnitude data space and one in the reconstructed 4D χ data space. We find that χ-based functional mapping reveals the co-occurrence of bidirectional responses in a 3D activation map, unlike conventional magnitude-based mapping. χ-based functional mapping can also be achieved by 3D deconvolution of a phase activation map; an experimental comparison on one subject shows that the 4D χ tomography method produces a χ activation map similar to that obtained by the 3D deconvolution method. By removing the dipole effect and other fMRI technological contaminations, 4D χ tomography provides a 4D χ data space that allows more direct and faithful functional mapping of brain activity. Published by Elsevier B.V.
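
    The mapping step itself is a plain temporal correlation. The sketch below computes a voxelwise Pearson correlation between synthetic 4D data and a block-design paradigm, illustrating how bidirectional (positive and negative) responses both show up; the data are simulated, not reconstructed χ maps.

```python
# Voxelwise Pearson correlation between a synthetic 4D dataset and a block
# paradigm; one voxel responds positively and one negatively, so both show
# up with opposite signs in the correlation map.
import numpy as np

rng = np.random.default_rng(10)
n_t = 120
paradigm = ((np.arange(n_t) // 20) % 2).astype(float)   # on/off blocks
chi = 0.1 * rng.standard_normal((8, 8, 8, n_t))
chi[2, 2, 2] += 0.2 * paradigm                # positive responder
chi[5, 5, 5] -= 0.2 * paradigm                # negative responder

z = (chi - chi.mean(-1, keepdims=True)) / chi.std(-1, keepdims=True)
p = (paradigm - paradigm.mean()) / paradigm.std()
corr_map = (z * p).mean(-1)                   # Pearson correlation per voxel
print(corr_map[2, 2, 2], corr_map[5, 5, 5])   # roughly +0.7 and -0.7
```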

  6. Update on orbital reconstruction.

    Science.gov (United States)

    Chen, Chien-Tzung; Chen, Yu-Ray

    2010-08-01

    Orbital trauma is common and frequently complicated by ocular injuries. The recent literature on orbital fracture is analyzed with emphasis on epidemiological data assessment, surgical timing, method of approach and reconstruction materials. Computed tomographic (CT) scan has become a routine evaluation tool for orbital trauma, and mobile CT can be applied intraoperatively if necessary. Concomitant serious ocular injury should be carefully evaluated preoperatively. Patients presenting with nonresolving oculocardiac reflex, 'white-eyed' blowout fracture, or diplopia with a positive forced duction test and CT evidence of orbital tissue entrapment require early surgical repair. Otherwise, enophthalmos can be corrected by late surgery with a similar outcome to early surgery. The use of an endoscope-assisted approach for orbital reconstruction continues to grow, offering an alternative method. Advances in alloplastic materials have improved surgical outcome and shortened operating time. In this review of modern orbital reconstruction, several controversial issues such as surgical indication, surgical timing, method of approach and choice of reconstruction material are discussed. Preoperative fine-cut CT image and thorough ophthalmologic examination are key elements to determine surgical indications. The choice of surgical approach and reconstruction materials much depends on the surgeon's experience and the reconstruction area. Prefabricated alloplastic implants together with image software and stereolithographic models are significant advances that help to more accurately reconstruct the traumatized orbit. The recent evolution of orbit reconstruction improves functional and aesthetic results and minimizes surgical complications.

  7. The use of anatomical information for molecular image reconstruction algorithms: Attenuation/Scatter correction, motion compensation, and noise reduction

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Se Young [School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan (Korea, Republic of)

    2016-03-15

    PET and SPECT are important tools for providing valuable molecular information about patients to clinicians. Advances in nuclear medicine hardware technologies and statistical image reconstruction algorithms enabled significantly improved image quality. Sequentially or simultaneously acquired anatomical images such as CT and MRI from hybrid scanners are also important ingredients for improving the image quality of PET or SPECT further. High-quality anatomical information has been used and investigated for attenuation and scatter corrections, motion compensation, and noise reduction via post-reconstruction filtering and regularization in inverse problems. In this article, we will review works using anatomical information for molecular image reconstruction algorithms for better image quality by describing mathematical models, discussing sources of anatomical information for different cases, and showing some examples.

  8. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1991-04-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Pacific Northwest Laboratory (PNL) under the direction of an independent Technical Steering Panel (TSP). The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demographics, agriculture, food habits; and environmental pathways and dose estimates.

  9. Hanford Environmental Dose Reconstruction Project

    International Nuclear Information System (INIS)

    Finch, S.M.; McMakin, A.H.

    1992-06-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation doses that individuals and populations could have received from nuclear operations at Hanford since 1944. The project is being managed and conducted by the Battelle Pacific Northwest Laboratories under contract with the Centers for Disease Control. The independent Technical Steering Panel (TSP) provides technical direction. The project is divided into the following technical tasks. These tasks correspond to the path radionuclides followed, from release to impact on humans (dose estimates): source terms; environmental transport; environmental monitoring data; demography, food consumption, and agriculture; environmental pathways and dose estimates

  10. Ultrafast and scalable cone-beam CT reconstruction using MapReduce in a cloud computing environment.

    Science.gov (United States)

    Meng, Bowen; Pratx, Guillem; Xing, Lei

    2011-12-01

    Four-dimensional CT (4DCT) and cone beam CT (CBCT) are widely used in radiation therapy for accurate tumor target definition and localization. However, high-resolution dynamic image reconstruction is computationally demanding because of the large amount of data processed. Efficient use of these imaging techniques in the clinic requires high-performance computing. The purpose of this work is to develop a novel ultrafast, scalable and reliable image reconstruction technique for 4D CBCT/CT using a parallel computing framework called MapReduce. We show the utility of MapReduce for solving large-scale medical physics problems in a cloud computing environment. In this work, we accelerated the Feldkamp-Davis-Kress (FDK) algorithm by porting it to Hadoop, an open-source MapReduce implementation. Gated phases from 4DCT scans were reconstructed independently. Following the MapReduce formalism, Map functions were used to filter and backproject subsets of projections, and a Reduce function to aggregate those partial backprojections into the whole volume. MapReduce automatically parallelized the reconstruction process on a large cluster of computer nodes. As a validation, reconstruction of a digital phantom and an acquired CatPhan 600 phantom was performed in a commercial cloud computing environment using the proposed 4D CBCT/CT reconstruction algorithm. The speedup of the reconstruction time was found to be roughly linear in the number of nodes employed. For instance, a speedup of greater than 10 times was achieved using 200 nodes for all cases, compared to the same code executed on a single machine. Without modifying the code, faster reconstruction is readily achievable by allocating more nodes in the cloud computing environment. The root mean square error between the images obtained using MapReduce and a single-threaded reference implementation was on the order of 10^-7. Our study also proved that cloud computing with MapReduce is fault tolerant: the reconstruction completed
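
    The decomposition can be sketched in plain Python: a map step filters and backprojects a subset of projection angles into a partial volume, and a reduce step sums the partial volumes. The toy backprojection below is an unfiltered smear-back, not the actual FDK kernel, and plain map/reduce stands in for Hadoop.

```python
# MapReduce-style reconstruction skeleton: map() backprojects one subset of
# projection angles into a partial volume, reduce() sums the partials.
# Plain Python map/reduce stands in for Hadoop, and the smear-back below is
# a toy, unfiltered stand-in for the FDK kernel.
import numpy as np
from functools import reduce
from scipy.ndimage import rotate

def backproject_subset(projections):
    """Map step: accumulate one subset of (angle, detector-row) pairs."""
    vol = np.zeros((64, 64))
    for angle, row in projections:
        vol += rotate(np.tile(row, (64, 1)), angle, reshape=False, order=1)
    return vol

rng = np.random.default_rng(11)
all_proj = [(a, rng.random(64)) for a in range(0, 180, 2)]
subsets = [all_proj[i::4] for i in range(4)]  # four "nodes"
partials = map(backproject_subset, subsets)   # distributed mappers in Hadoop
volume = reduce(np.add, partials)             # reduce: aggregate the partials
print(volume.shape)
```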

  11. A New Method for Coronal Magnetic Field Reconstruction

    Science.gov (United States)

    Yi, Sibaek; Choe, Gwang-Son; Cho, Kyung-Suk; Kim, Kap-Sung

    2017-08-01

    A precise way of reconstructing (extrapolating) the coronal magnetic field is an indispensable tool for understanding various solar activities. A variety of reconstruction codes have been developed and are available to researchers, but each has its own shortcomings. In this paper, a new efficient method for coronal magnetic field reconstruction is presented. The method imposes only the normal components of the magnetic field and current density at the bottom boundary, to avoid overspecification of the reconstruction problem, and employs vector potentials to guarantee divergence-freeness. In our method, the normal component of the current density is imposed not by adjusting the tangential components of A, but by adjusting its normal component. This allows us to avoid a numerical instability that occasionally arises in codes using A. In real reconstruction problems, information about the lateral and top boundaries is absent. The arbitrariness of the boundary conditions imposed there, as well as various preprocessing steps, brings about a diversity of resulting solutions. We impose the source surface condition at the top boundary to accommodate the flux imbalance that always shows up in magnetograms. To enhance the convergence rate, we equip our code with a gradient-method-type accelerator. Our code is tested on two analytical force-free solutions. When the solution is given only at the bottom boundary, our result surpasses competitors in most figures of merit devised by Schrijver et al. (2006). We have also applied our code to the real active region NOAA 11974, in which two M-class flares and a halo CME took place. The EUV observations show the sudden appearance of an erupting loop before the first flare. Our numerical solutions show that two entwining flux tubes exist before the flare and that their entanglement is released after the CME, with one of them opening up. We suggest that the erupting loop is created by magnetic reconnection between
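
    The divergence-free guarantee follows from B = curl A. The numpy sketch below builds B from an arbitrary smooth test potential by central differences and checks that the discrete divergence is small by construction; the potential is an illustrative choice, not a solar field model.

```python
# B = curl A computed by central differences from an arbitrary smooth test
# potential; the discrete divergence of the result is small by construction.
import numpy as np

n, h = 32, 1.0 / 31
x, y, z = np.meshgrid(*(np.linspace(0, 1, n),) * 3, indexing='ij')
Ax, Ay, Az = np.sin(np.pi * y) * z, x * z**2, np.cos(np.pi * x) * y

def d(f, axis):                               # finite-difference derivative
    return np.gradient(f, h, axis=axis)

Bx = d(Az, 1) - d(Ay, 2)                      # components of curl A
By = d(Ax, 2) - d(Az, 0)
Bz = d(Ay, 0) - d(Ax, 1)
div = d(Bx, 0) + d(By, 1) + d(Bz, 2)
print(np.abs(div).max())                      # ~0 up to truncation error
```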

  12. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the water-bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach that allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Following Bayesian inference, the decomposition fractions and the observation variance are estimated by the joint maximum a posteriori (MAP) estimation method. With an adaptive prior model assigned to the variance, the joint estimation problem simplifies to a single estimation problem, transforming the joint MAP estimation into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone conjugate gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
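
    The nonlinearity at the heart of the problem is easy to demonstrate: a polychromatic measurement mixes exponentials over the spectrum, so a single negative log does not linearize it, which is the root of beam hardening. The spectrum and attenuation values below are illustrative numbers, not calibrated data.

```python
# A polychromatic measurement mixes exponentials over the spectrum, so
# -log(I/I0) is not linear in material thickness (beam hardening). The
# spectrum and attenuation values are illustrative, not calibrated.
import numpy as np

spectrum = np.array([0.2, 0.5, 0.3])          # weights of three energy bins
mu_water = np.array([0.30, 0.20, 0.15])       # attenuation per unit path
mu_bone = np.array([0.90, 0.50, 0.30])

def measure(Lw, Lb):
    """Expected detector signal for water/bone path lengths Lw, Lb."""
    return np.sum(spectrum * np.exp(-(mu_water * Lw + mu_bone * Lb)))

for L in (1.0, 5.0, 10.0):                    # effective attenuation drifts
    print(L, -np.log(measure(L, 0.0)) / L)    # not constant in L
```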

  13. Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction

    Directory of Open Access Journals (Sweden)

    Dong Zhang

    2017-01-01

    Full Text Available Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has a higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
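
    For orientation, the basic vector-form SL0 iteration that the proposed algorithm extends to the joint 2D setting alternates a gradient step on a smoothed l0 measure with a projection back onto the data constraint, while the smoothing width shrinks. A minimal sketch on synthetic data, with illustrative step constants:

```python
# Basic vector-form SL0: alternate a gradient step that shrinks a smoothed
# l0 measure with a projection back onto A x = y, while the smoothing width
# sigma decreases. Sizes, data and step constants are illustrative.
import numpy as np

rng = np.random.default_rng(12)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
y = A @ x_true

Apinv = np.linalg.pinv(A)
x = Apinv @ y                                 # minimum-l2 feasible start
sigma = 2 * np.abs(x).max()
while sigma > 1e-4:
    for _ in range(3):
        delta = x * np.exp(-x**2 / (2 * sigma**2))   # smoothed-l0 gradient
        x = x - 0.5 * delta                          # shrink small entries
        x = x - Apinv @ (A @ x - y)                  # re-project onto A x = y
    sigma *= 0.7
print(np.linalg.norm(x - x_true))
```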

  14. Distributed MRI reconstruction using Gadgetron-based cloud computing.

    Science.gov (United States)

    Xue, Hui; Inati, Souheil; Sørensen, Thomas Sangild; Kellman, Peter; Hansen, Michael S

    2015-03-01

    To expand the open-source Gadgetron reconstruction framework to support distributed computing, and to demonstrate that a multinode version of the Gadgetron can provide nonlinear reconstruction with clinically acceptable latency. The Gadgetron framework was extended with new software components that enable an arbitrary number of Gadgetron instances to collaborate on a reconstruction task. This cloud-enabled version of the Gadgetron was deployed on three different distributed computing platforms, ranging from a heterogeneous collection of commodity computers to the commercial Amazon Elastic Compute Cloud. The Gadgetron cloud was used to provide nonlinear, compressed sensing reconstruction on a clinical scanner with low reconstruction latency (e.g., for cardiac and neuroimaging applications). The proposed setup was able to handle the acquisition and l1-SPIRiT reconstruction of nine high-temporal-resolution, real-time cardiac short-axis cine acquisitions, covering the ventricles for functional evaluation, in under 1 min. A three-dimensional high-resolution brain acquisition with 1 mm³ isotropic pixel size was acquired and reconstructed with nonlinear reconstruction in less than 5 min. A distributed-computing-enabled Gadgetron provides a scalable way to improve reconstruction performance using commodity cluster computing. Nonlinear, compressed sensing reconstruction can be deployed clinically with low image reconstruction latency. © 2014 Wiley Periodicals, Inc.

  15. Surface Reconstruction and Image Enhancement via $L^1$-Minimization

    KAUST Repository

    Dobrev, Veselin

    2010-01-01

    A surface reconstruction technique based on minimization of the total variation of the gradient is introduced. Convergence of the method is established, and an interior-point algorithm solving the associated linear programming problem is introduced. The reconstruction algorithm is illustrated on various test cases, including natural and urban terrain data and enhancement of low-resolution or aliased images. Copyright © by SIAM.

  16. Tomographic image reconstruction using training images

    DEFF Research Database (Denmark)

    Soltani, Sara; Andersen, Martin Skovgaard; Hansen, Per Christian

    2017-01-01

    We describe and examine an algorithm for tomographic image reconstruction where prior knowledge about the solution is available in the form of training images. We first construct a non-negative dictionary based on prototype elements from the training images; this problem is formulated within...

  17. Prepectoral Implant-Based Breast Reconstruction

    Directory of Open Access Journals (Sweden)

    Lyndsey Highton, BMBCh, MA, FRCS(Plast)

    2017-09-01

    Conclusion: Prepectoral implant placement with ADM cover is emerging as an alternative approach for IBR. This method facilitates breast reconstruction with a good cosmetic outcome for patients who want a quick recovery without potential compromise of pectoral muscle function and the associated problems.

  18. Simulation and track reconstruction for beam telescopes

    CERN Document Server

    Maqbool, Salman

    2017-01-01

    Beam telescopes are used for testing new detectors under development: sensors are placed in the telescope and a particle beam is passed through them. To test these novel detectors and determine their properties, the particle tracks need to be reconstructed from the known detectors in the telescope. Based on the reconstructed track, its predicted hits on the Device under Test (DUT) are compared with the actual hits on the DUT. Several methods exist for track reconstruction, but most of them don’t account for the effects of multiple scattering. General Broken Lines is one such algorithm that incorporates these effects during reconstruction. The aim of this project was to simulate the beam telescope and extend the track reconstruction framework for the FE-I4 telescope, which takes these effects into account. Section 1 introduces the problem, while Section 2 focuses on beam telescopes. This is followed by the Allpix2 simulation framework in Section 3. Finally, Section 4 introduces the Proteus track reconstruction framew...
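
    The simplest track model, which General Broken Lines generalizes by adding scattering-angle parameters at each plane, is a straight-line least-squares fit through the telescope hits. A minimal sketch with synthetic hits and illustrative plane positions:

```python
# Straight-line least-squares track fit through telescope hits at known
# plane positions, ignoring multiple scattering (the effect General Broken
# Lines adds back). Plane positions and hits are synthetic.
import numpy as np

rng = np.random.default_rng(13)
z = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])   # plane positions (mm)
x_hits = 0.1 + 0.002 * z + 0.005 * rng.standard_normal(z.size)

design = np.vstack([np.ones_like(z), z]).T    # fit x(z) = x0 + slope * z
(x0, slope), *_ = np.linalg.lstsq(design, x_hits, rcond=None)

z_dut = 125.0                                 # DUT between telescope planes
print('predicted hit at DUT:', x0 + slope * z_dut)
```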

  19. Accelerated Compressed Sensing Based CT Image Reconstruction.

    Science.gov (United States)

    Hashemi, SayedMasoud; Beheshti, Soosan; Gill, Patrick R; Paul, Narinder S; Cobbold, Richard S C

    2015-01-01

    In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom, when reconstructed from 128 rebinned projections using a conventional CS method, had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.

  20. Accelerated Compressed Sensing Based CT Image Reconstruction

    Directory of Open Access Journals (Sweden)

    SayedMasoud Hashemi

    2015-01-01

    Full Text Available In X-ray computed tomography (CT) an important objective is to reduce the radiation dose without significantly degrading the image quality. Compressed sensing (CS) enables the radiation dose to be reduced by producing diagnostic images from a limited number of projections. However, conventional CS-based algorithms are computationally intensive and time-consuming. We propose a new algorithm that accelerates the CS-based reconstruction by using a fast pseudopolar Fourier based Radon transform and rebinning the diverging fan beams to parallel beams. The reconstruction process is analyzed using a maximum a posteriori approach, which is transformed into a weighted CS problem. The weights involved in the proposed model are calculated based on the statistical characteristics of the reconstruction process, which is formulated in terms of the measurement noise and the rebinning interpolation error. Therefore, the proposed method not only accelerates the reconstruction but also removes the rebinning and interpolation errors. Simulation results are shown for phantoms and a patient. For example, a 512 × 512 Shepp-Logan phantom, when reconstructed from 128 rebinned projections using a conventional CS method, had 10% error, whereas with the proposed method the reconstruction error was less than 1%. Moreover, computation times of less than 30 sec were obtained using a standard desktop computer without numerical optimization.