WorldWideScience

Sample records for semi-analytical model reconstruction

  1. A simple stationary semi-analytical wake model

    Larsen, Gunner Chr.

    We present an idealized simple, but fast, semi-analytical algorithm for the computation of stationary wind farm wind fields, with a possible potential within a multi-fidelity strategy for wind farm topology optimization. Basically, the model considers wakes as linear perturbations on the ambient non-uniform mean wind field, although the modelling of the individual stationary wake flow fields includes non-linear terms. The simulation of the individual wake contributions is based on an analytical solution of the thin shear layer approximation of the Navier-Stokes equations. With each of these approaches, a parabolic system is described, which is initiated by first considering the most upwind turbines and is subsequently solved successively in the downstream direction. Algorithms for the resulting wind farm flow fields are proposed, and it is shown that in the limit... The wake flow fields are assumed...
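
    As a concrete illustration of the wake-superposition idea sketched in this abstract, the following Python fragment marches over turbines in upstream-to-downstream order and adds each upstream wake deficit linearly to the ambient flow. It is a minimal sketch, not Larsen's model: the Gaussian deficit profile, the linear wake-expansion rate and all parameter values are assumptions standing in for the thin-shear-layer solution.

```python
import numpy as np

# Illustrative wake-superposition sweep (assumed profile, not Larsen's model):
# turbines are sorted upwind-to-downwind and each wake deficit is added as a
# linear perturbation to the ambient mean flow.
U_INF, CT, D = 10.0, 0.8, 80.0           # ambient speed [m/s], thrust coeff., rotor diameter [m]

def deficit(x, r):
    """Normalized velocity deficit at downstream distance x, radial offset r."""
    if x <= 0.0:
        return 0.0
    sigma = 0.25 * D + 0.04 * x          # assumed linear wake expansion
    amp = 1.0 - np.sqrt(max(0.0, 1.0 - CT / (8.0 * (sigma / D) ** 2)))
    return amp * np.exp(-0.5 * (r / sigma) ** 2)

turbines = [(0.0, 0.0), (400.0, 30.0), (800.0, -20.0)]   # (x, y) positions [m]
turbines.sort(key=lambda p: p[0])        # parabolic sweep: most upwind first

u_at = {}
for i, (xi, yi) in enumerate(turbines):
    # linear superposition of deficits from all upwind turbines
    total = sum(deficit(xi - xj, yi - yj) for xj, yj in turbines[:i])
    u_at[(xi, yi)] = U_INF * (1.0 - total)
print(u_at)
```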

  2. A semi-analytic model of magnetized liner inertial fusion

    McBride, Ryan D.; Slutz, Stephen A. [Sandia National Laboratories, Albuquerque, New Mexico 87185 (United States)

    2015-05-15

    Presented is a semi-analytic model of magnetized liner inertial fusion (MagLIF). This model accounts for several key aspects of MagLIF, including: (1) preheat of the fuel (optionally via laser absorption); (2) pulsed-power-driven liner implosion; (3) liner compressibility with an analytic equation of state, artificial viscosity, internal magnetic pressure, and ohmic heating; (4) adiabatic compression and heating of the fuel; (5) radiative losses and fuel opacity; (6) magnetic flux compression with Nernst thermoelectric losses; (7) magnetized electron and ion thermal conduction losses; (8) end losses; (9) enhanced losses due to prescribed dopant concentrations and contaminant mix; (10) deuterium-deuterium and deuterium-tritium primary fusion reactions for arbitrary deuterium to tritium fuel ratios; and (11) magnetized α-particle fuel heating. We show that this simplified model, with its transparent and accessible physics, can be used to reproduce the general 1D behavior presented throughout the original MagLIF paper [S. A. Slutz et al., Phys. Plasmas 17, 056303 (2010)]. We also discuss some important physics insights gained as a result of developing this model, such as the dependence of radiative loss rates on the radial fraction of the fuel that is preheated.

  3. Semi-analytical solutions of the Schnakenberg model of a reaction-diffusion cell with feedback

    Al Noufaey, K. S.

    2018-06-01

    This paper considers the application of a semi-analytical method to the Schnakenberg model of a reaction-diffusion cell. The semi-analytical method is based on the Galerkin method which approximates the original governing partial differential equations as a system of ordinary differential equations. Steady-state curves, bifurcation diagrams and the region of parameter space in which Hopf bifurcations occur are presented for semi-analytical solutions and the numerical solution. The effect of feedback control, via altering various concentrations in the boundary reservoirs in response to concentrations in the cell centre, is examined. It is shown that increasing the magnitude of feedback leads to destabilization of the system, whereas decreasing this parameter to negative values of large magnitude stabilizes the system. The semi-analytical solutions agree well with numerical solutions of the governing equations.
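
    The Galerkin reduction described here can be sketched in a few lines: a one-term trial function turns the Schnakenberg PDEs into two ODEs for the mode amplitudes. This is a minimal sketch under assumed parameter values and boundary (reservoir) concentrations, not the authors' full expansion.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

# One-term Galerkin truncation of the Schnakenberg system
#   u_t = u_xx + gamma*(a - u + u^2 v),   v_t = d*v_xx + gamma*(b - u^2 v)
# on x in [0,1] with fixed boundary (reservoir) values. Parameters assumed.
a, b, gamma, d = 0.1, 0.9, 1.0, 10.0
ub, vb = a + b, b / (a + b)**2           # reservoir concentrations (well-mixed steady state)

phi = lambda x: np.sin(np.pi * x)        # trial/test function, phi(0) = phi(1) = 0

def rhs(t, y):
    U, V = y
    u = lambda x: ub + U * phi(x)        # one-term expansions
    v = lambda x: vb + V * phi(x)
    # Galerkin projection: multiply by phi and integrate (norm of phi^2 is 1/2)
    fu = quad(lambda x: gamma * (a - u(x) + u(x)**2 * v(x)) * phi(x), 0, 1)[0]
    fv = quad(lambda x: gamma * (b - u(x)**2 * v(x)) * phi(x), 0, 1)[0]
    dU = -np.pi**2 * U + 2.0 * fu        # diffusion term integrates exactly
    dV = -d * np.pi**2 * V + 2.0 * fv
    return [dU, dV]

sol = solve_ivp(rhs, (0.0, 50.0), [0.1, 0.1])
print("steady-state amplitudes:", sol.y[:, -1])
```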

  4. Two-dimensional semi-analytic nodal method for multigroup pin power reconstruction

    Seung Gyou, Baek; Han Gyu, Joo; Un Chul, Lee

    2007-01-01

    A pin power reconstruction method applicable to multigroup problems involving square fuel assemblies is presented. The method is based on a two-dimensional semi-analytic nodal solution which consists of eight exponential terms and 13 polynomial terms. The 13 polynomial terms represent the particular solution obtained under the condition of a two-dimensional 13-term source expansion. In order to achieve a better approximation of the source distribution, a least-squares fitting method is employed. The eight exponential terms represent a part of the analytically obtained homogeneous solution, and the eight coefficients are determined by imposing constraints on the four surface average currents and four corner point fluxes. The surface average currents determined from a transverse-integrated nodal solution are used directly, whereas the corner point fluxes are determined during the course of the reconstruction by employing an iterative scheme that realizes the corner point balance condition. A new corner-point flux determination scheme based on outgoing currents is introduced. The accuracy of the proposed method is demonstrated with the L336C5 benchmark problem. (authors)

  5. Semi-analytical wave functions in relativistic average atom model for high-temperature plasmas

    Guo Yonghui; Duan Yaoyong; Kuai Bin

    2007-01-01

    The semi-analytical method is utilized for solving a relativistic average atom model for high-temperature plasmas. A semi-analytical wave function and the corresponding energy eigenvalue, containing only a numerical factor, are obtained by fitting the potential function in the average atom to a hydrogen-like one. The full equations for the model are enumerated, and particular attention is paid to the detailed procedures, including the numerical techniques and computer code design. When the temperature of the plasma is comparatively high, the semi-analytical results agree quite well with those obtained using a full numerical method for the same model and with those calculated using slightly different physical models, and the accuracy and computational efficiency of the results are noteworthy. The drawbacks of this model are also analyzed. (authors)

  6. A fast semi-analytical model for the slotted structure of induction motors

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    A fast, semi-analytical model for induction motors (IMs) is presented. In comparison to traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous distribution of the magnetic flux density in

  7. Evaluation of subject contrast and normalized average glandular dose by semi-analytical models

    Tomal, A.; Poletti, M.E.; Caldas, L.V.E.

    2010-01-01

    In this work, two semi-analytical models are described to evaluate the subject contrast of nodules and the normalized average glandular dose in mammography. Both models were used to study the influence of some parameters, such as breast characteristics (thickness and composition) and incident spectra (kVp and target-filter combination) on the subject contrast of a nodule and on the normalized average glandular dose. From the subject contrast results, detection limits of nodules were also determined. Our results are in good agreement with those reported by other authors, who had used Monte Carlo simulation, showing the robustness of our semi-analytical method.

  8. Semi-analytical Model for Estimating Absorption Coefficients of Optically Active Constituents in Coastal Waters

    Wang, D.; Cui, Y.

    2015-12-01

    The objectives of this paper are to validate the applicability of a multi-band quasi-analytical algorithm (QAA) for retrieving the absorption coefficients of optically active constituents in turbid coastal waters, and to further improve the model using a proposed semi-analytical model (SAA). In the SAA model, ap(531) and ag(531) are derived semi-analytically, in contrast to the QAA procedure, in which ap(531) and ag(531) are derived from the empirical retrieval results of a(531) and a(551). The two models are calibrated and evaluated against datasets taken from 19 independent cruises on the West Florida Shelf in 1999-2003, provided by SeaBASS. The results indicate that the SAA model produces superior performance to the QAA model in absorption retrieval. Using the SAA model to retrieve the absorption coefficients of optically active constituents on the West Florida Shelf decreases the random uncertainty of estimation by more than 23.05% relative to the QAA model. This study demonstrates the potential of the SAA model for estimating the absorption coefficients of optically active constituents even in turbid coastal waters. Keywords: Remote sensing; Coastal Water; Absorption Coefficient; Semi-analytical Model

  9. A comparison of galaxy group luminosity functions from semi-analytic models

    Snaith, Owain N.; Gibson, Brad K.; Brook, Chris B.; Courty, Stéphanie; Sánchez-Blázquez, Patricia; Kawata, Daisuke; Knebe, Alexander; Sales, Laura V.

    Semi-analytic models (SAMs) are currently one of the primary tools with which we model statistically significant ensembles of galaxies. The underlying physical prescriptions inherent to each SAM are, in many cases, different from one another. Several SAMs have been applied to the dark matter merger

  10. Magnetic saturation in semi-analytical harmonic modeling for electric machine analysis

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.

    2016-01-01

    A semi-analytical method based on the harmonic modeling (HM) technique is presented for the analysis of the magneto-static field distribution in the slotted structure of rotating electric machines. In contrast to the existing literature, the proposed model does not require the assumption of infinite

  11. Comparison of a semi-analytic and a CFD model uranium combustion to experimental data

    Clarksean, R.

    1998-01-01

    Two numerical models were developed and compared for the analysis of uranium combustion and ignition in a furnace. Both a semi-analytical solution and a computational fluid dynamics (CFD) numerical solution were obtained. Prediction of uranium oxidation rates is important for fuel storage applications, fuel processing, and the development of spent fuel metal waste forms. The semi-analytical model was based on heat transfer correlations, a semi-analytical model of flow over a flat surface, and simple radiative heat transfer from the material surface. The CFD model numerically determined the flowfield over the object of interest, calculated the heat and mass transfer to the material of interest, and calculated the radiative heat exchange of the material with the furnace. The semi-analytical model is much less detailed than the CFD model, but yields reasonable results and assists in understanding the physical process. Short computation times allowed the analyst to study numerous scenarios. The CFD model had significantly longer run times, was found to have some physical limitations that were not easily modified, but was better able to yield details of the heat and mass transfer and flow field once code limitations were overcome

  12. Semi-analytical modelling of positive corona discharge in air

    Pontiga, Francisco; Yanallah, Khelifa; Chen, Junhong

    2013-09-01

    Semi-analytical approximate solutions for the spatial distribution of the electric field and the electron and ion densities have been obtained by solving Poisson's equation and the continuity equations for the charged species along the Laplacian field lines. The need to iterate for the correct value of space charge on the corona electrode has been eliminated by using the corona current distribution over the grounded plane derived by Deutsch, which predicts a cos^m(θ) law similar to Warburg's law. Based on the results of the approximate model, a parametric study of the influence of gas pressure, the corona wire radius, and the inter-electrode wire-plate separation has been carried out. Also, the approximate solution for the electron number density has been combined with a simplified plasma chemistry model in order to compute the ozone density generated by the corona discharge in the presence of a gas flow. This work was supported by the Consejeria de Innovacion, Ciencia y Empresa (Junta de Andalucia) and by the Ministerio de Ciencia e Innovacion, Spain, within the European Regional Development Fund contracts FQM-4983 and FIS2011-25161.
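
    A small worked example of the cos^m(θ) current distribution mentioned above: given a total corona current, the on-axis current density J0 follows from integrating J(θ) = J0 cos^m(θ) over the grounded plane. The gap distance, total current and exponent m below are assumed values for illustration.

```python
import numpy as np
from scipy.integrate import quad

# Warburg-type current-density distribution over the grounded plane:
# J(theta) = J0 * cos(theta)**m, valid out to roughly 60 degrees (m = 5
# is Warburg's classical exponent; all numbers here are assumed).
I, d, m = 50e-6, 0.02, 5.0          # total current [A], gap [m], exponent
theta_max = np.deg2rad(60.0)

# a ring at angle theta has radius r = d*tan(theta), so dA = 2*pi*d^2*tan(theta)*sec^2(theta)*dtheta
integrand = lambda th: np.cos(th)**m * 2*np.pi * d**2 * np.tan(th) / np.cos(th)**2
J0 = I / quad(integrand, 0.0, theta_max)[0]   # normalize to the total current

for deg in (0, 20, 40, 60):
    th = np.deg2rad(deg)
    print(f"theta={deg:2d} deg  J={J0*np.cos(th)**m*1e3:.3f} mA/m^2")
```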

  13. A fluid-coupled transmitting CMUT operated in collapse mode : Semi-analytic modeling and experiments

    Pekař, Martin; van Nispen, Stephan H.M.; Fey, Rob H.B.; Shulepov, Sergei; Mihajlović, Nenad; Nijmeijer, Henk

    2017-01-01

    An electro-mechanical, semi-analytic, reduced-order (RO) model of a fluid-loaded transmitting capacitive-micromachined ultrasound transducer (CMUT) operated in collapse mode is developed. Simulation of static deflections, approximated by a linear combination of six mode shapes, are benchmarked

  14. Accuracy of semi-analytical finite elements for modelling wave propagation in rails

    Andhavarapu, EV

    2010-01-01

    The semi-analytical finite element method (SAFE) is a popular method for analysing guided wave propagation in elastic waveguides of complex cross-section, such as rails. The convergence of these models has previously been studied for linear...

  15. Simplified semi-analytical model for mass transport simulation in unsaturated zone

    Sa, Bernadete L. Vieira de; Hiromoto, Goro

    2001-01-01

    This paper describes a simple model to determine the flux of radionuclides released from a concrete vault repository, and its implementation through the development of a computer program. The radionuclide leach rate from the waste is calculated using a model based on simple first-order kinetics, and the transport through the porous media below the waste is determined using a semi-analytical solution of the mass transport equation. Results obtained in the IAEA intercomparison program are also reported in this communication. (author)
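
    A minimal sketch of the two ingredients described above, with assumed parameter values: first-order leaching from the waste, and the classical Ogata-Banks semi-analytical solution for 1D advective-dispersive transport below it, used here as a stand-in for the paper's particular solution.

```python
import numpy as np
from scipy.special import erfc

# Assumed parameters for illustration only.
k = 1e-3          # leach-rate constant [1/yr]
A0 = 1.0          # initial inventory [Bq]
v, D = 0.5, 0.1   # pore velocity [m/yr], dispersion coefficient [m^2/yr]

def leach_flux(t):
    """First-order kinetics: release rate = k * remaining inventory."""
    return k * A0 * np.exp(-k * t)

def c_over_c0(x, t):
    """Ogata-Banks solution for a constant-concentration inlet boundary."""
    a = (x - v * t) / (2.0 * np.sqrt(D * t))
    b = (x + v * t) / (2.0 * np.sqrt(D * t))
    return 0.5 * (erfc(a) + np.exp(v * x / D) * erfc(b))

print(leach_flux(100.0), c_over_c0(5.0, 100.0))
```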

  16. A semi-analytical stationary model of a point-to-plane corona discharge

    Yanallah, K; Pontiga, F

    2012-01-01

    A semi-analytical model of a dc corona discharge is formulated to determine the spatial distribution of charged particles (electrons, negative ions and positive ions) and the electric field in pure oxygen using a point-to-plane electrode system. A key point in the modeling is the integration of Gauss' law and the continuity equation of charged species along the electric field lines, and the use of Warburg's law and the corona current–voltage characteristics as input data in the boundary conditions. The electric field distribution predicted by the model is compared with the numerical solution obtained using a finite-element technique. The semi-analytical solutions are obtained at a negligible computational cost, and provide useful information to characterize and control the corona discharge in different technological applications. (paper)

  17. Research on bathymetry estimation by Worldview-2 based with the semi-analytical model

    Sheng, L.; Bai, J.; Zhou, G.-W.; Zhao, Y.; Li, Y.-C.

    2015-04-01

    The South Sea Islands of China are far from the mainland; reefs make up more than 95% of the South Sea area, and most reefs are scattered over disputed, sensitive areas of interest. Methods for obtaining reef bathymetry accurately are therefore urgently needed. Commonly used methods, including sonar, airborne laser and remote sensing estimation, are limited by the long distances, large areas and sensitive locations involved. Remote sensing data provide an effective way to estimate bathymetry over large areas without physical contact, through the relationship between spectral information and water depth. Aimed at the water quality of the South Sea of China, this paper develops a bathymetry estimation method that requires no measured water depths. First, a semi-analytical optimization model of the theoretical interpretation models is developed, using a genetic algorithm to optimize the model. Meanwhile, an OpenMP parallel computing algorithm is introduced to greatly increase the speed of the semi-analytical optimization model. One island in the South Sea of China is selected as the study area, and measured water depths are used to evaluate the accuracy of the bathymetry estimated from Worldview-2 multispectral images. The results show that the semi-analytical optimization model based on the genetic algorithm performs well in the study area, and that the accuracy of the estimated bathymetry in the 0-20 m shallow-water range is acceptable. The semi-analytical optimization model based on the genetic algorithm solves the problem of bathymetry estimation without water depth measurements. In general, this paper provides a new bathymetry estimation method for sensitive reefs far from the mainland.
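
    To illustrate the inversion strategy, the sketch below fits depth by minimizing the misfit between an observed reflectance and a simplified Maritorena-style shallow-water reflectance model, using SciPy's differential evolution as a genetic-type optimizer. The forward model, its coefficients and the optimizer are assumptions standing in for the authors' semi-analytical model and genetic algorithm.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Assumed optical properties for a single band.
K, R_inf, A = 0.12, 0.02, 0.15     # attenuation [1/m], deep-water reflectance, bottom albedo

def reflectance(z):
    """Simplified shallow-water model: R(z) = R_inf + (A - R_inf)*exp(-2*K*z)."""
    return R_inf + (A - R_inf) * np.exp(-2.0 * K * z)

r_obs = reflectance(12.0) + 1e-4    # synthetic "observed" pixel (true depth 12 m)

# Invert the depth with a genetic-type (evolutionary) optimizer.
res = differential_evolution(lambda p: (reflectance(p[0]) - r_obs) ** 2,
                             bounds=[(0.0, 20.0)], seed=1, tol=1e-12)
print(f"estimated depth: {res.x[0]:.2f} m")
```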

  18. Semi-analytical approach to modelling the dynamic behaviour of soil excited by embedded foundations

    Bucinskas, Paulius; Andersen, Lars Vabbersgaard

    2017-01-01

    The underlying soil has a significant effect on the dynamic behaviour of structures. The paper proposes a semi-analytical approach based on a Green's function solution in the frequency-wavenumber domain. The procedure allows calculating the dynamic stiffness for points on the soil surface as well as... It is determined how simplification of the numerical model affects the overall dynamic behaviour.

  19. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Formosa, F., E-mail: fabien.formosa@univ-savoie.f [Laboratoire SYMME, Universite de Savoie, BP 80439, 74944 Annecy le Vieux Cedex (France)

    2011-05-15

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA RE-1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on non-linear dissipative phenomena, such as the pressure drop loss within heat exchangers, which is dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of the FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSEs.

  20. Coupled thermodynamic-dynamic semi-analytical model of free piston Stirling engines

    Formosa, F.

    2011-01-01

    Research highlights: → The free piston Stirling behaviour relies on its thermal and dynamic features. → A global semi-analytical model for preliminary design is developed. → The model compared with NASA RE-1000 experimental data shows good correlations. -- Abstract: The study of free piston Stirling engines (FPSE) requires both accurate thermodynamic and dynamic modelling to predict their performance. The steady-state behaviour of the engine partly relies on non-linear dissipative phenomena, such as the pressure drop loss within heat exchangers, which is dependent on the temperature within the associated components. An analytical thermodynamic model which encompasses the effectiveness and the flaws of the heat exchangers and the regenerator has been previously developed and validated. A semi-analytical dynamic model of the FPSE is developed and presented in this paper. The thermodynamic model is used to define the thermal variables that are used in the dynamic model, which evaluates the kinematic results. Thus, a coupled iterative strategy has been used to perform a global simulation. The global modelling approach has been validated using the experimental data available from the NASA RE-1000 Stirling engine prototype. The resulting coupled thermodynamic-dynamic model, using a standardized description of the engine, allows efficient and realistic preliminary design of FPSEs.
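
    The coupled iterative strategy used in both versions of this record can be reduced to a fixed-point loop between the two sub-models. The sketch below uses toy placeholder functions, not the paper's thermodynamic or dynamic equations, purely to show the control flow, including the under-relaxation typically needed for such couplings.

```python
# Schematic coupled thermodynamic-dynamic iteration (hypothetical placeholders):
# thermo_model maps an operating point to an effective thermal state, and
# dynamic_model maps that thermal state back to a piston amplitude.
def thermo_model(amplitude):
    # toy stand-in: losses grow with stroke, lowering the hot-side temperature
    return 900.0 - 50.0 * amplitude          # effective heater temperature [K]

def dynamic_model(t_hot):
    # toy stand-in: a higher temperature ratio drives a larger piston stroke
    return 0.002 * (t_hot - 300.0) / 100.0   # piston amplitude [m] (scaled)

amp = 0.005                                   # initial guess [m]
for it in range(100):
    t_hot = thermo_model(amp)                 # 1) thermal state for current stroke
    amp_new = dynamic_model(t_hot)            # 2) kinematics for that thermal state
    if abs(amp_new - amp) < 1e-9:             # 3) fixed-point convergence test
        break
    amp = 0.5 * (amp + amp_new)               # under-relaxation for stability
print(it, amp, t_hot)
```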

  1. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García; Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D.; Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz

    2015-01-01

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.

  2. CALIBRATION OF SEMI-ANALYTIC MODELS OF GALAXY FORMATION USING PARTICLE SWARM OPTIMIZATION

    Ruiz, Andrés N.; Domínguez, Mariano J.; Yaryura, Yamila; Lambas, Diego García [Instituto de Astronomía Teórica y Experimental, CONICET-UNC, Laprida 854, X5000BGR, Córdoba (Argentina); Cora, Sofía A.; Martínez, Cristian A. Vega-; Gargiulo, Ignacio D. [Consejo Nacional de Investigaciones Científicas y Técnicas, Rivadavia 1917, C1033AAJ Buenos Aires (Argentina); Padilla, Nelson D.; Tecce, Tomás E.; Orsi, Álvaro; Arancibia, Alejandra M. Muñoz, E-mail: andresnicolas@oac.uncor.edu [Instituto de Astrofísica, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago (Chile)

    2015-03-10

    We present a fast and accurate method to select an optimal set of parameters in semi-analytic models of galaxy formation and evolution (SAMs). Our approach compares the results of a model against a set of observables applying a stochastic technique called Particle Swarm Optimization (PSO), a self-learning algorithm for localizing regions of maximum likelihood in multidimensional spaces that outperforms traditional sampling methods in terms of computational cost. We apply the PSO technique to the SAG semi-analytic model combined with merger trees extracted from a standard Lambda Cold Dark Matter N-body simulation. The calibration is performed using a combination of observed galaxy properties as constraints, including the local stellar mass function and the black hole to bulge mass relation. We test the ability of the PSO algorithm to find the best set of free parameters of the model by comparing the results with those obtained using an MCMC exploration. Both methods find the same maximum likelihood region; however, the PSO method requires one order of magnitude fewer evaluations. This new approach allows a fast estimation of the best-fitting parameter set in multidimensional spaces, providing a practical tool to test the consequences of including other astrophysical processes in SAMs.
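
    For reference, a bare-bones global-best PSO of the kind described in this record looks as follows. The inertia and acceleration coefficients, the toy objective (standing in for the negative log-likelihood of a SAM calibration) and the bounds are all assumptions, not the SAG calibration code.

```python
import numpy as np

# Minimal global-best particle swarm optimizer; maximizing a likelihood is
# cast as minimizing -log(L).
def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_f = x.copy(), np.array([objective(p) for p in x])
    g = pbest[np.argmin(pbest_f)]                     # global best position
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)   # inertia + memory + swarm pull
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

# toy objective standing in for -log(likelihood) of a model calibration
bounds = np.array([[-5.0, 5.0]] * 3)
best, fbest = pso(lambda p: np.sum((p - 1.0) ** 2), bounds)
print(best, fbest)
```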

  3. A semi-analytical bearing model considering outer race flexibility for model based bearing load monitoring

    Kerst, Stijn; Shyrokau, Barys; Holweg, Edward

    2018-05-01

    This paper proposes a novel semi-analytical bearing model that addresses the flexibility of the bearing outer race structure. It furthermore presents the application of this model in a bearing load condition monitoring approach. The bearing model is developed because current computationally low-cost bearing models, due to their assumptions of rigidity, fail to provide an accurate description of the increasingly common flexible, size- and weight-optimized bearing designs. In the proposed bearing model, raceway flexibility is described by the use of static deformation shapes. The excitation of the deformation shapes is calculated based on the modelled rolling element loads and a Fourier-series-based compliance approximation. The resulting model is computationally low cost and provides an accurate description of the rolling element loads for flexible outer raceway structures. The latter is validated by a simulation-based comparison study with a well-established bearing simulation software tool. An experimental study finally shows the potential of the proposed model in a bearing load monitoring approach.

  4. Simulation of reactive geochemical transport in groundwater using a semi-analytical screening model

    McNab, Walt W.

    1997-10-01

    A reactive geochemical transport model, based on a semi-analytical solution to the advective-dispersive transport equation in two dimensions, is developed as a screening tool for evaluating the impact of reactive contaminants on aquifer hydrogeochemistry. Because the model utilizes an analytical solution to the transport equation, it is less computationally intensive than models based on numerical transport schemes, faster, and not subject to numerical dispersion effects. Although the assumptions used to construct the model preclude consideration of reactions between the aqueous and solid phases, thermodynamic mineral saturation indices are calculated to provide qualitative insight into such reactions. Test problems involving acid mine drainage and hydrocarbon biodegradation signatures illustrate the utility of the model in simulating essential hydrogeochemical phenomena.
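
    One common way to build such a semi-analytical screening solution (an assumed device for illustration; the paper's exact solution may differ) is to integrate the closed-form 2D instantaneous point-source solution over release time, so that only a one-dimensional quadrature remains:

```python
import numpy as np
from scipy.integrate import quad

# Assumed transport parameters for illustration.
vx, Dx, Dy, lam = 0.2, 0.5, 0.05, 1e-4   # velocity [m/d], dispersion [m^2/d], decay [1/d]
m_dot = 1.0                               # mass release rate [g/d per unit depth]

def puff(x, y, tau):
    """2D Gaussian puff of unit mass and age tau, with advection and decay."""
    g = np.exp(-(x - vx*tau)**2 / (4*Dx*tau) - y**2 / (4*Dy*tau) - lam*tau)
    return g / (4.0 * np.pi * np.sqrt(Dx * Dy) * tau)

def concentration(x, y, t):
    # superpose puffs released over 0..t (quadrature in release time;
    # the lower bound is nudged off zero to avoid the removable singularity)
    return m_dot * quad(lambda tau: puff(x, y, tau), 1e-9, t, limit=200)[0]

print(concentration(10.0, 0.0, 365.0))
```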

  5. Semi-analytical models of hydroelastic sloshing impact in tanks of liquefied natural gas vessels.

    Ten, I; Malenica, Š; Korobkin, A

    2011-07-28

    The present paper deals with the methods for the evaluation of the hydroelastic interactions that appear during the violent sloshing impacts inside the tanks of liquefied natural gas carriers. The complexity of both the fluid flow and the structural behaviour (containment system and ship structure) does not allow for a fully consistent direct approach according to the present state of the art. Several simplifications are thus necessary in order to isolate the most dominant physical aspects and to treat them properly. In this paper, choice was made of semi-analytical modelling for the hydrodynamic part and finite-element modelling for the structural part. Depending on the impact type, different hydrodynamic models are proposed, and the basic principles of hydroelastic coupling are clearly described and validated with respect to the accuracy and convergence of the numerical results.

  6. Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems

    Sandwell, David; Smith-Konter, Bridget

    2018-05-01

    We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations embedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System, where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
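
    The Fourier-domain strategy can be illustrated schematically: grid the force couples, multiply their 2D FFT by a transfer function, and invert. The transfer function below is a placeholder decay, not the elastic/viscoelastic solution actually implemented in the Maxwell code; all parameter values are assumed.

```python
import numpy as np

# Schematic FFT-based convolution for surface response to gridded force couples.
n, dx, h, mu = 256, 1e3, 10e3, 30e9       # grid size, spacing [m], depth [m], shear modulus [Pa]

force = np.zeros((n, n))
force[n//2, n//2 - 10] = +1e12            # a force couple straddling a "fault"
force[n//2, n//2 + 10] = -1e12

kx = np.fft.fftfreq(n, dx) * 2 * np.pi
ky = np.fft.fftfreq(n, dx) * 2 * np.pi
k = np.sqrt(kx[None, :]**2 + ky[:, None]**2)
k[0, 0] = np.inf                          # suppress the zero-wavenumber singularity

transfer = np.exp(-k * h) / (2.0 * mu * k)   # placeholder transfer function
u = np.fft.ifft2(np.fft.fft2(force) * transfer).real
print("peak displacement [m]:", np.abs(u).max())
```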

  7. A semi-analytical refrigeration cycle modelling approach for a heat pump hot water heater

    Panaras, G.; Mathioulakis, E.; Belessiotis, V.

    2018-04-01

    The use of heat pump systems in applications like the production of hot water or space heating makes the modelling of the underlying processes important, both for the evaluation of the performance of existing systems and for design purposes. The proposed semi-analytical model offers the opportunity to estimate the performance of a heat pump system producing hot water without using detailed geometrical data or any performance data. This is important, as for many commercial systems the type and characteristics of the subcomponents involved can hardly be determined, thus not allowing the implementation of more analytical approaches or the exploitation of the manufacturers' catalogue performance data. The analysis addresses the issues related to the development of the models of the subcomponents involved in the studied system. Issues not discussed thoroughly in the existing literature, such as the refrigerant mass inventory when an accumulator is present, are examined effectively.

  8. A semi-analytical foreshock model for energetic storm particle events inside 1 AU

    Vainio Rami

    2014-02-01

    We have constructed a semi-analytical model of the energetic-ion foreshock of a CME-driven coronal/interplanetary shock wave responsible for the acceleration of large solar energetic particle (SEP) events. The model is based on the analytical model of diffusive shock acceleration of Bell (1978), appended with a temporal dependence of the cut-off momentum of the energetic particles accelerated at the shock, derived from the theory. Parameters of the model are re-calibrated using a fully time-dependent, self-consistent simulation model of the coupled particle acceleration and Alfvén-wave generation upstream of the shock. Our results show that analytical estimates of the cut-off energy resulting from the simplified theory, which are frequently used in SEP modelling, overestimate the cut-off momentum at the shock by one order of magnitude. We also show that the cut-off momentum observed remotely far upstream of the shock (e.g., at 1 AU) can be used to infer the properties of the foreshock and the resulting energetic storm particle (ESP) event while the shock is still at small distances from the Sun, inaccessible to in-situ observations. Our results can be used in ESP event modelling for future missions to the inner heliosphere, like Solar Orbiter and Solar Probe Plus, as well as in developing acceleration models for SEP events in the solar corona.

  9. A semi-analytical beam model for the vibration of railway tracks

    Kostovasilis, D.; Thompson, D. J.; Hussein, M. F. M.

    2017-04-01

    The high frequency dynamic behaviour of railway tracks, in both vertical and lateral directions, strongly affects the generation of rolling noise as well as other phenomena such as rail corrugation. An improved semi-analytical model of a beam on an elastic foundation is introduced that accounts for the coupling of the vertical and lateral vibration. The model includes the effects of cross-section asymmetry, shear deformation, rotational inertia and restrained warping. Consideration is given to the fact that the loads at the rail head, as well as those exerted by the railpads at the rail foot, may not act through the centroid of the section. The response is evaluated for a harmonic load and the solution is obtained in the wavenumber domain. Results are presented as dispersion curves for free and supported rails and are validated with the aid of a Finite Element (FE) and a waveguide finite element (WFE) model. Closed form expressions are derived for the forced response, and validated against the WFE model. Track mobilities and decay rates are presented to assess the potential implications for rolling noise and the influence of the various sources of vertical-lateral coupling. Comparison is also made with measured data. Overall, the model presented performs very well, especially for the lateral vibration, although it does not contain the high frequency cross-section deformation modes. The most significant effects on the response are shown to be the inclusion of torsion and foundation eccentricity, which mainly affect the lateral response.

  10. THE STELLAR MASS COMPONENTS OF GALAXIES: COMPARING SEMI-ANALYTICAL MODELS WITH OBSERVATION

    Liu Lei; Yang Xiaohu; Mo, H. J.; Van den Bosch, Frank C.; Springel, Volker

    2010-01-01

    We compare the stellar masses of central and satellite galaxies predicted by three independent semi-analytical models (SAMs) with observational results obtained from a large galaxy group catalog constructed from the Sloan Digital Sky Survey. In particular, we compare the stellar mass functions of centrals and satellites, the relation between total stellar mass and halo mass, and the conditional stellar mass functions, Φ(M*|Mh), which specify the average number of galaxies of stellar mass M* that reside in a halo of mass Mh. The SAMs only predict the correct stellar masses of central galaxies within a limited mass range and all models fail to reproduce the sharp decline of stellar mass with decreasing halo mass observed at the low-mass end. In addition, all models over-predict the number of satellite galaxies by roughly a factor of 2. The predicted stellar mass in satellite galaxies can be made to match the data by assuming that a significant fraction of satellite galaxies are tidally stripped and disrupted, giving rise to a population of intra-cluster stars (ICS) in their host halos. However, the amount of ICS thus predicted is too large compared to observation. This suggests that current galaxy formation models still have serious problems in modeling star formation in low-mass halos.

  11. Self-consistent semi-analytic models of the first stars

    Visbal, Eli; Haiman, Zoltán; Bryan, Greg L.

    2018-04-01

    We have developed a semi-analytic framework to model the large-scale evolution of the first Population III (Pop III) stars and the transition to metal-enriched star formation. Our model follows dark matter haloes from cosmological N-body simulations, utilizing their individual merger histories and three-dimensional positions, and applies physically motivated prescriptions for star formation and feedback from Lyman-Werner (LW) radiation, hydrogen ionizing radiation, and external metal enrichment due to supernovae winds. This method is intended to complement analytic studies, which do not include clustering or individual merger histories, and hydrodynamical cosmological simulations, which include detailed physics, but are computationally expensive and have limited dynamic range. Utilizing this technique, we compute the cumulative Pop III and metal-enriched star formation rate density (SFRD) as a function of redshift at z ≥ 20. We find that varying the model parameters leads to significant qualitative changes in the global star formation history. The Pop III star formation efficiency and the delay time between Pop III and subsequent metal-enriched star formation are found to have the largest impact. The effect of clustering (i.e. including the three-dimensional positions of individual haloes) on various feedback mechanisms is also investigated. The impact of clustering on LW and ionization feedback is found to be relatively mild in our fiducial model, but can be larger if external metal enrichment can promote metal-enriched star formation over large distances.

  12. SEMI-ANALYTIC GALAXY EVOLUTION (SAGE): MODEL CALIBRATION AND BASIC RESULTS

    Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M. [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, Victoria 3122 (Australia)

    2016-02-15

    This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling–radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.

  13. SEMI-ANALYTIC GALAXY EVOLUTION (SAGE): MODEL CALIBRATION AND BASIC RESULTS

    Croton, Darren J.; Stevens, Adam R. H.; Tonini, Chiara; Garel, Thibault; Bernyk, Maksym; Bibiano, Antonio; Hodkinson, Luke; Mutch, Simon J.; Poole, Gregory B.; Shattow, Genevieve M.

    2016-01-01

    This paper describes a new publicly available codebase for modeling galaxy formation in a cosmological context, the “Semi-Analytic Galaxy Evolution” model, or sage for short. sage is a significant update to the 2006 model of Croton et al. and has been rebuilt to be modular and customizable. The model will run on any N-body simulation whose trees are organized in a supported format and contain a minimum set of basic halo properties. In this work, we present the baryonic prescriptions implemented in sage to describe the formation and evolution of galaxies, and their calibration for three N-body simulations: Millennium, Bolshoi, and GiggleZ. Updated physics include the following: gas accretion, ejection due to feedback, and reincorporation via the galactic fountain; a new gas cooling–radio mode active galactic nucleus (AGN) heating cycle; AGN feedback in the quasar mode; a new treatment of gas in satellite galaxies; and galaxy mergers, disruption, and the build-up of intra-cluster stars. Throughout, we show the results of a common default parameterization on each simulation, with a focus on the local galaxy population.

  14. A SEMI-ANALYTICAL LINE TRANSFER MODEL TO INTERPRET THE SPECTRA OF GALAXY OUTFLOWS

    Scarlata, C.; Panagia, N.

    2015-01-01

    We present a semi-analytical line transfer model (SALT) to study the absorption and re-emission line profiles from expanding galactic envelopes. The envelopes are described as a superposition of shells with density and velocity varying with the distance from the center. We adopt the Sobolev approximation to describe the interaction between the photons escaping from each shell and the remainder of the envelope. We include the effect of multiple scatterings within each shell, properly accounting for the atomic structure of the scattering ions. We also account for the effect of a finite circular aperture on actual observations. For equal geometries and density distributions, our models reproduce the main features of the profiles generated with more complicated transfer codes. Our SALT line profiles also nicely reproduce the typical asymmetric resonant absorption line profiles observed in star-forming/starburst galaxies, whereas these absorption profiles cannot be reproduced with thin shells moving at a fixed outflow velocity. We show that scattered resonant emission fills in the resonant absorption profiles, with a strength that is different for each transition. Observationally, the effect of resonant filling depends on both the outflow geometry and the size of the outflow relative to the spectroscopic aperture. Neglecting these effects will lead to incorrect values of the gas covering fraction and column density. When a fluorescent channel is available, the resonant profiles alone cannot be used to infer the presence of scattered re-emission. Conversely, the presence of emission lines of fluorescent transitions reveals that emission filling cannot be neglected.

  15. Semi-analytical model for hollow-core anti-resonant fibers

    Wei Ding

    2015-03-01

    We describe in detail a recently developed semi-analytical method to quantitatively calculate the light transmission properties of hollow-core anti-resonant fibers (HC-ARFs). The formation of an equiphase interface at the fiber's outermost boundary and outward light emission governed by the Helmholtz equation in the fiber's transverse plane constitute the basis of this method. Our semi-analytical calculation results agree well with those of precise simulations and clarify the dependence of light leakage on azimuthal angle, geometrical shape and polarization. Using this method, we investigate HC-ARFs having various core shapes (e.g. polygon, hypocycloid) with single- and multi-layered core surrounds. The polarization properties of ARFs are also studied. Our semi-analytical method provides clear physical insights into the light guidance in ARFs and can serve as a fast and useful design aid for better ARFs.

  16. Numerical and semi-analytical modelling of the process induced distortions in pultrusion

    Baran, Ismet; Carlone, P.; Hattel, Jesper Henri

    2013-01-01

    The transient distortions are inferred by adopting a semi-analytical procedure, i.e. post-processing numerical results by means of analytical methods. The predictions of the process-induced distortion development using the aforementioned methods are found to be qualitatively close to each other...

  17. The effect of gas dynamics on semi-analytic modelling of cluster galaxies

    Saro, A.; De Lucia, G.; Dolag, K.; Borgani, S.

    2008-12-01

    We study the degree to which non-radiative gas dynamics affect the merger histories of haloes along with subsequent predictions from a semi-analytic model (SAM) of galaxy formation. To this aim, we use a sample of dark matter only and non-radiative smooth particle hydrodynamics (SPH) simulations of four massive clusters. The presence of gas-dynamical processes (e.g. ram pressure from the hot intra-cluster atmosphere) makes haloes more fragile in the runs which include gas. This results in a 25 per cent decrease in the total number of subhaloes at z = 0. The impact on the galaxy population predicted by SAMs is complicated by the presence of `orphan' galaxies, i.e. galaxies whose parent substructures are reduced below the resolution limit of the simulation. In the model employed in our study, these galaxies survive (unaffected by the tidal stripping process) for a residual merging time that is computed using a variation of the Chandrasekhar formula. Due to ram-pressure stripping, haloes in gas simulations tend to be less massive than their counterparts in the dark matter simulations. The resulting merging times for satellite galaxies are then longer in these simulations. On the other hand, the presence of gas influences the orbits of haloes making them on average more circular and therefore reducing the estimated merging times with respect to the dark matter only simulation. This effect is particularly significant for the most massive satellites and is (at least in part) responsible for the fact that brightest cluster galaxies in runs with gas have stellar masses which are about 25 per cent larger than those obtained from dark matter only simulations. Our results show that gas dynamics has only a marginal impact on the statistical properties of the galaxy population, but that its impact on the orbits and merging times of haloes strongly influences the assembly of the most massive galaxies.

  18. Massive quiescent galaxies at z > 3 in the Millennium simulation populated by a semi-analytic galaxy formation model

    Rong, Yu; Jing, Yingjie; Gao, Liang; Guo, Qi; Wang, Jie; Sun, Shuangpeng; Wang, Lin; Pan, Jun

    2017-10-01

    We take advantage of the statistical power of the large-volume dark-matter-only Millennium simulation (MS), combined with a sophisticated semi-analytic galaxy formation model, to explore whether the recently reported z = 3.7 quiescent galaxy ZF-COSMOS-20115 (ZF) can be accommodated in current galaxy formation models. In our model, a population of quiescent galaxies with stellar masses and star formation rates comparable to those of ZF naturally emerges at redshifts z > 3.5. Although these massive QGs are rare (about 2 per cent of the galaxies with similar stellar masses), the existing AGN feedback model implemented in the semi-analytic galaxy formation model can successfully explain the formation of the high-redshift QGs, as it does for their lower redshift counterparts.

  19. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    Lu, Yu; Wechsler, Risa H.; Somerville, Rachel S.; Croton, Darren; Porter, Lauren; Primack, Joel; Moody, Chris; Behroozi, Peter S.; Ferguson, Henry C.; Koo, David C.; Guo, Yicheng; Safarzadeh, Mohammadtaher; White, Catherine E.; Finlator, Kristian; Castellano, Marco; Sommariva, Veronica

    2014-01-01

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  20. Semi-analytic models for the CANDELS survey: comparison of predictions for intrinsic galaxy properties

    Lu, Yu; Wechsler, Risa H. [Kavli Institute for Particle Astrophysics and Cosmology, Physics Department, and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Somerville, Rachel S. [Department of Physics and Astronomy, Rutgers University, 136 Frelinghuysen Road, Piscataway, NJ 08854 (United States); Croton, Darren [Centre for Astrophysics and Supercomputing, Swinburne University of Technology, P.O. Box 218, Hawthorn, VIC 3122 (Australia); Porter, Lauren; Primack, Joel; Moody, Chris [Department of Physics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Behroozi, Peter S.; Ferguson, Henry C. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Koo, David C.; Guo, Yicheng [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Safarzadeh, Mohammadtaher; White, Catherine E. [Department of Physics and Astronomy, The Johns Hopkins University, 3400 North Charles Street, Baltimore, MD 21218 (United States); Finlator, Kristian [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen (Denmark); Castellano, Marco; Sommariva, Veronica, E-mail: luyu@stanford.edu, E-mail: rwechsler@stanford.edu [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monteporzio (Italy)

    2014-11-10

    We compare the predictions of three independently developed semi-analytic galaxy formation models (SAMs) that are being used to aid in the interpretation of results from the CANDELS survey. These models are each applied to the same set of halo merger trees extracted from the 'Bolshoi' high-resolution cosmological N-body simulation and are carefully tuned to match the local galaxy stellar mass function using the powerful method of Bayesian Inference coupled with Markov Chain Monte Carlo or by hand. The comparisons reveal that in spite of the significantly different parameterizations for star formation and feedback processes, the three models yield qualitatively similar predictions for the assembly histories of galaxy stellar mass and star formation over cosmic time. Comparing SAM predictions with existing estimates of the stellar mass function from z = 0-8, we show that the SAMs generally require strong outflows to suppress star formation in low-mass halos to match the present-day stellar mass function, as is the present common wisdom. However, all of the models considered produce predictions for the star formation rates (SFRs) and metallicities of low-mass galaxies that are inconsistent with existing data. The predictions for metallicity-stellar mass relations and their evolution clearly diverge between the models. We suggest that large differences in the metallicity relations and small differences in the stellar mass assembly histories of model galaxies stem from different assumptions for the outflow mass-loading factor produced by feedback. Importantly, while more accurate observational measurements for stellar mass, SFR and metallicity of galaxies at 1 < z < 5 will discriminate between models, the discrepancies between the constrained models and existing data of these observables have already revealed challenging problems in understanding star formation and its feedback in galaxy formation. The three sets of models are being used to construct catalogs

  1. Hydrodynamical simulations and semi-analytic models of galaxy formation: two sides of the same coin

    Neistein, Eyal; Khochfar, Sadegh; Dalla Vecchia, Claudio; Schaye, Joop

    2012-04-01

    In this work we develop a new method to turn a state-of-the-art hydrodynamical cosmological simulation of galaxy formation (HYD) into a simple semi-analytic model (SAM). This is achieved by summarizing the efficiencies of accretion, cooling, star formation and feedback given by the HYD, as functions of the halo mass and redshift. The SAM then uses these functions to evolve galaxies within merger trees that are extracted from the same HYD. Surprisingly, by turning the HYD into a SAM, we conserve the mass of individual galaxies, with deviations at the level of 0.1 dex, on an object-by-object basis, with no significant systematics. This is true for all redshifts, and for the mass of stars and gas components, although the agreement reaches 0.2 dex for satellite galaxies at low redshift. We show that the same level of accuracy is obtained even in the case where the SAM uses only one phase of gas within each galaxy. Moreover, we demonstrate that the formation history of one massive galaxy provides sufficient information for the SAM to reproduce the population of galaxies within the entire cosmological box. The reasons for the small scatter between the HYD and SAM galaxies are as follows. (i) The efficiencies are matched as functions of the halo mass and redshift, meaning that the evolution within merger trees agrees on average. (ii) For a given galaxy, efficiencies fluctuate around the mean value on time-scales of 0.2-2 Gyr. (iii) The various mass components of galaxies are obtained by integrating the efficiencies over time, averaging out these fluctuations. We compare the efficiencies found here to standard SAM recipes and find that they often deviate significantly. For example, here the HYD shows smooth accretion that is less effective for low-mass haloes, and is always composed of hot or dilute gas; cooling is less effective at high redshift, and star formation changes only mildly with cosmic time. The method developed here can be applied in general to any HYD, and can thus...

  2. Semi-analytical MBS Pricing

    Rom-Poulsen, Niels

    2007-01-01

    This paper presents a multi-factor valuation model for fixed-rate callable mortgage-backed securities (MBS). The model yields semi-analytic solutions for the value of the MBS, in the sense that the MBS value is found by solving a system of ordinary differential equations. Instead of modelling the cond... ...interest rate model. However, if the pool size is specified in a way that makes the expectations solvable using transform methods, semi-analytic pricing formulas are achieved. The affine and quadratic pricing frameworks are combined to get flexible and sophisticated prepayment functions. We show...
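
    The "value = solve a system of ODEs" idea can be made concrete in the simplest affine setting. The sketch below solves the Vasicek bond-pricing ODEs numerically and checks one component against its closed form; the MBS system described in this record is of this type but considerably richer, so this is only illustrative of the framework.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Under Vasicek dynamics dr = kappa*(theta - r)dt + sigma dW, the zero-coupon
# bond price is P(tau) = exp(A(tau) - B(tau)*r), with A and B given by ODEs.
kappa, theta, sigma = 0.3, 0.04, 0.01    # assumed parameters

def odes(tau, y):
    A, B = y
    dB = 1.0 - kappa * B                             # dB/dtau
    dA = -kappa * theta * B + 0.5 * sigma**2 * B**2  # dA/dtau
    return [dA, dB]

tau = 5.0
sol = solve_ivp(odes, (0.0, tau), [0.0, 0.0], rtol=1e-10, atol=1e-12)
A, B = sol.y[:, -1]
r0 = 0.03
print("P(0,5) =", np.exp(A - B * r0))
# sanity check against the closed form B(tau) = (1 - exp(-kappa*tau)) / kappa
print("B closed-form:", (1 - np.exp(-kappa * tau)) / kappa, " B from ODE:", B)
```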

  3. Galaxy modelling. II. Multi-wavelength faint counts from a semi-analytic model of galaxy formation

    Devriendt, J. E. G.; Guiderdoni, B.

    2000-11-01

    This paper predicts self-consistent faint galaxy counts from the UV to the submm wavelength range. The stardust spectral energy distributions described in Devriendt et al. (1999) (Paper I) are embedded within the explicit cosmological framework of a simple semi-analytic model of galaxy formation and evolution. We begin with a description of the non-dissipative and dissipative collapses of primordial perturbations, and plug in standard recipes for star formation, stellar evolution and feedback. We also model the absorption of starlight by dust and its re-processing in the IR and submm. We then build a class of models which capture the luminosity budget of the universe through faint galaxy counts and redshift distributions in the whole wavelength range spanned by our spectra. In contrast with a rather stable behaviour in the optical and even in the far-IR, the submm counts are dramatically sensitive to variations in the cosmological parameters and changes in the star formation history. Faint submm counts are more easily accommodated within an open universe with a low value of Omega_0, or a flat universe with a non-zero cosmological constant. We confirm the suggestion of Guiderdoni et al. (1998) that matching the current multi-wavelength data requires a population of heavily-extinguished, massive galaxies with large star formation rates (~500 M_sun yr^-1) at intermediate and high redshift (z >= 1.5). Such a population of objects probably is the consequence of an increase of interaction and merging activity at high redshift, but a realistic quantitative description can only be obtained through more detailed modelling of such processes. This study illustrates the implementation of multi-wavelength spectra into a semi-analytic model. In spite of its simplicity, it already provides fair fits of the current data of faint counts, and a physically motivated way of interpolating and extrapolating these data to other wavelengths and fainter flux...

  4. Developing semi-analytical solution for multiple-zone transient storage model with spatially non-uniform storage

    Deng, Baoqing; Si, Yinbing; Wang, Jia

    2017-12-01

    Transient storage may vary along a stream due to the stream's hydraulic conditions and the characteristics of the storage zones. Analytical solutions of transient storage models in the literature do not cover spatially non-uniform storage. A novel integral transform strategy is presented that simultaneously applies integral transforms to the concentrations in the stream and in the storage zones, using a single set of eigenfunctions derived from the advection-diffusion equation of the stream. The semi-analytical solution of the multiple-zone transient storage model with spatially non-uniform storage is obtained by applying the generalized integral transform technique to all partial differential equations in the multiple-zone transient storage model. The derived semi-analytical solution is validated against field data from the literature, and good agreement between the computed data and the field data is obtained. Some illustrative examples are formulated to demonstrate the applications of the present solution. It is shown that solute transport can be greatly affected by variations in the mass exchange coefficient and in the ratio of cross-sectional areas. When the ratio of cross-sectional areas is large or the mass exchange coefficient is small, more reaches are recommended for calibrating the parameters.
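
    A zero-dimensional analogue conveys why the mass exchange coefficient and the ratio of cross-sectional areas control the behaviour: one well-mixed stream cell coupled to one storage cell. The sketch below integrates that toy system; the parameter values are invented, and the advective and dispersive terms of the full model are deliberately omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-cell exchange: dC/dt = -alpha (C - Cs), dCs/dt = alpha (A/As) (C - Cs)
# alpha: mass exchange coefficient [1/s]; A, As: stream and storage areas.
def rhs(t, y, alpha, area_ratio):
    c, cs = y
    return [-alpha * (c - cs), alpha * area_ratio * (c - cs)]

alpha, area_ratio = 1e-4, 5.0    # illustrative values (A/As = 5)
sol = solve_ivp(rhs, (0.0, 3600.0 * 24), [1.0, 0.0],
                args=(alpha, area_ratio), max_step=600.0)
print(sol.y[:, -1])   # both cells relax to the same equilibrium value
```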

  5. A novel modular multilevel converter modelling technique based on semi-analytical models for HVDC application

    Ahmed Zama

    2016-12-01

    Thanks to its scalability, performance and efficiency, the Modular Multilevel Converter (MMC) has, since its invention, become an attractive topology in industrial applications such as high voltage direct current (HVDC) transmission systems. However, modelling challenges related to the high number of switching elements in the MMC are highlighted when such systems are integrated into large simulated networks for stability or protection algorithm testing. In this work, novel dynamic models for the MMC are proposed. The proposed models are intended to simplify the modelling challenges related to the high number of switching elements in the MMC. The models can be easily used to simulate the converter for stability analysis or protection algorithms for HVDC grids.

  6. A Semi-Analytical Model for Dispersion Modelling Studies in the Atmospheric Boundary Layer

    Gupta, A.; Sharan, M.

    2017-12-01

    The severe impact of harmful air pollutants has always been a cause of concern for a wide variety of air quality analyses. Analytical models based on the solution of the advection-diffusion equation were the first, and remain a convenient, way of modeling air pollutant dispersion, as it is easy to handle the dispersion parameters and the related physics in them. A mathematical model describing the crosswind-integrated concentration is presented. The analytical solution to the resulting advection-diffusion equation is limited to constant and simple profiles of eddy diffusivity and wind speed. In practice, the wind speed depends on the vertical height above the ground, and the eddy diffusivity profiles depend on the downwind distance from the source as well as the vertical height. In the present model, a method of eigenfunction expansion is used to solve the resulting partial differential equation with the appropriate boundary conditions. This leads to a system of first-order ordinary differential equations with a coefficient matrix depending on the downwind distance. The solution of this system can, in general, be expressed in terms of the Peano-Baker series, which is not easy to compute, particularly when the coefficient matrix is non-commutative (Martin et al., 1967). An approach based on Taylor series expansion is introduced to find the numerical solution of the first-order system. The method is applied to various profiles of wind speed and eddy diffusivities. The solution computed from the proposed methodology is found to be efficient and accurate in comparison with those available in the literature. The performance of the model is evaluated with the diffusion datasets from Copenhagen (Gryning et al., 1987) and Hanford (Doran et al., 1985). In addition, the proposed method is used to deduce three-dimensional concentrations by considering a Gaussian distribution in the crosswind direction, which is also evaluated with diffusion data corresponding to a continuous point source.
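
    The Taylor-series solution of the record is not reproduced here; as a rough stand-in, the sketch below advances a first-order system dc/dx = A(x) c by freezing A at each midpoint and applying a matrix exponential, the leading approximation to the Peano-Baker series. The 3-mode system and its coefficients are invented for illustration.

```python
import numpy as np
from scipy.linalg import expm

def march(A_of_x, c0, x_grid):
    """Advance dc/dx = A(x) c by freezing A at each interval midpoint and
    applying the matrix exponential; this is exact when A(x) commutes with
    itself at different x, and a leading approximation otherwise."""
    c = np.array(c0, dtype=float)
    out = [c.copy()]
    for x0, x1 in zip(x_grid[:-1], x_grid[1:]):
        c = expm(A_of_x(0.5 * (x0 + x1)) * (x1 - x0)) @ c
        out.append(c.copy())
    return np.array(out)

# Illustrative 3-mode system with coefficients varying in downwind distance.
A = lambda x: -np.diag([1.0, 2.0, 3.0]) / (1.0 + x)
profile = march(A, [1.0, 0.5, 0.2], np.linspace(0.0, 5.0, 51))
print(profile[-1])
```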

  7. CCS Site Optimization by Applying a Multi-objective Evolutionary Algorithm to Semi-Analytical Leakage Models

    Cody, B. M.; Gonzalez-Nicolas, A.; Bau, D. A.

    2011-12-01

    Carbon capture and storage (CCS) has been proposed as a method of reducing global carbon dioxide (CO2) emissions. Although CCS has the potential to greatly retard greenhouse gas loading to the atmosphere while cleaner, more sustainable energy solutions are developed, there is a possibility that sequestered CO2 may leak and intrude into and adversely affect groundwater resources. It has been reported [1] that, while CO2 intrusion typically does not directly threaten underground drinking water resources, it may cause secondary effects, such as the mobilization of hazardous inorganic constituents present in aquifer minerals and changes in pH values. These risks must be fully understood and minimized before CCS project implementation. Combined management of project resources and leakage risk is crucial for the implementation of CCS. In this work, we present a method of: (a) minimizing the total CCS cost, the summation of major project costs with the cost associated with CO2 leakage; and (b) maximizing the mass of injected CO2, for a given proposed sequestration site. Optimization decision variables include the number of CO2 injection wells, injection rates, and injection well locations. The capital and operational costs of injection wells are directly related to injection well depth, location, injection flow rate, and injection duration. The cost of leakage is directly related to the mass of CO2 leaked through weak areas, such as abandoned oil wells, in the cap rock layers overlying the injected formation. Additional constraints on fluid overpressure caused by CO2 injection are imposed to maintain predefined effective stress levels that prevent cap rock fracturing. Here, both mass leakage and fluid overpressure are estimated using two semi-analytical models based upon work by [2,3]. A multi-objective evolutionary algorithm coupled with these semi-analytical leakage flow models is used to determine Pareto-optimal trade-off sets giving minimum total cost vs. maximum mass
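
    As a minimal illustration of the Pareto-optimal trade-off sets that such a multi-objective search returns, the sketch below extracts the non-dominated designs from a cloud of candidate (total cost, injected mass) pairs. The random candidates stand in for real simulation output; no evolutionary operators or leakage models are implemented.

```python
import numpy as np

def pareto_front(cost, injected_mass):
    """Indices of candidates not dominated by any other candidate, where
    lower total cost and higher injected CO2 mass are both preferred."""
    keep = []
    for i in range(len(cost)):
        dominated = any(
            cost[j] <= cost[i] and injected_mass[j] >= injected_mass[i]
            and (cost[j] < cost[i] or injected_mass[j] > injected_mass[i])
            for j in range(len(cost)))
        if not dominated:
            keep.append(i)
    return np.array(keep)

rng = np.random.default_rng(1)
cost = rng.uniform(1.0, 10.0, 200)                 # illustrative designs
mass = 20.0 - cost + rng.normal(0.0, 1.0, 200)     # cost/mass trade-off
print(pareto_front(cost, mass))
```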

  8. A Semi-analytical model for creep life prediction of butt-welded joints in cylindrical vessels

    Zarrabi, K.

    2001-01-01

    There have been many investigations of the life assessment of high-temperature weldments used in cylindrical pressure vessels, pipes and tubes over the last two decades or so. But to the author's knowledge there currently exists no practical, economical and relatively accurate model for creep life assessment of butt-welded joints in cylindrical pressure vessels. This paper describes a semi-analytical and economical model for creep life assessment of butt-welded joints. The first stage of the development of the model is described, in which the model takes into account the material discontinuities at the welded joint only. The development of the model to include other factors, such as geometrical stress concentrations and residual stresses, will be reported separately. It has been shown that the proposed model can estimate the redistribution of stresses in the weld and HAZ with an error of less than 4%. It has also been shown that the proposed model can conservatively predict the creep life of a butt-welded joint with an error of less than 16%.

  9. Semi-analytical model of filtering effects in microwave phase shifters based on semiconductor optical amplifiers

    Chen, Yaohui; Xue, Weiqi; Öhman, Filip

    2008-01-01

    We present a model to interpret enhanced microwave phase shifts based on filter-assisted slow and fast light effects in semiconductor optical amplifiers. The model also demonstrates the spectral phase impact of input optical signals.

  10. Semi-analytical model for a slab one-dimensional photonic crystal

    Libman, M.; Kondratyev, N. M.; Gorodetsky, M. L.

    2018-02-01

    In our work we justify the applicability of a dielectric mirror model to the description of a real photonic crystal. We demonstrate that a simple one-dimensional model of a multilayer mirror can be employed to model a slab waveguide with periodically changing width. It is shown that this width change can be recast as an effective modulation of the refractive index. The applicability of the transfer matrix method for calculating reflection properties is demonstrated. Finally, our 1D model is employed to analyze the reflection properties of a 2D structure: a slab photonic crystal with a number of elliptic holes.
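
    As a flavour of the transfer matrix method invoked here, the sketch below computes the normal-incidence reflectivity of a lossless quarter-wave stack from 2x2 characteristic matrices. The effective indices, layer count and wavelength are placeholder assumptions, not values from the record.

```python
import numpy as np

def mirror_reflectivity(n_layers, d_layers, n_in, n_out, wavelength):
    """Normal-incidence reflectivity of a 1D multilayer from the product
    of per-layer characteristic (transfer) matrices."""
    k0 = 2.0 * np.pi / wavelength
    M = np.eye(2, dtype=complex)
    for n, d in zip(n_layers, d_layers):
        delta = k0 * n * d
        M = M @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                          [1j * n * np.sin(delta), np.cos(delta)]])
    num = (M[0, 0] + M[0, 1] * n_out) * n_in - (M[1, 0] + M[1, 1] * n_out)
    den = (M[0, 0] + M[0, 1] * n_out) * n_in + (M[1, 0] + M[1, 1] * n_out)
    return abs(num / den) ** 2

# Quarter-wave stack; the effective indices stand in for the
# width-modulated slab waveguide (illustrative numbers).
lam, n_hi, n_lo, pairs = 1.55e-6, 2.0, 1.5, 10
ns = [n_hi, n_lo] * pairs
ds = [lam / (4 * n_hi), lam / (4 * n_lo)] * pairs
print(mirror_reflectivity(ns, ds, 1.0, 1.45, lam))  # close to 1 at design
```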

  11. A semi-analytic mathematical model for dissolved radionuclides dispersion in the Channel Isles region

    Salomon, J.C.; Breton, M. [IFREMER, 29 - Plouzane (France). Laboratoire d'Hydrodynamique et Sedimentologie; Fraizier, A.; Bailly Du Bois, P.; Guegueniat, P. [CEA Centre de La Hague, 50 - Cherbourg-Octeville (France). Laboratoire de Radioecologie Marine

    1997-12-31

    A simple mathematical model for transport and mixing of dissolved substances in the Channel Isles region is based on the well known residual circulation pattern in the area. The model is relevant for time scales greater than one month. Applied to a nearly conservative element such as ¹²⁵Sb, it gives an interpretation of the observed time lag between nuclear material discharges at sea and their impact on coastal waters. An evaluation is done of the efficiency of the discharge scenario by the nuclear reprocessing plant of La Hague. Given its very fast running and the possibility of being applied to any chemical or biological element present in sea water, the model seems particularly suited for ecological modelling purposes. (author) 4 refs.

  12. A semi-analytic mathematical model for dissolved radionuclides dispersion in the Channel Isles region

    Salomon, J.C.; Breton, M.; Fraizier, A.; Bailly Du Bois, P.; Guegueniat, P.

    1997-01-01

    A simple mathematical model for transport and mixing of dissolved substances in the Channel Isles region is based on the well known residual circulation pattern in the area. The model is relevant for time scales greater than one month. Applied to a nearly conservative element such as ¹²⁵Sb, it gives an interpretation of the observed time lag between nuclear material discharges at sea and their impact on coastal waters. An evaluation is done of the efficiency of the discharge scenario by the nuclear reprocessing plant of La Hague. Given its very fast running and the possibility of being applied to any chemical or biological element present in sea water, the model seems particularly suited for ecological modelling purposes. (author)

  13. Calculation of induced rotor current in induction motors using a slotted semi-analytical harmonic model

    Sprangers, R.L.J.; Gysen, B.L.J.; Paulides, J.J.H.; Waarma, J.; Lomonova, E.A.

    2014-01-01

    Recently, strong improvements have been made in the applicability of harmonic modeling techniques for electrical machines with slotted structures. Various implementations for permanent magnet motors and actuators have been investigated and applied in design and optimization tools. For the slotted

  14. A semi-analytical model of a time reversal cavity for high-amplitude focused ultrasound applications

    Robin, J.; Tanter, M.; Pernot, M.

    2017-09-01

    Time reversal cavities (TRC) have been proposed as an efficient approach for 3D ultrasound therapy. They allow the precise spatio-temporal focusing of high-power ultrasound pulses within a large region of interest with a low number of transducers. Leaky TRCs are usually built by placing a multiple scattering medium, such as a random rod forest, in a reverberating cavity, and the final peak pressure gain of the device depends only on the temporal length of its impulse response. Multiple scattering in a reverberating cavity is a complex phenomenon, and optimisation of the device's gain is usually a cumbersome, mostly empirical process requiring numerical simulations with extremely long computation times. In this paper, we present a semi-analytical model for the fast optimisation of a TRC. This model decouples ultrasound propagation in the empty cavity from multiple scattering in the scattering medium. It was validated numerically and experimentally using a 2D-TRC, and numerically using a 3D-TRC. Finally, the model was used to rapidly determine the optimal parameters of the 3D-TRC, which were then confirmed by numerical simulations.

  15. Flow modeling in a porous cylinder with regressing walls using semi analytical approach

    M Azimi

    2016-10-01

    In this paper, the mathematical modeling of the flow in a porous cylinder with a focus on applications to solid rocket motors is presented. As usual, the cylindrical propellant grain of a solid rocket motor is modeled as a long tube with one end closed at the headwall, while the other remains open. The cylindrical wall is assumed to be permeable so as to simulate the propellant burning and normal gas injection. First, the problem description and formulation are considered. The Navier-Stokes equations for the viscous flow in a porous cylinder with regressing walls are reduced to a nonlinear ODE by using a similarity transformation in time and space. The Differential Transformation Method (DTM), an approximate analytical method, has been successfully applied. Finally, the results are presented for various cases.
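
    The record applies DTM to the similarity-reduced rocket-motor ODE, which is not reproduced here; instead, the sketch below demonstrates the DTM recurrence on a toy nonlinear problem, y' = -y^2 with y(0) = 1, whose exact solution 1/(1+t) makes convergence easy to check.

```python
import numpy as np

def dtm_coefficients(n_terms):
    """Differential transform of y' = -y^2, y(0) = 1. Transforming term by
    term gives (k+1) Y[k+1] = -sum_{l=0..k} Y[l] Y[k-l] (a Cauchy product)."""
    Y = np.zeros(n_terms)
    Y[0] = 1.0
    for k in range(n_terms - 1):
        conv = sum(Y[l] * Y[k - l] for l in range(k + 1))
        Y[k + 1] = -conv / (k + 1)
    return Y

Y = dtm_coefficients(12)
t = 0.3
approx = sum(Yk * t**k for k, Yk in enumerate(Y))
print(approx, 1.0 / (1.0 + t))   # truncated DTM series vs. exact solution
```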

  16. A semi-analytical model of biological effectiveness for treatment planning in light ion radiotherapy

    Kundrát, Pavel

    2007-01-01

    Roč. 34, č. 6 (2007), s. 2654-2654 ISSN 0094-2405. [AAPM Annual Meeting. Minneapolis, 22.07.2007-26.07.2007] R&D Projects: GA ČR GA202/05/2728 Institutional research plan: CEZ:AV0Z10100502 Keywords : treatment planning * light-ion therapy * radiobiological models Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 3.198, year: 2007

  17. A 2D semi-analytical model for Faraday shield in ICP source

    Zhang, L.G.; Chen, D.Z.; Li, D.; Liu, K.F.; Li, X.F.; Pan, R.M.; Fan, M.W.

    2016-01-01

    Highlights: • In this paper, a 2D model of an ICP source with a Faraday shield is proposed, considering the complex structure of the Faraday shield. • An analytical solution is found to evaluate the electromagnetic field in the ICP source with the Faraday shield. • The collision-free motion of electrons in the source is investigated, and the results show that the electrons oscillate along the radial direction, which brings insight into how the RF power couples to the plasma. - Abstract: The Faraday shield is a thin copper structure with a large number of slits which is usually used in inductively coupled plasma (ICP) sources. RF power is coupled into the plasma through these slits, and the Faraday shield therefore plays an important role in ICP discharge. However, due to the complex structure of the Faraday shield, the resulting electromagnetic field is quite hard to evaluate. In this paper, a 2D model is proposed on the assumptions that the Faraday shield is sufficiently long, that the RF coil is uniformly distributed, and that the copper is an ideal conductor. Under these conditions, the magnetic field inside the source is uniform with only the axial component, while the electric field can be decomposed into a vortex field generated by the changing magnetic field together with a gradient field generated by the electric charge accumulated on the Faraday shield surface, which can easily be found by solving Laplace's equation. The motion of the electrons in the electromagnetic field is investigated, and the results show that the electrons oscillate along the radial direction when collisions are neglected. This interesting result brings insight into how the RF power couples into the plasma.

  18. Variations of the stellar initial mass function in semi-analytical models - II. The impact of cosmic ray regulation

    Fontanot, Fabio; De Lucia, Gabriella; Xie, Lizhi; Hirschmann, Michaela; Bruzual, Gustavo; Charlot, Stéphane

    2018-04-01

    Recent studies proposed that cosmic rays (CRs) are a key ingredient in setting the conditions for star formation, thanks to their ability to alter the thermal and chemical state of dense gas in the ultraviolet-shielded cores of molecular clouds. In this paper, we explore their role as regulators of stellar initial mass function (IMF) variations, using the semi-analytic model for GAlaxy Evolution and Assembly (GAEA). The new model confirms our previous results obtained using the integrated galaxy-wide IMF (IGIMF) theory. Both variable IMF models reproduce the observed increase of α-enhancement as a function of stellar mass and the measured z = 0 excess of dynamical mass-to-light ratios with respect to photometric estimates assuming a universal IMF. We focus here on the mismatch between the photometrically derived (M⋆^app) and intrinsic (M⋆) stellar masses, by analysing in detail the evolution of model galaxies with different values of M⋆/M⋆^app. We find that galaxies with small deviations (i.e. formally consistent with a universal IMF hypothesis) are characterized by more extended star formation histories and live in less massive haloes with respect to the bulk of the galaxy population. In particular, the IGIMF theory does not significantly change the mean evolution of model galaxies with respect to the reference model; a CR-regulated IMF instead implies shorter star formation histories and higher peaks of star formation for objects more massive than 10^10.5 M⊙. However, we also show that it is difficult to unveil this behaviour from observations, as the key physical quantities are typically derived assuming a universal IMF.

  19. A semi-analytical analysis of electro-thermo-hydrodynamic stability in dielectric nanofluids using Buongiorno's mathematical model together with more realistic boundary conditions

    Wakif, Abderrahim; Boulahia, Zoubair; Sehaqui, Rachid

    2018-06-01

    The main aim of the present analysis is to examine the electroconvection phenomenon that takes place in a dielectric nanofluid under the influence of a perpendicularly applied alternating electric field. In this investigation, we assume that the nanofluid has a Newtonian rheological behavior and obeys Buongiorno's mathematical model, in which the effects of thermophoretic and Brownian diffusion are incorporated explicitly in the governing equations. Moreover, the nanofluid layer is taken to be confined horizontally between two parallel plate electrodes, heated from below and cooled from above. In a fast pulse electric field, the onset of electroconvection is due principally to the buoyancy forces and the dielectrophoretic forces. Within the framework of the Oberbeck-Boussinesq approximation and linear stability theory, the governing stability equations are solved semi-analytically by means of the power series method for isothermal, no-slip and non-penetrability conditions. In addition, the computational implementation of the impermeability condition implies that there is no nanoparticle mass flux on the electrodes. The obtained analytical solutions are validated by comparing them to those available in the literature for the limiting case of dielectric fluids. In order to check the accuracy of our semi-analytical results obtained for the case of dielectric nanofluids, we perform further numerical and semi-analytical computations by means of the Runge-Kutta-Fehlberg method, the Chebyshev-Gauss-Lobatto spectral method, the Galerkin weighted residuals technique, the polynomial collocation method and the Wakif-Galerkin weighted residuals technique. In this analysis, the electro-thermo-hydrodynamic stability of the studied nanofluid is controlled through the critical AC electric Rayleigh number Re_c, whose value depends on several physical parameters. Furthermore, the effects of various pertinent parameters on the electro

  20. PROBING THE ROLE OF DYNAMICAL FRICTION IN SHAPING THE BSS RADIAL DISTRIBUTION. I. SEMI-ANALYTICAL MODELS AND PRELIMINARY N-BODY SIMULATIONS

    Miocchi, P.; Lanzoni, B.; Ferraro, F. R.; Dalessandro, E.; Alessandrini, E. [Dipartimento di Fisica e Astronomia, Università di Bologna, Viale Berti Pichat 6/2, I-40127 Bologna (Italy); Pasquato, M.; Lee, Y.-W. [Department of Astronomy and Center for Galaxy Evolution Research, Yonsei University, Seoul 120-749 (Korea, Republic of); Vesperini, E. [Department of Astronomy, Indiana University, Bloomington, IN 47405 (United States)

    2015-01-20

    We present semi-analytical models and simplified N-body simulations with 10⁴ particles aimed at probing the role of dynamical friction (DF) in determining the radial distribution of blue straggler stars (BSSs) in globular clusters. The semi-analytical models show that DF (which is the only evolutionary mechanism at work) is responsible for the formation of a bimodal distribution with a dip progressively moving toward the external regions of the cluster. However, these models fail to reproduce the formation of the long-lived central peak observed in all dynamically evolved clusters. The results of the N-body simulations confirm the formation of a sharp central peak, which remains a stable feature over time regardless of the initial concentration of the system. In spite of noisy behavior, a bimodal distribution forms in many cases, with the size of the dip increasing as a function of time. In the most advanced stages, the distribution becomes monotonic. These results are in agreement with the observations. Also, the shape of the peak and the location of the minimum (which, in most cases, is within 10 core radii) turn out to be consistent with observational results. For a more detailed and close comparison with observations, including a proper calibration of the timescales of the dynamical processes driving the evolution of the BSS spatial distribution, more realistic simulations will be necessary.

  1. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessment of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation for transient mass flux input.
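
    For flavour, the sketch below evaluates a classic closed-form solution of the 1D advection-dispersion equation, the Ogata-Banks solution for a constant-concentration inlet. GWSCREEN's own saturated-zone solution is for a transient mass-flux input and is not reproduced here; the velocity and dispersion values are illustrative only.

```python
import numpy as np
from scipy.special import erfc

def ogata_banks(x, t, v, D, c0=1.0):
    """C(x,t) = c0/2 [erfc((x-vt)/(2 sqrt(Dt)))
                      + exp(vx/D) erfc((x+vt)/(2 sqrt(Dt)))]"""
    s = 2.0 * np.sqrt(D * t)
    return 0.5 * c0 * (erfc((x - v * t) / s)
                       + np.exp(v * x / D) * erfc((x + v * t) / s))

# Illustrative aquifer parameters: v = 0.5 m/d, D = 0.1 m^2/d, t = 30 d.
x = np.linspace(0.0, 50.0, 6)
print(ogata_banks(x, t=30.0, v=0.5, D=0.1))
```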

  2. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessment of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation for transient mass flux input.

  3. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Theory and user's manual

    Rood, A.S.

    1992-03-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track 1 and Track 2 assessment of Comprehensive Environmental Response, Compensation and Liability Act (CERCLA) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1991). The code calculates the limiting soil concentration such that regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation for transient mass flux input.

  4. Electric circuit coupling of a slotted semi-analytical model for induction motors based on harmonic modeling

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Waarma, J.; Lomonova, E.A.

    2014-01-01

    The use of empirically determined coefficients to include the effects of leakage and fringing flux is a large drawback of traditional induction motor (IM) models, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models. As an alternative, Finite Element Analysis (FEA) is

  5. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from releases to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data are limited. It was not intended to be a predictive tool.

  6. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination: Version 2.0 theory and user's manual

    Rood, A.S.

    1993-06-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of CERCLA (Comprehensive Environmental Response, Compensation and Liability Act) sites identified as low probability hazard at the Idaho National Engineering Laboratory (DOE, 1992). The code calculates the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, contaminant transport in the unsaturated zone, and contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. Transport in the unsaturated zone is described by a plug flow model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. In Version 2.0, GWSCREEN has incorporated an additional source model to calculate the impacts to groundwater resulting from releases to percolation ponds. In addition, transport of radioactive progeny has also been incorporated. GWSCREEN has shown comparable results when compared against other codes using similar algorithms and techniques. This code was designed for assessment and screening of the groundwater pathway when field data are limited. It was not intended to be a predictive tool.

  7. A semi-analytical model for the acoustic impedance of finite length circular holes with mean flow

    Yang, Dong; Morgans, Aimee S.

    2016-12-01

    The acoustic response of a circular hole with mean flow passing through it is highly relevant to Helmholtz resonators, fuel injectors, perforated plates, screens, liners and many other engineering applications. A widely used analytical model [M.S. Howe, "On the theory of unsteady high Reynolds number flow through a circular aperture", Proc. R. Soc. A 366(1725), 205-223 (1979)], which assumes an infinitesimally short hole, was recently shown to be insufficient for predicting the impedance of holes with a finite length. In the present work, an analytical model based on the Green's function method is developed to take the hole length into consideration for "short" holes. The importance of capturing the modified vortex noise accurately is shown. The vortices shed at the hole inlet edge are convected to the hole outlet and further downstream to form a vortex sheet. This couples with the acoustic waves, and the coupling has the potential to generate as well as absorb acoustic energy in the low-frequency region. The impedance predicted by this model shows the importance of capturing the path of the shed vortex. When the vortex path is captured accurately, the impedance predictions agree well with previous experimental and CFD results, for example predicting the potential for generation of acoustic energy at higher frequencies. For "long" holes, a simplified model which combines Howe's model with plane acoustic waves within the hole is developed. It is shown that the most important effect in this case is the acoustic non-compactness of the hole.

  8. A Semi-Analytic Model for Estimating Total Suspended Sediment Concentration in Turbid Coastal Waters of Northern Western Australia Using MODIS-Aqua 250 m Data

    Passang Dorji

    2016-06-01

    Knowledge of the concentration of total suspended sediment (TSS) in coastal waters is of significance to marine environmental monitoring agencies in determining the turbidity of water, which serves as a proxy for estimating the availability of light at depth for benthic habitats. TSS models applicable to data collected by satellite sensors can be used to determine TSS with reasonable accuracy and with adequate spatial and temporal resolution to be of use for coastal water quality monitoring. Thus, a study is presented here in which we develop a semi-analytic sediment model (SASM) applicable to any sensor with red and near-infrared (NIR) bands. The calibration and validation of the SASM using bootstrap and cross-validation methods showed that the SASM applied to Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua band 1 data retrieved TSS with a root mean square error (RMSE) and mean averaged relative error (MARE) of 5.75 mg/L and 33.33%, respectively. The application of the SASM over our study region using MODIS-Aqua band 1 data showed that the SASM can be used to monitor ongoing, post- and pre-dredging activities and to identify daily TSS anomalies caused by natural and anthropogenic processes in the coastal waters of northern Western Australia.
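
    The SASM's semi-analytic expression is not given in the record, so the sketch below mirrors only the validation workflow: a placeholder two-parameter reflectance model (an assumption, not the paper's formula) is refit on bootstrap resamples of synthetic data and scored with the RMSE and MARE statistics quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(refl, a, b):
    # Placeholder two-parameter form standing in for the SASM.
    return a * refl / (1.0 - refl / b)

rmse = lambda obs, pred: np.sqrt(np.mean((obs - pred) ** 2))
mare = lambda obs, pred: 100.0 * np.mean(np.abs(obs - pred) / obs)

rng = np.random.default_rng(0)
refl = rng.uniform(0.005, 0.05, 80)                  # band-1 reflectance
tss = model(refl, 400.0, 0.12) * rng.lognormal(0.0, 0.2, 80)  # synthetic TSS

scores = []
for _ in range(500):                                 # bootstrap loop
    i = rng.integers(0, len(tss), len(tss))
    p, _ = curve_fit(model, refl[i], tss[i], p0=(300.0, 0.1),
                     bounds=([0.0, 0.06], [np.inf, 1.0]))
    pred = model(refl, *p)
    scores.append((rmse(tss, pred), mare(tss, pred)))
print(np.mean(scores, axis=0))   # (RMSE in mg/L, MARE in %)
```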

  9. Major Mergers in CANDELS up to z=3: Calibrating the Close-Pair Method Using Semi-Analytic Models and Baryonic Mass Ratio Estimates

    Mantha, Kameswara; McIntosh, Daniel H.; Conselice, Christopher; Cook, Joshua S.; Croton, Darren J.; Dekel, Avishai; Ferguson, Henry C.; Hathi, Nimish; Kodra, Dritan; Koo, David C.; Lotz, Jennifer M.; Newman, Jeffrey A.; Popping, Gergo; Rafelski, Marc; Rodriguez-Gomez, Vicente; Simmons, Brooke D.; Somerville, Rachel; Straughn, Amber N.; Snyder, Gregory; Wuyts, Stijn; Yu, Lu; Cosmic Assembly Near-Infrared Deep Extragalactic Legacy Survey (CANDELS) Team

    2018-01-01

    Cosmological simulations predict that the rate of merging between similar-mass massive galaxies should increase towards early cosmic time. We study the incidence of major (stellar mass ratio SMR < 4) pairs among massive (log M⋆ > 10.3) galaxies spanning 0 < z < 3 [...] at z > 1.5, in strong disagreement with theoretical merger rate predictions. On the other hand, if we compare to a simulation-tuned, evolving timescale prescription from Snyder et al. (2017), we find that the merger rate evolution agrees with theory out to z = 3. These results highlight the need for robust calibrations of the complex and presumably redshift-dependent pair-to-merger-rate conversion factors to improve constraints on the empirical merger history. To address this, we use a unique compilation of mock datasets produced by three independent state-of-the-art Semi-Analytic Models (SAMs). We present preliminary calibrations of the close-pair observability timescale and outlier fraction as a function of redshift, stellar mass, mass ratio, and local over-density. Furthermore, to verify the hypothesis from previous empirical studies that SMR selection of major pairs may be biased, we present a new analysis of the baryonic (gas+stars) mass ratios of a subset of close pairs in our sample. For the first time, our preliminary analysis highlights that a noticeable fraction of SMR-selected minor pairs (SMR > 4) have major baryonic-mass ratios (BMR < 4), which indicates that merger rates based on SMR selection may be under-estimated.

  10. Selection bias in dynamically measured supermassive black hole samples: scaling relations and correlations between residuals in semi-analytic galaxy formation models

    Barausse, Enrico; Shankar, Francesco; Bernardi, Mariangela; Dubois, Yohan; Sheth, Ravi K.

    2017-07-01

    Recent work has confirmed that the scaling relations between the masses of supermassive black holes and host-galaxy properties such as stellar masses and velocity dispersions may be biased high. Much of this may be caused by the requirement that the black hole sphere of influence must be resolved for the black hole mass to be reliably estimated. We revisit this issue with a comprehensive galaxy evolution semi-analytic model. Once tuned to reproduce the (mean) correlation of black hole mass with velocity dispersion, the model cannot account for the correlation with stellar mass. This is independent of the model's parameters, thus suggesting an internal inconsistency in the data. The predicted distributions, especially at the low-mass end, are also much broader than observed. However, if selection effects are included, the model's predictions tend to align with the observations. We also demonstrate that the correlations between the residuals of the scaling relations are more effective than the relations themselves at constraining models for the feedback of active galactic nuclei (AGNs). In fact, we find that our model, while in apparent broad agreement with the scaling relations when accounting for selection biases, yields very weak correlations between their residuals at fixed stellar mass, in stark contrast with observations. This problem persists when changing the AGN feedback strength, and is also present in the hydrodynamic cosmological simulation Horizon-AGN, which includes state-of-the-art treatments of AGN feedback. This suggests that current AGN feedback models are too weak or simply not capturing the effect of the black hole on the stellar velocity dispersion.

  11. A SEMI-ANALYTICAL MODEL OF VISIBLE-WAVELENGTH PHASE CURVES OF EXOPLANETS AND APPLICATIONS TO KEPLER-7B AND KEPLER-10B

    Hu, Renyu [Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109 (United States); Demory, Brice-Olivier [Astrophysics Group, Cavendish Laboratory, J.J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Seager, Sara; Lewis, Nikole [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Showman, Adam P., E-mail: renyu.hu@jpl.nasa.gov [Department of Planetary Sciences, University of Arizona, Tucson, AZ 85721 (United States)

    2015-03-20

    Kepler has detected numerous exoplanet transits by measuring stellar light in a single visible-wavelength band. In addition to detection, the precise photometry provides phase curves of exoplanets, which can be used to study the dynamic processes on these planets. However, the interpretation of these observations can be complicated by the fact that visible-wavelength phase curves can represent both thermal emission and scattering from the planets. Here we present a semi-analytical model framework that can be applied to study Kepler and future visible-wavelength phase curve observations of exoplanets. The model efficiently computes reflection and thermal emission components for both rocky and gaseous planets, considering both homogeneous and inhomogeneous surfaces or atmospheres. We analyze the phase curves of the gaseous planet Kepler-7b and the rocky planet Kepler-10b using the model. In general, we find that a hot exoplanet's visible-wavelength phase curve having a significant phase offset can usually be explained by two classes of solutions: one class requires a thermal hot spot shifted to one side of the substellar point, and the other class requires reflective clouds concentrated on the same side of the substellar point. Particularly for Kepler-7b, reflective clouds located on the west side of the substellar point can best explain its phase curve. The reflectivity of the clear part of the atmosphere should be less than 7% and that of the cloudy part should be greater than 80%, and the cloud boundary should be located at 11° ± 3° to the west of the substellar point. We suggest single-band photometry surveys could yield valuable information on exoplanet atmospheres and surfaces.

  12. A semi-analytical solution to accelerate spin-up of a coupled carbon and nitrogen land model to steady state

    J. Y. Xia

    2012-10-01

    The spin-up of land models to steady state of coupled carbon–nitrogen processes is computationally so costly that it becomes a bottleneck issue for global analysis. In this study, we introduce a semi-analytical solution (SAS) for the spin-up issue. SAS is fundamentally based on the analytic solution to a set of equations that describe carbon transfers within ecosystems over time. SAS is implemented in three steps: (1) an initial spin-up with prior pool-size values until net primary productivity (NPP) reaches stabilization, (2) calculation of quasi-steady-state pool sizes by setting the fluxes of the equations to zero, and (3) a final spin-up to meet the criterion of steady state. Step 2 is enabled by averaging the time-varying variables over one period of the repeated driving forcing. SAS was applied to both site-level and global-scale spin-up of the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model. For the carbon-cycle-only simulations, SAS saved 95.7% and 92.4% of the computational time for site-level and global spin-up, respectively, in comparison with the traditional method (a long-term iterative simulation to achieve the steady states of variables). For the carbon–nitrogen coupled simulations, SAS reduced the computational cost by 84.5% and 86.6% for site-level and global spin-up, respectively. The estimated steady-state pool sizes represent the ecosystem carbon storage capacity, which was 12.1 kg C m⁻² with the coupled carbon–nitrogen global model, 14.6% lower than that with the carbon-only model. The nitrogen down-regulation of modeled carbon storage is partly due to the 4.6% decrease in carbon influx (i.e., net primary productivity) and partly due to the 10.5% reduction in residence times. This steady-state analysis accelerated by the SAS method can facilitate comparative studies of structural differences in determining the ecosystem carbon storage capacity among biogeochemical models. Overall, the
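
    The essence of step (2) fits in a few lines: if carbon transfers are linear, dX/dt = b u(t) - A X, the quasi-steady-state pools under the time-averaged forcing follow from a single linear solve. The three-pool matrix and rate values below are invented for illustration and are not CABLE parameters.

```python
import numpy as np

# Linearized carbon transfers among three pools (leaf, litter, soil):
# dX/dt = b * u(t) - A @ X, with u(t) the periodic NPP forcing.
A = np.array([[1.0, 0.0, 0.0],     # turnover rates [1/yr] on the diagonal
              [-0.6, 0.4, 0.0],    # 60% of leaf turnover enters litter
              [0.0, -0.1, 0.02]])  # 10% of litter turnover enters soil
b = np.array([1.0, 0.0, 0.0])      # NPP is allocated to the leaf pool

u_mean = 0.8                       # NPP averaged over one forcing period
X_ss = np.linalg.solve(A, b * u_mean)   # quasi-steady-state pool sizes
print(X_ss)   # use as the initial condition for the final spin-up phase
```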

  13. GWSCREEN: A Semi-analytical Model for Assessment of the Groundwater Pathway from Surface or Buried Contamination, Theory and User's Manual, Version 2.5

    Rood, Arthur South

    1998-08-01

    GWSCREEN was developed for assessment of the groundwater pathway from leaching of radioactive and non-radioactive substances from surface or buried sources. The code was designed for implementation in the Track I and Track II assessment of Comprehensive Environmental Response, Compensation, and Liability Act sites identified as low probability hazard at the Idaho National Engineering Laboratory. The code calculates 1) the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded, 2) the peak aquifer concentration and associated human health impacts, and 3) aquifer concentrations and associated human health impacts as a function of time and space. The code uses a mass conservation approach to model three processes: contaminant release from a source volume, vertical contaminant transport in the unsaturated zone, and 2D or 3D contaminant transport in the saturated zone. The source model considers the sorptive properties and solubility of the contaminant. In Version 2.5, transport in the unsaturated zone is described by a plug flow or dispersive solution model. Transport in the saturated zone is calculated with a semi-analytical solution to the advection-dispersion equation in groundwater. Three source models are included: leaching from a surface or buried source, an infiltration pond, or a user-defined arbitrary release. Dispersion in the aquifer may be described by fixed dispersivity values or by three spatially variable dispersivity functions. Version 2.5 also includes a Monte Carlo sampling routine for uncertainty/sensitivity analysis and a preprocessor to allow multiple input files and multiple contaminants to be run in a single simulation. GWSCREEN has been validated against other codes using similar algorithms and techniques. The code was originally designed for assessment and screening of the groundwater pathway when field data are limited. It was intended to simulate relatively simple

  14. Control Chart on Semi Analytical Weighting

    Miranda, G. S.; Oliveira, C. C.; Silva, T. B. S. C.; Stellato, T. B.; Monteiro, L. R.; Marques, J. R.; Faustino, M. G.; Soares, S. M. V.; Ulrich, J. C.; Pires, M. A. F.; Cotrim, M. E. B.

    2018-03-01

    Semi-analytical balance verification aims to assess balance performance using graphs that illustrate measurement dispersion through time, and to demonstrate that measurements were performed in a reliable manner. This study presents the internal quality control of a semi-analytical balance (GEHAKA BG400) using control charts. From 2013 to 2016, two weight standards were monitored before any balance operation. This work intended to evaluate whether any significant difference or bias was present in the weighing procedure over time, in order to check the reliability of the generated data. This work also exemplifies how control intervals are established.
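
    A minimal sketch of how such control limits are typically set for a check-standard weighing (a Shewhart-style individuals chart with 3-sigma limits); the nominal mass, spread and sample sizes are invented.

```python
import numpy as np

def control_limits(readings):
    """Centre line and 3-sigma control limits from a reference period."""
    mu = np.mean(readings)
    sigma = np.std(readings, ddof=1)
    return mu, mu - 3.0 * sigma, mu + 3.0 * sigma

rng = np.random.default_rng(7)
reference = 100.0 + rng.normal(0.0, 0.0002, 60)    # grams, illustrative
centre, lcl, ucl = control_limits(reference)

new_readings = 100.0 + rng.normal(0.0001, 0.0002, 10)
flags = (new_readings < lcl) | (new_readings > ucl)
print(centre, lcl, ucl, int(flags.sum()))   # points signalling a bias
```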

  15. Interior beam searchlight semi-analytical benchmark

    Ganapol, Barry D.; Kornreich, Drew E.

    2008-01-01

    Multidimensional semi-analytical benchmarks that provide highly accurate standards for assessing routine numerical particle transport algorithms are few and far between. Because of the well-established 1D theory for the analytical solution of the transport equation, it is sometimes possible to 'bootstrap' a 1D solution to generate a more comprehensive solution representation. Here, we consider the searchlight problem (SLP) as a multidimensional benchmark. A variation of the usual SLP is the interior beam SLP (IBSLP), where a beam source lies beneath the surface of a half-space and emits directly towards the free surface. We consider the establishment of a new semi-analytical benchmark based on a new FN formulation. This problem is important in radiative transfer experimental analysis for determining cloud absorption and scattering properties. (authors)

  16. A fast semi-analytical model for the slotted structure of induction motors with 36/28 stator/rotor slot combination

    Sprangers, R.L.J.; Paulides, J.J.H.; Gysen, B.L.J.; Lomonova, E.A.

    2014-01-01

    A fast, semi-analytical model for induction motors (IMs) with a 36/28 stator/rotor slot combination is presented. In comparison with traditional analytical models for IMs, such as lumped parameter, magnetic equivalent circuit and anisotropic layer models, the presented model calculates a continuous

  17. A Semi-analytical Model for Wind-fed Black Hole High-mass X-Ray Binaries: State Transition Triggered by Magnetic Fields from the Companion Star

    Yaji, Kentaro; Yamada, Shinya; Masai, Kuniaki [Department of Physics, Tokyo Metropolitan University, Minami-Osawa 1-1, Hachioji, Tokyo 192-0397 (Japan)

    2017-10-01

    We propose a mechanism of state transition in wind-fed black hole (BH) binaries (high-mass X-ray binaries) such as Cyg X-1 and LMC X-1. Modeling a line-driven stellar wind from the companion by two-dimensional hydrodynamical calculations, we investigate the processes of wind capture by, and accretion onto, the BH. We assume that the wind acceleration is terminated at the He II ionization front because the ions responsible for line-driven acceleration are ionized within the front, i.e., in the He III region. It is found that the mass accretion rate inferred from the luminosity is markedly smaller than the capture rate. Considering this difference, we construct a model for the state transition based on the accretion flow being controlled by the magnetorotational instability. The outer flow is torus-like and plays an important role in triggering the transition. The model can explain why the state transition occurs in Cyg X-1 but not in LMC X-1. Cyg X-1 exhibits a relatively low luminosity, so the He II ionization front is located, and can move, between the companion and the BH, depending on its ionizing photon flux. On the other hand, LMC X-1 exhibits too high a luminosity for the front to move considerably; the front stays too close to the companion atmosphere. The model also predicts that each state, high-soft or low-hard, would last fairly long because the luminosity depends only weakly on the wind velocity. In the context of the model, the state transition is triggered by a fluctuation of the magnetic field when its amplitude becomes comparable to the field strength in the torus-like outer flow.

  18. Semi-analytic techniques for calculating bubble wall profiles

    Akula, Sujeet; Balazs, Csaba; White, Graham A.

    2016-01-01

    We present semi-analytic techniques for finding bubble wall profiles during first-order phase transitions with multiple scalar fields. Our method involves reducing the problem to an equation with a single field, finding an approximate analytic solution and perturbing around it. The perturbations can be written in a semi-analytic form. We assert that our technique is free of convergence problems and demonstrate the speed of convergence on an example potential. (orig.)

  19. Modal instability of rod fiber amplifiers: a semi-analytic approach

    Jørgensen, Mette Marie; Hansen, Kristian Rymann; Laurila, Marko

    2013-01-01

    The modal instability (MI) threshold is estimated for four rod fiber designs by combining a semi-analytic model with the finite element method. The thermal load due to the quantum defect is calculated and used to numerically determine the mode distributions on which the expression for the onset o...

  20. Semi-analytic solution to planar Helmholtz equation

    Tukač M.

    2013-06-01

    The acoustic solution of interior domains is of great interest. Solving acoustic pressure fields faster and with lower computational requirements is in demand. A novel solution technique based on the analytic solution to the Helmholtz equation in a rectangular domain is presented. This semi-analytic solution is compared with the finite element method, which is taken as the reference. Results show that the presented method is as precise as the finite element method. As the semi-analytic method does not require spatial discretization, it can be used for small and very large acoustic problems at the same computational cost.
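
    One standard way to build such an analytic solution is a modal (eigenfunction) series; the sketch below evaluates the pressure response of a rigid-walled rectangle to a point source in that way. The rigid-wall boundary condition, room dimensions and wavenumber are assumptions made for illustration and may differ from the paper's setup.

```python
import numpy as np

def pressure_modal(x, y, x0, y0, k, Lx=1.0, Ly=0.8, n_modes=30):
    """Pressure in a rigid-walled rectangle driven by a point source at
    (x0, y0): p = sum_mn phi_mn(x0,y0) phi_mn(x,y) / (Lam_mn (k_mn^2 - k^2))."""
    p = 0.0
    for m in range(n_modes):
        for n in range(n_modes):
            kmn2 = (m * np.pi / Lx) ** 2 + (n * np.pi / Ly) ** 2
            lam = Lx * Ly * (0.5 if m else 1.0) * (0.5 if n else 1.0)
            phi_s = np.cos(m * np.pi * x0 / Lx) * np.cos(n * np.pi * y0 / Ly)
            phi_r = np.cos(m * np.pi * x / Lx) * np.cos(n * np.pi * y / Ly)
            p += phi_s * phi_r / (lam * (kmn2 - k ** 2))
    return p

print(pressure_modal(0.3, 0.2, 0.7, 0.5, k=10.0))  # keep k away from k_mn
```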

  1. A semi-analytical treatment of xenon oscillations

    Zarei, M.; Minuchehr, A.; Ghaderi, R.

    2017-01-01

    Highlights: • A two-group, two-region kinetic core model is developed employing the eigenvalue separation index. • Poison dynamics are investigated within an adiabatic approach. • The overall nonlinear reactor model is recast into a linear time-varying framework incorporating a matrix-exponential numerical scheme. • The largest Lyapunov exponent is employed to analytically verify model stability. - Abstract: A novel approach is developed to investigate xenon oscillations within a two-group, two-region coupled core reactor model incorporating thermal feedback and poison effects. Group-wise neutronic coupling coefficients between the core regions are calculated by applying the associated fundamental and first-mode eigenvalue separation values. The resulting nonlinear state-space representation of the core behavior is quite suitable for the evaluation of reactivity-induced power transients such as load-following operation. The model, however, comprises a multi-physics coupling of sub-systems with extremely distant relaxation times, whose stiffness would require costly multistep implicit numerical methods. An adiabatic treatment of the sluggish poison dynamics is therefore proposed as a way out. The approach allows the nonlinear system to be investigated within a linear time-varying (LTV) framework, whereby a semi-analytical solution scheme is established. This scheme incorporates a matrix-exponential analytical solution of the perturbed system as an efficient tool for the study of load-following operation and for control purposes. Poison dynamics are updated at larger intervals, which removes the need for the specific numerical schemes of stiff systems. Simulation results for the axial offset, conducted on a VVER-1000 reactor at the beginning of cycle (BOC) and the end of cycle (EOC), display quite acceptable agreement with available benchmarks. The LTV reactor model is further investigated within a stability analysis of the associated time-varying systems at these two stages
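
    The adiabatic two-timescale recipe can be sketched briefly: freeze the slow poison state, advance the fast linearized dynamics with a matrix exponential, and refresh the poison only at coarse intervals. The 2x2 matrix and the toy xenon balance below are invented for illustration and bear no relation to VVER-1000 data.

```python
import numpy as np
from scipy.linalg import expm

def two_timescale_march(A_of_p, update_p, x0, p0, dt, n_fast, n_coarse):
    """Fast state x follows x' = A(p) x with the poison state p frozen;
    p is refreshed only every n_fast steps (adiabatic treatment), so no
    stiff implicit solver is needed."""
    x, p = np.array(x0, dtype=float), float(p0)
    for _ in range(n_coarse):
        Phi = expm(A_of_p(p) * dt)        # analytic step for frozen p
        for _ in range(n_fast):
            x = Phi @ x
        p = update_p(p, x, dt * n_fast)
    return x, p

# Illustrative two-region flux model with poison-dependent damping.
A = lambda p: np.array([[-0.5 - p, 0.2], [0.2, -0.4 - p]])
upd = lambda p, x, dt: p + dt * (0.01 * x.sum() - 0.05 * p)  # toy balance
print(two_timescale_march(A, upd, [1.0, 0.8], 0.1, 0.1, 50, 20))
```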

  2. Semi-Analytical Benchmarks for MCNP6

    Grechanuk, Pavel Aleksandrovi [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    Code verification is an extremely important process that involves proving or disproving the validity of code algorithms by comparing them against analytical results of the underlying physics or mathematical theory on which the code is based. Monte Carlo codes such as MCNP6 must undergo verification and testing upon every release to ensure that the codes are properly simulating nature. Specifically, MCNP6 has multiple sets of problems with known analytic solutions that are used for code verification. Monte Carlo codes primarily specify either current boundary sources or a volumetric fixed source, either of which can be very complicated functions of space, energy, direction and time. Thus, most of the challenges with modeling analytic benchmark problems in Monte Carlo codes come from identifying the correct source definition to properly simulate the correct boundary conditions. The problems included in this suite all deal with mono-energetic neutron transport without energy loss, in a homogeneous material. The variables that differ between the problems are source type (isotropic/beam), medium dimensionality (infinite/semi-infinite), etc.

  3. Universality and Realistic Extensions to the Semi-Analytic Simulation Principle in GNSS Signal Processing

    O. Jakubov

    2012-06-01

    The semi-analytic simulation principle in GNSS signal processing bypasses the bit-true operations at high sampling frequency. Instead, the signals at the output branches of the integrate-and-dump blocks are successfully modeled, thus making extensive Monte Carlo simulations feasible. Methods for simulations of code and carrier tracking loops with BPSK and BOC signals have been introduced in the literature, and Matlab toolboxes were designed and published. In this paper, we further extend the applicability of the approach. Firstly, we describe any GNSS signal as a special instance of linear multi-dimensional modulation, thereby stating a universal framework for the classification of differently modulated signals. Using such a description, we derive the semi-analytic models in general form. Secondly, we extend the model to realistic scenarios including delay in the feedback, slowly fading multipath effects, finite bandwidth, phase noise, and combinations of these. Finally, a discussion of the connection between this semi-analytic model and the position-velocity-time estimator is delivered, as well as a comparison of the theoretical and simulated characteristics produced by a prototype simulator developed at CTU in Prague.

  4. Semi-analytic calculations for the impact parameter dependence of electromagnetic multi-lepton pair production

    Gueclue, M.C.

    2000-01-01

    We provide a new general semi-analytic derivation of the impact parameter dependence of lowest-order electromagnetic lepton-pair production in relativistic heavy-ion collisions. Using this result, we have also calculated the related analytic multiple-pair production in the two-photon external-field model. We have compared our results with the equivalent-photon approximation and with other calculations.

  5. Semi-analytical solution to arbitrarily shaped beam scattering

    Wang, Wenjie; Zhang, Huayong; Sun, Yufa

    2017-07-01

    Based on field expansions in terms of appropriate spherical vector wave functions and a method of moments scheme, an exact semi-analytical solution to the scattering of an arbitrarily shaped beam is given. For incidence of a Gaussian beam, a zero-order Bessel beam and Hertzian electric dipole radiation, numerical results for the normalized differential scattering cross-section are presented for a spheroid and a circular cylinder of finite length, and the scattering properties are analyzed concisely.

  6. A two-dimensional, semi-analytic expansion method for nodal calculations

    Palmtag, S.P.

    1995-08-01

    Most modern nodal methods used today are based upon the transverse integration procedure, in which the multi-dimensional flux shape is integrated over the transverse directions in order to produce a set of coupled one-dimensional flux shapes. The one-dimensional flux shapes are then solved either analytically or by representing the flux shape by a finite polynomial expansion. While these methods have been verified for most light-water reactor applications, they have been found to have difficulty predicting the large thermal flux gradients near the interfaces of highly enriched MOX fuel assemblies. A new method is presented here in which the neutron flux is represented by a non-separable, two-dimensional, semi-analytic flux expansion. The main features of this method are: (1) the leakage terms from the node are modeled explicitly, and therefore the transverse integration procedure is not used; (2) the corner-point flux values for each node are directly edited from the solution method, and a corner-point interpolation is not needed in the flux reconstruction; (3) the thermal flux expansion contains hyperbolic terms representing analytic solutions to the thermal flux diffusion equation; and (4) the thermal flux expansion contains a thermal-to-fast flux ratio term which reduces the number of polynomial expansion functions needed to represent the thermal flux. This new nodal method has been incorporated into the computer code COLOR2G and has been used to solve a two-dimensional, two-group colorset problem containing uranium and highly enriched MOX fuel assemblies. The results from this calculation are compared to those found using a code based on the traditional transverse integration procedure.

  7. On accuracy problems for semi-analytical sensitivity analyses

    Pedersen, P.; Cheng, G.; Rasmussen, John

    1989-01-01

    The semi-analytical method of sensitivity analysis combines ease of implementation with computational efficiency. A major drawback to this method, however, is that severe accuracy problems have recently been reported. A complete error analysis for a beam problem with changing length is carried ou...... pseudo loads in order to obtain general load equilibrium with rigid body motions. Such a method would be readily applicable for any element type, whether analytical expressions for the element stiffnesses are available or not. This topic is postponed for a future study....
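
    The scheme under scrutiny is compact enough to state in a few lines of code: differentiate the discrete equilibrium K(a)u = F by finite differences applied to the element stiffness matrices only, then reuse the already-factorized K. A minimal sketch on a two-spring assembly (an invented example, not the beam problem of the paper), assuming the load is design-independent:

    ```python
    import numpy as np

    def assemble_K(k1, k2):
        """Global stiffness of two springs in series, node 0 fixed."""
        return np.array([[k1 + k2, -k2],
                         [-k2,      k2]])

    F = np.array([0.0, 1.0])      # unit load at the free end
    k1, k2 = 2.0, 3.0
    K = assemble_K(k1, k2)
    u = np.linalg.solve(K, F)

    # Semi-analytical sensitivity du/dk2: analytic outer structure,
    # finite-difference derivative of the stiffness matrix only.
    h = 1e-6 * k2
    dK = (assemble_K(k1, k2 + h) - assemble_K(k1, k2 - h)) / (2.0 * h)
    du = np.linalg.solve(K, -dK @ u)     # du/da = -K^{-1} (dK/da) u, F fixed

    # Exact check: u2 = 1/k1 + 1/k2, so du2/dk2 = -1/k2**2
    print(du[1], -1.0 / k2**2)
    ```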

  8. Semi-analytic variable charge solitary waves involving dust phase-space vortices (holes)

    Tribeche, Mouloud; Younsi, Smain; Amour, Rabia; Aoutou, Kamel [Plasma Physics Group, Faculty of Sciences-Physics, Theoretical Physics Laboratory, University of Bab-Ezzouar, USTHB BP 32, El Alia, Algiers 16111 (Algeria)], E-mail: mtribeche@usthb.dz

    2009-09-15

    A semi-analytic model for highly nonlinear solitary waves involving dust phase-space vortices (holes) is outlined. The variable dust charge is expressed in terms of the Lambert function and we take advantage of this transcendental function to investigate the localized structures that may occur in a dusty plasma with variable charge trapped dust particles. Our results which complement the previously published work on this problem (Schamel et al 2001 Phys. Plasmas 8 671) should be of basic interest for experiments that involve the trapping of dust particles in ultra-low-frequency dust acoustic modes.
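
    Charging balances of this kind reduce to a transcendental equation of the form ψ exp(ψ) = A, whose solution is the Lambert W function. The sketch below solves an illustrative balance of that form with scipy and verifies the inversion by substitution; the actual charging equation of the paper is more involved.

    ```python
    import numpy as np
    from scipy.special import lambertw

    # Illustrative charging-type balance: psi * exp(psi) = A, so psi = W(A).
    A = np.array([0.1, 1.0, 10.0])
    psi = lambertw(A).real            # principal branch, real for A >= -1/e

    print(psi)
    print(psi * np.exp(psi))          # recovers A, confirming the inversion
    ```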

  9. Semi-analytic variable charge solitary waves involving dust phase-space vortices (holes)

    Tribeche, Mouloud; Younsi, Smain; Amour, Rabia; Aoutou, Kamel

    2009-01-01

    A semi-analytic model for highly nonlinear solitary waves involving dust phase-space vortices (holes) is outlined. The variable dust charge is expressed in terms of the Lambert function and we take advantage of this transcendental function to investigate the localized structures that may occur in a dusty plasma with variable charge trapped dust particles. Our results which complement the previously published work on this problem (Schamel et al 2001 Phys. Plasmas 8 671) should be of basic interest for experiments that involve the trapping of dust particles in ultra-low-frequency dust acoustic modes.

  10. A semi-analytical study of positive corona discharge in wire–plane electrode configuration

    Yanallah, K; Pontiga, F; Chen, J H

    2013-01-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of the species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two-dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in negligible computational time, yet provide precise estimates of the corona discharge variables. (paper)

  11. A semi-analytical study of positive corona discharge in wire-plane electrode configuration

    Yanallah, K.; Pontiga, F.; Chen, J. H.

    2013-08-01

    Wire-to-plane positive corona discharge in air has been studied using an analytical model of two species (electrons and positive ions). The spatial distributions of electric field and charged species are obtained by integrating Gauss's law and the continuity equations of the species along the Laplacian field lines. The experimental values of corona current intensity and applied voltage, together with Warburg's law, have been used to formulate the boundary condition for the electron density on the corona wire. To test the accuracy of the model, the approximate electric field distribution has been compared with the exact numerical solution obtained from a finite element analysis. A parametrical study of wire-to-plane corona discharge has then been undertaken using the approximate semi-analytical solutions. Thus, the spatial distributions of electric field and charged particles have been computed for different values of the gas pressure, wire radius and electrode separation. Also, the two-dimensional distribution of ozone density has been obtained using a simplified plasma chemistry model. The approximate semi-analytical solutions can be evaluated in negligible computational time, yet provide precise estimates of the corona discharge variables.

  12. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).
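
    The heart of any shallow-water semi-analytical model is a forward reflectance expression in which the water-leaving signal is a blend of a water-column term and a bottom term, both attenuated with depth. The sketch below implements a simplified two-term form in the spirit of the Lee et al. model; the coefficients, IOP values and geometry factors are placeholders rather than SWIM's actual parameterization.

    ```python
    import numpy as np

    def rrs_shallow(a, bb, H, rho_bottom, theta_w=0.0):
        """Simplified semi-analytical remote-sensing reflectance for optically
        shallow water: deep-water column term plus bottom-reflectance term,
        both attenuated over depth H (m). Coefficients are illustrative."""
        kappa = a + bb                          # diffuse attenuation proxy
        u = bb / (a + bb)
        rrs_deep = (0.089 + 0.125 * u) * u      # deep-water reflectance model
        mu_w = np.cos(theta_w)                  # in-water path geometry
        path = kappa * H * (1.0 / mu_w + 1.0)   # down- plus up-welling path
        col = rrs_deep * (1.0 - np.exp(-path))  # water-column contribution
        bot = (rho_bottom / np.pi) * np.exp(-path)   # seafloor contribution
        return col + bot

    # A bright sand bottom dominates the signal in 2 m of clear water, and
    # its influence vanishes as the water deepens:
    for H in (2.0, 10.0, 50.0):
        print(H, rrs_shallow(a=0.05, bb=0.005, H=H, rho_bottom=0.3))
    ```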

  13. A Semi-Analytical Method for the PDFs of A Ship Rolling in Random Oblique Waves

    Liu, Li-qin; Liu, Ya-liu; Xu, Wan-hai; Li, Yan; Tang, You-gang

    2018-03-01

    The PDFs (probability density functions) and the probability of a ship rolling under random parametric and forced excitations were studied by a semi-analytical method. The rolling motion equation of the ship in random oblique waves was established. The righting arm obtained by numerical simulation was approximately fitted by an analytical function. The irregular waves were decomposed into two Gauss stationary random processes, and the CARMA(2, 1) model was used to fit the spectral density function of the parametric and forced excitations. The stochastic energy envelope averaging method was used to solve for the PDFs and the probability. The validity of the semi-analytical method was verified by the Monte Carlo method. The C11 ship was taken as an example, and the influences of the system parameters on the PDFs and probability were analyzed. The results show that the probability of ship rolling is affected by the characteristic wave height, wave length, and heading angle. In order to provide proper advice for the ship's manoeuvring, the parametric excitations should be considered appropriately when the ship navigates in oblique seas.

  14. Semi-analytic flux formulas for shielding calculations

    Wallace, O.J.

    1976-06-01

    A special coordinate system based on the work of H. Ono and A. Tsuro has been used to derive exact semi-analytic formulas for the flux from cylindrical, spherical, toroidal, rectangular, annular and truncated cone volume sources; from cylindrical, spherical, truncated cone, disk and rectangular surface sources; and from curved and tilted line sources. In most of the cases where the source is curved, shields of the same curvature are allowed in addition to the standard slab shields; cylindrical shields are also allowed in the rectangular volume source flux formula. An especially complete treatment of a cylindrical volume source is given, in which dose points may be arbitrarily located both within and outside the source, and a finite cylindrical shield may be considered. Detector points may also be specified as lying within spherical and annular source volumes. The integral functions encountered in these formulas require at most two-dimensional numeric integration in order to evaluate the flux values. The classic flux formulas involving only slab shields and slab, disk, line, sphere and truncated cone sources become some of the many special cases which are given in addition to the more general formulas mentioned above.
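
    The flavor of such formulas can be reproduced with a point-kernel integral: the uncollided flux on the axis of a uniform disk surface source behind a parallel slab shield reduces to a one-dimensional radial integral evaluated numerically. The geometry and source strength below are arbitrary; for the unshielded case the classic closed form provides a check.

    ```python
    import numpy as np
    from scipy.integrate import quad

    def disk_flux(S_A, R, h, mu, T):
        """Uncollided flux on the axis of a uniform disk surface source
        (strength S_A per unit area, radius R) at height h, through a slab
        shield of thickness T (attenuation coefficient mu) parallel to the
        disk; the slant path through the slab is T * r / h."""
        def integrand(rho):
            r = np.hypot(h, rho)
            return (S_A * np.exp(-mu * T * r / h) / (4.0 * np.pi * r**2)
                    * 2.0 * np.pi * rho)
        val, _ = quad(integrand, 0.0, R)     # adaptive quadrature
        return val

    # Unshielded check against the classic closed form
    # phi = (S_A / 4) * ln(1 + R**2 / h**2):
    S_A, R, h = 1.0, 2.0, 1.0
    print(disk_flux(S_A, R, h, mu=0.0, T=0.0))
    print(S_A / 4.0 * np.log(1.0 + R**2 / h**2))
    ```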

  15. A semi-analytical iterative technique for solving chemistry problems

    Majeed Ahmed AL-Jawary

    2017-07-01

    The main aim and contribution of the current paper is to implement the semi-analytical iterative method suggested by Temimi and Ansari in 2011, namely TAM, to solve two chemical problems. An approximate solution obtained by the TAM provides fast convergence. The chemical problems considered are the absorption of carbon dioxide into phenyl glycidyl ether and a chemical kinetics problem. These problems are represented by systems of nonlinear ordinary differential equations with boundary conditions and initial conditions. Error analysis of the approximate solutions is studied using the error remainder and the maximal error remainder, and an exponential rate of convergence is observed. For both problems the results of the TAM are compared with results obtained by previous methods available in the literature. The results demonstrate that the method has many merits: it is derivative-free, and it overcomes the difficulty of calculating Adomian polynomials to handle the nonlinear terms in the Adomian Decomposition Method (ADM). It does not require calculating the Lagrange multiplier as in the Variational Iteration Method (VIM), in which the terms of the sequence become complex after several iterations so that analytical evaluation of terms becomes very difficult or impossible. Nor does it need to construct a homotopy and solve the corresponding algebraic equations as in the Homotopy Perturbation Method (HPM). The MATHEMATICA® 9 software was used to evaluate terms in the iterative process.
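
    The structure of the TAM is easy to exhibit on a single nonlinear initial value problem: split the equation as L(u) + N(u) + g = 0, solve L(u0) + g = 0 with the initial condition, then repeatedly solve L(u_{n+1}) + g + N(u_n) = 0 under the same condition. A minimal symbolic sketch (an invented example, not one of the paper's chemistry systems):

    ```python
    import sympy as sp

    x = sp.symbols('x')

    # Problem: u' + u**2 = 0, u(0) = 1; L(u) = u', N(u) = u**2, g = 0.
    # Exact solution: 1 / (1 + x).
    u = sp.Integer(1)                 # u0 solves u0' = 0, u0(0) = 1
    for _ in range(4):
        # TAM step: u_{n+1}' = -N(u_n), u_{n+1}(0) = 1 -> integrate directly
        u = 1 - sp.integrate(u**2, (x, 0, x))

    print(sp.series(u, x, 0, 5))              # 1 - x + x**2 - x**3 + ...
    print(sp.series(1/(1 + x), x, 0, 5))      # matches the exact expansion
    ```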

  16. Heat Conduction Analysis Using Semi Analytical Finite Element Method

    Wargadipura, A. H. S.

    1997-01-01

    Heat conduction problems are very often found in science and engineering fields. It is of crucial importance to determine quantitative descriptions of this important physical phenomenon. This paper discusses the development and application of a numerical formulation and computation that can be used to analyze heat conduction problems. The mathematical equation which governs the physical behaviour of heat conduction is a second-order partial differential equation. The numerical solution used in this paper combines the finite element method with Fourier series, an approach known as the semi-analytical finite element method. It results in simultaneous algebraic equations which are solved using Gauss elimination. The computer implementation is carried out in FORTRAN. In the final part of the paper, a heat conduction problem in a rectangular plate domain with isothermal boundary conditions at its edges is solved to show the application of the computer program developed, and a comparison with the analytical solution is discussed to assess the accuracy of the numerical solution obtained.

  17. GWSCREEN: A semi-analytical model for assessment of the groundwater pathway from surface or buried contamination. Theory and user's manual, Version 2.0: Revision 2

    Rood, A.S.

    1994-06-01

    Multimedia exposure assessment of hazardous chemicals and radionuclides requires that all pathways of exposure be investigated. The GWSCREEN model was designed to perform initial screening calculations for groundwater pathway impacts resulting from the leaching of surficial and buried contamination at CERCLA sites identified as low-probability hazards at the INEL. In Version 2.0, an additional model was added to calculate impacts to groundwater from the operation of a percolation pond. The model was designed to make best use of the data that would potentially be available. These data include the area and depth of contamination, sorptive properties and solubility limit of the contaminant, depth to the aquifer, and the physical properties of the aquifer (porosity, velocity, and dispersivity). For the pond model, data on effluent flow rates and operation time are required. Model output includes the limiting soil concentration such that, after leaching and transport to the aquifer, regulatory contaminant levels in groundwater are not exceeded. Groundwater concentration as a function of time may also be calculated. The model considers only drinking water consumption and does not include the transfer of contamination to food products due to irrigation with contaminated water. Radiological dose, carcinogenic risk, and the hazard quotient are calculated for the peak time using the user-defined input mass (or activity). Appendices contain sample problems and the source code listing.
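
    Screening models of this family rest on closed-form solutions of the one-dimensional advection-dispersion equation. A standard representative is the Ogata-Banks solution for a constant-concentration boundary, sketched below with arbitrary parameters; GWSCREEN's own formulation additionally handles leaching, retardation and decay.

    ```python
    import numpy as np
    from scipy.special import erfc

    def ogata_banks(x, t, v, D, c0=1.0):
        """Ogata-Banks solution of dc/dt + v dc/dx = D d2c/dx2 with c(0,t)=c0:
        concentration at distance x and time t for a constant source."""
        a = (x - v * t) / (2.0 * np.sqrt(D * t))
        b = (x + v * t) / (2.0 * np.sqrt(D * t))
        return 0.5 * c0 * (erfc(a) + np.exp(v * x / D) * erfc(b))

    # Breakthrough at a well 100 m downgradient (v in m/yr, D in m^2/yr)
    for t in (1.0, 5.0, 20.0, 100.0):
        print(t, ogata_banks(x=100.0, t=t, v=5.0, D=50.0))
    ```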

  18. Semi-analytical calculation of fuel parameters for shock ignition fusion

    S A Ghasemi

    2017-02-01

    In this paper, semi-analytical relations for the total energy, fuel gain and hot-spot radius in a non-isobaric model have been derived and compared with the numerical calculations of Schmitt (2010) for the shock ignition scenario in nuclear fusion. The results indicate that the approximations used by Rosen (1983) and Schmitt (2010) for the calculation of the burn-up fraction are not accurate enough compared with numerical simulation. Meanwhile, it is shown that the obtained formulas of the non-isobaric model cannot determine the model parameters of total energy, fuel gain and hot-spot radius uniquely. Therefore, employing more appropriate approximations, improved semi-analytical relations for the non-isobaric model have been presented, which are in better agreement with the numerical calculations of shock ignition by Schmitt (2010).
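
    The burn-up fraction approximation at issue is, for DT fuel, commonly written Phi = rhoR / (rhoR + H_B) with H_B of roughly 7 g/cm^2, and a crude fuel gain follows from the burn fraction times the fuel energy content over the driver energy. A worked instance with illustrative numbers (the paper's point is precisely that this level of approximation can be too coarse):

    ```python
    # Burn-up fraction and a crude fuel gain estimate for DT fuel.
    # Phi = rhoR / (rhoR + H_B), H_B ~ 7 g/cm^2 (common approximation).
    SPECIFIC_YIELD_DT = 3.39e11   # J/g released by complete DT burn

    def burn_fraction(rho_r, h_b=7.0):
        return rho_r / (rho_r + h_b)

    def fuel_gain(fuel_mass_g, rho_r, driver_energy_j):
        return (burn_fraction(rho_r) * fuel_mass_g * SPECIFIC_YIELD_DT
                / driver_energy_j)

    print(burn_fraction(2.0))            # ~0.22 at rhoR = 2 g/cm^2
    print(fuel_gain(3e-4, 2.0, 3e5))     # illustrative numbers, gain ~75
    ```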

  19. Semi-analytic modeling of tokamak particle transport

    Shi Bingren; Long Yongxing; Li Jiquan

    2000-01-01

    The linear particle transport equation of tokamak plasma is analyzed. Particle flow consists of an outward diffusion and an inward convection. The general solution is expressed in terms of a Green function constituted by eigenfunctions of the corresponding Sturm-Liouville problem. For a particle source near the plasma edge (shadow fueling), a well-behaved solution in terms of a Fourier series can be constituted by using the complementarity relation. It can be seen from the lowest eigenfunction that the particle density becomes peaked when the wall recycling is reduced. For a transient point source in the inner region, a well-behaved solution can be obtained by complementarity as well.

  20. A semi-analytical solution for slug tests in an unconfined aquifer considering unsaturated flow

    Sun, Hongbing

    2016-01-01

    A semi-analytical solution considering vertical unsaturated flow is developed for groundwater flow in response to a slug test in an unconfined aquifer in Laplace space. The new solution incorporates the effects of partial penetration, anisotropy, vertical unsaturated flow, and a moving water table boundary. Compared to the Kansas Geological Survey (KGS) model, the new solution can significantly improve the fits of the modeled to the measured hydraulic heads at the late stage of slug tests in an unconfined aquifer, particularly when the slug well has a partially submerged screen and moisture drainage above the water table is significant. The radial hydraulic conductivities estimated with the new solution are comparable to those from the KGS, Bouwer and Rice, and Hvorslev methods. In addition, the new solution can also be used to examine the vertical conductivity, specific storage, specific yield, and the moisture retention parameters in an unconfined aquifer based on slug test data.
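
    Solutions of this type live in Laplace space and must be inverted numerically to the time domain. As a self-contained illustration of the inversion step only, the sketch below applies mpmath's invertlaplace with a de Hoog-type algorithm to a transform whose inverse is known; the actual slug-test solution is far more involved.

    ```python
    import mpmath as mp

    mp.mp.dps = 20                       # working precision

    # Toy transform with known inverse: F(p) = 1/(p + 1) <-> f(t) = exp(-t).
    F = lambda p: 1 / (p + 1)

    for t in (0.1, 1.0, 5.0):
        approx = mp.invertlaplace(F, t, method='dehoog')
        print(t, approx, mp.exp(-t))     # numerical inversion vs exact
    ```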

  1. Reconstructing see-saw models

    Ibarra, Alejandro

    2007-01-01

    In this talk we discuss the prospects to reconstruct the high-energy see-saw Lagrangian from low-energy experiments in supersymmetric scenarios. We show that the model with three right-handed neutrinos could be reconstructed in theory, but not in practice. Then, we discuss the prospects to reconstruct the model with two right-handed neutrinos, which is the minimal see-saw model able to accommodate neutrino observations. We identify the relevant processes to achieve this goal, and comment on the sensitivity of future experiments to them. We find the prospects much more promising, and we emphasize in particular the importance of the observation of rare leptonic decays for the reconstruction of the right-handed neutrino masses.

  2. The effect of inclined soil layers on surface vibration from underground railways using a semi-analytical approach

    Jones, S; Hunt, H

    2009-01-01

    Ground vibration due to underground railways is a significant source of disturbance for people living or working near the subways. The numerical models used to predict vibration levels have inherent uncertainty which must be understood to give confidence in the predictions. A semi-analytical approach is developed herein to investigate the effect of soil layering on the surface vibration of a halfspace where both soil properties and layer inclination angles are varied. The study suggests that both the material properties and the inclination angle of the layers have a significant effect (±10 dB) on the surface vibration response.

  3. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  4. Application of Dynamic Analysis in Semi-Analytical Finite Element Method.

    Liu, Pengfei; Xing, Qinyan; Wang, Dawei; Oeser, Markus

    2017-08-30

    Analyses of dynamic responses are of great importance for the design, maintenance and rehabilitation of asphalt pavement. In order to evaluate the dynamic responses of asphalt pavement under moving loads, a specific computational program, SAFEM, was developed based on a semi-analytical finite element method. This method is three-dimensional and only requires a two-dimensional FE discretization by incorporating Fourier series in the third dimension. In this paper, the algorithm used to apply dynamic analysis to SAFEM is introduced in detail. Asphalt pavement models under moving loads were built in SAFEM and in the commercial finite element software ABAQUS to verify the accuracy and efficiency of SAFEM. The verification shows that the computational accuracy of SAFEM is high and its computational time is much shorter than that of ABAQUS. Moreover, experimental verification was carried out, and the prediction derived from SAFEM is consistent with the measurement. Therefore, SAFEM can reliably predict the dynamic response of asphalt pavement under moving loads, proving beneficial to road administrations in assessing the pavement's state.
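
    The principle SAFEM exploits, expanding in a trigonometric basis along one direction so that each harmonic decouples and only a lower-dimensional problem is solved per term, can be seen in one dimension on a simply supported Euler-Bernoulli beam: every sine harmonic of the load is solved independently and the results superposed. Parameters below are purely illustrative.

    ```python
    import numpy as np

    # Simply supported beam, EI w'''' = q(x): each sine harmonic decouples,
    # w_n = q_n / (EI (n pi / L)^4), mirroring the per-harmonic solves of a
    # semi-analytical FE method.
    L, EI, q0 = 2.0, 1.0, 1.0
    n = np.arange(1, 200, 2)                      # odd harmonics of uniform load
    q_n = 4.0 * q0 / (np.pi * n)                  # Fourier sine coefficients
    w_n = q_n / (EI * (n * np.pi / L) ** 4)

    x = L / 2.0
    w_mid = np.sum(w_n * np.sin(n * np.pi * x / L))
    print(w_mid, 5.0 * q0 * L**4 / (384.0 * EI))  # matches the closed form
    ```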

  5. Determination of flexibility factors in curved pipes with end restraints using a semi-analytic formulation

    Fonseca, E.M.M.; Melo, F.J.M.Q. de; Oliveira, C.A.M.

    2002-01-01

    Piping systems are structural sets used in the chemical industry, conventional or nuclear power plants and fluid transport in general-purpose process equipment. They include curved elements built as parts of toroidal thin-walled structures. The mechanical behaviour of such structural assemblies is of leading importance for satisfactory performance and safety standards of the installations. This paper presents a semi-analytic formulation based on Fourier trigonometric series for solving the pure bending problem in curved pipes. A pipe element is considered as a part of a toroidal shell, and a displacement formulation pipe element was developed with Fourier series. The problem is solved from a system of differential equations using mathematical software. To build up the solution, a simple but efficient deformation model, based on semi-membrane behaviour, was followed here, given the geometry and the thin-shell assumption. The flexibility factors are compared with the ASME code for some elbow dimensions adopted from ISO 1127. The stress field distribution was also calculated.

  6. IPOLE - semi-analytic scheme for relativistic polarized radiative transport

    Mościbrodzka, M.; Gammie, C. F.

    2018-03-01

    We describe IPOLE, a new public ray-tracing code for covariant, polarized radiative transport. The code extends the IBOTHROS scheme for covariant, unpolarized transport using two representations of the polarized radiation field: in the coordinate frame, it parallel transports the coherency tensor; in the frame of the plasma it evolves the Stokes parameters under emission, absorption, and Faraday conversion. The transport step is implemented to be as spacetime- and coordinate-independent as possible. The emission, absorption, and Faraday conversion step is implemented using an analytic solution to the polarized transport equation with constant coefficients. As a result, IPOLE is stable, efficient, and produces a physically reasonable solution even for a step with high optical depth and Faraday depth. We show that the code matches analytic results in flat space, and that it produces results that converge to those produced by Dexter's GRTRANS polarized transport code on a complicated model problem. We expect IPOLE will mainly find applications in modelling Event Horizon Telescope sources, but it may also be useful in other relativistic transport problems such as modelling for the IXPE mission.
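
    The constant-coefficient step has a simple closed form in the unpolarized limit, which conveys why such a step stays stable at high optical depth: I(s) = I0 e^(-alpha s) + (j/alpha)(1 - e^(-alpha s)) relaxes toward the source function instead of blowing up. A sketch of this scalar analogue (the full polarized step of the code evolves all four Stokes parameters):

    ```python
    import numpy as np

    def transfer_step(I0, j_em, alpha, ds):
        """Exact solution of dI/ds = j - alpha*I over a step of length ds with
        constant coefficients; stable even when alpha*ds >> 1."""
        tau = alpha * ds
        return I0 * np.exp(-tau) + (j_em / alpha) * (1.0 - np.exp(-tau))

    # March through a slab; at large optical depth the intensity saturates
    # at the source function S = j/alpha, as it should.
    I, j_em, alpha, ds = 0.0, 2.0, 5.0, 0.1
    for _ in range(100):
        I = transfer_step(I, j_em, alpha, ds)
    print(I, j_em / alpha)
    ```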

  7. Application of Semi-analytical Satellite Theory orbit propagator to orbit determination for space object catalog maintenance

    Setty, Srinivas J.; Cefola, Paul J.; Montenbruck, Oliver; Fiedler, Hauke

    2016-05-01

    Catalog maintenance for Space Situational Awareness (SSA) demands an accurate and computationally lean orbit propagation and orbit determination technique to cope with the ever increasing number of observed space objects. As an alternative to established numerical and analytical methods, we investigate the accuracy and computational load of the Draper Semi-analytical Satellite Theory (DSST). The standalone version of the DSST was enhanced with additional perturbation models to improve its recovery of short periodic motion. The accuracy of DSST is, for the first time, compared to a numerical propagator with fidelity force models for a comprehensive grid of low, medium, and high altitude orbits with varying eccentricity and different inclinations. Furthermore, the run-time of both propagators is compared as a function of propagation arc, output step size and gravity field order to assess its performance for a full range of relevant use cases. For use in orbit determination, a robust performance of DSST is demonstrated even in the case of sparse observations, which is most sensitive to mismodeled short periodic perturbations. Overall, DSST is shown to exhibit adequate accuracy at favorable computational speed for the full set of orbits that need to be considered in space surveillance. Along with the inherent benefits of a semi-analytical orbit representation, DSST provides an attractive alternative to the more common numerical orbit propagation techniques.

  8. Semi-Analytic Galaxies - I. Synthesis of environmental and star-forming regulation mechanisms

    Cora, Sofía A.; Vega-Martínez, Cristian A.; Hough, Tomás; Ruiz, Andrés N.; Orsi, Álvaro; Muñoz Arancibia, Alejandra M.; Gargiulo, Ignacio D.; Collacchioni, Florencia; Padilla, Nelson D.; Gottlöber, Stefan; Yepes, Gustavo

    2018-05-01

    We present results from the semi-analytic model of galaxy formation SAG applied to the MULTIDARK simulation MDPL2. SAG features an updated supernova (SN) feedback scheme and a robust modelling of the environmental effects on satellite galaxies. This incorporates a gradual starvation of the hot gas halo driven by the action of ram pressure stripping (RPS), which can affect the cold gas disc, and tidal stripping (TS), which can act on all baryonic components. Galaxy orbits of orphan satellites are integrated, providing adequate positions and velocities for the estimation of RPS and TS. The star formation history and stellar mass assembly of galaxies are sensitive to the redshift dependence implemented in the SN feedback model. We discuss a variant of our model that allows us to reconcile the predicted star formation rate density at z ≳ 3 with the observed one, at the expense of an excess in the faint end of the stellar mass function at z = 2. The fractions of passive galaxies as a function of stellar mass, halo mass and halo-centric distance are consistent with observational measurements. The model also reproduces the evolution of the main sequence of star-forming central and satellite galaxies. The similarity between them is a result of the gradual starvation of the hot gas halo suffered by satellites, in which RPS plays a dominant role. RPS of the cold gas does not affect the fraction of quenched satellites, but it contributes to reaching the right atomic hydrogen gas content for the more massive satellites (M⋆ ≳ 10^10 M⊙).

  9. Semi-analytic calculation of the gravitational wave signal from the electroweak phase transition for general quartic scalar effective potentials

    Kehayias, John; Profumo, Stefano

    2010-01-01

    Upcoming gravitational wave (GW) detectors might detect a stochastic background of GWs potentially arising from many possible sources, including bubble collisions from a strongly first-order electroweak phase transition. We investigate whether it is possible to connect, via a semi-analytical approximation to the tunneling rate of scalar fields with quartic potentials, the GW signal through detonations with the parameters entering the potential that drives the electroweak phase transition. To this end, we consider a finite-temperature effective potential similar in form to the Higgs potential in the Standard Model (SM). In the context of a semi-analytic approximation to the three-dimensional Euclidean action, we derive a general approximate form for the tunneling temperature and the relevant GW parameters. We explore the GW signal across the parameter space describing the potential which drives the phase transition. We comment on the potential detectability of a GW signal with future experiments, and on the physical relevance of the associated potential parameters in the context of theories which have effective potentials similar in form to that of the SM. In particular we consider singlet, triplet, higher-dimensional-operator, and top-flavor extensions to the Higgs sector of the SM. We find that the addition of a temperature-independent cubic term in the potential, arising from a gauge singlet for instance, can greatly enhance the GW power. The other parameters have milder, but potentially noticeable, effects.

  10. Semi-analytical Study of a One-dimensional Contaminant Flow in a ...

    ADOWIE PERE

    ABSTRACT: The Bubnov-Galerkin weighted residual method was used to solve a one-dimensional contaminant flow problem in this paper. The governing equation of the contaminant flow, which is characterized by advection, dispersion and adsorption, was discretized and solved to obtain the semi-analytical solution.
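
    A compact instance of the Bubnov-Galerkin procedure on a steady one-dimensional advection-dispersion problem (coefficients invented for illustration): expand the unknown in sine trial functions satisfying homogeneous boundary conditions, force the residual to be orthogonal to the same functions, and solve the resulting linear system.

    ```python
    import numpy as np
    from scipy.integrate import quad

    D, v, N = 0.2, 1.0, 15             # dispersion, velocity, trial functions

    # u(x) = (1 - x) + sum_j c_j sin(j pi x) satisfies u(0)=1, u(1)=0 exactly.
    # The residual of -D u'' + v u' = 0 is made orthogonal to each sin(i pi x).
    def phi(j, x):   return np.sin(j * np.pi * x)
    def dphi(j, x):  return j * np.pi * np.cos(j * np.pi * x)
    def ddphi(j, x): return -(j * np.pi) ** 2 * np.sin(j * np.pi * x)

    A = np.zeros((N, N))
    b = np.zeros(N)
    for i in range(1, N + 1):
        b[i - 1] = -quad(lambda x: v * (-1.0) * phi(i, x), 0, 1)[0]
        for j in range(1, N + 1):
            A[i - 1, j - 1] = quad(
                lambda x: (-D * ddphi(j, x) + v * dphi(j, x)) * phi(i, x),
                0, 1)[0]
    c = np.linalg.solve(A, b)

    def u(x):
        return (1.0 - x) + sum(c[j - 1] * phi(j, x) for j in range(1, N + 1))

    x0 = 0.5
    exact = (np.exp(v * x0 / D) - np.exp(v / D)) / (1.0 - np.exp(v / D))
    print(u(x0), exact)                # Galerkin value vs exact solution
    ```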

  11. A semi-analytical computation of the theoretical uncertainties of the solar neutrino flux

    Jorgensen, Andreas C. S.; Christensen-Dalsgaard, Jorgen

    2017-01-01

    We present a comparison between Monte Carlo simulations and a semi-analytical approach that reproduces the theoretical probability distribution functions of the solar neutrino fluxes, stemming from the pp, pep, hep, Be-7, B-8, N-13, O-15 and F-17 source reactions. We obtain good agreement between...
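
    The essence of such a semi-analytical error budget: when a flux scales as a product of powers of input parameters, F proportional to the product of x_i**alpha_i, and the inputs are log-normal, ln F is Gaussian with variance equal to the sum of (alpha_i * sigma_i)**2, which a Monte Carlo can confirm. The exponents and uncertainties below are invented, not the solar-model values.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # F ~ prod x_i**alpha_i ; x_i log-normal with log-std sigma_i.
    alpha = np.array([2.0, -1.5, 0.5])      # illustrative power-law exponents
    sigma = np.array([0.02, 0.05, 0.10])    # fractional (log) uncertainties

    # Semi-analytical: ln F is Gaussian with variance sum (alpha_i sigma_i)^2.
    sigma_lnF = np.sqrt(np.sum((alpha * sigma) ** 2))

    # Monte Carlo check
    x = rng.lognormal(mean=0.0, sigma=sigma, size=(200_000, 3))
    lnF = np.sum(alpha * np.log(x), axis=1)
    print(sigma_lnF, lnF.std())
    ```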

  12. Elasto-plastic strain analysis by a semi-analytical method

    ... deformation problems following a semi-analytical method, incorporating the com... The set of equations in (8) is non-linear in nature and is solved by direct ... Here, [K] and [M] are the stiffness matrix and mass matrix, which are of the form ...

  13. Comparing semi-analytic particle tagging and hydrodynamical simulations of the Milky Way's stellar halo

    Cooper, Andrew P.; Cole, Shaun; Frenk, Carlos S.; Le Bret, Theo; Pontzen, Andrew

    2017-08-01

    Particle tagging is an efficient, but approximate, technique for using cosmological N-body simulations to model the phase-space evolution of the stellar populations predicted, for example, by a semi-analytic model of galaxy formation. We test the technique developed by Cooper et al. (which we call stings here) by comparing particle tags with stars in a smooth particle hydrodynamic (SPH) simulation. We focus on the spherically averaged density profile of stars accreted from satellite galaxies in a Milky Way (MW)-like system. The stellar profile in the SPH simulation can be recovered accurately by tagging dark matter (DM) particles in the same simulation according to a prescription based on the rank order of particle binding energy. Applying the same prescription to an N-body version of this simulation produces a density profile differing from that of the SPH simulation by ≲10 per cent on average between 1 and 200 kpc. This confirms that particle tagging can provide a faithful and robust approximation to a self-consistent hydrodynamical simulation in this regime (in contradiction to previous claims in the literature). We find only one systematic effect, likely due to the collisionless approximation, namely that massive satellites in the SPH simulation are disrupted somewhat earlier than their collisionless counterparts. In most cases, this makes remarkably little difference to the spherically averaged distribution of their stellar debris. We conclude that, for galaxy formation models that do not predict strong baryonic effects on the present-day DM distribution of MW-like galaxies or their satellites, differences in stellar halo predictions associated with the treatment of star formation and feedback are much more important than those associated with the dynamical limitations of collisionless particle tagging.

  14. Semi-Analytical method for the pricing of barrier options in case of time-dependent parameters (with Matlab® codes)

    Guardasoni C.

    2018-03-01

    A Semi-Analytical method for the pricing of Barrier Options (SABO) is presented. The method is based on the foundations of Boundary Integral Methods, which are recast here for application to barrier option pricing in the Black-Scholes model with time-dependent interest rate, volatility and dividend yield. The validity of the numerical method is illustrated by several numerical examples and comparisons.
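
    A natural cross-check for a semi-analytical barrier pricer is brute-force Monte Carlo under the same time-dependent Black-Scholes dynamics. The sketch below prices a down-and-out call with piecewise-constant-in-step r(t) and sigma(t); discrete barrier monitoring biases the price slightly high, but it serves as a benchmark. All parameters are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def down_and_out_call_mc(S0, K, B, r_fn, sig_fn, T,
                             n_steps=500, n_paths=200_000):
        """Monte Carlo price of a down-and-out call under Black-Scholes with
        time-dependent r(t) and sigma(t); barrier checked at grid times."""
        dt = T / n_steps
        t = 0.0
        S = np.full(n_paths, S0)
        alive = np.ones(n_paths, dtype=bool)
        disc = 1.0
        for _ in range(n_steps):
            r, sig = r_fn(t), sig_fn(t)
            z = rng.standard_normal(n_paths)
            S = S * np.exp((r - 0.5 * sig**2) * dt + sig * np.sqrt(dt) * z)
            alive &= (S > B)                 # knocked out below the barrier
            disc *= np.exp(-r * dt)
            t += dt
        payoff = np.where(alive, np.maximum(S - K, 0.0), 0.0)
        return disc * payoff.mean()

    price = down_and_out_call_mc(
        S0=100.0, K=100.0, B=80.0, T=1.0,
        r_fn=lambda t: 0.02 + 0.02 * t,              # rising short rate
        sig_fn=lambda t: 0.20 + 0.10 * np.sin(np.pi * t))
    print(price)
    ```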

  15. Semi-analytical solution for flow in a leaky unconfined aquifer toward a partially penetrating pumping well

    Malama, Bwalya; Kuhlman, Kristopher L.; Barrash, Warren

    2008-07-01

    A semi-analytical solution is presented for the problem of flow in a system consisting of unconfined and confined aquifers, separated by an aquitard. The unconfined aquifer is pumped continuously at a constant rate from a well of infinitesimal radius that partially penetrates its saturated thickness. The solution is termed semi-analytical because the exact solution obtained in double Laplace-Hankel transform space is inverted numerically. The solution presented here is more general than similar solutions obtained for confined aquifer flow, as we do not adopt the assumption of unidirectional flow in the confined aquifer (typically assumed to be horizontal) and the aquitard (typically assumed to be vertical). Model-predicted results show significant departure from the solution that does not take into account the effect of leakage, even for cases where aquitard hydraulic conductivities are two orders of magnitude smaller than those of the aquifers. The results show low sensitivity to changes in radial hydraulic conductivities for aquitards that are two or more orders of magnitude smaller than those of the aquifers, in conformity with the findings of earlier workers that radial flow in aquitards may be neglected under such conditions. Hence, for cases where aquitard hydraulic conductivities are two or more orders of magnitude smaller than aquifer conductivities, the simpler models that restrict flow to the radial direction in aquifers and to the vertical direction in aquitards may be sufficient. However, the model developed here can be used to model flow in aquifer-aquitard systems where radial flow is significant in aquitards.

  16. Radiographic observation and semi-analytical reconstruction of the fracture process zone in a silicate composite specimen

    Vavřík, Daniel; Jandejsek, Ivan; Fíla, Tomáš; Veselý, V.

    2013-01-01

    Vol. 58, No. 3 (2013), pp. 315-326. ISSN 0001-7043. R&D Projects: GA ČR(CZ) GAP105/11/1551. Institutional support: RVO:68378297. Keywords: cementitious composite * quasi-brittle fracture * fracture process zone * digital radiography. Subject RIV: JL - Materials Fatigue, Friction Mechanics. http://journal.it.cas.cz/index.php?stranka=contents

  17. A semi-analytical approach for solving of nonlinear systems of functional differential equations with delay

    Rebenda, Josef; Šmarda, Zdeněk

    2017-07-01

    In the paper, we propose a correct and efficient semi-analytical approach to solve initial value problem for systems of functional differential equations with delay. The idea is to combine the method of steps and differential transformation method (DTM). In the latter, formulas for proportional arguments and nonlinear terms are used. An example of using this technique for a system with constant and proportional delays is presented.
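
    The method-of-steps half of the approach is easy to demonstrate numerically (the paper performs the per-interval solves symbolically with DTM): on each interval whose length equals the delay, the delayed argument refers to an already-known segment, so an ordinary IVP solver suffices. A sketch for x'(t) = -x(t - 1) with constant history:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    tau = 1.0
    history = lambda t: 1.0          # x(t) = 1 for t <= 0

    segments = [history]             # callable solution on each past interval
    x_end = history(0.0)

    for k in range(4):               # integrate over (k*tau, (k+1)*tau]
        prev = segments[-1]
        rhs = lambda t, x, prev=prev: [-prev(t - tau)]
        sol = solve_ivp(rhs, (k * tau, (k + 1) * tau), [x_end],
                        dense_output=True, rtol=1e-10, atol=1e-12)
        segments.append(lambda t, s=sol: float(s.sol(t)[0]))
        x_end = sol.y[0, -1]

    # Exact: x(t) = 1 - t on (0, 1]; x(t) = 1 - t + (t-1)**2/2 on (1, 2].
    print(segments[1](0.5))          # exact: 0.5
    print(segments[2](1.5))          # exact: -0.375
    ```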

  18. Determination of Transport Properties From Flowing Fluid Temperature Logging In Unsaturated Fractured Rocks: Theory And Semi-Analytical Solution

    Mukhopadhyay, Sumit; Tsang, Yvonne W.

    2008-01-01

    Flowing fluid temperature logging (FFTL) has recently been proposed as a method to locate flowing fractures. We argue that FFTL, backed up by data from high-precision distributed temperature sensors, can be a useful tool in locating flowing fractures and in estimating the transport properties of unsaturated fractured rocks. We have developed the theoretical background needed to analyze data from FFTL. In this paper, we present a simplified conceptualization of FFTL in unsaturated fractured rock, and develop a semi-analytical solution for spatial and temporal variations of pressure and temperature inside a borehole in response to an applied perturbation (pumping of air from the borehole). We compare the semi-analytical solution with predictions from the TOUGH2 numerical simulator. Based on the semi-analytical solution, we propose a method to estimate the permeability of the fracture continuum surrounding the borehole. Using this proposed method, we estimated the effective fracture continuum permeability of the unsaturated rock hosting the Drift Scale Test (DST) at Yucca Mountain, Nevada. Our estimate compares well with previous independent estimates for fracture permeability of the DST host rock. The conceptual model of FFTL presented in this paper is based on the assumptions of single-phase flow, convection-only heat transfer, and negligible change in system state of the rock formation. In a sequel paper (Mukhopadhyay et al., 2008), we extend the conceptual model to evaluate some of these assumptions. We also perform inverse modeling of FFTL data to estimate, in addition to permeability, other transport parameters (such as porosity and thermal conductivity) of unsaturated fractured rocks.

  19. Using Fourier and Taylor series expansion in semi-analytical deformation analysis of thick-walled isotropic and wound composite structures

    Jiran L.

    2016-06-01

    Thick-walled tubes made from isotropic and anisotropic materials are subjected to an internal pressure while the semi-analytical method is employed to investigate their elastic deformations. The contribution and novelty of this method are that it works universally for different loads, different boundary conditions, and different geometries of analyzed structures. Moreover, even when composite material is considered, the method requires no simplifying assumptions. The method uses curvilinear tensor calculus and works with the analytical expression of the total potential energy, while the unknown displacement functions are approximated using appropriate series expansions. Fourier and Taylor series expansions are brought into the analysis, where they are tested and compared. The main potential of the proposed method is in analyses of wound composite structures, where a simple description of the geometry is made in a curvilinear coordinate system while material properties are described in their inherent Cartesian coordinate system. Validations of the introduced semi-analytical method are performed by comparing results with those obtained from three-dimensional finite element analysis (FEA). Calculations with the Fourier series expansion show noticeable disagreement with the results from the finite element model, because the Fourier series expansion is not able to capture the course of radial deformation; it can therefore be used only for rough estimations of the shape after deformation. On the other hand, the semi-analytical method with the Taylor series expansion works very well for both types of material, and its predictions of deformations are reliable and widely exploitable.

  20. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    Roteta, M; Baro, J; Fernandez-Varea, J M; Salvat, F

    1994-07-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within ~1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.
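
    The integration strategy described, obtaining a total cross section by adaptive quadrature of a differential one, can be mimicked with scipy on the Klein-Nishina cross section, for which a closed-form total exists as a check. (PHOTAC itself uses the parameterized cross sections of Baro et al. and its own 20-point Gauss scheme.)

    ```python
    import numpy as np
    from scipy.integrate import quad

    RE2 = 7.940787e-26          # classical electron radius squared, cm^2

    def kn_diff(theta, eps):
        """Klein-Nishina differential cross section dsigma/dOmega for a
        photon of energy eps in units of the electron rest energy."""
        ratio = 1.0 / (1.0 + eps * (1.0 - np.cos(theta)))   # k'/k
        return 0.5 * RE2 * ratio**2 * (ratio + 1.0 / ratio
                                       - np.sin(theta) ** 2)

    def kn_total_numeric(eps):
        f = lambda th: kn_diff(th, eps) * 2.0 * np.pi * np.sin(th)
        return quad(f, 0.0, np.pi)[0]        # adaptive quadrature

    def kn_total_analytic(eps):
        t = 1.0 + 2.0 * eps
        return 2.0 * np.pi * RE2 * (
            (1.0 + eps) / eps**2 * (2.0 * (1.0 + eps) / t - np.log(t) / eps)
            + np.log(t) / (2.0 * eps) - (1.0 + 3.0 * eps) / t**2)

    eps = 1.0                                # 511 keV photon
    print(kn_total_numeric(eps))
    print(kn_total_analytic(eps))            # agree to quadrature accuracy
    ```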

  1. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    Roteta, M.; Baro, J.; Fernandez-Varea, J. M.; Salvat, F.

    1994-01-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within ~1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included. (Author) 14 refs.

  2. Semi-analytical prediction of hydraulic resistance and heat transfer for pipe and channel flows of water at supercritical pressure

    Laurien, E.

    2012-01-01

    Within the Generation IV International Forum the Supercritical Water Reactor is investigated. For its core design and safety analysis, the efficient prediction of flow and heat transfer parameters such as the wall shear stress and the heat transfer coefficient for pipe and channel flows is needed. For circular pipe flows, a numerical model based on the one-dimensional conservation equations of mass, momentum and energy in the radial direction is presented, referred to as a 'semi-analytical' method. An accurate, high-order numerical method is employed to evaluate previously derived analytical solutions of the governing equations. Flow turbulence is modeled using the algebraic approach of Prandtl/von Karman, including a model for the buffer layer. The influence of wall roughness is taken into account by a new modified numerical damping function of the turbulence model. The thermo-hydraulic properties of water are implemented according to the international standard of 1997. This method has the potential to be used within a sub-channel analysis code and as wall functions for CFD codes to predict the wall shear stress and the wall temperature. The present study presents a validation of the method, comparing model results with experiments and multi-dimensional computational (CFD) studies over a wide range of flow parameters. The focus is laid on forced convection flows related to reactor design and near-design conditions. It is found that the method can accurately predict the wall temperature even under deterioration conditions as they occur in the selected experiments (Yamagata et al. 1972 at 24.5 MPa, Ornatski et al. 1971 at 25.5 MPa and Swenson et al. 1963 at 22.75 MPa). A comparison of the friction coefficient under high heat flux conditions, including significant viscosity and density reductions near the wall, with various correlations for the hydraulic resistance is presented; the best agreement is achieved with the correlation of Pioro et al. 2004.

  3. Electro-osmotic and pressure-driven flow of viscoelastic fluids in microchannels: Analytical and semi-analytical solutions

    Ferrás, L. L.; Afonso, A. M.; Alves, M. A.; Nóbrega, J. M.; Pinho, F. T.

    2016-09-01

    In this work, we present a series of solutions for combined electro-osmotic and pressure-driven flows of viscoelastic fluids in microchannels. The solutions are semi-analytical, a feature made possible by the use of the Debye-Hückel approximation for the electrokinetic fields, thus restricted to cases with small electric double-layers, in which the distance between the microfluidic device walls is at least one order of magnitude larger than the electric double-layer thickness. To describe the complex fluid rheology, several viscoelastic differential constitutive models were used, namely, the simplified Phan-Thien-Tanner model with linear, quadratic or exponential kernel for the stress coefficient function, the Johnson-Segalman model, and the Giesekus model. The results obtained illustrate the effects of the Weissenberg number, the Johnson-Segalman slip parameter, the Giesekus mobility parameter, and the relative strengths of the electro-osmotic and pressure gradient-driven forcings on the dynamics of these viscoelastic flows.
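
    In the Newtonian limit the Debye-Hückel electro-osmotic profile between parallel plates has the familiar closed form u(y) = u_s (1 - cosh(κy)/cosh(κH)), to which the Poiseuille contribution adds; this is the skeleton the viscoelastic solutions generalize. A sketch with illustrative parameters:

    ```python
    import numpy as np

    def combined_eo_poiseuille(y, H, kappa, u_s, dpdx, mu):
        """Newtonian velocity profile in a channel |y| <= H under combined
        electro-osmotic and pressure-driven forcing (Debye-Hueckel limit).
        u_s = -eps*zeta*Ex/mu is the Helmholtz-Smoluchowski slip velocity."""
        eo = u_s * (1.0 - np.cosh(kappa * y) / np.cosh(kappa * H))
        pd = -dpdx / (2.0 * mu) * (H**2 - y**2)
        return eo + pd

    H = 50e-6                    # half-height, m
    kappa = 1.0 / 2e-6           # inverse Debye length (thin EDL: kappa*H = 25)
    y = np.linspace(-H, H, 5)
    print(combined_eo_poiseuille(y, H, kappa, u_s=1e-4, dpdx=-10.0, mu=1e-3))
    ```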

  4. The "2T" ion-electron semi-analytic shock solution for code-comparison with xRAGE: A report for FY16

    Ferguson, Jim Michael

    2016-01-01

    This report documents an effort to generate the semi-analytic "2T" ion-electron shock solution developed in the paper by Masser, Wohlbier, and Lowrie, and the initial attempts to understand how to use this solution as a code-verification tool for one of LANL's ASC codes, xRAGE. Most of the work so far has gone into generating the semi-analytic solution. Considerable effort will go into understanding how to write the xRAGE input deck that matches the boundary conditions imposed by the solution, and also what physics models must be implemented within the semi-analytic solution itself to match the model assumptions inherent in xRAGE. Therefore, most of this report focuses on deriving the equations for the semi-analytic 1D-planar time-independent "2T" ion-electron shock solution, and is written in a style that is intended to provide clear guidance for anyone writing their own solver.

  5. Calculation of photon attenuation coefficients of elements and compounds from approximate semi-analytical formulae

    Roteta, M.; Baro, J.; Fernandez-Varea, J.M.; Salvat, F.

    1994-01-01

    The FORTRAN 77 code PHOTAC to compute photon attenuation coefficients of elements and compounds is described. The code is based on the semi-analytical approximate atomic cross sections proposed by Baro et al. (1994). Photoelectric cross sections are calculated directly from a simple analytical expression. Atomic cross sections for coherent and incoherent scattering and for pair production are obtained as integrals of the corresponding differential cross sections. These integrals are evaluated, to a pre-selected accuracy, by using a 20-point Gauss adaptive integration algorithm. Calculated attenuation coefficients agree with recently compiled databases to within ~1%, in the energy range from 1 keV to 1 GeV. The complete source listing of the program PHOTAC is included.

  6. Evaluation of evaporation coefficient for micro-droplets exposed to low pressure: A semi-analytical approach

    Chakraborty, Prodyut R., E-mail: pchakraborty@iitj.ac.in [Department of Mechanical Engineering, Indian Institute of Technology Jodhpur, 342011 (India); Hiremath, Kirankumar R., E-mail: k.r.hiremath@iitj.ac.in [Department of Mathematics, Indian Institute of Technology Jodhpur, 342011 (India); Sharma, Manvendra, E-mail: PG201283003@iitj.ac.in [Defence Laboratory Jodhpur, Defence Research & Development Organisation, 342011 (India)

    2017-02-05

    The evaporation rate of water is strongly influenced by the energy barrier due to molecular collisions and by heat transfer limitations. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to the maximum possible theoretical limit, varies by three conflicting orders of magnitude across the literature. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that considers the effect of the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The effect of droplet size reduction due to evaporation on the cooling rate is found to be negligible. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is less than the mean free path of vapor molecules at the droplet surface, contrary to reported theoretical predictions. The evaporation coefficient is found to decrease rapidly when the droplet under consideration has a radius larger than the mean free path of the evaporating molecules, confirming the molecular-collision barrier to the evaporation rate. The trend of the change in evaporation coefficient with increasing droplet size predicted by the proposed model will facilitate obtaining a functional relation between the evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.
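
    The quantity being evaluated connects to the Hertz-Knudsen relation: the theoretical maximum evaporation mass flux is p_sat * sqrt(m / (2 pi k_B T)), and the evaporation coefficient is the measured or modeled flux divided by this limit. A worked instance for water, using an Antoine-type saturation pressure fit as an assumed input (not taken from the paper):

    ```python
    import numpy as np

    KB = 1.380649e-23           # J/K
    M_H2O = 2.9915e-26          # kg, mass of one water molecule

    def p_sat_water(T):
        """Approximate saturation vapor pressure of water, Pa (Magnus-type
        fit valid over roughly 0-100 C; an assumed input)."""
        return 610.94 * np.exp(17.625 * (T - 273.15) / (T - 30.11))

    def hertz_knudsen_max_flux(T):
        """Theoretical upper limit on evaporation mass flux, kg m^-2 s^-1."""
        return p_sat_water(T) * np.sqrt(M_H2O / (2.0 * np.pi * KB * T))

    T = 288.0
    j_max = hertz_knudsen_max_flux(T)
    j_measured = 0.6 * j_max        # stand-in for a measured/modeled flux
    print("max flux:", j_max)
    print("evaporation coefficient:", j_measured / j_max)
    ```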

  7. Evaluation of evaporation coefficient for micro-droplets exposed to low pressure: A semi-analytical approach

    Chakraborty, Prodyut R.; Hiremath, Kirankumar R.; Sharma, Manvendra

    2017-01-01

    The evaporation rate of water is strongly influenced by the energy barrier due to molecular collisions and by heat transfer limitations. The evaporation coefficient, defined as the ratio of the experimentally measured evaporation rate to the maximum possible theoretical limit, varies by three conflicting orders of magnitude across the literature. In the present work, a semi-analytical transient heat diffusion model of droplet evaporation is developed that considers the effect of the change in droplet size due to evaporation from its surface when the droplet is injected into vacuum. The effect of droplet size reduction due to evaporation on the cooling rate is found to be negligible. However, the evaporation coefficient is found to approach the theoretical limit of unity when the droplet radius is less than the mean free path of vapor molecules at the droplet surface, contrary to reported theoretical predictions. The evaporation coefficient is found to decrease rapidly when the droplet under consideration has a radius larger than the mean free path of the evaporating molecules, confirming the molecular-collision barrier to the evaporation rate. The trend of the change in evaporation coefficient with increasing droplet size predicted by the proposed model will facilitate obtaining a functional relation between the evaporation coefficient and droplet size, and can be used for benchmarking the interaction between multiple droplets during evaporation in vacuum.

  8. Semi-analytical solution for electro-magneto-thermoelastic creep response of functionally graded piezoelectric rotating disk

    Loghman, A.; Abdollahian, M.; Jafarzadeh Jazi, A.; Ghorbanpour Arani, A.

    2013-01-01

    The time-dependent electro-magneto-thermoelastic creep response of a rotating disk made of functionally graded piezoelectric materials (FGPM) is studied. The disk is placed in a uniform magnetic field and a distributed temperature field, and is subjected to an induced electric potential and a centrifugal body force. The material's thermal, mechanical, magnetic and electric properties are represented by power-law distributions in the radial direction. The creep constitutive model is Norton's law, in which the creep parameters are also power functions of radius. Using the equations of equilibrium and the strain-displacement and stress-strain relations in conjunction with the potential-displacement equation, a non-homogeneous differential equation for the displacement containing time-dependent creep strains is derived. A semi-analytical solution followed by a numerical procedure has been developed to obtain the history of stresses, strains, electric potential and creep-strain rates by using the Prandtl-Reuss relations. Histories of the electric potential, the radial, circumferential and effective stresses and strains, as well as the creep stress rates and the effective creep strain rate are presented. It has been found that the tensile radial stress distribution decreases during the life of the FGPM rotating disk, which is associated with major electric potential redistributions; this can be used as a sensor for condition monitoring of the FGPM rotating disk. (authors)

  9. A Semi-Analytical Method for Rapid Estimation of Near-Well Saturation, Temperature, Pressure and Stress in Non-Isothermal CO2 Injection

    LaForce, T.; Ennis-King, J.; Paterson, L.

    2015-12-01

    Reservoir cooling near the wellbore is expected when fluids are injected into a reservoir or aquifer in CO2 storage, enhanced oil or gas recovery, enhanced geothermal systems, and water injection for disposal. Ignoring thermal effects near the well can lead to under-prediction of changes in reservoir pressure and stress due to competition between increased pressure and contraction of the rock in the cooled near-well region. In this work a previously developed semi-analytical model for immiscible, nonisothermal fluid injection is generalised to include partitioning of components between two phases. Advection-dominated radial flow is assumed so that the coupled two-phase flow and thermal conservation laws can be solved analytically. The temperature and saturation profiles are used to find the increase in reservoir pressure, tangential, and radial stress near the wellbore in a semi-analytical, forward-coupled model. Saturation, temperature, pressure, and stress profiles are found for parameters representative of several CO2 storage demonstration projects around the world. General results on maximum injection rates vs depth for common reservoir parameters are also presented. Prior to drilling an injection well there is often little information about the properties that will determine the injection rate that can be achieved without exceeding fracture pressure, yet injection rate and pressure are key parameters in well design and placement decisions. Analytical solutions to simplified models such as these can quickly provide order of magnitude estimates for flow and stress near the well based on a range of likely parameters.

  10. Reconstructing bidimensional scalar field theory models

    Flores, Gabriel H.; Svaiter, N.F.

    2001-07-01

    In this paper we review how to reconstruct scalar field theories in two-dimensional spacetime starting from solvable Schrödinger equations. Three different Schrödinger potentials are analyzed. We obtained two new models starting from the Morse and Scarf II hyperbolic potentials: the U(θ) = θ² ln²(θ²) model and the U(θ) = θ² cos²(ln(θ²)) model, respectively. (author)

  11. A semi-analytical approach towards plane wave analysis of local resonance metamaterials using a multiscale enriched continuum description

    Sridhar, A.; Kouznetsova, V.; Geers, M.G.D.

    2017-01-01

    This work presents a novel multiscale semi-analytical technique for the acoustic plane wave analysis of (negative) dynamic mass density type local resonance metamaterials with complex micro-structural geometry. A two-step solution strategy is adopted, in which the unit cell problem at the

  12. A semi-analytical study of the vibrations induced by flow in the piping of nuclear power plants

    Maneschy, J.E.

    1981-01-01

    A semi-analytical method is presented to evaluate the safety of piping systems under vibration excitation from internal flow. The method is based on applying a plane spectrum to the system, derived from measured modal accelerations. A criterion is established to verify stress levels and compare them with the allowable levels. (Author)

  13. Strategy for solving semi-analytically three-dimensional transient flow in a coupled N-layer aquifer system

    Veling, E.J.M.; Maas, C.

    2008-01-01

    Efficient strategies for solving semi-analytically the transient groundwater head in a coupled N-layer aquifer system φ_i(r, z, t), i = 1, ..., N, with radial symmetry, with full z-dependency, and with partially penetrating wells are presented. Aquitards are treated as aquifers with their own

  14. Field-driven chiral bubble dynamics analysed by a semi-analytical approach

    Vandermeulen, J.; Leliaert, J.; Dupré, L.; Van Waeyenberge, B.

    2017-12-01

    Nowadays, field-driven chiral bubble dynamics in the presence of the Dzyaloshinskii-Moriya interaction are a topic of thorough investigation. In this paper, a semi-analytical approach is used to derive equations of motion that express the bubble wall (BW) velocity and the change in in-plane magnetization angle as a function of the micromagnetic parameters of the involved interactions, thereby taking into account the two-dimensional nature of the bubble wall. It is demonstrated that the equations of motion enable an accurate description of the expanding and shrinking convex bubble dynamics, and an expression for the transition field between shrinkage and expansion is derived. In addition, these equations of motion show that the BW velocity depends not only on the driving force but also on the BW curvature. The absolute BW velocity increases for both a shrinking and an expanding bubble, but for different reasons: for expanding bubbles, it is due to the increasing importance of the driving force, while for shrinking bubbles, it is due to the increasing importance of contributions related to the BW curvature. Finally, using this approach we show how the recently proposed magnetic bubblecade memory can operate in the flow regime in the presence of a tilted sinusoidal magnetic field and at greatly reduced bubble sizes compared to the original device prototype.

  15. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    Garg, Uma; Chang, Philip

    2017-01-01

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.

  16. Semi-analytical solutions for flow to a well in an unconfined-fractured aquifer system

    Sedghi, Mohammad M.; Samani, Nozar

    2015-09-01

    Semi-analytical solutions of flow to a well in an unconfined single porosity aquifer underlain by a fractured double porosity aquifer, both of infinite radial extent, are obtained. The upper aquifer is pumped at a constant rate from a pumping well of infinitesimal radius. The solutions are obtained via Laplace and Hankel transforms and are then numerically inverted to time domain solutions using the de Hoog et al. algorithm and Gaussian quadrature. The results are presented in the form of dimensionless type curves. The solution takes into account the effects of pumping well partial penetration, water table with instantaneous drainage, leakage with storage in the lower aquifer into the upper aquifer, and storativity and hydraulic conductivity of both fractures and matrix blocks. Both spheres and slab-shaped matrix blocks are considered. The effects of the underlying fractured aquifer hydraulic parameters on the dimensionless drawdown produced by the pumping well in the overlying unconfined aquifer are examined. The presented solution can be used to estimate hydraulic parameters of the unconfined and the underlying fractured aquifer by type curve matching techniques or with automated optimization algorithms. Errors arising from ignoring the underlying fractured aquifer in the drawdown distribution in the unconfined aquifer are also investigated.
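
    The time-domain results above come from numerically inverting Laplace-space solutions. As a self-contained illustration of that step, the sketch below uses the Gaver-Stehfest algorithm, a simpler alternative to the de Hoog et al. method named in the abstract that works well for smooth, non-oscillatory drawdown curves, and checks it against a transform pair with a known inverse.

```python
import numpy as np
from math import factorial, log

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_i (N must be even)."""
    V = np.zeros(N)
    for i in range(1, N + 1):
        s = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            s += (k ** (N // 2) * factorial(2 * k)
                  / (factorial(N // 2 - k) * factorial(k) * factorial(k - 1)
                     * factorial(i - k) * factorial(2 * k - i)))
        V[i - 1] = (-1) ** (N // 2 + i) * s
    return V

def stehfest_invert(F, t, N=12):
    """Approximate f(t) from the Laplace transform F(s), sampled at real s."""
    V = stehfest_coeffs(N)
    a = log(2.0) / t
    return a * sum(V[i] * F((i + 1) * a) for i in range(N))

# Sanity check: F(s) = 1/(s+1)  <-->  f(t) = exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, stehfest_invert(lambda s: 1.0 / (s + 1.0), t), np.exp(-t))
```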

  17. A Semi-analytic Criterion for the Spontaneous Initiation of Carbon Detonations in White Dwarfs

    Garg, Uma; Chang, Philip, E-mail: umagarg@uwm.edu, E-mail: chang65@uwm.edu [Department of Physics, University of Wisconsin-Milwaukee, 3135 North Maryland Avenue, Milwaukee, WI 53211 (United States)

    2017-02-20

    Despite over 40 years of active research, the nature of the white dwarf progenitors of SNe Ia remains unclear. However, in the last decade, various progenitor scenarios have highlighted the need for detonations to be the primary mechanism by which these white dwarfs are consumed, but it is unclear how these detonations are triggered. In this paper we study how detonations are spontaneously initiated due to temperature inhomogeneities, e.g., hotspots, in burning nuclear fuel in a simplified physical scenario. Following the earlier work by Zel’Dovich, we describe the physics of detonation initiation in terms of the comparison between the spontaneous wave speed and the Chapman–Jouguet speed. We develop an analytic expression for the spontaneous wave speed and utilize it to determine a semi-analytic criterion for the minimum size of a hotspot with a linear temperature gradient between a peak and base temperature for which detonations in burning carbon–oxygen material can occur. Our results suggest that spontaneous detonations may easily form under a diverse range of conditions, likely allowing a number of progenitor scenarios to initiate detonations that burn up the star.

  18. The implementation of a simplified spherical harmonics semi-analytic nodal method in PANTHER

    Hall, S.K.; Eaton, M.D.; Knight, M.P.

    2013-01-01

    Highlights: ► An SP_N nodal method is proposed. ► Consistent CMFD derived and tested. ► Mark vacuum boundary conditions applied. ► Benchmarked against other diffusion and transport codes. - Abstract: In this paper an SP_N nodal method is proposed which can utilise existing multi-group neutron diffusion solvers to obtain the solution. The semi-analytic nodal method is used in conjunction with a coarse mesh finite difference (CMFD) scheme to solve the resulting set of equations. This is compared against various nuclear benchmarks to show that the method is capable of computing an accurate solution for practical cases. A few different CMFD formulations are implemented and their performance compared. It is found that the effective diffusion coefficient (EDC) can provide additional stability and requires fewer power iterations on a coarse mesh. A re-arrangement of the EDC is proposed that allows the iteration matrix to be computed at the beginning of a calculation. Successive nodal updates then modify only the source term, unlike existing CMFD methods which update the iteration matrix. A set of Mark vacuum boundary conditions is also derived which can be applied to the SP_N nodal method, extending its validity. This is possible due to a similarity transformation of the angular coupling matrix, which is used when applying the nodal method. It is found that the Marshak vacuum condition can also be derived, but it would require significant modification of existing neutron diffusion codes to implement.
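
    For readers unfamiliar with CMFD, the coupling referred to above can be written in the standard one-dimensional form (the notation is ours, not necessarily PANTHER's): the coarse-mesh interface current is

    $$ J_{i+1/2} = -\tilde{D}_{i+1/2}\,(\bar{\phi}_{i+1}-\bar{\phi}_{i}) - \hat{D}_{i+1/2}\,(\bar{\phi}_{i+1}+\bar{\phi}_{i}), \qquad \hat{D}_{i+1/2} = -\,\frac{J^{\mathrm{nodal}}_{i+1/2} + \tilde{D}_{i+1/2}\,(\bar{\phi}_{i+1}-\bar{\phi}_{i})}{\bar{\phi}_{i+1}+\bar{\phi}_{i}}, $$

    where $\tilde{D}$ is the plain finite-difference coupling and $\hat{D}$ is chosen so that the coarse current reproduces the higher-order nodal current. Conventional CMFD updates $\hat{D}$, and hence the iteration matrix, after every nodal sweep; the re-arranged EDC described above instead keeps the matrix fixed and moves the nodal information into the source term.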

  19. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated with the dataset collected in the Yellow River Estuary between September 1 and 10, 2009. Comparison of the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed better than the other two. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary reduced the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm; these are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely more specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or for the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
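
    The TSA referred to above is, in the Gitelson et al. tradition, a product of a reciprocal-reflectance difference and a third band. A minimal sketch of such a three-band retrieval follows; the band choices and the linear calibration coefficients are hypothetical placeholders, since the paper's optimal wavelengths and fitted coefficients are not given in this abstract.

```python
def three_band_index(R1, R2, R3):
    """Gitelson-type three-band index: [1/Rrs(l1) - 1/Rrs(l2)] * Rrs(l3)."""
    return (1.0 / R1 - 1.0 / R2) * R3

# Hypothetical calibration Chla = a * index + b (placeholder coefficients)
a, b = 117.4, 23.2
Rrs = {660: 0.012, 708: 0.015, 753: 0.009}   # example reflectances [1/sr]
chla = a * three_band_index(Rrs[660], Rrs[708], Rrs[753]) + b
print(f"Chla ~ {chla:.1f} mg m^-3")
```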

  20. A Generalized Semi-Analytical Solution for the Dispersive Henry Problem: Effect of Stratification and Anisotropy on Seawater Intrusion

    Marwan Fahs

    2018-02-01

    The Henry problem (HP) continues to play a useful role in theoretical and practical studies related to seawater intrusion (SWI) into coastal aquifers. The popularity of this problem is attributed to its simplicity and precision, and to the existence of semi-analytical (SA) solutions. The first SA solution was developed for a high uniform diffusion coefficient. Several further studies have contributed more realistic solutions with lower diffusion coefficients or velocity-dependent dispersion. All the existing SA solutions are limited to homogeneous and isotropic domains. This work attempts to improve the realism of the SA solution of the dispersive HP by extending it to heterogeneous and anisotropic coastal aquifers. The solution is obtained using the Fourier series method. A special hydraulic conductivity-depth model describing stratified heterogeneity is used for mathematical convenience. An efficient technique is developed to solve the flow and transport equations in the spectral space. With this technique, we show that the HP can be solved in the spectral space with the salt concentration as the primary unknown. Several examples are generated, and the SA solutions are compared against an in-house finite element code. The results provide high-quality data, assessed by quantitative indicators, that can be effectively used for code verification in realistic configurations of heterogeneity and anisotropy. The SA solution is used to explain contradictory results stated in previous works about the effect of anisotropy on the saltwater wedge. It is also used to investigate the combined influence of stratification and anisotropy on relevant metrics characterizing SWI. At a constant gravity number, anisotropy leads to landward migration of the saltwater wedge, more intense saltwater flux, a wider mixing zone and a shallower groundwater discharge zone to the sea. The influence of stratified heterogeneity is more pronounced in highly anisotropic aquifers. The

  1. Computing dispersion curves of elastic/viscoelastic transversely-isotropic bone plates coupled with soft tissue and marrow using semi-analytical finite element (SAFE) method.

    Nguyen, Vu-Hieu; Tran, Tho N H T; Sacchi, Mauricio D; Naili, Salah; Le, Lawrence H

    2017-08-01

    We present a semi-analytical finite element (SAFE) scheme for accurately computing the velocity dispersion and attenuation in a trilayered system consisting of a transversely-isotropic (TI) cortical bone plate sandwiched between the soft tissue and marrow layers. The soft tissue and marrow are mimicked by two fluid layers of finite thickness. A Kelvin-Voigt model accounts for the absorption of all three biological domains. The simulated dispersion curves are validated by the results from the commercial software DISPERSE and published literature. Finally, the algorithm is applied to a viscoelastic trilayered TI bone model to interpret the guided modes of an ex-vivo experimental data set from a bone phantom.

  2. Semi-analytic equations to the Cox-Thompson inverse scattering method at fixed energy for special cases

    Palmai, T.; Apagyi, B.; Horvath, M.

    2008-01-01

    The solution of the Cox-Thompson inverse scattering problem at fixed energy [1-3] is reformulated, resulting in semi-analytic equations. The new set of equations for the normalization constants and the nonphysical (shifted) angular momenta is free of matrix inversion operations. This simplification is a result of treating only the input phase shifts of partial waves of a given parity. Therefore, the proposed method can be applied to identical particle scattering of the bosonic type (or to certain cases of identical fermionic scattering). The new formulae are expected to be numerically more efficient than the previous ones. Based on the semi-analytic equations, an approximate method is proposed for the generic inverse scattering problem, when partial waves of arbitrary parity are considered. (author)

  3. A Semi-Analytical Methodology for Multiwell Productivity Index of Well-Industry-Production-Scheme in Tight Oil Reservoirs

    Guangfeng Liu

    2018-04-01

    Recently, the well-industry-production-scheme (WIPS) has attracted more and more attention as a way to improve tight oil recovery. However, multi-well pressure interference (MWPI) induced by WIPS strongly challenges traditional transient pressure analysis methods, which focus on single multi-fractured horizontal wells (SMFHWs) without MWPI. Therefore, a semi-analytical methodology for the multiwell productivity index (MPI) was proposed to study well performance under the WIPS scheme in tight reservoirs. To facilitate methodology development, the conceptual models of the tight formation and the WIPS scheme were first described. Secondly, seepage models of the tight reservoir and the hydraulic fractures (HFs) were sequentially established and then dynamically coupled. Numerical simulation was utilized to validate our model. Finally, identification of flow regimes and sensitivity analysis were conducted. Our results showed good agreement between the proposed model and numerical simulation; moreover, our approach was also considerably faster than numerical simulation. Some expected flow regimes were significantly distorted due to WIPS: the slope of the type curves that characterize the linear or bi-linear flow regimes exceeds 0.5 or 0.25, respectively, and the horizontal line that characterizes the radial flow regime lies above 0.5. The smaller the oil rate, the more severely the flow regimes were distorted. Well rate mainly determines the degree of distortion of the MPI curves, while fracture length, well spacing and fracture spacing mainly determine when the distortion of the MPI curves occurs. The bigger the well rate, the more severely the MPI curves are distorted, while as well spacing decreases, fracture length increases, or fracture spacing increases, the onset of MWPI becomes earlier. The stress sensitivity coefficient mainly affects the MPI at the formation pseudo-radial flow stage and has almost no influence on the onset of MWPI. This work gains some

  4. Human eyeball model reconstruction and quantitative analysis.

    Xing, Qi; Wei, Qi

    2014-01-01

    Determining the shape of the eyeball is important for diagnosing eyeball diseases such as myopia. In this paper, we present an automatic approach to precisely reconstruct the three-dimensional geometric shape of the eyeball from MR images. The model development pipeline involved image segmentation, registration, B-spline surface fitting and subdivision surface fitting, none of which required manual interaction. From the resulting high-resolution models, geometric characteristics of the eyeball can be accurately quantified and analyzed. In addition to the eight metrics commonly used in existing studies, we proposed two novel metrics, Gaussian Curvature Analysis and Sphere Distance Deviation, to quantify the cornea shape and the whole eyeball surface, respectively. The experimental results showed that the reconstructed eyeball models accurately represent the complex morphology of the eye. The ten metrics parameterize the eyeball across different subjects, and can potentially be used for eye disease diagnosis.
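
    The abstract does not define the Sphere Distance Deviation metric precisely; one plausible reading, sketched below under that assumption, is the RMS distance of the reconstructed surface points from their least-squares best-fit sphere.

```python
import numpy as np

def fit_sphere(pts):
    """Least-squares sphere fit via |p|^2 = 2 p.c + (r^2 - |c|^2),
    which is linear in the center c and in d = r^2 - |c|^2."""
    A = np.hstack([2.0 * pts, np.ones((len(pts), 1))])
    b = np.sum(pts ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    c, d = sol[:3], sol[3]
    return c, np.sqrt(d + c @ c)

def sphere_distance_deviation(pts):
    """RMS deviation of surface points from the best-fit sphere."""
    c, r = fit_sphere(pts)
    return np.sqrt(np.mean((np.linalg.norm(pts - c, axis=1) - r) ** 2))

# Synthetic test: noisy sphere of radius 12 mm (a typical eyeball scale)
rng = np.random.default_rng(0)
u = rng.normal(size=(2000, 3))
pts = 12.0 * u / np.linalg.norm(u, axis=1, keepdims=True)
pts += 0.05 * rng.normal(size=pts.shape)
print(sphere_distance_deviation(pts))   # ~0.05, i.e. the noise level
```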

  5. Dose reconstruction modeling for medical radiation workers

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin

    2017-01-01

    Exposure information is a crucial element for the assessment of health risk due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured doses in the past using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses should be calculated which, in turn, will be used to explore a wide range of health risks of medical occupational radiation exposure.

  6. Dose reconstruction modeling for medical radiation workers

    Choi, Yeong Chull; Cha, Eun Shil; Lee, Won Jin [Dept. of Preventive Medicine, Korea University, Seoul (Korea, Republic of)

    2017-04-15

    Exposure information is a crucial element for the assessment of health risk due to radiation. Radiation doses received by medical radiation workers have been collected and maintained in a public registry since 1996. Since exposure levels in the remote past are of greater concern, it is essential to reconstruct unmeasured doses in the past using known information. We developed retrodiction models for different groups of medical radiation workers and estimated individual past doses before 1996. Using these estimates, organ doses should be calculated which, in turn, will be used to explore a wide range of health risks of medical occupational radiation exposure.

  7. A semi-analytical modelling of multistage bunch compression with collective effects

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-01

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  8. A semi-analytical modelling of multistage bunch compression with collective effects

    Zagorodnov, Igor; Dohlus, Martin

    2010-07-15

    In this paper we introduce an analytical solution (up to the third order) for a multistage bunch compression and acceleration system without collective effects. The solution for the system with collective effects is found by an iterative procedure based on this analytical result. The developed formalism is applied to the FLASH facility at DESY. Analytical estimations of RF tolerances are given. (orig.)

  9. Semi-analytical model of laser resonance absorption in plasmas with a parabolic density profile

    Pestehe, S J; Mohammadnejad, M

    2010-01-01

    Analytical expressions for mode conversion and resonance absorption of electromagnetic waves in inhomogeneous, unmagnetized plasmas are required for laboratory and simulation studies. Although most of the analyses of this problem have concentrated on the linear plasma density profile, there are a few research works that deal with different plasma density profiles including the parabolic profile. Almost none of them could give clear analytical formulae for the electric and magnetic components of the electromagnetic field propagating through inhomogeneous plasmas. In this paper, we have considered the resonant absorption of laser light near the critical density of plasmas with parabolic electron density profiles followed by a uniform over-dense region and have obtained expressions for the electric and magnetic vectors of laser light propagating through the plasma. An estimation of the fractional absorption of laser energy has also been carried out. It has been shown that, in contrast to the linear density profile, the energy absorption depends explicitly on the value of collision frequency as well as on a new parameter, N, called the over-dense density order.

  10. A semi-analytical radiobiological model may assist treatment planning in light ion radiotherapy

    Kundrát, Pavel

    2007-01-01

    Vol. 52, No. 23 (2007), pp. 6813-6830 ISSN 0031-9155 R&D Projects: GA ČR GA202/05/2728 Institutional research plan: CEZ:AV0Z10100502 Keywords: Bragg peak * light ions * hadron * hadron radiotherapy * biological effectiveness * treatment planning Subject RIV: BF - Elementary Particles and High Energy Physics Impact factor: 2.528, year: 2007

  11. Semi-Analytic Solution of HIV and TB Co-Infection Model BOLARIN ...

    ADOWIE PERE

    HIV/TB co-infection is the most powerful known risk factor for ... homotopy transform to generate a convergent series solution of ... the boundary of the domain Ω. The operator A can be divided into two parts L and N, where L is the linear part, ...

  12. A semi-analytical thermal modelling approach for selective laser melting

    Yang, Y.; van Keulen, A.; Ayas, C.

    2018-01-01

    Selective laser melting (SLM) wherein a metal part is built in a layer-by-layer manner in a powder bed is a promising and versatile way for manufacturing components with complex geometry. However, components built by SLM suffer from substantial deformation of the part and residual stresses.

  13. Reconstructing building mass models from UAV images

    Li, Minglei

    2015-07-26

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first generate a dense point cloud from the aerial images. Based on the statistical analysis of the footprint grid of the buildings, the point cloud is classified into different categories (i.e., buildings, ground, trees, and others). Roof structures are extracted for each individual building using Markov random field optimization. Then, a contour refinement algorithm based on pivot point detection is utilized to refine the contour of patches. Finally, polygonal mesh models are extracted from the refined contours. Experiments on various scenes as well as comparisons with state-of-the-art reconstruction methods demonstrate the effectiveness and robustness of the proposed method.

  14. Semi-analytical treatment of fracture/matrix flow in a dual-porosity simulator for unsaturated fractured rock masses

    Zimmerman, R.W.; Bodvarsson, G.S.

    1992-04-01

    A semi-analytical dual-porosity simulator for unsaturated flow in fractured rock masses has been developed. Fluid flow between the fracture network and the matrix blocks is described by analytical expressions that have been derived from approximate solutions to the imbibition equation. These expressions have been programmed into the unsaturated flow simulator, TOUGH, as a source/sink term. Flow processes are then simulated using only fracture elements in the computational grid. The modified code is used to simulate flow along single fractures, and infiltration into pervasively fractured formations.

  15. Computation of potentials from current electrodes in cylindrically stratified media: A stable, rescaled semi-analytical formulation

    Moon, Haksu; Teixeira, Fernando L.; Donderici, Burkay

    2015-01-01

    We present an efficient and robust semi-analytical formulation to compute the electric potential due to arbitrary-located point electrodes in three-dimensional cylindrically stratified media, where the radial thickness and the medium resistivity of each cylindrical layer can vary by many orders of magnitude. A basic roadblock for robust potential computations in such scenarios is the poor scaling of modified-Bessel functions used for computation of the semi-analytical solution, for extreme arguments and/or orders. To accommodate this, we construct a set of rescaled versions of modified-Bessel functions, which avoids underflows and overflows in finite precision arithmetic, and minimizes round-off errors. In addition, several extrapolation methods are applied and compared to expedite the numerical evaluation of the (otherwise slowly convergent) associated Sommerfeld-type integrals. The proposed algorithm is verified in a number of scenarios relevant to geophysical exploration, but the general formulation presented is also applicable to other problems governed by Poisson equation such as Newtonian gravity, heat flow, and potential flow in fluid mechanics, involving cylindrically stratified environments.
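
    SciPy exposes exponentially scaled modified Bessel functions that implement exactly this kind of rescaling, so the core idea can be demonstrated in a few lines (an illustration of the principle, not the authors' code):

```python
import numpy as np
from scipy.special import iv, kv, ive, kve

n, x, y = 5, 800.0, 810.0    # extreme arguments, e.g. thin resistive layers
naive = iv(n, x) * kv(n, y)  # iv overflows to inf, kv underflows to 0
print(naive)                 # inf * 0 -> nan: the naive product is lost

# Rescaled: ive(n,x) = iv(n,x)*exp(-x) and kve(n,y) = kv(n,y)*exp(y), so
# iv(n,x)*kv(n,y) = ive(n,x)*kve(n,y)*exp(x-y), and x-y is small here.
stable = ive(n, x) * kve(n, y) * np.exp(x - y)
print(stable)                # finite, accurate product
```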

  16. Improvement of spatial discretization error on the semi-analytic nodal method using the scattered source subtraction method

    Yamamoto, Akio; Tatsumi, Masahiro

    2006-01-01

    In this paper, the scattered source subtraction (SSS) method is newly proposed to reduce the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. In the SSS method, the scattered source is subtracted from both sides of the diffusion or transport equation so that the spatial variation of the source term becomes small. The same neutron balance equation is still used in the SSS method. Since the SSS method just modifies the coefficients of the node coupling equations (those used to evaluate the response of partial currents), its implementation is easy. The validity of the present method is verified through test calculations carried out in PWR multi-assembly configurations. The calculation results show that the SSS method can significantly improve the spatial discretization error. Since the SSS method has no negative impact on execution time, convergence behavior or memory requirements, it will be useful for reducing the spatial discretization error of the semi-analytic nodal method with the flat-source approximation. (author)
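
    In one-group notation (ours, for illustration; the method itself is applied multigroup), the subtraction reads

    $$ -D\nabla^{2}\phi + \Sigma_{t}\,\phi = \Sigma_{s}\,\phi + Q \quad\longrightarrow\quad -D\nabla^{2}\phi + (\Sigma_{t}-\Sigma_{s})\,\phi = Q, $$

    so the spatially varying scattered source moves to the left-hand side and the remaining source $Q$ is flatter, which is exactly what the flat-source approximation needs.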

  17. Casting the Coronal Magnetic Field Reconstructions with Magnetic Field Constraints above the Photosphere in 3D Using MHD Bifrost Model

    Fleishman, G. D.; Anfinogentov, S.; Loukitcheva, M.; Mysh'yakov, I.; Stupishin, A.

    2017-12-01

    Measuring and modeling the coronal magnetic field, especially above active regions (ARs), remains one of the central problems of solar physics, given that solar coronal magnetism is the key driver of all solar activity. Nowadays the coronal magnetic field is often modelled using methods of nonlinear force-free field reconstruction, whose accuracy has not yet been comprehensively assessed. Given that coronal magnetic probing is routinely unavailable, only morphological tests have been applied to evaluate the performance of the reconstruction methods, along with a few direct tests using an available semi-analytical force-free field solution. Here we report a detailed casting of the various tools used for nonlinear force-free field reconstruction, such as disambiguation methods, photospheric field preprocessing methods, and volume reconstruction methods, in a 3D domain using a 3D snapshot of the publicly available full-fledged radiative MHD model. We take advantage of the fact that the realistic MHD model gives us the magnetic field vector distribution in the entire 3D domain, which enables us to perform a "voxel-by-voxel" comparison of the restored magnetic field and the true magnetic field in the 3D model volume. Our tests show that the available disambiguation methods often fail in quiet-Sun areas, where the magnetic structure is dominated by small-scale magnetic elements, while they work very well at the AR photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in an effective 'elevation' of the magnetic field components. This effective 'elevation' height turns out to be different for the longitudinal and transverse components of the magnetic field, which results in a systematic error in the absolute heights of the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram (i.e., without preprocessing) are

  18. Atmospheric inverse modeling via sparse reconstruction

    Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten

    2017-10-01

    Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
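
    As a minimal stand-in for the paper's regularizer (Tikhonov with a sparsity constraint, a dictionary representation, and parameter bounds), the sketch below solves a non-negative l1-regularized least-squares problem with projected ISTA on synthetic data; the observation operator, dimensions and noise level are all illustrative.

```python
import numpy as np

def ista_nonneg(A, b, lam, n_iter=2000):
    """Projected ISTA for min_x 0.5*||Ax-b||^2 + lam*||x||_1, x >= 0."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)
        x = np.maximum(x - (g + lam) / L, 0.0)   # soft-threshold + project
    return x

# Synthetic "emission" inversion: a few strong point sources, noisy data
rng = np.random.default_rng(1)
A = rng.normal(size=(80, 200))
x_true = np.zeros(200)
x_true[[20, 75, 160]] = [3.0, 5.0, 2.0]
b = A @ x_true + 0.05 * rng.normal(size=80)
x_hat = ista_nonneg(A, b, lam=2.0)
print(sorted(np.argsort(x_hat)[-3:]))  # largest entries ~ true source cells
```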

  19. A semi-analytical method to evaluate the dielectric response of a tokamak plasma accounting for drift orbit effects

    Van Eester, Dirk

    2005-03-01

    A semi-analytical method is proposed to evaluate the dielectric response of a plasma to electromagnetic waves in the ion cyclotron domain of frequencies in a D-shaped but axisymmetric toroidal geometry. The actual drift orbits of the particles are accounted for. The method hinges on subdividing the orbit into elementary segments in which the integrations can be performed analytically or by tabulation, and it relies on local book-keeping of the relation between the toroidal angular momentum and the poloidal flux function. Depending on which variables are chosen, the method allows computation of elementary building blocks for either the wave or the Fokker-Planck equation, but the emphasis is mainly on the latter. Two types of tangent resonance are distinguished.

  20. Semi-analytical approach for guided mode resonance in high-index-contrast photonic crystal slab: TE polarization.

    Yang, Yi; Peng, Chao; Li, Zhengbin

    2013-09-09

    In high-contrast (HC) photonic crystal (PC) slabs, the high-order coupling is so intense that accounting for it is indispensable when analyzing the guided mode resonance (GMR) effect. In this paper, a semi-analytical approach is proposed for analyzing GMR in HC PC slabs with TE-like polarization. The intense high-order coupling is included by using a convergent recursive procedure. The reflection of radiative waves at high-index-contrast interfaces is also considered by adopting a strict Green's function for multi-layer structures. Modal properties of interest, such as the band structure, radiation constant and field profile, are calculated, agreeing well with numerical finite-difference time-domain simulations. This analysis is promising for the design and optimization of various HC PC devices.

  1. Analytical and semi-analytical formalism for the voltage and the current sources of a superconducting cavity under dynamic detuning

    Doleans, M

    2003-01-01

    Elliptical superconducting radio frequency (SRF) cavities are sensitive to frequency detuning because they combine a high Q value, in comparison with normal conducting cavities, with weak mechanical properties. Radiation pressure on the cavity walls, microphonics, and the tuning system are possible sources of dynamic detuning during pulsed SRF cavity operation. A general analytic relation between the cavity voltage, the dynamic detuning function, and the RF control function is developed. This expression for the voltage envelope in a cavity under dynamic detuning and dynamic RF control is analytically expressed through an integral formulation. A semi-analytical scheme is derived to calculate the voltage behavior in any practical case. Examples of voltage envelope behavior for different cases of dynamic detuning and RF control functions are shown. The RF control function for a cavity under dynamic detuning is also investigated, and as an application various filling schemes are presented.
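
    The integral formulation itself is not reproduced in this abstract; as a hedged illustration of the class of problem treated, the sketch below integrates the standard baseband cavity-envelope model dV/dt = -(w_half - i*dw(t))*V + w_half*Vg(t) with a simplified, quasi-static Lorentz-force-like detuning law. All parameter values are illustrative, not FLASH settings.

```python
import numpy as np

w_half = 2 * np.pi * 215.0     # half-bandwidth omega/(2 QL) [rad/s]
K = -2 * np.pi * 300.0         # Lorentz detuning per unit |V|^2 [rad/s]
tau_m = 1e-3                   # mechanical response time [s]

def simulate(t_end=2e-3, dt=1e-7):
    n = int(t_end / dt)
    V, dw = 0.0 + 0.0j, 0.0
    out = np.empty(n, dtype=complex)
    for i in range(n):
        Vg = 1.0 if i * dt < 1.3e-3 else 0.0       # flat drive, then off
        dw += dt * (K * abs(V) ** 2 - dw) / tau_m  # first-order detuning lag
        V += dt * (-(w_half - 1j * dw) * V + w_half * Vg)
        out[i] = V
    return out

V = simulate()
print(abs(V).max())  # detuning keeps the envelope below the drive amplitude
```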

  2. Weak form implementation of the semi-analytical finite element (SAFE) method for a variety of elastodynamic waveguides

    Hakoda, Christopher; Lissenden, Clifford; Rose, Joseph L.

    2018-04-01

    Dispersion curves are essential to any guided wave NDE project. The Semi-Analytical Finite Element (SAFE) method has significantly increased the ease by which these curves can be calculated. However, due to misconceptions regarding theory and fragmentation based on different finite-element software, the theory has stagnated, and adoption by researchers who are new to the field has been slow. This paper focuses on the relationship between the SAFE formulation and finite element theory, and the implementation of the SAFE method in a weak form for plates, pipes, layered waveguides/composites, curved waveguides, and arbitrary cross-sections is shown. The benefits of the weak form are briefly described, as is implementation in open-source and commercial finite element software.

  3. A semi-analytical solution for elastic analysis of rotating thick cylindrical shells with variable thickness using disk form multilayers.

    Zamani Nejad, Mohammad; Jabbari, Mehdi; Ghannad, Mehdi

    2014-01-01

    Using disk form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk-form layers with their thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations are in the form of a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and continuity conditions between the layers, yields displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found.

  4. A Semi-Analytical Solution for Elastic Analysis of Rotating Thick Cylindrical Shells with Variable Thickness Using Disk Form Multilayers

    Mohammad Zamani Nejad

    2014-01-01

    Using disk form multilayers, a semi-analytical solution has been derived for the determination of displacements and stresses in a rotating cylindrical shell with variable thickness under uniform pressure. The thick cylinder is divided into disk-form layers with their thickness corresponding to the thickness of the cylinder. Due to the existence of shear stress in the thick cylindrical shell with variable thickness, the equations governing the disk layers are obtained based on first-order shear deformation theory (FSDT). These equations are in the form of a set of general differential equations. Given that the cylinder is divided into n disks, n sets of differential equations are obtained. The solution of this set of equations, applying the boundary conditions and continuity conditions between the layers, yields displacements and stresses. A numerical solution using the finite element method (FEM) is also presented, and good agreement was found.

  5. Free vibration of thin axisymmetric structures by a semi-analytical finite element scheme and isoparametric solid elements

    Akeju, T.A.I.; Kelly, D.W.; Zienkiewicz, O.C.; Kanaka Raju, K.

    1981-01-01

    The eigenvalue equations governing the free vibration of axisymmetric solids are derived by means of a semi-analytical finite element scheme. In particular, we investigate the use of an 8-node solid element in structures which exhibit 'shell-like' behaviour. The Bathe-Wilson subspace iteration algorithm is employed for the solution of the equations. The element is shown to give good results for beam and shell vibration problems. It is also utilised to solve a complex solid in the form of an internal component of a modern jet engine. This particular application is of considerable practical importance, as the dynamics of such components form a dominant design constraint. (orig./HP)

  6. Semi-analytical Karhunen-Loeve representation of irregular waves based on the prolate spheroidal wave functions

    Lee, Gibbeum; Cho, Yeunwoo

    2018-01-01

    A new semi-analytical approach is presented for solving the matrix eigenvalue problem or the integral equation arising in the Karhunen-Loeve (K-L) representation of random data such as irregular ocean waves. Instead of a direct numerical approach to this matrix eigenvalue problem, which may suffer from computational inaccuracy for large datasets, a pair of integral and differential equations are considered, which are related to the so-called prolate spheroidal wave functions (PSWF). First, the PSWF is expressed as a summation of a small number of analytical Legendre functions. Substituting these into the PSWF differential equation yields a much smaller matrix eigenvalue problem than the direct numerical K-L matrix eigenvalue problem. By solving this with minimal numerical effort, the PSWF and the associated eigenvalue of the PSWF differential equation are obtained. Then, the eigenvalue of the PSWF integral equation is analytically expressed in terms of the functional values of the PSWF and the eigenvalues obtained from the PSWF differential equation. Finally, the analytically expressed PSWFs and the eigenvalues of the PSWF integral equation are used to form the kernel matrix in the K-L integral equation for the representation of exemplary wave data such as ordinary irregular waves. It is found that, for the same accuracy, the required memory size of the present method is smaller than that of the direct numerical K-L representation, and the computation time of the present method is shorter than that of the semi-analytical method based on sinusoidal functions.
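
    For contrast, the direct numerical K-L decomposition that the method above improves upon takes only a few lines once the covariance kernel is discretized; the exponential covariance here is an illustrative stand-in for a measured wave-record autocovariance.

```python
import numpy as np

n, T, ell = 500, 10.0, 1.0
t = np.linspace(0.0, T, n)
C = np.exp(-np.abs(t[:, None] - t[None, :]) / ell)  # covariance matrix
w = T / (n - 1)                                     # quadrature weight
lam, phi = np.linalg.eigh(w * C)                    # discrete K-L eigenpairs
lam, phi = lam[::-1], phi[:, ::-1]                  # sort descending
m = int(np.searchsorted(np.cumsum(lam) / lam.sum(), 0.99)) + 1
print(f"{m} modes capture 99% of the variance")

# One sample path from the truncated K-L expansion:
rng = np.random.default_rng(0)
xi = rng.normal(size=m)
sample = (phi[:, :m] / np.sqrt(w)) @ (np.sqrt(lam[:m]) * xi)
```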

  7. Estimating Cloud optical thickness from SEVIRI, for air quality research, by implementing a semi-analytical cloud retrieval algorithm

    Pandey, Praveen; De Ridder, Koen; van Looy, Stijn; van Lipzig, Nicole

    2010-05-01

    Clouds play an important role in Earth's climate system. Because they affect radiation, and hence photolysis rate coefficients (ozone formation), they also affect air quality at the surface of the Earth. Thus, a satellite remote sensing technique is used to retrieve cloud properties for air quality research. The geostationary satellite Meteosat Second Generation (MSG) carries the Spinning Enhanced Visible and Infrared Imager (SEVIRI). The channels at wavelengths of 0.6 µm and 1.64 µm are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N-70°N and 5°W-30°E, centred over Belgium. The steps involved in pre-processing the EUMETSAT level 1.5 images are described, which include acquisition of the digital count number, radiometric conversion using offsets and slopes, estimation of radiance, and calculation of reflectance. The Sun-Earth-satellite geometry also plays an important role. A semi-analytical cloud retrieval algorithm (Kokhanovsky et al., 2003) is implemented for the estimation of COT. This approach does not involve the conventional look-up table approach, which makes the retrieval independent of numerical radiative transfer solutions. The semi-analytical algorithm is applied to a monthly dataset of SEVIRI level 1.5 images. The minimum reflectance in the visible channel at each pixel during the month is taken as the surface albedo of that pixel. In this way, the monthly variation of COT over the study domain is obtained. The result is compared with the COT products of the Satellite Application Facility on Climate Monitoring (CM SAF). Finally, an approach to assimilating the COT for air quality research is presented.
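
    The Kokhanovsky et al. (2003) retrieval rests on van de Hulst-type asymptotics for optically thick clouds. A heavily simplified, conservative-scattering version of such an inversion is sketched below; the escape-function approximation, the assumed semi-infinite reflectance R_inf, the asymmetry parameter g, and the sample numbers are illustrative rather than SEVIRI-calibrated.

```python
def escape(mu):
    """Escape function approximation K0(mu) ~ (3/7)*(1 + 2*mu)."""
    return 3.0 / 7.0 * (1.0 + 2.0 * mu)

def cot_from_reflectance(R, R_inf, mu0, mu, g=0.85):
    """Invert the thick-cloud asymptotic relation (conservative scattering)
         R = R_inf - t * K0(mu0) * K0(mu),  t = 1/(0.75*(1-g)*tau + 1.07),
    for the cloud optical thickness tau."""
    t = (R_inf - R) / (escape(mu0) * escape(mu))
    return (1.0 / t - 1.07) / (0.75 * (1.0 - g))

# Illustrative viewing geometry and reflectance:
print(cot_from_reflectance(R=0.6, R_inf=0.9, mu0=0.8, mu=0.9))  # tau ~ 30
```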

  8. Thermal Analysis of Disposal of High-Level Nuclear Waste in a Generic Bedded Salt repository using the Semi-Analytical Method.

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Matteo, Edward N. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    An example case is presented for testing analytical thermal models. The example case represents thermal analysis of a generic repository in bedded salt at 500 m depth; the analysis is part of the study reported in Matteo et al. (2016). An ambient average ground surface temperature of 15°C and a natural geothermal gradient of 25°C/km were assumed to calculate the near-field temperature. For the generic salt repository concept, crushed salt backfill is assumed. For the semi-analytical analysis, a crushed salt thermal conductivity of 0.57 W/m-K was used. With time, the crushed salt is expected to consolidate into intact salt; in this study a backfill thermal conductivity of 3.2 W/m-K (the intact value) is used for sensitivity analysis. Decay heat data for SRS glass is given in Table 1. The rest of the parameter values are shown below. Results for peak temperatures at the waste package surface are given in Table 2.
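
    The assumed ambient condition at the repository horizon follows directly from the stated surface temperature and geothermal gradient:

    $$ T(z) = T_{\mathrm{s}} + \Gamma z = 15\,^{\circ}\mathrm{C} + 25\,^{\circ}\mathrm{C\,km^{-1}} \times 0.5\,\mathrm{km} = 27.5\,^{\circ}\mathrm{C}. $$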

  9. Model-centric software architecture reconstruction

    Stoermer, C.; Rowe, A.; O'Brien, L.; Verhoef, C.

    2006-01-01

    Much progress has been achieved in defining methods, techniques, and tools for software architecture reconstruction (SAR). However, less progress has been achieved in constructing reasoning frameworks from existing systems that support organizations in architecture analysis and design decisions.

  10. A semi-analytical solution for viscothermal wave propagation in narrow gaps with arbitrary boundary conditions.

    Wijnant, Ysbrand H.; Spiering, R.M.E.J.; Blijderveen, M.; de Boer, Andries

    2006-01-01

    Previous research has shown that viscothermal wave propagation in narrow gaps can efficiently be described by means of the low reduced frequency model. For simple geometries and boundary conditions, analytical solutions are available. For example, Beltman [4] gives the acoustic pressure in the gap

  11. A semi-analytical study of stick-slip oscillations in drilling systems

    Besselink, B.; Wouw, van de N.; Nijmeijer, H.

    2011-01-01

    Rotary drilling systems are known to exhibit torsional stick-slip vibrations, which decrease drilling efficiency and accelerate the wear of drag bits. The mechanisms leading to these torsional vibrations are analyzed using a model that includes both axial and torsional drill string dynamics, which

  12. Semi-analytical investigation of electronics cooling using developing nanofluid flow in rectangular microchannels

    Mital, Manu

    2013-01-01

    Thermal management issues are limiting barriers to high-density electronics packaging and miniaturization. Liquid cooling using microchannels is an attractive alternative to bulky aluminum heat sinks. The channels can be integrated directly into a chip, and cooling can be further enhanced using nanofluids. The goals of this study are to evaluate the heat transfer improvement of a rectangular-channel nanofluid heat sink with developing laminar flow, taking into account the pumping power penalty. The proposed model uses semi-empirical correlations to calculate effective nanofluid thermophysical properties, which are then incorporated into heat transfer and friction factor correlations from the literature for single-phase flows. The predictions of the model are found to be in good agreement with experimental studies. The validated model is used to predict the thermal resistance and pumping power as a function of four design variables: the channel width, the wall width, the flow velocity and the particle volume fraction. The parameters are optimized using a genetic algorithm (GA) with minimum thermal resistance as the objective function and a fixed specified pumping power as the constraint. For a given value of pumping power, the benefit of nanoparticle addition is evaluated by independently optimizing the heat sink, first with nanofluid and then with base fluid. Comparing the minimized thermal resistances revealed only a small benefit, since the nanoparticles increase the pumping power, which can alternatively be diverted toward an increased velocity in a pure-fluid heat sink. The benefit diminishes further with an increase in available pumping power. -- Highlights: ► Validated model used to predict heat transfer and pumping power (p.p.) in nanofluids. ► Genetic algorithm used to minimize thermal resistance with p.p. constraint. ► Heat sink design independently optimized with nanofluid and base fluid coolant. ► No significant benefit through particle
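
    The abstract does not name the semi-empirical correlations used; classical choices consistent with that description are the Maxwell model for effective thermal conductivity and the Brinkman model for effective viscosity, sketched here as assumptions rather than as the study's actual property set.

```python
def maxwell_k(k_f, k_p, phi):
    """Maxwell effective thermal conductivity of a dilute suspension."""
    num = k_p + 2.0 * k_f + 2.0 * phi * (k_p - k_f)
    den = k_p + 2.0 * k_f - phi * (k_p - k_f)
    return k_f * num / den

def brinkman_mu(mu_f, phi):
    """Brinkman effective viscosity of a dilute suspension."""
    return mu_f / (1.0 - phi) ** 2.5

# Water with 2 vol% alumina (illustrative property values):
k_eff = maxwell_k(k_f=0.613, k_p=40.0, phi=0.02)
mu_eff = brinkman_mu(mu_f=8.9e-4, phi=0.02)
print(f"k_eff = {k_eff:.3f} W/m-K, mu_eff = {mu_eff:.2e} Pa.s")
# Conductivity and viscosity both rise: the heat-transfer gain competes
# with the pumping-power penalty, which is the trade-off optimized above.
```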

  13. Porous media: Analysis, reconstruction and percolation

    Rogon, Thomas Alexander

    1995-01-01

    ... stereological methods. The measured sample autocorrelations are modeled by analytical correlation functions. A method for simulating porous networks from their porosity and spatial correlation, originally developed by Joshi (14), is presented. This method is based on a conversion between spatial autocorrelation functions of Gaussian fields and spatial autocorrelation functions of binary fields. An enhanced approach which embodies semi-analytical solutions for the conversions has been made. The scope and limitations of the method have been analysed in terms of realizability of different model correlation functions in binary fields. Percolation threshold of reconstructed porous media has been determined for different discretizations of a selected model correlation function. Also critical exponents such as the correlation length exponent ν, the strength of the infinite network and the mean size of finite clusters have...

  14. Semi-analytical Vibration Characteristics of Rotating Timoshenko Beams Made of Functionally Graded Materials

    Farzad Ebrahimi

    Free vibration analysis of rotating functionally graded (FG) thick Timoshenko beams is presented. The material properties of the FG beam vary along the thickness direction, with the constituents distributed according to a power-law model. Governing equations are derived through Hamilton's principle and solved by applying the differential transform method. The good agreement between the results of this article and those available in the literature validates the presented approach. The emphasis is placed on investigating the effect of several beam parameters, such as constituent volume fractions, slenderness ratio, rotational speed and hub radius, on the natural frequencies and mode shapes of the rotating thick FG beam.

  15. Full and semi-analytic analyses of two-pump parametric amplification with pump depletion

    Steffensen, Henrik; Ott, Johan Raunkjær; Rottwitt, Karsten

    2011-01-01

    This paper solves the four coupled equations describing non-degenerate four-wave mixing, with the focus on amplifying a signal in a fiber optical parametric amplifier (FOPA). Based on the full analytic solution, a simple approximate solution describing the gain is developed. The advantage...... of this new approximation is that it includes the depletion of the pumps, which is lacking in the usual quasi-linearized approximation. With the proposed model it is thus simple to predict the gain of a FOPA, which we demonstrate with a highly nonlinear fiber to show that an undepleted FOPA can produce a flat...

  16. Efficient robust control of first order scalar conservation laws using semi-analytical solutions

    Li, Yanning; Canepa, Edward S.; Claudel, Christian G.

    2014-01-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using initial density control and boundary flow control, as a Linear Program. We then show that this framework can be extended to arbitrary control problems involving the control of subsets of the initial and boundary conditions. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP/MILP. Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality.
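
    A toy version of the robust step described above: an uncontrollable demand, known only within an interval, enters the right-hand side of an inequality, and using the interval bound that defines the smallest feasible set makes the solution feasible for every admissible realization. The constraints below are purely illustrative, not the paper's traffic model.

```python
import numpy as np
from scipy.optimize import linprog

# Choose controllable inflows u1, u2 at minimum cost so that supply covers
# an uncontrollable demand d known only to lie in the interval [8, 11].
c = np.array([1.0, 1.5])           # costs of the two controls
A_ub = np.array([[-1.0, -1.0]])    # u1 + u2 >= d  <=>  -(u1+u2) <= -d
b_ub = np.array([-11.0])           # smallest feasible set: worst-case d
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 6), (0, 8)])
print(res.x, res.fun)              # u1 = 6, u2 = 5 at cost 13.5
```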

  17. Developing Semi-Analytical solutions for Saint-Venant Equations in the Uniform Flow Region

    M.M. Heidari

    2016-09-01

    Introduction: Unsteady flow in irrigation systems is the result of operations in response to changes in water demand, which affect the hydraulic performance of networks. Improving hydraulic performance requires recognizing unsteady flow and quantifying the factors affecting it. Unsteady flow in open channels is governed by the fully dynamic Saint-Venant equations, which express the principles of conservation of mass and momentum. Unsteady flow in open channels can be classified into two types: routing and operation-type problems. In routing problems, the Saint-Venant equations are solved to obtain the discharge and water level time series. In operation problems, they are used to compute the inflow at the upstream section of the channel according to prescribed downstream flow hydrographs. The Saint-Venant equations have no general analytical solution, and in the majority of cases they are solved by numerical integration of the continuity and momentum equations, characterized by complicated numerical procedures that are not always convenient for carrying out practical engineering calculations. Therefore, approximate methods deserve attention, since they allow the solution of dynamic problems in analytical form with sufficient accuracy. There are effective methods for automatic controller synthesis in control theory that provide the required performance optimization. It is therefore important to obtain simplified models of irrigation canals for control design. It would be even more interesting to have linear models that explicitly depend on physical parameters. Such models would allow one to handle the dynamics of the system with fewer parameters, understand the impact of physical parameters on the dynamics, and facilitate the development of a systematic design method. Many analytical models have been proposed in the literature; most of them have been obtained in the frequency domain by applying the Laplace transform to linearized Saint

  18. Model-Based Reconstructive Elasticity Imaging Using Ultrasound

    Salavat R. Aglyamov

    2007-01-01

    Elasticity imaging is a reconstructive imaging technique where tissue motion in response to mechanical excitation is measured using modern imaging systems, and the estimated displacements are then used to reconstruct the spatial distribution of Young's modulus. Here we present an ultrasound elasticity imaging method that utilizes a model-based technique for Young's modulus reconstruction. Based on the geometry of the imaged object, only one axial component of the strain tensor is used. The numerical implementation of the method is highly efficient because the reconstruction is based on an analytic solution of the forward elastic problem. The model-based approach is illustrated using two potential clinical applications: differentiation of liver hemangioma and staging of deep venous thrombosis. Overall, these studies demonstrate that model-based reconstructive elasticity imaging can be used in applications where the geometry of the object and the surrounding tissue is somewhat known and certain assumptions about the pathology can be made.

  19. Estimation of cloud optical thickness by processing SEVIRI images and implementing a semi analytical cloud property retrieval algorithm

    Pandey, P.; De Ridder, K.; van Lipzig, N.

    2009-04-01

    Clouds play a very important role in the Earth's climate system, as they form an intermediate layer between the Sun and the Earth. Satellite remote sensing systems are the only means of providing information about clouds on large scales. The geostationary satellite Meteosat Second Generation (MSG) carries an imaging radiometer, the Spinning Enhanced Visible and Infrared Imager (SEVIRI). SEVIRI is a 12-channel imager, with 11 channels observing the Earth's full disk at a temporal resolution of 15 min and spatial resolution of 3 km at nadir, plus a high resolution visible (HRV) channel. The visible channels (0.6 µm and 0.81 µm) and the near-infrared channel (1.6 µm) of SEVIRI are used to retrieve the cloud optical thickness (COT). The study domain is over Europe, covering the region between 35°N - 70°N and 10°W - 30°E. SEVIRI level 1.5 images over this domain are acquired from the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) archive. The processing of this imagery involves a number of steps before estimating the COT. The steps involved in pre-processing are as follows. First, the digital count number is acquired from the imagery. Image geo-coding is performed in order to relate the pixel positions to the corresponding longitude and latitude. The solar zenith angle is determined as a function of latitude and time. The radiometric conversion is done using the values of offsets and slopes of each band. The radiance values obtained are then used to calculate the reflectance for channels in the visible spectrum using the solar zenith angle. An attempt is made to estimate the COT from the observed radiances. A semi-analytical algorithm [Kokhanovsky et al., 2003] is implemented for the estimation of cloud optical thickness from the visible-spectrum intensity of light reflected from clouds. The asymptotic solution of the radiative transfer equation for clouds with large optical thickness is the basis of

  20. Semi-Analytical Solution of Optimization on Moon-Pool Shaped WEC

    Zhang W.C.

    2016-10-01

    Full Text Available In order to effectively extract and maximize the energy from ocean waves, a new kind of oscillating-body WEC (wave energy converter) with a moon pool has been put forward. The main emphasis in this paper is placed on incorporating damping into the equation of heaving motion for this complex wave energy converter; expressions for the velocity potential, added mass, damping coefficients and exciting forces were derived using the eigenfunction expansion matching method. Using surface-wave hydrodynamics, the exact theoretical conditions under which maximum energy is absorbed from regular waves were derived. To optimize the wave energy conversion ability, oscillating-system models with different radius ratios are calculated and comparatively analyzed. Numerical calculations indicate that the capture width reaches its maximum in the vicinity of the natural frequency and that the new oscillating-body WEC has good wave energy conversion ability.
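
    The resonance behaviour described above can be reproduced with a minimal one-degree-of-freedom heave model (a sketch with assumed coefficients, not the record's eigenfunction-expansion solution):

        import numpy as np

        # Hypothetical heave model: (m + A) x'' + (B + B_pto) x' + C x = F cos(wt)
        m, A, B, C = 1.0e5, 5.0e4, 2.0e4, 8.0e5  # assumed mass, added mass,
                                                 # radiation damping, stiffness (SI)
        B_pto = B            # optimal PTO damping equals radiation damping at resonance
        F = 1.0e4            # wave excitation force amplitude (N)

        w = np.linspace(0.2, 5.0, 500)                         # wave frequencies (rad/s)
        X = F / (-(m + A) * w**2 + 1j * w * (B + B_pto) + C)   # complex heave amplitude
        P = 0.5 * B_pto * w**2 * np.abs(X)**2                  # mean absorbed power

        w_n = np.sqrt(C / (m + A))    # natural frequency; P peaks exactly at w_n here
        print(w[np.argmax(P)], w_n)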

  1. Quasi-normal frequencies: Semi-analytic results for highly damped modes

    Skakala, Jozef; Visser, Matt

    2011-01-01

    Black hole highly-damped quasi-normal frequencies (QNFs) are very often of the form ω_n = (offset) + i n (gap). We have investigated the genericity of this phenomenon for the Schwarzschild-de Sitter (SdS) black hole by considering a model potential that is piecewise Eckart (piecewise Pöschl-Teller), and developing an analytic 'quantization condition' for the highly-damped quasi-normal frequencies. We find that the ω_n = (offset) + i n (gap) behaviour is common but not universal, with the controlling feature being whether or not the ratio of the surface gravities is a rational number. We furthermore observe that the relation between rational ratios of surface gravities and periodicity of QNFs is very generic, and also occurs within different analytic approaches applied to various types of black hole spacetimes. These observations are of direct relevance to any physical situation where highly-damped quasi-normal modes are important.
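
    A toy check of the periodicity criterion (purely illustrative; the function names, tolerance and denominator bound are assumptions, not from the paper):

        from fractions import Fraction

        def qnf_family(offset, gap, n_max):
            """Highly damped QNFs of the form w_n = offset + 1j * n * gap."""
            return [offset + 1j * n * gap for n in range(1, n_max + 1)]

        def is_ratio_rational(kappa1, kappa2, tol=1e-9, max_den=1000):
            """Heuristic: the QNF spectrum is periodic when the surface-gravity
            ratio is (close to) a rational number with a small denominator."""
            frac = Fraction(kappa1 / kappa2).limit_denominator(max_den)
            return abs(kappa1 / kappa2 - float(frac)) < tol, frac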

  2. Dynamic Response of Dam-Reservoir Systems: Review and a Semi-Analytical Proposal

    Paulo Marcelo Vieira Ribeiro

    Full Text Available This paper presents a review of current techniques employed for the dynamic analysis of concrete gravity dams under seismic action. Traditional procedures applied in design bureaus, such as the Pseudo-Static method, often neglect structural dynamic properties as well as ground amplification effects. A practical alternative arises with the Pseudo-Dynamic method, which considers a simplified response spectrum in the fundamental mode. The authors propose a self-contained development and detailed examples of this latter method, including a comparison with finite element models using the transient response of fluid-structure systems. It is verified that the traditional procedure should be applied carefully and limited to extremely rigid dams. On the other hand, the proposed development is straightforward and agrees with finite element results for general cases where dam flexibility plays an important role.

  3. Semi-analytical Formulas for the Fundamental Parameters of Galactic Early B Supergiants

    Zaninetti, L.

    2009-12-01

    Full Text Available The publication of new calibration tables for some fundamental parameters of Galactic B0-B5 supergiants in the two classes $I_\mathrm{a}$ and $I_\mathrm{b}$ allows one to particularize the eight-parameter conjecture that models five fundamental parameters. Numerical expressions for visual magnitude, radius, mass, luminosity and surface gravity are derived for supergiants in the temperature range between 29700 K and 15200 K. The availability of accurate calibration tables allows us to estimate the efficiency of the derived formulas in reproducing the observed values. The average efficiency of the new formulas, expressed in percent, is 94 for the visual magnitude, 81 for the mass, 96 for the radius, 99 for the logarithm of the luminosity and 97 for the logarithm of the surface gravity.

  4. A semi-analytical three-dimensional free vibration analysis of functionally graded curved panels

    Zahedinejad, P. [Department of Mechanical Engineering, Islamic Azad University, Branch of Shiraz, Shiraz (Iran, Islamic Republic of); Malekzadeh, P., E-mail: malekzadeh@pgu.ac.i [Department of Mechanical Engineering, Persian Gulf University, Persian Gulf University Boulevard, Bushehr 75168 (Iran, Islamic Republic of); Center of Excellence for Computational Mechanics, Shiraz University, Shiraz (Iran, Islamic Republic of); Farid, M. [Department of Mechanical Engineering, Islamic Azad University, Branch of Shiraz, Shiraz (Iran, Islamic Republic of); Karami, G. [Department of Mechanical Engineering and Applied Mechanics, North Dakota State University, Fargo, ND 58105-5285 (United States)

    2010-08-15

    Based on three-dimensional elasticity theory, the free vibration of functionally graded (FG) curved thick panels under various boundary conditions is studied. Panels with two opposite edges simply supported and arbitrary boundary conditions at the other edges are considered. Two models of material property variation through the thickness are considered: a power-law distribution in terms of the volume fractions of the constituents, and an exponential distribution. The differential quadrature method, in conjunction with trigonometric basis functions, is used to discretize the governing equations and, under the assumption of continuous material property variation through the thickness, to implement the boundary conditions at the top and bottom surfaces of the curved panel in strong form. The convergence of the method is demonstrated, and the results are validated by comparison with solutions for isotropic and FG curved panels. By examining results for thick FG curved panels with various geometrical and material parameters and different boundary conditions, the influence of these parameters, in particular the functionally graded material parameters, is studied.

  5. A semi-analytical study on helical springs made of shape memory polymer

    Baghani, M; Naghdabadi, R; Arghavani, J

    2012-01-01

    In this paper, the responses of shape memory polymer (SMP) helical springs under axial force are studied both analytically and numerically. In the analytical solution, we first derive the response of a cylindrical tube under torsional loadings. This solution can be used for helical springs in which both the curvature and pitch effects are negligible. This is the case for helical springs with large ratios of the mean coil radius to the cross sectional radius (spring index) and also small pitch angles. Making use of this solution simplifies the analysis of the helical springs to that of the torsion of a straight bar with circular cross section. The 3D phenomenological constitutive model recently proposed for SMPs is also reduced to the 1D shear case. Thus, an analytical solution for the torsional response of SMP tubes in a full cycle of stress-free strain recovery is derived. In addition, the curvature effect is added to the formulation and the SMP helical spring is analyzed using the exact solution presented for torsion of curved SMP tubes. In this modified solution, the effect of the direct shear force is also considered. In the numerical analysis, the 3D constitutive equations are implemented in a finite element program and a full cycle of stress-free strain recovery of an SMP (extension or compression) helical spring is simulated. Analytical and numerical results are compared and it is shown that the analytical solution gives accurate stress distributions in the cross section of the helical SMP spring besides the global load–deflection response. Some case studies are presented to show the validity of the presented analytical method. (paper)
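
    In the small-pitch, large-spring-index limit used in the paper, the spring reduces to torsion of a straight bar, for which the classical stiffness formula k = G d^4 / (8 D^3 n) holds. Below is a hedged sketch of how a temperature-dependent SMP shear modulus translates into spring deflection; the modulus curve and dimensions are assumed for illustration, not taken from the paper.

        import numpy as np

        def spring_rate(G, d, D, n_coils):
            """Close-coiled helical spring stiffness k = G d^4 / (8 D^3 n).

            Valid for a large spring index and a small pitch angle, the same
            limit in which the paper reduces the spring to a torsion problem.
            """
            return G * d**4 / (8.0 * D**3 * n_coils)

        # hypothetical SMP shear modulus dropping through the glass transition
        T = np.linspace(300.0, 360.0, 7)                   # temperature (K)
        G = 350e6 / (1.0 + np.exp((T - 330.0) / 4.0))      # Pa, assumed sigmoid
        delta = 10.0 / spring_rate(G, d=0.004, D=0.04, n_coils=10)  # deflection under 10 N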

  6. Semi-analytical solution of flow to a well in an unconfined-fractured aquifer system separated by an aquitard

    Sedghi, Mohammad M.; Samani, Nozar; Barry, D. A.

    2018-04-01

    Semi-analytical solutions are presented for flow to a well in an extensive homogeneous and anisotropic unconfined-fractured aquifer system separated by an aquitard. The pumping well is of infinitesimal radius and screened in either the overlying unconfined aquifer or the underlying fractured aquifer. An existing linearization method was used to determine the watertable drainage. The solution was obtained via Laplace and Hankel transforms, with results calculated by numerical inversion. The main findings are presented in the form of non-dimensional drawdown-time curves, as well as scaled sensitivity-dimensionless time curves. The new solution permits determination of the influence of fractures, matrix blocks and watertable drainage parameters on the aquifer drawdown. The effect of the aquitard on the drawdown response of the overlying unconfined aquifer and the underlying fractured aquifer was also explored. The results permit estimation of the unconfined and fractured aquifer hydraulic parameters via type-curve matching or coupling of the solution with a parameter estimation code. The solution can also be used to determine aquifer hydraulic properties from an optimal pumping test set up and duration.

  7. A Semi-Analytical Extraction Method for Interface and Bulk Density of States in Metal Oxide Thin-Film Transistors.

    Chen, Weifeng; Wu, Weijing; Zhou, Lei; Xu, Miao; Wang, Lei; Ning, Honglong; Peng, Junbiao

    2018-03-11

    A semi-analytical extraction method of interface and bulk density of states (DOS) is proposed by using the low-frequency capacitance-voltage characteristics and current-voltage characteristics of indium zinc oxide thin-film transistors (IZO TFTs). In this work, an exponential potential distribution along the depth direction of the active layer is assumed and confirmed by numerical solution of Poisson's equation followed by device simulation. The interface DOS is obtained as a superposition of constant deep states and exponential tail states. Moreover, it is shown that the bulk DOS may be represented by the superposition of exponential deep states and exponential tail states. The extracted values of bulk DOS and interface DOS are further verified by comparing the measured transfer and output characteristics of IZO TFTs with the simulation results by a 2D device simulator ATLAS (Silvaco). As a result, the proposed extraction method may be useful for diagnosing and characterising metal oxide TFTs since it is fast to extract interface and bulk density of states (DOS) simultaneously.

  9. Steady-state groundwater recharge in trapezoidal-shaped aquifers: A semi-analytical approach based on variational calculus

    Mahdavi, Ali; Seyyedian, Hamid

    2014-05-01

    This study presents a semi-analytical solution for steady groundwater flow in trapezoidal-shaped aquifers in response to areal diffusive recharge. The aquifer is homogeneous, anisotropic and interacts with four surrounding constant-head streams. The flow field in this laterally bounded aquifer system is efficiently constructed by means of variational calculus, accomplished by minimizing a properly defined penalty function for the associated boundary value problem. Simple yet demonstrative scenarios are defined to investigate anisotropy effects on the water table variation. Qualitative examination of the resulting equipotential contour maps and velocity vector field illustrates the validity of the method, especially in the vicinity of the boundary lines. Extension to the case of a triangular-shaped aquifer, with or without an impervious boundary line, is also demonstrated through a hypothetical example problem. The present solution has an extremely simple mathematical expression and exhibits close agreement with numerical results obtained from MODFLOW. Overall, the solution may be used to conduct sensitivity analyses on the various hydrogeological parameters that affect water table variation in aquifers defined on trapezoidal or triangular domains.

  10. A three-dimensional semi-analytical solution for predicting drug release through the orifice of a spherical device.

    Simon, Laurent; Ospina, Juan

    2016-07-25

    Three-dimensional solute transport was investigated for a spherical device with a release hole. The governing equation was derived from Fick's second law. A mixed Neumann-Dirichlet condition was imposed at the boundary to represent diffusion through a small region on the surface of the device. The cumulative percentage of drug released was calculated in the Laplace domain and represented by the first term of an infinite series of Legendre and modified Bessel functions of the first kind. Application of the Zakian algorithm yielded the time-domain closed-form expression. The first-order solution closely matched a numerical solution generated by Mathematica(®). The proposed method allowed computation of the characteristic time: a larger surface pore resulted in a smaller effective time constant. The agreement between the numerical solution and the semi-analytical method improved noticeably as the size of the orifice increased. It took four time constants for the device to release approximately ninety-eight percent of its drug content.
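
    The record inverts the Laplace-domain solution with the Zakian algorithm; since Zakian's tabulated constants are not reproduced here, the sketch below instead uses the related Gaver-Stehfest algorithm, whose weights follow from a closed-form formula, to illustrate the numerical Laplace-inversion step.

        import math

        def stehfest_coeffs(N=12):
            """Gaver-Stehfest weights V_k (N must be even)."""
            V = []
            for k in range(1, N + 1):
                s = 0.0
                for j in range((k + 1) // 2, min(k, N // 2) + 1):
                    s += (j ** (N // 2) * math.factorial(2 * j) /
                          (math.factorial(N // 2 - j) * math.factorial(j) *
                           math.factorial(j - 1) * math.factorial(k - j) *
                           math.factorial(2 * j - k)))
                V.append((-1) ** (k + N // 2) * s)
            return V

        def invert_laplace(F, t, N=12):
            """Approximate f(t) from its Laplace transform F(s)."""
            ln2_t = math.log(2.0) / t
            return ln2_t * sum(V * F(k * ln2_t)
                               for k, V in enumerate(stehfest_coeffs(N), start=1))

        # sanity check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
        print(invert_laplace(lambda s: 1.0 / (s + 1.0), t=1.0))   # ~0.3679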

  11. A semi-analytical finite element process for nonlinear elastoplastic analysis of arbitrarily loaded shells of revolution

    Rensch, H.J.; Wunderlich, W.

    1981-01-01

    The governing partial differential equations used are valid for small strains and moderate rotations. The plasticity relations are based on J₂ flow theory. In order to eliminate the circumferential coordinate, the loading as well as the unknown quantities are expanded in Fourier series in the circumferential direction. The nonlinear terms due to moderate rotations and plastic deformations are treated as pseudo-load quantities. In this way, the governing equations can be reduced to uncoupled systems of first-order ordinary differential equations in the meridional direction. They are then integrated over a shell segment via a matrix series expansion. The resulting element transfer matrices are transformed into stiffness matrices, and the finite element method is employed for the analysis of the total structure. Thus, arbitrary branching of the shell geometry is possible. Compared to two-dimensional approximations, the major advantage of the semi-analytical procedure is that the structural stiffness matrix usually has a small bandwidth, resulting in shorter computer run times. Moreover, its assembly and triangularization have to be carried out only once, because all nonlinear effects are treated as initial loads. (orig./HP)

  12. Reconstructing building mass models from UAV images

    Li, Minglei; Nan, Liangliang; Smith, Neil; Wonka, Peter

    2015-01-01

    We present an automatic reconstruction pipeline for large scale urban scenes from aerial images captured by a camera mounted on an unmanned aerial vehicle. Using state-of-the-art Structure from Motion and Multi-View Stereo algorithms, we first reconstruct a dense point cloud of the scene.

  13. Connections model for tomographic images reconstruction

    Rodrigues, R.G.S.; Pela, C.A.; Roque, S.F. A.C.

    1998-01-01

    This paper presents an artificial neural network with a topology suited to tomographic image reconstruction. The associated error function is derived and the learning algorithm is developed. Simulation results are presented and demonstrate the existence of a generalized solution for networks with a linear activation function. (Author)

  14. Right adrenal vein: comparison between adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Noda, Y; Goshima, S; Nagata, S; Miyoshi, T; Kawada, H; Kawai, N; Tanahashi, Y; Matsuo, M

    2018-06-01

    To compare right adrenal vein (RAV) visualisation and the degree of contrast enhancement on adrenal venous phase images reconstructed using adaptive statistical iterative reconstruction (ASiR) and model-based iterative reconstruction (MBIR) techniques. This prospective study was approved by the institutional review board, and written informed consent was waived. Fifty-seven consecutive patients who underwent adrenal venous phase imaging were enrolled. The same raw data were reconstructed using ASiR 40% and MBIR. An expert and a beginner independently reviewed the computed tomography (CT) images. RAV visualisation rates, background noise, and CT attenuation of the RAV, right adrenal gland, inferior vena cava (IVC), hepatic vein, and bilateral renal veins were compared between the two reconstruction techniques. RAV visualisation rates were higher with MBIR than with ASiR (95% versus 88%, p=0.13 for the expert and 93% versus 75%, p=0.002 for the beginner). RAV visualisation confidence ratings were significantly greater with MBIR than with ASiR, and background noise was significantly lower with MBIR than with ASiR (p=0.0013 and 0.02). Reconstruction of adrenal venous phase images using MBIR significantly reduces background noise, leading to an improvement in RAV visualisation compared with ASiR.

  15. Food Reconstruction Using Isotopic Transferred Signals (FRUITS): A Bayesian Model for Diet Reconstruction

    Fernandes, R.; Millard, A.R.; Brabec, Marek; Nadeau, M.J.; Grootes, P.

    2014-01-01

    Roč. 9, č. 2 (2014), Art. no. e87436. E-ISSN 1932-6203. Institutional support: RVO:67985807. Keywords: ancient diet reconstruction * stable isotope measurements * mixture model * Bayesian estimation * Dirichlet prior. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 3.234, year: 2014

  16. Dynamic Ising model: reconstruction of evolutionary trees

    De Oliveira, P M C

    2013-01-01

    An evolutionary tree is a cascade of bifurcations starting from a single common root, generating a growing set of daughter species as time goes by. ‘Species’ here is a general denomination for biological species, spoken languages or any other entity which evolves through heredity. For the N currently alive species within a clade, distances are measured through pairwise comparisons made by geneticists, linguists, etc. The larger such a distance is for a pair of species, the older is their last common ancestor. The aim is to reconstruct the previously unknown bifurcations, i.e. the whole clade, from knowledge of the N(N − 1)/2 quoted distances, which are taken for granted. A mechanical method is presented and its applicability is discussed. (paper)
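
    The reconstruction task can be illustrated with a standard distance-based clustering in place of the paper's mechanical method; the distance matrix below is hypothetical.

        import numpy as np
        from scipy.cluster.hierarchy import average
        from scipy.spatial.distance import squareform

        # hypothetical pairwise distances among N = 4 species (larger = older split)
        D = np.array([[0.0, 2.0, 6.0, 6.0],
                      [2.0, 0.0, 6.0, 6.0],
                      [6.0, 6.0, 0.0, 3.0],
                      [6.0, 6.0, 3.0, 0.0]])

        # UPGMA: each merge height is proportional to the age of a common ancestor
        tree = average(squareform(D))   # condensed distances -> linkage matrix
        print(tree)                     # rows: (node_i, node_j, height, cluster size)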

  17. Semi-analytic approach to higher-order corrections in simple muonic bound systems: vacuum polarization, self-energy and radiative-recoil

    Jentschura, U.D. [Department of Physics, Missouri University of Science and Technology, Rolla, MO 65409 (United States); Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg (Germany); Wundt, B.J. [Department of Physics, Missouri University of Science and Technology, Rolla, MO 65409 (United States)

    2011-12-15

    The current discrepancy of theory and experiment observed recently in muonic hydrogen necessitates a reinvestigation of all corrections that contribute to the Lamb shift in muonic hydrogen (μH), muonic deuterium (μD), the muonic ³He ion (denoted here as μ³He⁺), as well as in the muonic ⁴He ion (μ⁴He⁺). Here, we choose a semi-analytic approach and evaluate a number of higher-order corrections to vacuum polarization (VP) semi-analytically, while the remaining integrals over the spectral density of the VP are performed numerically. We obtain semi-analytic results for the second-order correction, and for the relativistic correction to VP. The self-energy correction to VP is calculated, including the perturbations of the Bethe logarithms by vacuum polarization. Sub-leading logarithmic terms in the radiative-recoil correction to the 2S-2P Lamb shift, of order α(Zα)⁵μ³ ln(Zα)/(m_μ m_N), where α is the fine structure constant, are also obtained. All calculations are nonperturbative in the mass ratio of orbiting particle and nucleus. (authors)

  19. "Growing trees backwards": Description of a stand reconstruction model

    Jonathan D. Bakker; Andrew J. Sanchez Meador; Peter Z. Fule; David W. Huffman; Margaret M. Moore

    2008-01-01

    We describe an individual-tree model that uses contemporary measurements to "grow trees backward" and reconstruct past tree diameters and stand structure in ponderosa pine dominated stands of the Southwest. Model inputs are contemporary structural measurements of all snags, logs, stumps, and living trees, and radial growth measurements, if available. Key...

  20. Use of a model for 3D image reconstruction

    Delageniere, S.; Grangeat, P.

    1991-01-01

    We propose software for 3D image reconstruction in transmission tomography. The software is based on the use of a model and on the RADON algorithm developed at LETI. The introduction of a Markovian model helps to enhance contrast and to sharpen the natural transitions existing in the imaged objects, which standard transform methods smooth out.

  1. Computed Tomography Image Quality Evaluation of a New Iterative Reconstruction Algorithm in the Abdomen (Adaptive Statistical Iterative Reconstruction-V) a Comparison With Model-Based Iterative Reconstruction, Adaptive Statistical Iterative Reconstruction, and Filtered Back Projection Reconstructions.

    Goodenberger, Martin H; Wagner-Bartak, Nicolaus A; Gupta, Shiva; Liu, Xinming; Yap, Ramon Q; Sun, Jia; Tamm, Eric P; Jensen, Corey T

    The purpose of this study was to compare abdominopelvic computed tomography images reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) with model-based iterative reconstruction (Veo 3.0), ASIR, and filtered back projection (FBP). Abdominopelvic computed tomography scans for 36 patients (26 males and 10 females) were reconstructed using FBP, ASIR (80%), Veo 3.0, and ASIR-V (30%, 60%, 90%). Mean ± SD patient age was 32 ± 10 years with mean ± SD body mass index of 26.9 ± 4.4 kg/m². Images were reviewed by 2 independent readers in a blinded, randomized fashion. Hounsfield unit, noise, and contrast-to-noise ratio (CNR) values were calculated for each reconstruction algorithm for further comparison. Phantom evaluation of low-contrast detectability (LCD) and high-contrast resolution was performed. Adaptive statistical iterative reconstruction-V 30%, ASIR-V 60%, and ASIR 80% were generally superior qualitatively compared with ASIR-V 90%, Veo 3.0, and FBP. The highest CNR was obtained with ASIR-V 60%, with measured CNR values of 5.54 ± 2.39, 8.78 ± 3.15, and 3.49 ± 1.77 for the compared reconstructions. Veo 3.0 and ASIR 80% had the best and worst spatial resolution, respectively. Adaptive statistical iterative reconstruction-V 30% and ASIR-V 60% provided the best combination of qualitative and quantitative performance. Adaptive statistical iterative reconstruction 80% was equivalent qualitatively, but demonstrated inferior spatial resolution and LCD.

  2. Reconstruction of Human Lung Morphology Models from Magnetic Resonance Images

    Reconstruction of Human Lung Morphology Models from Magnetic Resonance ImagesT. B. Martonen (Experimental Toxicology Division, U.S. EPA, Research Triangle Park, NC 27709) and K. K. Isaacs (School of Public Health, University of North Carolina, Chapel Hill, NC 27514)

  3. Discussion of Source Reconstruction Models Using 3D MCG Data

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed the source reconstruction of magnetocardiographic signals generated by the human heart activity to localize the site of origin of the heart activation. The localizations were performed in a four compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model has the better accuracy in performing the source reconstructions, and that 3D MCG data allow finding smaller differences between the different source models.

  4. Modelling the physics in iterative reconstruction for transmission computed tomography

    Nuyts, Johan; De Man, Bruno; Fessler, Jeffrey A.; Zbijewski, Wojciech; Beekman, Freek J.

    2013-01-01

    There is an increasing interest in iterative reconstruction (IR) as a key tool to improve quality and increase the applicability of X-ray CT imaging. IR can significantly reduce patient dose, provides the flexibility to reconstruct images from arbitrary X-ray system geometries, and allows detailed models of photon transport and detection physics to be included in order to accurately correct for a wide variety of image-degrading effects. This paper reviews discretisation issues and the modelling of finite spatial resolution, Compton scatter in the scanned object, data noise and the energy spectrum. Widespread implementation of IR with highly accurate model-based correction, however, still requires significant effort. In addition, new hardware will provide new opportunities and challenges to improve CT with new modelling. PMID:23739261

  5. Development of a Semi-Analytical Algorithm for the Retrieval of Suspended Particulate Matter from Remote Sensing over Clear to Very Turbid Waters

    Bing Han

    2016-03-01

    Full Text Available Remote sensing of suspended particulate matter (SPM) from space has long been used to assess its spatio-temporal variability in various coastal areas. The associated algorithms were generally site specific or developed over a relatively narrow range of concentration, which makes them inappropriate for global applications (or at least over a broad SPM range). In the frame of the GlobCoast project, a large in situ data set of SPM and remote sensing reflectance, Rrs(λ), has been built, gathering measurements from various coastal areas around Europe, French Guiana, North Canada, Vietnam, and China. This data set covers diverse and contrasting coastal environments affected by different biogeochemical and physical processes such as sediment resuspension, phytoplankton bloom events, and river discharges (Amazon, Mekong, Yellow River, MacKenzie, etc.). The SPM concentration spans about four orders of magnitude, from 0.15 to 2626 g·m−3. Different empirical and semi-analytical approaches developed to assess SPM from Rrs(λ) were tested over this in situ data set. As none of them provides satisfactory results over the whole SPM range, a generic semi-analytical approach has been developed. This algorithm is based on two standard semi-analytical equations calibrated for low-to-medium and highly turbid waters, respectively, together with a mixing law developed for intermediate environments. Sources of uncertainty in SPM retrieval, such as bio-optical variability, atmospheric correction errors, and spectral bandwidth, have been evaluated. The coefficients involved in these algorithms have been calculated for ocean colour (SeaWiFS, MODIS-A/T, MERIS/OLCI, VIIRS) and high spatial resolution (Landsat8-OLI and Sentinel2-MSI) sensors. The performance of the proposed algorithm varies only slightly from one sensor to another, demonstrating the great potential applicability of the proposed approach over global and contrasting coastal waters.
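
    A schematic of such a blended retrieval is sketched below, assuming semi-analytical equations of the Nechad type, SPM = A·ρw/(1 − ρw/C); all coefficients, band choices and switching thresholds are illustrative assumptions, not the paper's calibrated values.

        import numpy as np

        def spm_nechad(rho_w, A, C):
            """Nechad-type semi-analytical SPM model: SPM = A*rho_w / (1 - rho_w/C)."""
            return A * rho_w / (1.0 - rho_w / C)

        def spm_blended(rho_red, rho_nir):
            """Blend a red-band (clear-to-medium) and a NIR-band (turbid) estimate.

            Coefficients and thresholds below are hypothetical placeholders.
            """
            low = spm_nechad(rho_red, A=300.0, C=0.17)    # assumed red-band calibration
            high = spm_nechad(rho_nir, A=2000.0, C=0.21)  # assumed NIR-band calibration
            w = np.clip((rho_red - 0.03) / (0.08 - 0.03), 0.0, 1.0)  # 0: clear, 1: turbid
            return (1.0 - w) * low + w * high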

  6. Reconstruction of hyperspectral image using matting model for classification

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.

  7. Ekofisk chalk: core measurements, stochastic reconstruction, network modeling and simulation

    Talukdar, Saifullah

    2002-07-01

    This dissertation deals with (1) experimental measurements of the petrophysical, reservoir engineering and morphological properties of Ekofisk chalk, (2) numerical simulation of core flood experiments to analyze and improve relative permeability data, (3) stochastic reconstruction of chalk samples from limited morphological information, (4) extraction of pore space parameters from the reconstructed samples, development of a network model using the pore space information, and computation of petrophysical and reservoir engineering properties from the network model, and (5) development of 2D and 3D idealized fractured reservoir models and verification of the applicability of several widely used conventional upscaling techniques in fractured reservoir simulation. Experiments have been conducted on eight Ekofisk chalk samples, and porosity, absolute permeability, formation factor, and oil-water relative permeability, capillary pressure and resistivity index were measured at laboratory conditions. Mercury porosimetry data and backscatter scanning electron microscope images have also been acquired for the samples. A numerical simulation technique involving history matching of the production profiles is employed to improve the relative permeability curves and to analyze hysteresis of the Ekofisk chalk samples. The technique was found to be a powerful tool to supplement the uncertainties in experimental measurements. Porosity and correlation statistics obtained from backscatter scanning electron microscope images are used to reconstruct microstructures of chalk and particulate media. The reconstruction technique involves a simulated annealing algorithm, which can be constrained by an arbitrary number of morphological parameters. This flexibility of the algorithm is exploited to successfully reconstruct particulate media and chalk samples using more than one correlation function. A technique based on conditional simulated annealing has also been introduced for the exact reproduction of vuggy porosity.

  8. Reconstructing the dark sector interaction with LISA

    Cai, Rong-Gen; Yang, Tao [CAS Key Laboratory of Theoretical Physics, Institute of Theoretical Physics, Chinese Academy of Sciences, P.O. Box 2735, Beijing 100190 (China); Tamanini, Nicola, E-mail: cairg@itp.ac.cn, E-mail: nicola.tamanini@cea.fr, E-mail: yangtao@itp.ac.cn [Institut de Physique Théorique, CEA-Saclay, CNRS UMR 3681, Université Paris-Saclay, F-91191 Gif-sur-Yvette (France)

    2017-05-01

    We perform a forecast analysis of the ability of the LISA space-based interferometer to reconstruct the dark sector interaction using gravitational wave standard sirens at high redshift. We employ Gaussian process methods to reconstruct the distance-redshift relation in a model-independent way. We adopt simulated catalogues of standard sirens given by merging massive black hole binaries visible by LISA, with an electromagnetic counterpart detectable by future telescopes. The catalogues are based on three different astrophysical scenarios for the evolution of massive black hole mergers based on the semi-analytic model of E. Barausse, Mon. Not. Roy. Astron. Soc. 423 (2012) 2533. We first use these standard siren datasets to assess the potential of LISA in reconstructing a possible interaction between vacuum dark energy and dark matter. Then we combine the LISA cosmological data with supernovae data simulated for the Dark Energy Survey. We consider two scenarios distinguished by the time duration of the LISA mission: 5 and 10 years. Using only LISA standard siren data, the dark sector interaction can be well reconstructed from redshift z ∼ 1 to z ∼ 3 (for a 5 year mission) and from z ∼ 1 up to z ∼ 5 (for a 10 year mission), though the reconstruction is inefficient at lower redshift. When combined with the DES datasets, the interaction is well reconstructed in the whole redshift region from z ∼ 0 to z ∼ 3 (5 yr) and from z ∼ 0 to z ∼ 5 (10 yr), respectively. Massive black hole binary standard sirens can thus be used to constrain the dark sector interaction at redshift ranges not reachable by usual supernovae datasets, which probe only the z ≲ 1.5 range. Gravitational wave standard sirens will not only constitute a complementary and alternative way, with respect to familiar electromagnetic observations, to probe the cosmic expansion, but will also provide new tests to constrain possible deviations from the standard ΛCDM dynamics, especially at high redshift.
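
    The model-independent reconstruction step can be sketched with an off-the-shelf Gaussian process regressor; the toy catalogue, toy distance relation and kernel choices below are assumptions, not the paper's pipeline.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # hypothetical standard-siren catalogue: redshifts and luminosity distances
        rng = np.random.default_rng(1)
        z = np.sort(rng.uniform(0.1, 5.0, 40))
        dL = (1.0 + z) * 3000.0 * np.log1p(z)    # toy d_L(z), not a real cosmology
        dL_obs = dL * (1.0 + 0.05 * rng.standard_normal(z.size))  # ~5% siren errors

        # model-independent reconstruction of the distance-redshift relation
        gp = GaussianProcessRegressor(RBF(length_scale=1.0) + WhiteKernel(1.0),
                                      normalize_y=True).fit(z[:, None], dL_obs)
        z_grid = np.linspace(0.1, 5.0, 200)[:, None]
        mean, std = gp.predict(z_grid, return_std=True)  # reconstruction + uncertainty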

  9. A Taxonomic Reduced-Space Pollen Model for Paleoclimate Reconstruction

    Wahl, E. R.; Schoelzel, C.

    2010-12-01

    Paleoenvironmental reconstruction from fossil pollen often attempts to take advantage of the rich taxonomic diversity in such data. Here, a taxonomically "reduced-space" reconstruction model is explored that is parsimonious in introducing parameters to be estimated within a Bayesian Hierarchical Modeling context. This work involves a refinement of the traditional pollen ratio method, which is useful when one (or a few) dominant pollen type(s) in a region have a strong positive correlation with a climate variable of interest and another (or a few) dominant pollen type(s) have a strong negative correlation. When, e.g., counts of pollen taxa a and b (r > 0) are combined with those of pollen types c and d (r < 0), the resulting ratio can be related to climate through a binomial logistic generalized linear model (GLM). The GLM can readily model this relationship in the forward form, pollen = g(climate), which is more physically realistic than the inverse models often used in paleoclimate reconstruction [climate = f(pollen)]. The specification of the model is: rnum ~ Bin(n, p), where E(r|T) = p = exp(η)/[1 + exp(η)] and η = α + βT; r is the pollen ratio formed as above, rnum is the ratio numerator, n is the ratio denominator (i.e., the sum of pollen counts), the denominator-specific count is (n − rnum), and T is the temperature at each site corresponding to a specific value of r. Ecological and empirical screening identified the model (Spruce+Birch) / (Spruce+Birch+Oak+Hickory) for use in temperate eastern N. America. α and β were estimated using both "traditional" and Bayesian GLM algorithms (in R). Although it includes only four pollen types, the ratio model yields more explained variation (~80%) in the pollen-temperature relationship of the study region than a 64-taxon modern analog technique (MAT). Thus, the new pollen ratio method represents an information-rich, reduced-space data model that can be efficiently employed in a BHM framework. The ratio model can directly reconstruct past temperature by solving the GLM equations for T, i.e., T = [logit(p) − α]/β.
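
    A compact sketch of this forward-fit-then-invert workflow is given below; the simulated counts and coefficients are hypothetical, and the abstract's own model specification is used.

        import numpy as np
        import statsmodels.api as sm

        # hypothetical calibration data: pollen counts and site temperatures
        T = np.linspace(2.0, 14.0, 25)                     # mean temperature, deg C
        n = np.full(T.size, 300)                           # total counts (denominator)
        p_true = 1.0 / (1.0 + np.exp(-(2.0 - 0.35 * T)))   # assumed true response
        r_num = np.random.default_rng(0).binomial(n, p_true)

        # binomial GLM with logit link:  logit(p) = alpha + beta * T
        X = sm.add_constant(T)
        fit = sm.GLM(np.column_stack([r_num, n - r_num]), X,
                     family=sm.families.Binomial()).fit()
        alpha, beta = fit.params

        # forward model inverted for reconstruction: T = (logit(r) - alpha) / beta
        def reconstruct_T(r):
            return (np.log(r / (1.0 - r)) - alpha) / beta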

  10. Semi-analytical quasi-normal mode theory for the local density of states in coupled photonic crystal cavity-waveguide structures

    de Lasson, Jakob Rosenkrantz; Kristensen, Philip Trøst; Mørk, Jesper

    2015-01-01

    We present and validate a semi-analytical quasi-normal mode (QNM) theory for the local density of states (LDOS) in coupled photonic crystal (PhC) cavity-waveguide structures. By means of an expansion of the Green's function on one or a few QNMs, a closed-form expression for the LDOS is obtained, and for two types of two-dimensional PhCs, with one and two cavities side-coupled to an extended waveguide, the theory is validated against numerically exact computations. For the single cavity, a slightly asymmetric spectrum is found, which the QNM theory reproduces, and for two cavities a non-trivial spectrum with a peak and a dip is found, which is reproduced only when including both the two relevant QNMs in the theory. In both cases, we find relative errors below 1% in the bandwidth of interest.

  11. GPU based Monte Carlo for PET image reconstruction: detector modeling

    Légrády; Cserkaszky, Á; Lantos, J.; Patay, G.; Bükki, T.

    2011-01-01

    Given the similarities between visible light transport and neutral particle trajectories, Graphics Processing Units (GPUs) are almost like dedicated hardware designed for Monte Carlo (MC) particle transport. A GPU-based MC gamma transport code has been developed for iterative Positron Emission Tomography image reconstruction, calculating the projection from unknowns to data at each iteration step while taking into account the full physics of the system. This paper describes the simplified scintillation detector modeling and its effect on convergence. (author)

  12. Joint model of motion and anatomy for PET image reconstruction

    Qiao Feng; Pan Tinsu; Clark, John W. Jr.; Mawlawi, Osama

    2007-01-01

    Anatomy-based positron emission tomography (PET) image enhancement techniques have been shown to have the potential for improving PET image quality. However, these techniques assume an accurate alignment between the anatomical and the functional images, which is not always valid when imaging the chest due to respiratory motion. In this article, we present a joint model of both motion and anatomical information by integrating a motion-incorporated PET imaging system model with an anatomy-based maximum a posteriori image reconstruction algorithm. The mismatched anatomical information due to motion can thus be effectively utilized through this joint model. A computer simulation and a phantom study were conducted to assess the efficacy of the joint model, whereby motion and anatomical information were either modeled separately or combined. The reconstructed images in each case were compared to corresponding reference images obtained using a quadratic image prior based maximum a posteriori reconstruction algorithm for quantitative accuracy. Results of these studies indicated that while modeling anatomical information or motion alone improved the PET image quantitation accuracy, a larger improvement in accuracy was achieved when using the joint model. In the computer simulation study and using similar image noise levels, the improvement in quantitation accuracy compared to the reference images was 5.3% and 19.8% when using anatomical or motion information alone, respectively, and 35.5% when using the joint model. In the phantom study, these results were 5.6%, 5.8%, and 19.8%, respectively. These results suggest that motion compensation is important in order to effectively utilize anatomical information in chest imaging using PET. The joint motion-anatomy model presented in this paper provides a promising solution to this problem

  13. Reconstructing plateau icefields: Evaluating empirical and modelled approaches

    Pearce, Danni; Rea, Brice; Barr, Iestyn

    2013-04-01

    Glacial landforms are widely utilised to reconstruct former glacier geometries with a common aim to estimate the Equilibrium Line Altitudes (ELAs) and from these, infer palaeoclimatic conditions. Such inferences may be studied on a regional scale and used to correlate climatic gradients across large distances (e.g., Europe). In Britain, the traditional approach uses geomorphological mapping with hand contouring to derive the palaeo-ice surface. Recently, ice surface modelling enables an equilibrium profile reconstruction tuned using the geomorphology. Both methods permit derivation of palaeo-climate but no study has compared the two methods for the same ice-mass. This is important because either approach may result in differences in glacier limits, ELAs and palaeo-climate. This research uses both methods to reconstruct a plateau icefield and quantifies the results from a cartographic and geometrical aspect. Detailed geomorphological mapping of the Tweedsmuir Hills in the Southern Uplands, Scotland (c. 320 km2) was conducted to examine the extent of Younger Dryas (YD; 12.9 -11.7 cal. ka BP) glaciation. Landform evidence indicates a plateau icefield configuration of two separate ice-masses during the YD covering an area c. 45 km2 and 25 km2. The interpreted age is supported by new radiocarbon dating of basal stratigraphies and Terrestrial Cosmogenic Nuclide Analysis (TCNA) of in situ boulders. Both techniques produce similar configurations however; the model results in a coarser resolution requiring further processing if a cartographic map is required. When landforms are absent or fragmentary (e.g., trimlines and lateral moraines), like in many accumulation zones on plateau icefields, the geomorphological approach increasingly relies on extrapolation between lines of evidence and on the individual's perception of how the ice-mass ought to look. In some locations this results in an underestimation of the ice surface compared to the modelled surface most likely due to

  14. Fusion of intraoperative force sensoring, surface reconstruction and biomechanical modeling

    Röhl, S.; Bodenstedt, S.; Küderle, C.; Suwelack, S.; Kenngott, H.; Müller-Stich, B. P.; Dillmann, R.; Speidel, S.

    2012-02-01

    Minimally invasive surgery is medically complex and can heavily benefit from computer assistance. One way to help the surgeon is to integrate preoperative planning data into the surgical workflow. This information can be represented as a customized preoperative model of the surgical site. To use it intraoperatively, it has to be updated during the intervention due to the constantly changing environment. Hence, intraoperative sensor data has to be acquired and registered with the preoperative model. Haptic information which could complement the visual sensor data is still not established. In addition, biomechanical modeling of the surgical site can help in reflecting the changes which cannot be captured by intraoperative sensors. We present a setting where a force sensor is integrated into a laparoscopic instrument. In a test scenario using a silicone liver phantom, we register the measured forces with a reconstructed surface model from stereo endoscopic images and a finite element model. The endoscope, the instrument and the liver phantom are tracked with a Polaris optical tracking system. By fusing this information, we can transfer the deformation onto the finite element model. The purpose of this setting is to demonstrate the principles needed and the methods developed for intraoperative sensor data fusion. One emphasis lies on the calibration of the force sensor with the instrument and first experiments with soft tissue. We also present our solution and first results concerning the integration of the force sensor as well as accuracy to the fusion of force measurements, surface reconstruction and biomechanical modeling.

  15. Reconstructing Climate Change: The Model-Data Ping-Pong

    Stocker, T. F.

    2017-12-01

    When Cesare Emiliani, the father of paleoceanography, made the first attempts at a quantitative reconstruction of Pleistocene climate change in the early 1950s, climate models were not yet conceived. The understanding of paleoceanographic records was therefore limited, and scientists had to resort to plausibility arguments to interpret their data. With the advent of coupled climate models in the early 1970s, for the first time hypotheses about climate processes and climate change could be tested in a dynamically consistent framework. However, only a model hierarchy can cope with the long time scales and the multi-component physical-biogeochemical Earth System. There are many examples how climate models have inspired the interpretation of paleoclimate data on the one hand, and conversely, how data have questioned long-held concepts and models. In this lecture I critically revisit a few examples of this model-data ping-pong, such as the bipolar seesaw, the mid-Holocene greenhouse gas increase, millennial and rapid CO2 changes reconstructed from polar ice cores, and the interpretation of novel paleoceanographic tracers. These examples also highlight many of the still unsolved questions and provide guidance for future research. The combination of high-resolution paleoceanographic data and modeling has never been more relevant than today. It will be the key for an appropriate risk assessment of impacts on the Earth System that are already underway in the Anthropocene.

  16. Reconstruction of missing daily streamflow data using dynamic regression models

    Tencaliec, Patricia; Favre, Anne-Catherine; Prieur, Clémentine; Mathevet, Thibault

    2015-12-01

    River discharge is one of the most important quantities in hydrology. It provides fundamental records for water resources management and climate change monitoring. Even very short data gaps in this information can lead to markedly different analysis outputs. Reconstructing the missing portions of incomplete data sets is therefore an important step for the performance of environmental models, engineering, and research applications, and it presents a great challenge. The objective of this paper is to introduce an effective technique for reconstructing missing daily discharge data when one has access only to daily streamflow data. The proposed procedure uses a combination of regression and autoregressive integrated moving average (ARIMA) models, called a dynamic regression model. This model uses the linear relationship between neighbouring, correlated stations and then adjusts the residual term by fitting an ARIMA structure. Application of the model to eight daily streamflow series for the Durance river watershed showed that the model yields reliable estimates for the missing data in the time series. Simulation studies were also conducted to evaluate the performance of the procedure.
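
    A minimal sketch of such a dynamic regression (a regression on neighbouring stations with ARMA errors) using statsmodels; the series, coefficients and model orders below are invented for illustration.

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # hypothetical gauges: target series y with a gap, two neighbour stations X
        rng = np.random.default_rng(0)
        X = np.abs(rng.standard_normal((730, 2))) + 1.0
        y = 5.0 + X @ np.array([2.0, 1.5]) + rng.standard_normal(730)
        y[300:330] = np.nan                       # missing month to reconstruct

        # dynamic regression = linear regression on neighbours with ARMA(1,1) errors;
        # the state-space form handles the NaNs and estimates both parts jointly
        fit = SARIMAX(y, exog=X, order=(1, 0, 1)).fit(disp=False)
        filled = fit.predict(start=300, end=329)  # model-based estimates for the gap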

  17. Expediting model-based optoacoustic reconstructions with tomographic symmetries

    Lutzweiler, Christian; Deán-Ben, Xosé Luís; Razansky, Daniel

    2014-01-01

    Purpose: Image quantification in optoacoustic tomography implies the use of accurate forward models of excitation, propagation, and detection of optoacoustic signals, while inversions with high spatial resolution usually involve very large matrices, leading to unreasonably long computation times. The development of fast and memory-efficient model-based approaches thus represents an important challenge for advancing the quantitative and dynamic imaging capabilities of tomographic optoacoustic imaging. Methods: Herein, a method for the simplification and acceleration of model-based inversions, relying on inherent symmetries present in common tomographic acquisition geometries, is introduced. The method is showcased for the case of cylindrical symmetries by using a polar image discretization of the time-domain optoacoustic forward model combined with efficient storage and inversion strategies. Results: The suggested methodology is shown to render fast and accurate model-based inversions in both numerical simulations and post mortem small animal experiments. In the case of a full-view detection scheme, the memory requirements are reduced by one order of magnitude while high-resolution reconstructions are achieved at video rate. Conclusions: By considering the rotational symmetry present in many tomographic optoacoustic imaging systems, the proposed methodology allows the advantages of model-based algorithms to be exploited with feasible computational requirements and fast reconstruction times, so that its convenience and general applicability to optoacoustic imaging systems with tomographic symmetries is anticipated

  18. Automated reconstruction of 3D models from real environments

    Sequeira, V.; Ng, K.; Wolfart, E.; Gonçalves, J. G. M.; Hogg, D.

    This paper describes an integrated approach to the construction of textured 3D scene models of building interiors from laser range data and visual images. This approach has been implemented in a collection of algorithms and sensors within a prototype device for 3D reconstruction, known as the EST (Environmental Sensor for Telepresence). The EST can take the form of a push trolley or of an autonomous mobile platform. The Autonomous EST (AEST) has been designed to provide an integrated solution for automating the creation of complete models. Embedded software performs several functions, including triangulation of the range data, registration of video texture, registration and integration of data acquired from different capture points. Potential applications include facilities management for the construction industry and creating reality models to be used in general areas of virtual reality, for example, virtual studios, virtualised reality for content-related applications (e.g., CD-ROMs), social telepresence, architecture and others. The paper presents the main components of the EST/AEST, and presents some example results obtained from the prototypes. The reconstructed model is encoded in VRML format so that it is possible to access and view the model via the World Wide Web.

  19. Photorealistic large-scale urban city model reconstruction.

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  20. Muon reconstruction with a geometrical model in JUNO

    Genster, C.; Schever, M.; Ludhova, L.; Soiron, M.; Stahl, A.; Wiebusch, C.

    2018-03-01

    The Jiangmen Underground Neutrino Observatory (JUNO) is a 20 kton liquid scintillator detector currently under construction near Kaiping in China. The physics program focuses on the determination of the neutrino mass hierarchy with reactor anti-neutrinos. For this purpose, JUNO is located 650 m underground at a distance of 53 km from two nuclear power plants. As a result, it is exposed to a muon flux that requires a precise muon reconstruction to make a veto of cosmogenic backgrounds viable. Established muon tracking algorithms use time residuals with respect to a track hypothesis. We developed an alternative muon tracking algorithm that utilizes the geometrical shape of the fastest light. It models the full shape of the first, direct light produced along the muon track. From its intersection with the spherical PMT array, the track parameters are extracted with a likelihood fit. The algorithm selects PMTs based on their first hit times and charges, and subsequently fits on timing information only. On a sample of through-going muons with a full simulation of the readout electronics, we report a resolution of 20 cm on the track's distance from the detector's center and an angular resolution of 1.6° over the whole detector. Additionally, a dead time estimation is performed to measure the impact of the muon veto. Including the step of waveform reconstruction on top of the track reconstruction, a loss in exposure of only 4% can be achieved compared to the case of a perfect tracking algorithm. When including only the PMT time resolution, but no further electronics simulation and waveform reconstruction, the exposure loss is only 1%.

  1. Model-based image reconstruction in X-ray computed tomography

    Zbijewski, Wojciech Bartosz

    2006-01-01

    The thesis investigates the applications of iterative, statistical reconstruction (SR) algorithms in X-ray Computed Tomography. Emphasis is put on various aspects of system modeling in statistical reconstruction. Fundamental issues such as effects of object discretization and algorithm

  2. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR).

    Notohamiprodjo, S; Deak, Z; Meurer, F; Maertz, F; Mueck, F G; Geyer, L L; Wirth, S

    2015-01-01

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise (SNR) and contrast-to-noise ratios (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized multiplanar reformations of 2.5 mm with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR, and images reconstructed with MBIR were rated with significantly higher IQ scores than those reconstructed with ASiR. As CCT is an examination that is frequently required, the use of MBIR may allow for substantial reduction of the radiation exposure caused by medical diagnostics. • Model-based iterative reconstruction (MBIR) effectively decreased artefacts in cranial CT. • MBIR reconstructed images were rated with significantly higher scores for image quality. • Model-based iterative reconstruction may allow reduced-dose diagnostic examination protocols.

  3. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  4. Reconstruction of electrocardiogram using ionic current models for heart muscles.

    Yamanaka, A; Okazaki, K; Urushibara, S; Kawato, M; Suzuki, R

    1986-11-01

    A digital computer model is presented for the simulation of the electrocardiogram during ventricular activation and repolarization (QRS-T waves). Part of the ventricular septum and the left ventricular free wall of the heart are represented by a two-dimensional array of 730 homogeneous functional units. Ionic current models are used to determine the spatial distribution of the electrical activities of these units at each instant of time during the simulated cardiac cycle. In order to reconstruct the electrocardiogram, the model is expanded three-dimensionally under an equipotential assumption along the third axis, and the surface potentials are then calculated using the solid angle method. Our digital computer model can be used to improve the understanding of the relationship between body surface potentials and intracellular electrical events.
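
    The core geometric ingredient of the solid angle method is the signed solid angle of a plane triangle as seen from an observation point, for which the van Oosterom–Strackee formula is standard. The sketch below applies it to a uniform double layer; the discretisation, source strengths, and conductivity handling of the 1986 model are not reproduced.

```python
import numpy as np

def triangle_solid_angle(r1, r2, r3):
    """Signed solid angle at the origin subtended by triangle (r1,r2,r3),
    via van Oosterom & Strackee (1983)."""
    n1, n2, n3 = (np.linalg.norm(r) for r in (r1, r2, r3))
    numer = np.dot(r1, np.cross(r2, r3))
    denom = (n1 * n2 * n3 + np.dot(r1, r2) * n3
             + np.dot(r1, r3) * n2 + np.dot(r2, r3) * n1)
    return 2.0 * np.arctan2(numer, denom)

def surface_potential(obs, triangles, strength):
    """Potential at `obs` of a uniform double layer spread over the
    triangles: Phi = -strength/(4*pi) * (total solid angle)."""
    omega = sum(triangle_solid_angle(t[0] - obs, t[1] - obs, t[2] - obs)
                for t in triangles)
    return -strength * omega / (4.0 * np.pi)

# toy activation front: a unit square at height z = 1, split in two
tris = [np.array([[0, 0, 1], [1, 0, 1], [1, 1, 1]], float),
        np.array([[0, 0, 1], [1, 1, 1], [0, 1, 1]], float)]
print(surface_potential(np.array([0.5, 0.5, 0.0]), tris, strength=40.0))
```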

  5. Reconstructing marginality: a new model of cultural diversity in nursing.

    Southwick, Margaret; Polaschek, Nick

    2014-05-01

    This article presents a new model of cultural diversity in nursing that critically reconstructs the concept of marginality that underpins other models. Rather than viewing the marginal as "other," marginality is redefined as the space in between the dominant cultural reality and the cultural realities of minority groups located within a society. Members of a minority cultural group who become skilled in the difficult process of negotiating this in-between space open the possibility of transformation within nursing education and practice. This model has been applied in a study of the experience of nursing students of Pacific ethnicity in New Zealand. Subsequently, an undergraduate Pacific nursing program was developed, with greatly increased success rates in registration of Pacific nurses. This model of cultural diversity can also be used to understand nursing practice involving people from minority cultures or other socially excluded categories. Copyright 2014, SLACK Incorporated.

  6. Experiments of reconstructing discrete atmospheric dynamic models from data (I)

    Lin, Zhenshan; Zhu, Yanyu; Deng, Ziwang

    1995-03-01

    In this paper, we give some experimental results of our study in reconstructing discrete atmospheric dynamic models from data. After a great deal of numerical experimentation, we found that the logistic map, x_{n+1} = 1 − μx_n², could be used in monthly mean temperature prediction when it was approaching the chaotic region, and that its predictions were in states opposite to the observed data. This means that the nonlinear developing behavior of the monthly mean temperature system is bifurcating back into the critical chaotic states from the chaotic ones.
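
    The map in the record is straightforward to reproduce. A minimal sketch (the value of μ below is illustrative; for this parameterization the period-doubling cascade accumulates near μ ≈ 1.40 and the map is fully chaotic at μ = 2):

```python
def logistic_orbit(mu, x0=0.1, n=50):
    """Iterate x_{n+1} = 1 - mu * x_n**2 and return the orbit."""
    xs = [x0]
    for _ in range(n):
        xs.append(1.0 - mu * xs[-1] ** 2)
    return xs

print(logistic_orbit(mu=1.40)[-5:])   # late iterates near the chaotic region
```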

  7. A synthesis of light absorption properties of the Pan-Arctic Ocean: application to semi-analytical estimates of dissolved organic carbon concentrations from space

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Bélanger, S.; Bricaud, A.

    2013-11-01

    The light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean (e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012), the datasets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database at the pan-Arctic scale by pooling the majority of published datasets and merging new datasets. Our results showed that the total non-water absorption coefficients measured in the Eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the Western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentrations of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contains a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between the phytoplankton absorption (aφ(λ)) and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semi-analytical CDOM absorption algorithm is based on chl a-specific aφ(λ) values (Matsuoka et al., 2013), this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC vs. CDOM relationships, a semi-analytical algorithm for estimating DOC concentrations for coastal waters at the Pan-Arctic scale is presented and applied to satellite ocean color data.
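
    The two-step structure of such a DOC algorithm (CDOM absorption from ocean colour, then an empirical DOC-vs-CDOM regression) can be sketched as follows. All coefficients in the sketch are hypothetical placeholders, not the published Matsuoka et al. values.

```python
import numpy as np

def a_cdom_443(chl, a_nw_443, a_phi_star=0.03):
    """CDOM absorption at 443 nm as the non-water residual after removing
    phytoplankton absorption a_phi = a_phi_star * chl (placeholder a_phi*)."""
    return a_nw_443 - a_phi_star * chl

def doc_umol_per_l(a_cdom, slope=200.0, intercept=20.0):
    """Empirical DOC (umol/L) vs CDOM absorption; slope and intercept are
    placeholders standing in for the regional regressions."""
    return slope * a_cdom + intercept

chl = np.array([0.3, 1.0])       # satellite chl a, mg m-3
a_nw = np.array([0.08, 0.25])    # total non-water absorption at 443 nm, m-1
print(doc_umol_per_l(a_cdom_443(chl, a_nw)))
```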

  8. A Synthesis of Light Absorption Properties of the Arctic Ocean: Application to Semi-analytical Estimates of Dissolved Organic Carbon Concentrations from Space

    Matsuoka, A.; Babin, M.; Doxaran, D.; Hooker, S. B.; Mitchell, B. G.; Belanger, S.; Bricaud, A.

    2014-01-01

    The light absorption coefficients of particulate and dissolved materials are the main factors determining the light propagation of the visible part of the spectrum and are, thus, important for developing ocean color algorithms. While these absorption properties have recently been documented by a few studies for the Arctic Ocean [e.g., Matsuoka et al., 2007, 2011; Ben Mustapha et al., 2012], the datasets used in the literature were sparse and individually insufficient to draw a general view of the basin-wide spatial and temporal variations in absorption. To achieve such a task, we built a large absorption database at the pan-Arctic scale by pooling the majority of published datasets and merging new datasets. Our results showed that the total non-water absorption coefficients measured in the Eastern Arctic Ocean (EAO; Siberian side) are significantly higher than in the Western Arctic Ocean (WAO; North American side). This higher absorption is explained by higher concentrations of colored dissolved organic matter (CDOM) in watersheds on the Siberian side, which contains a large amount of dissolved organic carbon (DOC) compared to waters off North America. In contrast, the relationship between the phytoplankton absorption (aφ(λ)) and chlorophyll a (chl a) concentration in the EAO was not significantly different from that in the WAO. Because our semi-analytical CDOM absorption algorithm is based on chl a-specific aφ(λ) values [Matsuoka et al., 2013], this result indirectly suggests that CDOM absorption can be appropriately derived not only for the WAO but also for the EAO using ocean color data. Derived CDOM absorption values were reasonable compared to in situ measurements. By combining this algorithm with empirical DOC versus CDOM relationships, a semi-analytical algorithm for estimating DOC concentrations for coastal waters at the Pan-Arctic scale is presented and applied to satellite ocean color data.

  9. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and the electrode positions.

  10. FIRST PRISMATIC BUILDING MODEL RECONSTRUCTION FROM TOMOSAR POINT CLOUDS

    Y. Sun

    2016-06-01

    This paper demonstrates for the first time the potential of explicitly modelling the individual roof surfaces to reconstruct 3-D prismatic building models using spaceborne tomographic synthetic aperture radar (TomoSAR) point clouds. The proposed approach is modular and works as follows: it first extracts the buildings via DSM generation and cutting off the ground terrain. The DSM is smoothed using the BM3D denoising method proposed in (Dabov et al., 2007) and a gradient map of the smoothed DSM is generated based on height jumps. Watershed segmentation is then adopted to oversegment the DSM into different regions. Subsequently, height- and polygon-complexity-constrained merging is employed to refine (i.e., to reduce) the retrieved number of roof segments. A coarse outline of each roof segment is then reconstructed and later refined using quadtree-based regularization plus a zig-zag line simplification scheme. Finally, a height is associated with each refined roof segment to obtain the 3-D prismatic model of the building. The proposed approach is illustrated and validated on a large building (convention center) in the city of Las Vegas using TomoSAR point clouds generated from a stack of 25 images with the Tomo-GENESIS software developed at DLR.
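
    The front end of such a pipeline (smoothing, gradient map, watershed oversegmentation) can be sketched with standard tools; the sketch below substitutes a Gaussian filter for BM3D and omits the merging and regularization stages.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import gaussian, sobel
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
dsm = rng.normal(0.0, 0.05, (128, 128))   # noisy ground level
dsm[20:60, 20:100] += 10.0                # toy flat roof
dsm[70:110, 30:80] += 14.0                # second, higher roof

smooth = gaussian(dsm, sigma=2)           # stands in for BM3D denoising
gradient = sobel(smooth)                  # height-jump map

flat = gradient < 0.2 * gradient.max()    # seed one marker per flat region
markers, _ = ndi.label(flat)
labels = watershed(gradient, markers)     # oversegmentation of the DSM
print("raw roof/ground segments:", labels.max())
```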

  11. Technical Note: Probabilistically constraining proxy age–depth models within a Bayesian hierarchical reconstruction model

    J. P. Werner

    2015-03-01

    Reconstructions of the late-Holocene climate rely heavily upon proxies that are assumed to be accurately dated by layer counting, such as measurements of tree rings, ice cores, and varved lake sediments. Considerable advances could be achieved if time-uncertain proxies could be included within these multiproxy reconstructions, and if time uncertainties were recognized and correctly modeled for proxies commonly treated as free of age model errors. Current approaches for accounting for time uncertainty are generally limited to repeating the reconstruction using each one of an ensemble of age models, thereby inflating the final estimated uncertainty – in effect, each possible age model is given equal weighting. Uncertainties can be reduced by exploiting the inferred space–time covariance structure of the climate to re-weight the possible age models. Here, we demonstrate how Bayesian hierarchical climate reconstruction models can be augmented to account for time-uncertain proxies. Critically, although a priori all age models are given equal probability of being correct, the probabilities associated with the age models are formally updated within the Bayesian framework, thereby reducing uncertainties. Numerical experiments show that updating the age model probabilities decreases uncertainty in the resulting reconstructions, as compared with the current de facto standard of sampling over all age models, provided there is sufficient information from other data sources in the spatial region of the time-uncertain proxy. This approach can readily be generalized to non-layer-counted proxies, such as those derived from marine sediments.
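
    The key idea, formally updating the probabilities of ensemble members, can be illustrated with a toy reweighting. The sketch below assumes Gaussian proxy errors and represents each "age model" as a simple time shift; the full hierarchical space–time model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)
target = np.sin(np.linspace(0, 6, 200))   # "climate" on the true time axis

# ensemble of age models = candidate time shifts of the proxy record
shifts = np.arange(-10, 11)
proxy = np.roll(target, 4) + rng.normal(0, 0.2, 200)   # true shift is 4

def log_lik(shift, sigma=0.2):
    """Gaussian log likelihood of the proxy under age model `shift`."""
    resid = np.roll(proxy, -shift) - target
    return -0.5 * np.sum(resid**2) / sigma**2

ll = np.array([log_lik(int(s)) for s in shifts])
w = np.exp(ll - ll.max())
w /= w.sum()                              # updated age-model probabilities
print("most probable shift:", shifts[np.argmax(w)], "weight:", round(w.max(), 3))
```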

  12. Model-based image reconstruction for four-dimensional PET

    Li Tianfang; Thorndyke, Brian; Schreibmann, Eduard; Yang Yong; Xing Lei

    2006-01-01

    Positron emission tomography (PET) is useful in diagnosis and radiation treatment planning for a variety of cancers. For patients with cancers in the thoracic or upper abdominal region, the respiratory motion produces large distortions in the tumor shape and size, affecting the accuracy of both diagnosis and treatment. Four-dimensional (4D) (gated) PET aims to reduce the motion artifacts and to provide accurate measurement of the tumor volume and the tracer concentration. A major issue in 4D PET is the lack of statistics. Since the collected photons are divided into several frames in the 4D PET scan, the quality of each reconstructed frame degrades as the number of frames increases. The increased noise in each frame heavily degrades the quantitative accuracy of the PET imaging. In this work, we propose a method to enhance the performance of 4D PET by developing a new technique of 4D PET reconstruction with incorporation of an organ motion model derived from 4D-CT images. The method is based on the well-known maximum-likelihood expectation-maximization (ML-EM) algorithm. During the forward- and backward-projection steps of the ML-EM iterations, all projection data acquired at different phases are combined together to update the emission map with the aid of a deformable model; the statistics are therefore greatly improved. The proposed algorithm was first evaluated with computer simulations using a mathematical dynamic phantom. An experiment with a moving physical phantom was then carried out to demonstrate the accuracy of the proposed method and the increase in signal-to-noise ratio over three-dimensional PET. Finally, the 4D PET reconstruction was applied to a patient case
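
    The gated ML-EM update described above can be sketched with toy matrices: each iteration forward-projects the reference-phase image through a (hypothetical) warp for every gate, compares with that gate's data, and back-projects the ratio. The system matrix and warps below are placeholders, not a real PET geometry or 4D-CT deformation field.

```python
import numpy as np

rng = np.random.default_rng(7)
n_pix, n_bins, n_gates = 64, 96, 4
A = rng.uniform(0.0, 1.0, (n_bins, n_pix))      # toy system matrix

# toy warps (identity); in practice derived from 4D-CT deformation fields
W = [np.eye(n_pix) for _ in range(n_gates)]

x_true = np.zeros(n_pix)
x_true[20:30] = 5.0
y = [rng.poisson(A @ (Wg @ x_true) + 1e-3) for Wg in W]  # gated sinograms

x = np.ones(n_pix)                               # reference-phase image
sens = sum((A @ Wg).sum(axis=0) for Wg in W)     # sensitivity image
for _ in range(50):                              # ML-EM iterations
    back = sum((A @ W[g]).T @ (y[g] / (A @ (W[g] @ x) + 1e-12))
               for g in range(n_gates))
    x *= back / sens
print("recovered hot-region mean:", round(x[20:30].mean(), 2))
```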

  13. Integration of models for the Hanford Environmental Dose Reconstruction Project

    Napier, B.A.

    1991-01-01

    The objective of the Hanford Environmental Dose Reconstruction Project is to estimate the radiation dose that individuals could have received as a result of emissions from nuclear operations at Hanford since 1944. The objective of phase 1 of the project was to demonstrate through calculations that adequate models and support data exist or could be developed to allow realistic estimations of doses to individuals from releases of radionuclides to the environment that occurred as long as 45 years ago. Much of the data used in phase 1 was preliminary; therefore, the doses calculated must be considered preliminary approximations. This paper describes the integration of various models that was implemented for initial computer calculations. Models were required for estimating the quantity of radioactive material released, for evaluating its transport through the environment, for estimating human exposure, and for evaluating resultant doses

  14. The mathematical cell model reconstructed from interference microscopy data

    Rogotnev, A. A.; Nikitiuk, A. S.; Naimark, O. B.; Nebogatikov, V. O.; Grishko, V. V.

    2017-09-01

    A mathematical model of cell dynamics is developed to link the dynamics of the phase cell thickness with signs of oncological pathology. Measurements of the irregular oscillations of the phase thickness of cancer cells were made with the laser interference microscope MIM-340 in order to substantiate this model. These data, related to the dynamics of the phase thickness of different cross-sections of cells (nuclei, nucleoli, and cytoplasm), allow the reconstruction of the attractor of the dynamic system. The attractor can be associated with specific types of collective modes of phase thickness responsible for normal and cancerous cell dynamics. A specific type of evolution operator was determined using an algorithm for designing the mathematical cell model and temporal phase thickness data for cancerous and normal cells. The qualitative correspondence of attractor types to cell states was analyzed in terms of morphological signs associated with the maximum value of the mean-square irregular oscillations of the phase thickness dynamics.
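
    A standard way to reconstruct such an attractor from a scalar phase-thickness series is time-delay embedding (Takens); the sketch below uses an illustrative delay and embedding dimension, not the values chosen by the authors.

```python
import numpy as np

def delay_embed(x, dim=3, tau=8):
    """Takens delay-coordinate embedding of a scalar series."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

t = np.linspace(0, 60, 3000)
thickness = np.sin(t) + 0.5 * np.sin(2.3 * t)   # toy irregular signal
attractor = delay_embed(thickness)              # points in embedding space
print(attractor.shape)                          # (n_points, 3)
```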

  15. Improving head and neck CTA with hybrid and model-based iterative reconstruction techniques

    Niesten, J. M.; van der Schaaf, I. C.; Vos, P. C.; Willemink, MJ; Velthuis, B. K.

    2015-01-01

    AIM: To compare image quality of head and neck computed tomography angiography (CTA) reconstructed with filtered back projection (FBP), hybrid iterative reconstruction (HIR) and model-based iterative reconstruction (MIR) algorithms. MATERIALS AND METHODS: The raw data of 34 studies were

  16. A comparison of linear interpolation models for iterative CT reconstruction.

    Hahn, Katharina; Schöndube, Harald; Stierstorfer, Karl; Hornegger, Joachim; Noo, Frédéric

    2016-12-01

    Recent reports indicate that model-based iterative reconstruction methods may improve image quality in computed tomography (CT). One difficulty with these methods is the number of options available to implement them, including the selection of the forward projection model and the penalty term. Currently, the literature is fairly scarce in terms of guidance regarding this selection step, whereas these options impact image quality. Here, the authors investigate the merits of three forward projection models that rely on linear interpolation: the distance-driven method, Joseph's method, and the bilinear method. The authors' selection is motivated by three factors: (1) in CT, linear interpolation is often seen as a suitable trade-off between discretization errors and computational cost, (2) the first two methods are popular with manufacturers, and (3) the third method enables assessing the importance of a key assumption in the other methods. One approach to evaluate forward projection models is to inspect their effect on discretized images, as well as the effect of their transpose on data sets, but significance of such studies is unclear since the matrix and its transpose are always jointly used in iterative reconstruction. Another approach is to investigate the models in the context they are used, i.e., together with statistical weights and a penalty term. Unfortunately, this approach requires the selection of a preferred objective function and does not provide clear information on features that are intrinsic to the model. The authors adopted the following two-stage methodology. First, the authors analyze images that progressively include components of the singular value decomposition of the model in a reconstructed image without statistical weights and penalty term. Next, the authors examine the impact of weights and penalty on observed differences. Image quality metrics were investigated for 16 different fan-beam imaging scenarios that enabled probing various aspects
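
    Of the three models, Joseph's method is perhaps the simplest to sketch: the ray is stepped along its dominant axis and the image is linearly interpolated along the other. The toy 2D projector below handles only shallow rays; a full implementation swaps axes for steep ones, and this is not the authors' code.

```python
import numpy as np

def joseph_line_integral(img, y0, angle_rad):
    """Line integral of a ray entering column 0 at row y0 with slope
    tan(angle); one linear interpolation per column (|angle| <= 45 deg)."""
    ny, nx = img.shape
    slope = np.tan(angle_rad)
    step = 1.0 / np.cos(angle_rad)   # intersection length per column
    total = 0.0
    for x in range(nx):
        y = y0 + slope * x
        j = int(np.floor(y))
        if 0 <= j < ny - 1:
            w = y - j                # linear interpolation weight
            total += (1 - w) * img[j, x] + w * img[j + 1, x]
    return total * step

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0              # square insert
print(joseph_line_integral(img, y0=30.0, angle_rad=0.1))
```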

  17. Extension of local front reconstruction method with controlled coalescence model

    Rajkotwala, A. H.; Mirsandi, H.; Peters, E. A. J. F.; Baltussen, M. W.; van der Geld, C. W. M.; Kuerten, J. G. M.; Kuipers, J. A. M.

    2018-02-01

    The physics of droplet collisions involves a wide range of length scales. This poses a challenge to accurately simulate such flows with standard fixed grid methods due to their inability to resolve all relevant scales with an affordable number of computational grid cells. A solution is to couple a fixed grid method with subgrid models that account for microscale effects. In this paper, we improved and extended the Local Front Reconstruction Method (LFRM) with the film drainage model of Zhang and Law [Phys. Fluids 23, 042102 (2011)]. The new framework is first validated by (near) head-on collision of two equal tetradecane droplets using experimental film drainage times. When the experimental film drainage times are used, the LFRM method is better at predicting the droplet collisions, especially at high velocity, in comparison with other fixed grid methods (i.e., the front tracking method and the coupled level set and volume of fluid method). When the film drainage model is invoked, the method shows a good qualitative match with experiments, but a quantitative correspondence of the predicted film drainage time with the experimental drainage time is not obtained, indicating that further development of the film drainage model is required. However, it can be safely concluded that the LFRM coupled with film drainage models is much better at predicting the collision dynamics than the traditional methods.

  18. A General Semi-Analytical Solution for Three Types of Well Tests in Confined Aquifers with a Partially Penetrating Well

    Shaw-Yang Yang; Hund-Der Yeh

    2012-01-01

    This note develops a general mathematical model for describing the transient hydraulic head response for constant-head test, constant-flux test, and slug test in a radial confined aquifer system with a partially penetrating well. The Laplace-domain solution for the model is derived by applying the Laplace transform with respect to time and finite Fourier cosine transform with respect to the z-direction. This new solution has been shown to reduce to the constant-head test when discounting the wellbore storage and maintaining a constant well water level. This solution can also be reduced to the constant-flux test solution when discounting the wellbore storage and keeping a constant pumping rate in the well. Moreover, the solution becomes the slug test solution when there is no pumping in the well. This general solution can be used to develop a single computer code to estimate aquifer parameters if coupled with an optimization algorithm or to assess the effect of well partial penetration on hydraulic head distribution for three types of aquifer tests.
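
    Solutions derived in the Laplace domain, as here, are typically evaluated in time via numerical inversion. As an illustration (not necessarily the inversion used by the authors), the Gaver–Stehfest algorithm is easy to implement and is demonstrated below on F(s) = 1/(s+1), whose inverse transform is exp(−t).

```python
import math

def stehfest_weights(n=12):          # n must be even
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)
                  / (math.factorial(n // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (k + n // 2) * s)
    return v

def stehfest_invert(F, t, n=12):
    """Approximate f(t) from the Laplace-domain function F(s)."""
    a = math.log(2.0) / t
    return a * sum(Vk * F(k * a)
                   for k, Vk in enumerate(stehfest_weights(n), start=1))

# check against a known pair: L^-1 {1/(s+1)} = exp(-t)
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), t=1.0), math.exp(-1.0))
```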

  19. SWRT: A package for semi-analytical solutions of surface wave propagation, including mode conversion, across transversely aligned vertical discontinuities

    Datta, Arjun

    2018-03-01

    We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered so there is no Love-Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant but SWRT can be used as a valuable testing tool for the ever evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).

  20. SWRT: A package for semi-analytical solutions of surface wave propagation, including mode conversion, across transversely aligned vertical discontinuities

    A. Datta

    2018-03-01

    We present a suite of programs that implement decades-old algorithms for computation of seismic surface wave reflection and transmission coefficients at a welded contact between two laterally homogeneous quarter-spaces. For Love as well as Rayleigh waves, the algorithms are shown to be capable of modelling multiple mode conversions at a lateral discontinuity, which was not shown in the original publications or in the subsequent literature. Only normal incidence at a lateral boundary is considered so there is no Love–Rayleigh coupling, but incidence of any mode and coupling to any (other) mode can be handled. The code is written in Python and makes use of SciPy's Simpson's rule integrator and NumPy's linear algebra solver for its core functionality. Transmission-side results from this code are found to be in good agreement with those from finite-difference simulations. In today's research environment of extensive computing power, the coded algorithms are arguably redundant but SWRT can be used as a valuable testing tool for the ever evolving numerical solvers of seismic wave propagation. SWRT is available via GitHub (https://github.com/arjundatta23/SWRT.git).

  1. Reconstruction of Consistent 3d CAD Models from Point Cloud Data Using a Priori CAD Models

    Bey, A.; Chaine, R.; Marc, R.; Thibault, G.; Akkouche, S.

    2011-09-01

    We address the reconstruction of 3D CAD models from point cloud data acquired in industrial environments, using a pre-existing 3D model as an initial estimate of the scene to be processed. Indeed, this prior knowledge can be used to drive the reconstruction so as to generate an accurate 3D model matching the point cloud. We focus in particular on the cylindrical parts of the 3D models. We propose to state the problem in a probabilistic framework: we search for the 3D model which maximizes some probability taking several constraints into account, such as the relevancy with respect to the point cloud and the a priori 3D model, and the consistency of the reconstructed model. The resulting optimization problem can then be handled using a stochastic exploration of the solution space, based on the random insertion of elements in the configuration under construction, coupled with a greedy management of the conflicts which efficiently improves the configuration at each step. We show that this approach provides reliable reconstructed 3D models by presenting some results on industrial data sets.

  2. Model-based iterative reconstruction for reduction of radiation dose in abdominopelvic CT: comparison to adaptive statistical iterative reconstruction.

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2013-12-01

    To evaluate dose reduction and image quality of abdominopelvic computed tomography (CT) reconstructed with model-based iterative reconstruction (MBIR) compared to adaptive statistical iterative reconstruction (ASIR). In this prospective study, 85 patients underwent referential-, low-, and ultralow-dose unenhanced abdominopelvic CT. Images were reconstructed with ASIR for low-dose (L-ASIR) and ultralow-dose CT (UL-ASIR), and with MBIR for ultralow-dose CT (UL-MBIR). Image noise was measured in the abdominal aorta and iliopsoas muscle. Subjective image analyses and a lesion detection study (adrenal nodules) were conducted by two blinded radiologists. A reference standard was established by a consensus panel of two different radiologists using referential-dose CT reconstructed with filtered back projection. Compared to low-dose CT, there was a 63% decrease in dose-length product with ultralow-dose CT. UL-MBIR had significantly lower image noise than L-ASIR and UL-ASIR, and significantly reduced streak artifacts, while there was no significant difference between UL-MBIR and L-ASIR in diagnostic acceptability (p>0.65) or in diagnostic performance for adrenal nodules (p>0.87). MBIR significantly improves image noise and streak artifacts compared to ASIR, and can achieve radiation dose reduction without severely compromising image quality.

  3. Linking plate reconstructions with deforming lithosphere to geodynamic models

    Müller, R. D.; Gurnis, M.; Flament, N.; Seton, M.; Spasojevic, S.; Williams, S.; Zahirovic, S.

    2011-12-01

    While global computational models are rapidly advancing in terms of their capabilities, there is an increasing need for assimilating observations into these models and/or ground-truthing model outputs. The open-source and platform independent GPlates software fills this gap. It was originally conceived as a tool to interactively visualize and manipulate classical rigid plate reconstructions and represent them as time-dependent topological networks of editable plate boundaries. The user can export time-dependent plate velocity meshes that can be used either to define initial surface boundary conditions for geodynamic models or alternatively impose plate motions throughout a geodynamic model run. However, tectonic plates are not rigid, and neglecting plate deformation, especially that of the edges of overriding plates, can result in significant misplacing of plate boundaries through time. A new, substantially re-engineered version of GPlates is now being developed that allows an embedding of deforming plates into topological plate boundary networks. We use geophysical and geological data to define the limit between rigid and deforming areas, and the deformation history of non-rigid blocks. The velocity field predicted by these reconstructions can then be used as a time-dependent surface boundary condition in regional or global 3-D geodynamic models, or alternatively as an initial boundary condition for a particular plate configuration at a given time. For time-dependent models with imposed plate motions (e.g. using CitcomS) we incorporate the continental lithosphere by embedding compositionally distinct crust and continental lithosphere within the thermal lithosphere. We define three isostatic columns of different thickness and buoyancy based on the tectonothermal age of the continents: Archean, Proterozoic and Phanerozoic. In the fourth isostatic column, the oceans, the thickness of the thermal lithosphere is assimilated using a half-space cooling model. We also
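
    The half-space cooling assimilation mentioned above rests on textbook relations that are easy to sketch: an error-function temperature profile and a thermal-plate thickness defined by an isotherm. Parameter values below are typical rather than those of the GPlates workflow.

```python
import numpy as np
from scipy.special import erf, erfinv

KAPPA = 1e-6             # thermal diffusivity, m^2/s (typical value)
TS, TM = 0.0, 1350.0     # surface and mantle temperatures, deg C
SEC_PER_MYR = 3.156e13

def temperature(z_m, age_myr):
    """Half-space cooling temperature at depth z for a given plate age."""
    t = age_myr * SEC_PER_MYR
    return TS + (TM - TS) * erf(z_m / (2.0 * np.sqrt(KAPPA * t)))

def plate_thickness_km(age_myr, frac=0.9):
    """Depth of the T = frac*TM isotherm, a common thermal-plate proxy."""
    t = age_myr * SEC_PER_MYR
    return 2.0 * np.sqrt(KAPPA * t) * erfinv(frac) / 1e3

print(round(plate_thickness_km(50.0), 1), "km at 50 Myr")
```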

  4. A semi-analytical method to estimate the effective slip length of spreading spherical-cap shaped droplets using Cox theory

    Wörner, M.; Cai, X.; Alla, H.; Yue, P.

    2018-03-01

    The Cox–Voinov law on dynamic spreading relates the difference between the cubic values of the apparent contact angle (θ) and the equilibrium contact angle to the instantaneous contact line speed (U). Comparing spreading results with this hydrodynamic wetting theory requires accurate data of θ and U during the entire process. We consider the case when gravitational forces are negligible, so that the shape of the spreading drop can be closely approximated by a spherical cap. Using geometrical dependencies, we transform the general Cox law into a semi-analytical relation for the temporal evolution of the spreading radius. Evaluating this relation numerically shows that the spreading curve becomes independent of the gas viscosity when the latter is less than about 1% of the drop viscosity. Since inertia may invalidate the assumptions made in the initial stage of spreading, a quantitative criterion for the time when the spherical-cap assumption is reasonable is derived utilizing phase-field simulations of the spreading of partially wetting droplets. The developed theory allows us to compare experimental/computational spreading curves for spherical-cap shaped droplets with Cox theory without the need for instantaneous data of θ and U. Furthermore, the fitting of Cox theory enables us to estimate the effective slip length. This is potentially useful for establishing relationships between the slip length and parameters in numerical methods for moving contact lines.
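
    In the Cox–Voinov limit, the spreading relation can be integrated directly: volume conservation fixes the contact angle of the spherical cap for a given contact radius, and the contact-line speed follows from θ³ − θe³ = 9 Ca ln(a/λs). The sketch below uses this limit (not the full Cox law of the paper) with illustrative fluid parameters and slip length.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

V = 1.0e-9                 # drop volume, m^3 (1 microlitre)
SIGMA, MU = 0.05, 0.05     # surface tension (N/m), viscosity (Pa s)
THETA_E = np.deg2rad(30.0) # equilibrium contact angle
LAM = 1.0e-9               # slip length, m (assumed)

def cap_angle(a):
    """Contact angle of a spherical cap with contact radius a, volume V."""
    f = lambda th: (np.pi * a**3 / 3.0) \
        * (2 - 3 * np.cos(th) + np.cos(th)**3) / np.sin(th)**3 - V
    return brentq(f, 1e-4, np.pi - 1e-4)

def dadt(t, a):
    """Cox-Voinov contact-line speed for the current contact radius."""
    th = cap_angle(a[0])
    return [SIGMA * (th**3 - THETA_E**3) / (9.0 * MU * np.log(a[0] / LAM))]

sol = solve_ivp(dadt, [0.0, 0.5], [0.2e-3], max_step=1e-3)
print("contact radius after 0.5 s:", round(1e3 * sol.y[0, -1], 3), "mm")
```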

  5. Revisiting a model-independent dark energy reconstruction method

    Lazkoz, Ruth; Salzano, Vincenzo; Sendra, Irene [Euskal Herriko Unibertsitatea, Fisika Teorikoaren eta Zientziaren Historia Saila, Zientzia eta Teknologia Fakultatea, Bilbao (Spain)

    2012-09-15

    In this work we offer new insights into the model-independent dark energy reconstruction method developed by Daly and Djorgovski (Astrophys. J. 597:9, 2003; Astrophys. J. 612:652, 2004; Astrophys. J. 677:1, 2008). Our results, using updated SNeIa and GRBs, allow us to highlight some of the intrinsic weaknesses of the method. Conclusions on the main dark energy features drawn from this method are intimately related to the features of the samples themselves, particularly for GRBs, which are poor performers in this context and cannot be used for cosmological purposes; that is, the state of the art does not allow us to regard them on the same quality basis as SNeIa. We find there is considerable sensitivity to some parameters (window width, overlap, selection criteria) affecting the results. We then try to establish the redshift range over which one can currently make solid predictions on dark energy evolution. Finally, we strengthen the former view that this method is modest, in the sense that it provides only a picture of the global trend, and has to be managed very carefully. On the other hand, we believe it offers an interesting complement to other approaches, given that it works on minimal assumptions.

  6. Bayesian model selection of template forward models for EEG source reconstruction.

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Automated comparison of Bayesian reconstructions of experimental profiles with physical models

    Irishkin, Maxim

    2014-01-01

    In this work we developed an expert system that carries out, in an integrated and fully automated way: i) a reconstruction of plasma profiles from the measurements, using Bayesian analysis; ii) a prediction of the reconstructed quantities, according to some models; and iii) an intelligent comparison of the first two steps. This system includes systematic checking of the internal consistency of the reconstructed quantities, enables automated model validation and, if a well-validated model is used, can be applied to help detect interesting new physics in an experiment. The work shows three applications of this quite general system. The expert system can successfully detect failures in the automated plasma reconstruction and provide (on successful reconstruction cases) statistics of agreement of the models with the experimental data, i.e. information on the model validity.

  8. Research on compressive sensing reconstruction algorithm based on total variation model

    Gao, Yu-xuan; Sun, Huayan; Zhang, Tinghua; Du, Lin

    2017-12-01

    Compressed sensing, which breaks through the limit of the Nyquist sampling theorem, provides a strong theoretical foundation for sampling and compressing image signals simultaneously. In imaging procedures that use compressed sensing theory, not only can the storage space be reduced, but the demand on detector resolution can also be greatly reduced. By exploiting the sparsity of the image signal and solving the mathematical model of the inverse reconstruction, super-resolution imaging can be realized. The reconstruction algorithm is the most critical part of compressive sensing and determines, to a large extent, the accuracy of the reconstructed image. The reconstruction algorithm based on the total variation (TV) model is well suited to the compressive reconstruction of two-dimensional images, and better edge information can be obtained. To verify the performance of the algorithm, the reconstruction results of the TV-based algorithm are simulated and analyzed under different coding modes in order to verify the stability of the algorithm. Typical reconstruction algorithms are also compared and analyzed under the same coding mode. On the basis of the minimum total variation algorithm, an augmented Lagrangian term is added and the optimal value is solved by the alternating direction method. Experimental results show that, compared with traditional classical algorithms, the TV-based reconstruction algorithm has great advantages and can recover the target image quickly and accurately at low measurement rates.
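
    The essence of TV-regularized compressive-sensing recovery can be shown with a toy 1D problem. The sketch below minimizes a smoothed TV objective by plain gradient descent, standing in for the augmented-Lagrangian/alternating-direction scheme discussed in the record; the signal, sensing matrix, and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n, m = 128, 48                              # signal length, measurements
x_true = np.zeros(n)
x_true[30:60], x_true[80:100] = 1.0, -0.5   # piecewise-constant signal

A = rng.normal(size=(m, n)) / np.sqrt(m)    # random sensing matrix
y = A @ x_true                              # compressed measurements

lam, eps, step = 0.02, 1e-4, 0.05
x = np.zeros(n)
for _ in range(5000):
    d = np.diff(x)
    g = d / np.sqrt(d**2 + eps)             # gradient of smoothed |d|
    tv_grad = np.zeros(n)
    tv_grad[:-1] -= g
    tv_grad[1:] += g
    x -= step * (A.T @ (A @ x - y) + lam * tv_grad)
print("relative error:", round(np.linalg.norm(x - x_true)
                               / np.linalg.norm(x_true), 3))
```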

  9. Parallelization of the model-based iterative reconstruction algorithm DIRA

    Oertenberg, A.; Sandborg, M.; Alm Carlsson, G.; Malusek, A.; Magnusson, M.

    2016-01-01

    New paradigms for parallel programming have been devised to simplify software development on multi-core processors and many-core graphical processing units (GPU). Despite their obvious benefits, the parallelization of existing computer programs is not an easy task. In this work, the use of the Open Multiprocessing (OpenMP) and Open Computing Language (OpenCL) frameworks is considered for the parallelization of the model-based iterative reconstruction algorithm DIRA, with the aim to significantly shorten the code's execution time. Selected routines were parallelized using the OpenMP and OpenCL libraries; some routines were converted from MATLAB to C and optimised. Parallelization of the code with OpenMP was easy and resulted in an overall speedup of 15 on a 16-core computer. Parallelization with OpenCL was more difficult owing to differences between the central processing unit and GPU architectures. The resulting speedup was substantially lower than the theoretical peak performance of the GPU; the cause is explained.

  10. Simulation of Anterior Cruciate Ligament Reconstruction in a Dry Model.

    Dwyer, Tim; Slade Shantz, Jesse; Chahal, Jaskarndip; Wasserstein, David; Schachar, Rachel; Kulasegaram, K Mahan; Theodoropoulos, John; Greben, Rachel; Ogilvie-Harris, Darrell

    2015-12-01

    As the demand increases for demonstration of competence in surgical skill, the need for validated assessment tools also increases. The purpose of this study was to validate a dry knee model for the assessment of performance of anterior cruciate ligament reconstruction (ACLR). The hypothesis was that the combination of a checklist and a previously validated global rating scale would be a valid and reliable means of assessing ACLR when performed by residents in a dry model. Controlled laboratory study. All residents, sports medicine staff, and fellows were invited to perform a hamstring ACLR using anteromedial drilling and Endobutton fixation on a dry model of an anterior cruciate ligament. Previous exposure to knee arthroscopy and ACLR was recorded. A detailed surgical manuscript and technique video were sent to all participants before the study. Residents were evaluated by staff surgeons with task-specific checklists created by use of a modified Delphi procedure and the Arthroscopic Surgical Skill Evaluation Tool (ASSET). Each procedure (hand movements and arthroscopic video) was recorded and scored by a fellow blinded to the year of training of each participant. A total of 29 residents, 5 fellows, and 6 staff surgeons (40 participants total) performed an ACLR on the dry model. The internal reliability (Cronbach alpha) of the test when using the total ASSET score was very high (>0.9). One-way analysis of variance for the total ASSET score and the total checklist score demonstrated a difference between participants based on year of training (P < .05). A good correlation was seen between the total ASSET score and prior exposure to knee arthroscopy (0.73) and ACLR (0.65). The interrater reliability (intraclass correlation coefficient) between the examiner ratings and the blinded assessor ratings for the total ASSET score was very high (>0.8). The results of this study provide evidence that the performance of an ACLR in a dry model is a reliable method of assessing a

  11. Assessing Women’s Preferences and Preference Modeling for Breast Reconstruction Decision Making

    Clement S. Sun, MS

    2014-03-01

    Conclusions: We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.

  12. Sensing of complex buildings and reconstruction into photo-realistic 3D models

    Heredia Soriano, F.J.

    2012-01-01

    The 3D reconstruction of indoor and outdoor environments has received interest only recently, as companies began to recognize that using reconstructed models is a way to generate revenue through location-based services and advertisements. A great amount of research has been done in the field of

  13. Simple method of modelling of digital holograms registering and their optical reconstruction

    Evtikhiev, N N; Cheremkhin, P A; Krasnov, V V; Kurbatova, E A; Molodtsov, D Yu; Porshneva, L A; Rodin, V G

    2016-01-01

    A technique for modeling digital hologram recording and optical image reconstruction from these holograms is described. The method takes into account the characteristics of the object, the digital camera's photosensor, and the spatial light modulator used for displaying the digital holograms. Using the technique, equipment can be chosen for experiments so as to obtain good reconstruction quality and/or hologram diffraction efficiency. Numerical experiments were conducted.
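
    A standard numerical reconstruction step for such holograms is FFT-based angular-spectrum propagation. The sketch below synthesizes a toy inline hologram and back-propagates it; the camera photosensor and SLM characteristics modelled in the paper are not included.

```python
import numpy as np

wl, dx, z = 633e-9, 8e-6, 0.05        # wavelength, pixel pitch, distance (m)
n = 512

# synthesize a toy inline hologram: plane reference + point-source object
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
r = np.sqrt(X**2 + Y**2 + z**2)
obj = np.exp(1j * 2 * np.pi / wl * r) / r
holo = np.abs(obj + 1.0) ** 2         # recorded intensity

def angular_spectrum(u0, wl, dx, z):
    """Propagate a sampled field a distance z (angular-spectrum method)."""
    fx = np.fft.fftfreq(u0.shape[0], dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
    kz = 2 * np.pi / wl * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)   # evanescent waves dropped
    return np.fft.ifft2(np.fft.fft2(u0) * H)

recon = angular_spectrum(holo.astype(complex), wl, dx, -z)  # back-propagate
print("peak of reconstructed amplitude:", round(float(np.abs(recon).max()), 2))
```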

  14. Active numerical model of human body for reconstruction of falls from height.

    Milanowicz, Marcin; Kędzior, Krzysztof

    2017-01-01

    Falls from height constitute the largest group of incidents out of approximately 90,000 occupational accidents occurring each year in Poland. Reconstruction of the exact course of a fall from height is generally difficult due to lack of sufficient information from the accident scene. This usually results in several contradictory versions of an incident and impedes, for example, determination of liability in a judicial process. In similar situations, in many areas of human activity, researchers apply numerical simulation. They use it to model physical phenomena to reconstruct their real course over time; e.g., numerical human body models are frequently used for the investigation and reconstruction of road accidents. However, they are validated in terms of specific road traffic accidents and are considerably limited when applied to the reconstruction of other types of accidents. The objective of the study was to develop an active numerical human body model to be used for the reconstruction of accidents associated with falling from height. Development of the model involved extension and adaptation of the existing Pedestrian human body model (available in the MADYMO package database) for the purposes of reconstruction of falls from height, by taking into account the human reaction to the loss of balance. The model was developed by using the results of experimental tests of the initial phase of a fall from height. An active numerical human body model covering 28 sets of initial conditions related to various human reactions to the loss of balance was developed. The application of the model was illustrated by using it to reconstruct a real fall from height. From among the 28 sets of initial conditions, the set whose application made it possible to reconstruct the most probable version of the incident was selected. The selection was based on comparison of the results of the reconstruction with information contained in the accident report. Results in the form of estimated

  15. Algorithms For Phylogeny Reconstruction In a New Mathematical Model

    Lenzini, Gabriele; Marianelli, Silvia

    1997-01-01

    The evolutionary history of a set of species is represented by a tree called phylogenetic tree or phylogeny. Its structure depends on precise biological assumptions about the evolution of species. Problems related to phylogeny reconstruction (i.e., finding a tree representation of information

  16. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67–0.89) compared to L-ASIR or UL-ASIR (0.11–0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818–0.860) was comparable to that for L-ASIR (0.696–0.844). The specificity was lower with UL-MBIR (0.79–0.92) than with L-ASIR or UL-ASIR (0.96–0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.

  17. Model-based iterative reconstruction and adaptive statistical iterative reconstruction: dose-reduced CT for detecting pancreatic calcification.

    Yasaka, Koichiro; Katsura, Masaki; Akahane, Masaaki; Sato, Jiro; Matsuda, Izuru; Ohtomo, Kuni

    2016-01-01

    Iterative reconstruction methods have attracted attention for reducing radiation doses in computed tomography (CT). To investigate the detectability of pancreatic calcification using dose-reduced CT reconstructed with model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASIR). This prospective study, approved by the Institutional Review Board, included 85 patients (57 men, 28 women; mean age, 69.9 years; mean body weight, 61.2 kg). Unenhanced CT was performed three times with different radiation doses (reference-dose CT [RDCT], low-dose CT [LDCT], ultralow-dose CT [ULDCT]). From RDCT, LDCT, and ULDCT, images were reconstructed with filtered-back projection (R-FBP, used for establishing the reference standard), ASIR (L-ASIR), and MBIR and ASIR (UL-MBIR and UL-ASIR), respectively. A lesion (pancreatic calcification) detection test was performed by two blinded radiologists with a five-point certainty level scale. Dose-length products of RDCT, LDCT, and ULDCT were 410, 97, and 36 mGy-cm, respectively. Nine patients had pancreatic calcification. The sensitivity for detecting pancreatic calcification with UL-MBIR was high (0.67-0.89) compared to L-ASIR or UL-ASIR (0.11-0.44), and a significant difference was seen between UL-MBIR and UL-ASIR for one reader (P = 0.014). The area under the receiver-operating characteristic curve for UL-MBIR (0.818-0.860) was comparable to that for L-ASIR (0.696-0.844). The specificity was lower with UL-MBIR (0.79-0.92) than with L-ASIR or UL-ASIR (0.96-0.99), and a significant difference was seen for one reader (P < 0.01). In UL-MBIR, pancreatic calcification can be detected with high sensitivity; however, attention should be paid to the slightly lower specificity.

  18. Evaluation of two semi-analytical techniques in air quality applications

    Jonas C. Carvalho

    2007-04-01

    In this article an evaluation of two semi-analytical techniques is carried out, considering the quality and accuracy of these techniques in reproducing the ground-level concentration values of a passive pollutant released from low and high sources. The first technique is an Eulerian model based on the solution of the advection-diffusion equation by the Laplace transform technique. The second is a Lagrangian model based on the solution of the Langevin equation through the Picard iterative method. Turbulence parameters are calculated according to a parameterization capable of generating continuous values in all stability conditions and at all heights of the planetary boundary layer. Numerical simulations and comparisons show a good agreement between predicted and observed concentration values. Comparisons between the two proposed techniques reveal that the Lagrangian model generates more accurate results, but the Eulerian model demands less computational time.
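
    The Lagrangian side of such a comparison reduces, in its simplest form, to integrating a Langevin equation for particle velocities. The sketch below uses Euler–Maruyama integration with constant turbulence parameters, rather than the Picard iteration and boundary-layer parameterization of the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n_p, dt, n_steps = 5000, 0.1, 600
TL, SIGMA_W = 30.0, 0.5        # Lagrangian time scale (s), sigma_w (m/s)
U_MEAN = 3.0                   # mean horizontal wind (m/s)

w = rng.normal(0.0, SIGMA_W, n_p)   # vertical velocities
z = np.full(n_p, 50.0)              # release height (m)
x = np.zeros(n_p)

for _ in range(n_steps):
    dW = rng.normal(0.0, np.sqrt(dt), n_p)
    w += -w / TL * dt + SIGMA_W * np.sqrt(2.0 / TL) * dW   # Langevin step
    z += w * dt
    x += U_MEAN * dt
    w[z < 0] *= -1.0                # perfect reflection at the ground
    z[z < 0] *= -1.0

print("fraction of particles below 2 m:", np.mean(z < 2.0))
```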

  19. MO-C-18A-01: Advances in Model-Based 3D Image Reconstruction

    Chen, G; Pan, X; Stayman, J; Samei, E

    2014-01-01

    Recent years have seen the emergence of CT image reconstruction techniques that exploit physical models of the imaging system, photon statistics, and even the patient to achieve improved 3D image quality and/or reduction of radiation dose. With numerous advantages in comparison to conventional 3D filtered backprojection, such techniques bring a variety of challenges as well, including: a demanding computational load associated with sophisticated forward models and iterative optimization methods; nonlinearity and nonstationarity in image quality characteristics; a complex dependency on multiple free parameters; and the need to understand how best to incorporate prior information (including patient-specific prior images) within the reconstruction process. The advantages, however, are even greater – for example: improved image quality; reduced dose; robustness to noise and artifacts; task-specific reconstruction protocols; suitability to novel CT imaging platforms and noncircular orbits; and incorporation of known characteristics of the imager and patient that are conventionally discarded. This symposium features experts in 3D image reconstruction, image quality assessment, and the translation of such methods to emerging clinical applications. Dr. Chen will address novel methods for the incorporation of prior information in 3D and 4D CT reconstruction techniques. Dr. Pan will show recent advances in optimization-based reconstruction that enable potential reduction of dose and sampling requirements. Dr. Stayman will describe a “task-based imaging” approach that leverages models of the imaging system and patient in combination with a specification of the imaging task to optimize both the acquisition and reconstruction process. Dr. Samei will describe the development of methods for image quality assessment in such nonlinear reconstruction techniques and the use of these methods to characterize and optimize image quality and dose in a spectrum of clinical

  20. Modelling Spatial Compositional Data: Reconstructions of past land cover and uncertainties

    Pirzamanbein, Behnaz; Lindström, Johan; Poska, Anneli

    2018-01-01

    In this paper, we construct a hierarchical model for spatial compositional data, which is used to reconstruct past land-cover compositions (in terms of coniferous forest, broadleaved forest, and unforested/open land) for five time periods during the past 6,000 years over Europe. The model … to a fast MCMC algorithm. Reconstructions are obtained by combining pollen-based estimates of vegetation cover at a limited number of locations with scenarios of past deforestation and output from a dynamic vegetation model. To evaluate uncertainties in the predictions, a novel way of constructing joint confidence regions for the entire composition at each prediction location is proposed. The hierarchical model's ability to reconstruct past land cover is evaluated through cross validation for all time periods, and by comparing reconstructions for the recent past to a present-day European forest map…

  1. 2-D Fused Image Reconstruction approach for Microwave Tomography: a theoretical assessment using FDTD Model.

    Bindu, G; Semenov, S

    2013-01-01

This paper describes an efficient two-dimensional fused image reconstruction approach for Microwave Tomography (MWT). Finite Difference Time Domain (FDTD) models were created for a viable MWT experimental system, with the transceivers modelled using a thin-wire approximation with resistive voltage sources. Born Iterative and Distorted Born Iterative methods were employed for image reconstruction, with the extremity imaging done using a differential imaging technique. The forward solver in the imaging algorithm employs the FDTD method to solve the time-domain Maxwell's equations, with the regularisation parameter computed using a stochastic approach. The algorithm was tested with 10% noise, and the successful image reconstruction implies its robustness.

  2. 3D Fractal reconstruction of terrain profile data based on digital elevation model

    Huang, Y.M.; Chen, C.-J.

    2009-01-01

Digital Elevation Models (DEMs) often complicate terrain reconstruction and data storage because details cannot be acquired at higher resolution. If the original terrain of a DEM can be simulated so that geographical details are represented precisely while the data size is reduced, an effective reconstruction scheme becomes essential. This paper takes two sets of real-world 3D terrain profile data, reduces them by random sampling, and then reconstructs them through 3D fractal reconstruction (see the sketch below). The quantitative and qualitative differences arising from different reduction rates were evaluated statistically. The results show that, if the 3D fractal interpolation method is applied to DEM reconstruction, a higher reduction rate can be achieved for DEMs of larger data size than for those of smaller data size, under the assumption that the entire terrain structure is still maintained.
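
As an illustration of the fractal-interpolation idea referenced above, here is a minimal sketch of 1D random midpoint displacement, a standard fractal terrain construction; the paper's 3D method and its parameters are not reproduced here, and the roughness value is an assumption.

```python
import numpy as np

def midpoint_displacement(n_levels=8, roughness=0.6, rng=None):
    """1D terrain profile by random midpoint displacement.

    roughness in (0, 1): the displacement amplitude shrinks by this factor
    at each level, which controls the fractal character of the profile.
    """
    rng = rng or np.random.default_rng(0)
    profile = np.array([0.0, 0.0])            # two seed elevations
    amplitude = 1.0
    for _ in range(n_levels):
        mids = 0.5 * (profile[:-1] + profile[1:])
        mids += rng.normal(0.0, amplitude, mids.size)
        merged = np.empty(profile.size + mids.size)
        merged[0::2] = profile                # keep existing samples
        merged[1::2] = mids                   # interleave displaced midpoints
        profile = merged
        amplitude *= roughness
    return profile

print(midpoint_displacement().size)  # 257 points grown from 2 seed samples
```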

  3. CT of the chest with model-based, fully iterative reconstruction: comparison with adaptive statistical iterative reconstruction.

    Ichikawa, Yasutaka; Kitagawa, Kakuya; Nagasawa, Naoki; Murashima, Shuichi; Sakuma, Hajime

    2013-08-09

The recently developed model-based iterative reconstruction (MBIR) enables significant reduction of image noise and artifacts compared with adaptive statistical iterative reconstruction (ASIR) and filtered back projection (FBP). The purpose of this study was to evaluate the lesion detectability of low-dose chest computed tomography (CT) with MBIR in comparison with ASIR and FBP. Chest CT was acquired with 64-slice CT (Discovery CT750HD) under standard-dose (5.7 ± 2.3 mSv) and low-dose (1.6 ± 0.8 mSv) conditions in 55 patients (aged 72 ± 7 years) who were suspected of lung disease on chest radiographs. Low-dose CT images were reconstructed with MBIR, ASIR 50% and FBP, and standard-dose CT images were reconstructed with FBP, using a reconstructed slice thickness of 0.625 mm. Two observers evaluated the image quality of abnormal lung and mediastinal structures on a 5-point scale (score 5 = excellent and score 1 = non-diagnostic). The objective image noise was also measured as the standard deviation of CT intensity in the descending aorta. The image quality score of enlarged mediastinal lymph nodes on low-dose MBIR CT (4.7 ± 0.5) was significantly improved in comparison with low-dose FBP and ASIR CT (3.0 ± 0.5, p = 0.004; 4.0 ± 0.5, p = 0.02, respectively), and was nearly identical to the score of the standard-dose FBP image (4.8 ± 0.4, p = 0.66). Concerning decreased lung attenuation (bulla, emphysema, or cyst), the image quality score on low-dose MBIR CT (4.9 ± 0.2) was slightly better than on low-dose FBP and ASIR CT (4.5 ± 0.6, p = 0.01; 4.6 ± 0.5, p = 0.01, respectively). There were no significant differences in image quality scores for visualization of consolidation or mass, ground-glass attenuation, or reticular opacity among the low- and standard-dose CT series. Image noise with low-dose MBIR CT (11.6 ± 1.0 Hounsfield units (HU)) was significantly lower than with low-dose ASIR CT (21.1 ± 2.6 HU) and standard-dose FBP CT (16.6 ± 2.3 HU). With a dose reduction of about 70%, MBIR can provide

  4. Model-based microwave image reconstruction: simulations and experiments

    Ciocan, Razvan; Jiang Huabei

    2004-01-01

We describe an integrated microwave imaging system that can provide spatial maps of the dielectric properties of heterogeneous media from tomographically collected data. The hardware system (800-1200 MHz) was built around a lock-in amplifier with 16 fixed antennas. The reconstruction algorithm was implemented using a Newton iterative method with combined Marquardt-Tikhonov regularization (see the sketch below). System performance was evaluated using heterogeneous media mimicking human breast tissue. The finite element method, coupled with the Bayliss and Turkel radiation boundary conditions, was applied to compute the electric field distribution in the heterogeneous media of interest. The results show that inclusions embedded in a 76-mm-diameter background medium can be quantitatively reconstructed from both simulated and experimental data. Quantitative analysis of the microwave images obtained suggests that an inclusion 14 mm in diameter is the smallest object that can presently be fully characterized using experimental data, while objects as small as 10 mm in diameter can be quantitatively resolved with simulated data
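
The update below is a minimal sketch of one Newton step with combined Marquardt-Tikhonov regularization, the general scheme named in the record; the Jacobian, residual, and both damping parameters are hypothetical placeholders rather than the authors' actual operators.

```python
import numpy as np

def regularized_newton_step(J, residual, lam_marquardt=1e-2, lam_tikhonov=1e-3):
    """One Newton update for the dielectric-property estimate, with combined
    Marquardt (scaled by diag(J^T J)) and Tikhonov (identity) regularization:

        delta = (J^T J + lam_m * diag(J^T J) + lam_t * I)^{-1} J^T r
    """
    JtJ = J.T @ J
    A = (JtJ + lam_marquardt * np.diag(np.diag(JtJ))
         + lam_tikhonov * np.eye(JtJ.shape[0]))
    return np.linalg.solve(A, J.T @ residual)

rng = np.random.default_rng(0)
J = rng.normal(size=(32, 8))   # hypothetical Jacobian (data x unknowns)
r = rng.normal(size=32)        # hypothetical residual (measured - modeled)
print(regularized_newton_step(J, r))
```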

  5. First experiences with model based iterative reconstructions influence on quantitative plaque volume and intensity measurements in coronary computed tomography angiography

    Precht, Helle; Kitslaar, Pieter H.; Broersen, Alexander

    2017-01-01

Purpose: Investigate the influence of adaptive statistical iterative reconstruction (ASIR) and the model-based IR (Veo) reconstruction algorithm in coronary computed tomography angiography (CCTA) images on quantitative measurements in coronary arteries for plaque volumes and intensities. Methods...

  6. Use of an object model in three dimensional image reconstruction. Application in medical imaging

    Delageniere-Guillot, S.

    1993-02-01

Three-dimensional image reconstruction from projections corresponds to a set of techniques which give information on the inner structure of the studied object. These techniques are mainly used in medical imaging or in non-destructive evaluation. Image reconstruction is an ill-posed problem, so the inversion has to be regularized. This thesis deals with the introduction of a priori information within the reconstruction algorithm. The knowledge is introduced through an object model. The proposed scheme is applied to the medical domain for cone-beam geometry. We address two specific problems. First, we study the reconstruction of high-contrast objects. This can be applied to bony morphology (bone/soft tissue) or to angiography (vascular structures opacified by injection of a contrast agent). With noisy projections, the filtering steps of standard methods tend to smooth the natural transitions of the investigated object. In order to regularize the reconstruction while preserving contrast, we introduce a model of classes based on Markov random field theory, and we develop an analytic reconstruction-reprojection scheme. Second, we address the case of an object changing during the acquisition, which applies to angiography when the contrast agent is moving through the vascular tree. The problem is then stated as a dynamic reconstruction. We define an autoregressive (AR) evolution model and use an algebraic reconstruction method, representing the object at a particular moment as an intermediate state between the states of the object at the beginning and at the end of the acquisition. We test both methods on simulated and real data, and we show how the use of an a priori model can improve the results. (author)

  7. A reconstruction of Maxwell model for effective thermal conductivity of composite materials

    Xu, J.Z.; Gao, B.Z.; Kang, F.Y.

    2016-01-01

Highlights: • Deficiencies were found in the classical Maxwell model for effective thermal conductivity. • The Maxwell model was reconstructed based on a potential mean-field theory. • The reconstructed Maxwell model was extended with particle-particle contact resistance. • Predictions by the reconstructed Maxwell model agree excellently with experimental data. - Abstract: Composite materials consisting of highly thermally conductive fillers and a polymer matrix are often used as thermal interface materials to dissipate heat generated by mechanical and electronic devices. The prediction of the effective thermal conductivity of composites remains a critical issue due to its dependence on many factors. Most models for prediction are based on the analogy between electric potential and temperature, both of which satisfy the Laplace equation under steady conditions. Maxwell was the first to derive the effective electric resistivity of composites, by examining the far-field spherical harmonic solution of the Laplace equation perturbed by a sphere of different resistivity, and his model is considered classical (the baseline form is given below). However, a close review of Maxwell's derivation reveals several controversial issues (deficiencies) inherent in his model. In this study, we reconstruct the Maxwell model based on a potential mean-field theory to resolve these issues. For composites made of a continuum matrix and particle fillers, contact resistance among particles is introduced in the reconstruction of the Maxwell model. The newly reconstructed Maxwell model, with contact resistivity as a fitting parameter, is shown to fit excellently to experimental data over wide ranges of particle concentration and mean particle diameter. The scope of applicability of the reconstructed Maxwell model is also discussed using the contact resistivity as a parameter.
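
For reference, a minimal sketch of the classical Maxwell baseline that the paper critiques and reconstructs; the formula below is the standard Maxwell effective-conductivity expression, not the authors' reconstructed model, and the example material values are assumptions.

```python
def maxwell_k_eff(k_matrix, k_particle, phi):
    """Classical Maxwell model for the effective thermal conductivity of a
    dilute suspension of spheres (volume fraction phi) in a matrix:

        k_eff = k_m * (k_p + 2 k_m + 2 phi (k_p - k_m))
                    / (k_p + 2 k_m -   phi (k_p - k_m))
    """
    num = k_particle + 2.0 * k_matrix + 2.0 * phi * (k_particle - k_matrix)
    den = k_particle + 2.0 * k_matrix - phi * (k_particle - k_matrix)
    return k_matrix * num / den

# Illustrative values: ceramic-like filler in a polymer-like matrix (W/m K)
print(maxwell_k_eff(0.2, 30.0, 0.3))  # ~0.45 W/m K
```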

  8. Missing data reconstruction using Gaussian mixture models for fingerprint images

    Agaian, Sos S.; Yeole, Rushikesh D.; Rao, Shishir P.; Mulawka, Marzena; Troy, Mike; Reinecke, Gary

    2016-05-01

One of the most important areas in biometrics is matching partial fingerprints in fingerprint databases. Recently, significant progress has been made in designing fingerprint identification systems for missing fingerprint information. However, dependable reconstruction of fingerprint images remains challenging due to the complexity and the ill-posed nature of the problem. In this article, both binary and gray-level images are reconstructed. This paper also presents a new similarity score to evaluate the performance of the reconstructed binary image. The proposed fingerprint image identification system can be automated and extended to numerous other security applications such as postmortem fingerprints, forensic science, investigations, artificial intelligence, robotics, access control, and financial security, as well as the verification of firearm purchasers, driver license applicants, etc.

  9. Task-based data-acquisition optimization for sparse image reconstruction systems

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  10. AUTOMATIC TEXTURE RECONSTRUCTION OF 3D CITY MODEL FROM OBLIQUE IMAGES

    J. Kang

    2016-06-01

In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning, and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches are prone to texture fragmentation and memory inefficiency. In this paper, we introduce an automatic texture reconstruction framework that generates textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. First, a mesh parameterization procedure, comprising mesh segmentation and mesh unfolding, is performed to reduce geometric distortion when mapping 2D textures onto the 3D model. Second, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with exterior and interior orientation parameters. Third, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a dataset of a city. The resulting mesh model can be textured with the created textures without resampling. Experimental results show that our method effectively mitigates texture fragmentation, demonstrating that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  11. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  12. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area, and average wall thickness. The accuracy of the airway measurements of each algorithm was also evaluated using an airway phantom. The EI using a threshold of -950 HU was significantly different among the three algorithms, in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms, with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR gave the most accurate airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm for quantitative analysis of the lung. (orig.)

  13. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture the target motion trajectory, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting degrades CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which in turn degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ's fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model's accuracy in regions containing small, fine details, which in turn enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  14. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin; Chan, Wai-lok

    2016-01-01

Mathematical models provide a description of neuron activity that can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.

  15. Reconstruction of neuronal input through modeling single-neuron dynamics and computations

    Qin, Qing; Wang, Jiang; Yu, Haitao; Deng, Bin, E-mail: dengbin@tju.edu.cn; Chan, Wai-lok [School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072 (China)

    2016-06-15

Mathematical models provide a description of neuron activity that can help us better understand and quantify neural computations and the corresponding biophysical mechanisms evoked by a stimulus. In this paper, based on the output spike train evoked by an acupuncture mechanical stimulus, we present two different levels of models to describe the input-output system and achieve reconstruction of the neuronal input. The reconstruction process is divided into two steps. First, the neuronal spiking events are treated as a Gamma stochastic process; the scale parameter and the shape parameter of the Gamma process are defined as two spiking characteristics, which are estimated by a state-space method. Then, a leaky integrate-and-fire (LIF) model is used to mimic the response system, and the estimated spiking characteristics are transformed into two temporal input parameters of the LIF model through two conversion formulas. We test this reconstruction method on three different groups of simulated data. All three groups of estimates reconstruct the input parameters with fairly high accuracy. We then use this reconstruction method to estimate the non-measurable acupuncture input parameters. Results show that, under three different frequencies of acupuncture stimulation, the estimated input parameters differ markedly: the higher the frequency of the acupuncture stimulus, the higher the accuracy of the reconstruction.
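
A minimal sketch of the LIF response model named in these records: the membrane integrates an input current and emits a spike on reaching threshold. Parameter values and the constant input are illustrative, and the Gamma-process estimation step is not reproduced.

```python
import numpy as np

def lif_spike_train(i_input, dt=0.1, tau_m=10.0, r_m=1.0,
                    v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau_m dV/dt = -(V - v_rest) + r_m * I(t).

    Returns spike times (ms) for an input current array i_input.
    """
    v = v_rest
    spikes = []
    for k, i_k in enumerate(i_input):
        v += dt / tau_m * (-(v - v_rest) + r_m * i_k)
        if v >= v_thresh:          # threshold crossing -> spike and reset
            spikes.append(k * dt)
            v = v_reset
    return np.asarray(spikes)

# Constant suprathreshold drive produces a regular spike train
print(lif_spike_train(np.full(5000, 1.5)).size)
```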

  16. Modeling of Pixelated Detector in SPECT Pinhole Reconstruction.

    Feng, Bing; Zeng, Gengsheng L

    2014-04-10

A challenge for the pixelated detector is that the detector response of a gamma-ray photon varies with the incident angle and the incident location within a crystal. The normalization map obtained by measuring the flood of a point-source at a large distance can lead to artifacts in reconstructed images. In this work, we investigated a method of generating normalization maps by ray-tracing through the pixelated detector based on the imaging geometry and the photo-peak energy for the specific isotope. The normalization is defined for each pinhole as the normalized detector response for a point-source placed at the focal point of the pinhole. Ray-tracing is used to generate the ideal flood image for a point-source. Each crystal pitch area on the back of the detector is divided into 60 × 60 sub-pixels. Lines are obtained by connecting a point-source to the centers of the sub-pixels inside each crystal pitch area. For each line, ray-tracing starts from the entrance point at the detector face and ends at the center of a sub-pixel on the back of the detector. Only the attenuation by NaI(Tl) crystals along each ray is assumed to contribute directly to the flood image. The attenuation by the silica (SiO2) reflector is also included in the ray-tracing. To calculate the normalization for a pinhole, we need to calculate the ideal flood for a point-source at 360 mm distance (where the point-source was placed for the regular flood measurement) and the ideal flood image for the point-source at the pinhole focal point, together with the flood measurement at 360 mm distance. The normalizations are incorporated in the iterative OSEM reconstruction as a component of the projection matrix. Applications to single-pinhole and multi-pinhole imaging showed that this method greatly reduced the reconstruction artifacts.
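
A minimal sketch of the Beer-Lambert bookkeeping behind the ray-tracing described above: along each ray, only interactions in the NaI(Tl) crystal contribute to the ideal flood, while the SiO2 reflector only attenuates. The attenuation coefficients and path lengths are illustrative assumptions, not the paper's values.

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) at a photopeak energy;
# the paper would use values for NaI(Tl) and SiO2 at the isotope's energy.
MU = {"NaI": 2.2, "SiO2": 0.3}

def ray_detection_probability(segments):
    """Probability that a photon interacts in NaI(Tl) along one traced ray.

    segments: ordered (material, path_length_cm) pairs from ray-tracing;
    reflector (SiO2) segments attenuate without contributing to the flood.
    """
    p_detect, transmitted = 0.0, 1.0
    for material, length in segments:
        absorbed = transmitted * (1.0 - np.exp(-MU[material] * length))
        if material == "NaI":
            p_detect += absorbed     # only crystal interactions count
        transmitted *= np.exp(-MU[material] * length)
    return p_detect

print(ray_detection_probability([("SiO2", 0.02), ("NaI", 1.0)]))
```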

  17. Reconstructed human epidermis: A model to study the barrier function

    Barbotteau, Y. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Gontier, E. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Barberet, P. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Cappadoro, M. [Institut de recherche Pierre FABRE, 31320 Castanet Tolosan (France); De Wever, B. [Institut de recherche Pierre FABRE, 31320 Castanet Tolosan (France); Habchi, C. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Incerti, S. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Mavon, A. [SkinEthic Laboratories, 45 rue St. Philippe, 06000 Nice (France); Moretto, P. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France)]. E-mail: moretto@cenbg.in2p3.fr; Pouthier, T. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Smith, R.W. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France); Ynsa, M.D. [CENBG-IN2P3/CNRS, BP 120, 33175 Gradignan cedex (France)

    2005-04-01

The use of in vitro reconstructed human epidermis (RHE) by the cosmetic and pharmaceutical industries is increasing because its physiological mechanisms are similar to those of native human skin. With the advent of ethics laws on animal experimentation, RHE provides a helpful alternative for testing formulations. The aim of this study is to check that the RHE mineral status is comparable to that of native human skin by investigating the elemental distributions in the epidermis strata. In addition, possible deleterious effects of transport on the epidermal ionic content were studied by nuclear microscopy.

  18. Reconstructed human epidermis: A model to study the barrier function

    Barbotteau, Y.; Gontier, E.; Barberet, P.; Cappadoro, M.; De Wever, B.; Habchi, C.; Incerti, S.; Mavon, A.; Moretto, P.; Pouthier, T.; Smith, R.W.; Ynsa, M.D.

    2005-01-01

The use of in vitro reconstructed human epidermis (RHE) by the cosmetic and pharmaceutical industries is increasing because its physiological mechanisms are similar to those of native human skin. With the advent of ethics laws on animal experimentation, RHE provides a helpful alternative for testing formulations. The aim of this study is to check that the RHE mineral status is comparable to that of native human skin by investigating the elemental distributions in the epidermis strata. In addition, possible deleterious effects of transport on the epidermal ionic content were studied by nuclear microscopy.

  19. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Kung, Woon-Man; Chen, Shuo-Tsung; Lin, Chung-Hsiang; Lu, Yu-Mei; Chen, Tzu-Hsuan; Lin, Muh-Shi

    2013-01-01

Difficulty exists in scalp adaptation for cranioplasty with customized computer-assisted design/manufacturing (CAD/CAM) implants in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of the skull defects were reconstructed using commercial software and checked for symmetry by CIS scores. CIS scores of the CAD reconstructions were 99.24±0.004% (range 98.47-99.84). These CIS scores were statistically significantly greater than 95%, identical to 99.5%, but lower than 99.6% (matched-pairs signed-rank test). These data evidence the highly accurate symmetry of these CAD models with regular contours. CIS calculation is beneficial for assessing the aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.

  20. APPLICATION OF 3D MODELING IN 3D PRINTING FOR THE LOWER JAW RECONSTRUCTION

    Yu. Yu. Dikov

    2015-01-01

Aim of the study: to improve the functional and aesthetic results of microsurgical reconstruction of the lower jaw through the use of 3D modeling and 3D printing. Application of this methodology is demonstrated on the treatment of 4 patients with locally advanced tumors of the oral cavity, who underwent excision of the tumor with simultaneous reconstruction of the lower jaw with a revascularized fibular graft. One patient had previously undergone segmental resection of the lower jaw with defect replacement using an avascular iliac graft and a reconstruction plate; a relapse of the disease and lysis of the graft subsequently developed. Modeling of the graft according to the shape of the lower jaw was performed by osteotomies of the bone part of the graft, using three-dimensional virtual models created from computed tomography data. These 3D models were then printed in plastic at 1:1 scale on a 3D printer using fused deposition modeling (FDM) technology and were used during the surgery for modeling of the graft. The plastic model was sterilized in a formalin chamber. This methodology allowed more precise reconstruction of the resected fragment of the lower jaw, gave better functional and aesthetic results, and prepared patients for further dental rehabilitation. Advantages of this methodology are the possibility of performing the reconstruction and resection stages simultaneously and a shorter operating time.

  1. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  2. Coronary artery plaques: Cardiac CT with model-based and adaptive-statistical iterative reconstruction technique

    Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo

    2012-01-01

Objectives: To compare the image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better than ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise ratio (CNR) using MBIR was better than with ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.
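
A minimal sketch of the contrast-to-noise ratio used to compare the reconstructions, under one common CNR definition (difference of ROI means over the background standard deviation); the ROI values below are synthetic placeholders, not study data.

```python
import numpy as np

def contrast_to_noise_ratio(roi_lesion, roi_background):
    """CNR between two regions of interest (arrays of HU values):

        CNR = |mean_lesion - mean_background| / sd_background
    """
    return (abs(roi_lesion.mean() - roi_background.mean())
            / roi_background.std(ddof=1))

rng = np.random.default_rng(0)
plaque = rng.normal(90.0, 20.0, 400)   # hypothetical plaque voxels (HU)
lumen = rng.normal(350.0, 20.0, 400)   # hypothetical contrast-filled lumen (HU)
print(contrast_to_noise_ratio(plaque, lumen))
```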

  3. A Pore Scale Flow Simulation of Reconstructed Model Based on the Micro Seepage Experiment

    Jianjun Liu

    2017-01-01

Research on microscopic seepage mechanisms and fine description of reservoir pore structure play an important role in the effective development of low and ultra-low permeability reservoirs. A typical micro pore structure model was established in two ways in this paper: the conventional model reconstruction method and the built-in graphics function method of Comsol®. A pore-scale flow simulation was conducted on the models reconstructed in these two ways, using the creeping flow interface and the Brinkman equation interface, respectively. The results showed that the simulations of the two models agreed well in the distributions of velocity, pressure, Reynolds number, and so on, verifying the feasibility of the direct reconstruction method from graphic file to geometric model, which provides a new way to diversify the numerical study of micro seepage mechanisms.

  4. Birth-death models and coalescent point processes: the shape and probability of reconstructed phylogenies.

    Lambert, Amaury; Stadler, Tanja

    2013-12-01

    Forward-in-time models of diversification (i.e., speciation and extinction) produce phylogenetic trees that grow "vertically" as time goes by. Pruning the extinct lineages out of such trees leads to natural models for reconstructed trees (i.e., phylogenies of extant species). Alternatively, reconstructed trees can be modelled by coalescent point processes (CPPs), where trees grow "horizontally" by the sequential addition of vertical edges. Each new edge starts at some random speciation time and ends at the present time; speciation times are drawn from the same distribution independently. CPPs lead to extremely fast computation of tree likelihoods and simulation of reconstructed trees. Their topology always follows the uniform distribution on ranked tree shapes (URT). We characterize which forward-in-time models lead to URT reconstructed trees and among these, which lead to CPP reconstructed trees. We show that for any "asymmetric" diversification model in which speciation rates only depend on time and extinction rates only depend on time and on a non-heritable trait (e.g., age), the reconstructed tree is CPP, even if extant species are incompletely sampled. If rates additionally depend on the number of species, the reconstructed tree is (only) URT (but not CPP). We characterize the common distribution of speciation times in the CPP description, and discuss incomplete species sampling as well as three special model cases in detail: (1) the extinction rate does not depend on a trait; (2) rates do not depend on time; (3) mass extinctions may happen additionally at certain points in the past. Copyright © 2013 Elsevier Inc. All rights reserved.
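
A minimal sketch of the horizontal CPP construction described above: node depths are drawn i.i.d. until a draw exceeds the tree age. The exponential depth distribution in the example is an assumption for illustration, since the paper characterizes this distribution per diversification model.

```python
import numpy as np

def simulate_cpp(tree_age, draw_depth, rng=None):
    """Simulate the node depths of a coalescent point process (CPP) tree.

    Vertical edges are added sequentially; each internal node depth is an
    i.i.d. draw, and the tree stops at the first draw exceeding tree_age.
    Returns the node depths; the number of extant tips is len(depths) + 1.
    """
    rng = rng or np.random.default_rng(0)
    depths = []
    while True:
        h = draw_depth(rng)
        if h > tree_age:
            return np.asarray(depths)
        depths.append(h)

# Example with an assumed exponential node-depth distribution
depths = simulate_cpp(10.0, lambda rng: rng.exponential(2.0))
print(depths.size + 1, "extant species")
```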

  5. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological Therapeutic Science, Faculty of Medicine, Yonago (Japan); Sakamoto, Makoto; Watanabe, Takashi [Tottori University, Division of Neurosurgery, Department of Brain and Neurosciences, Faculty of Medicine, Yonago (Japan); Iwata, Naoki; Kishimoto, Junichi [Tottori University, Division of Clinical Radiology Faculty of Medicine, Yonago (Japan); Kaminou, Toshio [Osaka Minami Medical Center, Department of Radiology, Osaka (Japan)

    2014-11-15

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  6. CT angiography after carotid artery stenting: assessment of the utility of adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Kuya, Keita; Shinohara, Yuki; Fujii, Shinya; Ogawa, Toshihide; Sakamoto, Makoto; Watanabe, Takashi; Iwata, Naoki; Kishimoto, Junichi; Kaminou, Toshio

    2014-01-01

    Follow-up CT angiography (CTA) is routinely performed for post-procedure management after carotid artery stenting (CAS). However, the stent lumen tends to be underestimated because of stent artifacts on CTA reconstructed with the filtered back projection (FBP) technique. We assessed the utility of new iterative reconstruction techniques, such as adaptive statistical iterative reconstruction (ASIR) and model-based iterative reconstruction (MBIR), for CTA after CAS in comparison with FBP. In a phantom study, we evaluated the differences among the three reconstruction techniques with regard to the relationship between the stent luminal diameter and the degree of underestimation of stent luminal diameter. In a clinical study, 34 patients who underwent follow-up CTA after CAS were included. We compared the stent luminal diameters among FBP, ASIR, and MBIR, and performed visual assessment of low attenuation area (LAA) in the stent lumen using a three-point scale. In the phantom study, stent luminal diameter was increasingly underestimated as luminal diameter became smaller in all CTA images. Stent luminal diameter was larger with MBIR than with the other reconstruction techniques. Similarly, in the clinical study, stent luminal diameter was larger with MBIR than with the other reconstruction techniques. LAA detectability scores of MBIR were greater than or equal to those of FBP and ASIR in all cases. MBIR improved the accuracy of assessment of stent luminal diameter and LAA detectability in the stent lumen when compared with FBP and ASIR. We conclude that MBIR is a useful reconstruction technique for CTA after CAS. (orig.)

  7. NASAL-Geom, a free upper respiratory tract 3D model reconstruction software

    Cercos-Pita, J. L.; Cal, I. R.; Duque, D.; de Moreta, G. Sanjuán

    2018-02-01

    The tool NASAL-Geom, a free upper respiratory tract 3D model reconstruction software, is here described. As a free software, researchers and professionals are welcome to obtain, analyze, improve and redistribute it, potentially increasing the rate of development, and reducing at the same time ethical conflicts regarding medical applications which cannot be analyzed. Additionally, the tool has been optimized for the specific task of reading upper respiratory tract Computerized Tomography scans, and producing 3D geometries. The reconstruction process is divided into three stages: preprocessing (including Metal Artifact Reduction, noise removal, and feature enhancement), segmentation (where the nasal cavity is identified), and 3D geometry reconstruction. The tool has been automatized (i.e. no human intervention is required) a critical feature to avoid bias in the reconstructed geometries. The applied methodology is discussed, as well as the program robustness and precision.

  8. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Chaolong Jia

    2015-01-01

Wavelets adapt automatically to the requirements of time-frequency signal analysis, can focus on any detail of a signal, and decompose a function into a representation in terms of a series of simple basis functions, which is of theoretical and practical significance. This paper therefore subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction and seeks the best-fitting forecast models for the detail signals and the approximation signal obtained through wavelet decomposition of the track irregularity time series. On this basis, a piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and a piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis show that both models can achieve higher forecasting accuracy.
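
A minimal sketch of the decomposition-reconstruction idea using a one-level Haar transform (the record does not specify the paper's wavelet, and the PG-ARMARWDR/PANN-ARMARWDR forecasting models are not reproduced): the series is split into approximation and detail signals, which the paper forecasts separately, and the inverse transform recombines them exactly.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar decomposition into approximation and detail signals."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert haar_decompose exactly."""
    x = np.empty(2 * approx.size)
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

series = (np.sin(np.linspace(0.0, 8.0, 64))
          + 0.1 * np.random.default_rng(0).normal(size=64))
a, d = haar_decompose(series)
print(np.allclose(haar_reconstruct(a, d), series))  # True: perfect reconstruction
```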

  9. Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.

    Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L

    2018-02-01

This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T1, T2, and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
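
A minimal sketch of the subspace-constrained least-squares step: with the time series modeled as x = Phi c for a temporal basis Phi, reconstruction reduces to a small linear problem. The operators and sizes below are toy assumptions, not the MRF acquisition model.

```python
import numpy as np

def subspace_reconstruction(A, y, Phi):
    """Reconstruct a time series x = Phi @ c constrained to a temporal subspace.

    A:   (m, n) measurement operator; Phi: (n, r) subspace basis, r << n.
    The constrained problem min_c ||A Phi c - y||_2 is a small least squares.
    """
    c, *_ = np.linalg.lstsq(A @ Phi, y, rcond=None)
    return Phi @ c

rng = np.random.default_rng(0)
A = rng.normal(size=(80, 200))                    # toy undersampling operator
Phi = np.linalg.qr(rng.normal(size=(200, 5)))[0]  # toy temporal basis
x_true = Phi @ rng.normal(size=5)                 # ground truth in the subspace
print(np.allclose(subspace_reconstruction(A, A @ x_true, Phi), x_true))  # True
```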

  10. Application of a Laplace transform pair model for high-energy x-ray spectral reconstruction.

    Archer, B R; Almond, P R; Wagner, L K

    1985-01-01

    A Laplace transform pair model, previously shown to accurately reconstruct x-ray spectra at diagnostic energies, has been applied to megavoltage energy beams. The inverse Laplace transforms of 2-, 6-, and 25-MV attenuation curves were evaluated to determine the energy spectra of these beams. The 2-MV data indicate that the model can reliably reconstruct spectra in the low megavoltage range. Experimental limitations in acquiring the 6-MV transmission data demonstrate the sensitivity of the model to systematic experimental error. The 25-MV data result in a physically realistic approximation of the present spectrum.
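
A minimal sketch of the forward half of the transform-pair relation: the transmission (attenuation) curve is the Laplace transform of the spectrum with respect to the attenuation coefficient, so reconstruction amounts to inverting it, which the model does analytically by fitting the measured curve to a form with a known inverse Laplace transform. The arrays below are placeholders, not measured data.

```python
import numpy as np

def transmission_curve(phi_E, mu_E, thickness_cm):
    """Transmission vs. absorber thickness for a spectrum phi(E):

        T(x) = sum_E phi(E) exp(-mu(E) x) / sum_E phi(E)

    i.e. the Laplace transform of the mu-domain spectrum evaluated at x.
    """
    w = np.asarray(phi_E, dtype=float) / np.sum(phi_E)
    mu = np.asarray(mu_E, dtype=float)
    return np.array([np.sum(w * np.exp(-mu * x)) for x in thickness_cm])

# Placeholder two-component "spectrum" and attenuation coefficients (1/cm)
print(transmission_curve([1.0, 2.0], [0.5, 0.2], [0.0, 1.0, 2.0]))
```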

  11. Immediate, but Not Delayed, Microsurgical Skull Reconstruction Exacerbates Brain Damage in Experimental Traumatic Brain Injury Model

    Lau, Tsz; Kaneko, Yuji; van Loveren, Harry; Borlongan, Cesario V.

    2012-01-01

    Moderate to severe traumatic brain injury (TBI) often results in malformations to the skull. Aesthetic surgical maneuvers may offer normalized skull structure, but inconsistent surgical closure of the skull area accompanies TBI. We examined whether wound closure by replacement of skull flap and bone wax would allow aesthetic reconstruction of the TBI-induced skull damage without causing any detrimental effects to the cortical tissue. Adult male Sprague-Dawley rats were subjected to TBI using the controlled cortical impact (CCI) injury model. Immediately after the TBI surgery, animals were randomly assigned to skull flap replacement with or without bone wax or no bone reconstruction, then were euthanized at five days post-TBI for pathological analyses. The skull reconstruction provided normalized gross bone architecture, but 2,3,5-triphenyltetrazolium chloride and hematoxylin and eosin staining results revealed larger cortical damage in these animals compared to those that underwent no surgical maneuver at all. Brain swelling accompanied TBI, especially the severe model, that could have relieved the intracranial pressure in those animals with no skull reconstruction. In contrast, the immediate skull reconstruction produced an upregulation of the edema marker aquaporin-4 staining, which likely prevented the therapeutic benefits of brain swelling and resulted in larger cortical infarcts. Interestingly, TBI animals introduced to a delay in skull reconstruction (i.e., 2 days post-TBI) showed significantly reduced edema and infarcts compared to those exposed to immediate skull reconstruction. That immediate, but not delayed, skull reconstruction may exacerbate TBI-induced cortical tissue damage warrants a careful consideration of aesthetic repair of the skull in TBI. PMID:22438975

  12. Bias in iterative reconstruction of low-statistics PET data: benefits of a resolution model

    Walker, M D; Asselin, M-C; Julyan, P J; Feldmann, M; Matthews, J C [School of Cancer and Enabling Sciences, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Talbot, P S [Mental Health and Neurodegeneration Research Group, Wolfson Molecular Imaging Centre, MAHSC, University of Manchester, Manchester M20 3LJ (United Kingdom); Jones, T, E-mail: matthew.walker@manchester.ac.uk [Academic Department of Radiation Oncology, Christie Hospital, University of Manchester, Manchester M20 4BX (United Kingdom)

    2011-02-21

Iterative image reconstruction methods such as ordered-subset expectation maximization (OSEM) are widely used in PET. Reconstructions via OSEM are however reported to be biased for low-count data. We investigated this and considered the impact for dynamic PET. Patient listmode data were acquired in [11C]DASB and [15O]H2O scans on the HRRT brain PET scanner. These data were subsampled to create many independent, low-count replicates. The data were reconstructed and the images from low-count data were compared to the high-count originals (from the same reconstruction method). This comparison enabled low-statistics bias to be calculated for the given reconstruction, as a function of the noise-equivalent counts (NEC). Two iterative reconstruction methods were tested, one with and one without an image-based resolution model (RM). Significant bias was observed when reconstructing data of low statistical quality, for both subsampled human and simulated data. For human data, this bias was substantially reduced by including a RM. For [11C]DASB the low-statistics bias in the caudate head at 1.7 M NEC (approx. 30 s) was -5.5% and -13% with and without RM, respectively. We predicted biases in the binding potential of -4% and -10%. For quantification of cerebral blood flow for the whole-brain grey- or white-matter, using [15O]H2O and the PET autoradiographic method, a low-statistics bias of <2.5% and <4% was predicted for reconstruction with and without the RM. The use of a resolution model reduces low-statistics bias and can hence be beneficial for quantitative dynamic PET.
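
A minimal sketch of the MLEM update underlying OSEM (equivalent to OSEM with a single subset), illustrating where the non-negativity constraint that drives low-count bias enters; the toy system matrix and counts are synthetic assumptions.

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Maximum-likelihood EM for emission tomography:

        x <- x / (A^T 1) * A^T (y / (A x))

    x stays non-negative by construction; at low counts this constraint
    biases estimates in low-signal regions. An image-based resolution
    model would replace A with A @ R for a blurring operator R.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x
        proj[proj == 0] = 1e-12        # guard against division by zero
        x *= (A.T @ (y / proj)) / sens
    return x

# Toy system: 6 detector bins viewing 4 voxels, low-count Poisson data
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(6, 4))
x_true = np.array([4.0, 0.5, 2.0, 1.0])
y = rng.poisson(A @ x_true)
print(mlem(A, y))
```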

  13. A Novel Hybrid Model for Drawing Trace Reconstruction from Multichannel Surface Electromyographic Activity.

    Chen, Yumiao; Yang, Zhongliang

    2017-01-01

Recently, several researchers have considered the problem of reconstructing handwriting and other meaningful arm and hand movements from surface electromyography (sEMG). Although much progress has been made, several practical limitations may still affect the clinical applicability of sEMG-based techniques. In this paper, a novel three-step hybrid model of coordinate state transition, sEMG feature extraction and gene expression programming (GEP) prediction is proposed for reconstructing the drawing traces of 12 basic one-stroke shapes from multichannel surface electromyography. Using a specially designed coordinate data acquisition system, we recorded the coordinate data of drawing traces as time series while 7-channel EMG signals were recorded. Root Mean Square (RMS), a widely used time-domain feature, was extracted with an analysis window. Preliminary reconstruction models were then established by GEP, and the original drawing traces were approximated by the constructed prediction model. Applying the three-step hybrid model, we were able to convert seven channels of EMG activity recorded from the arm muscles into smooth reconstructions of drawing traces. The hybrid model yields a mean accuracy of 74% in a within-group design (one set of prediction models for all shapes) and 86% in a between-group design (one separate set of prediction models for each shape), averaged over the reconstructed x and y coordinates. It can be concluded that the proposed three-step hybrid model is a feasible way to improve the reconstruction of drawing traces from sEMG.
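
A minimal sketch of the windowed RMS feature extraction referenced above; the window and step sizes are assumptions, and the GEP prediction stage is not reproduced.

```python
import numpy as np

def windowed_rms(emg, window=200, step=50):
    """Root Mean Square of one sEMG channel over sliding analysis windows."""
    emg = np.asarray(emg, dtype=float)
    starts = range(0, emg.size - window + 1, step)
    return np.array([np.sqrt(np.mean(emg[s:s + window] ** 2)) for s in starts])

# 7-channel recording -> one RMS feature sequence per channel
rng = np.random.default_rng(0)
emg = rng.normal(size=(7, 5000))               # placeholder signals
features = np.stack([windowed_rms(ch) for ch in emg])
print(features.shape)                          # (7, n_windows)
```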

  14. A singular K-space model for fast reconstruction of magnetic resonance images from undersampled data.

    Luo, Jianhua; Mou, Zhiying; Qin, Binjie; Li, Wanqing; Ogunbona, Philip; Robini, Marc C; Zhu, Yuemin

    2017-12-09

Reconstructing magnetic resonance images from undersampled k-space data is a challenging problem. This paper introduces a novel method of image reconstruction from undersampled k-space data based on the concept of singularizing operators and a novel singular k-space model. Exploiting the sparsity of an image in k-space, the singular k-space model (SKM) is formulated in terms of the k-space functions of a singularizing operator, which is constructed by combining basic difference operators. An algorithm is developed to reliably estimate the model parameters from undersampled k-space data. The estimated parameters are then used to recover the missing k-space data through the model, subsequently achieving high-quality reconstruction of the image using the inverse Fourier transform. Experiments on physical phantom and real brain MR images have shown that the proposed SKM method consistently outperforms the popular total variation (TV) and classical zero-filling (ZF) methods regardless of the undersampling rate, the noise level, and the image structure. For the same objective quality of the reconstructed images, the proposed method requires much less k-space data than the TV method. The SKM method is thus an effective method for fast MRI reconstruction from undersampled k-space data.

  15. UROX 2.0: an interactive tool for fitting atomic models into electron-microscopy reconstructions

    Siebert, Xavier; Navaza, Jorge

    2009-01-01

    UROX is software designed for the interactive fitting of atomic models into electron-microscopy reconstructions. The main features of the software are presented, along with a few examples. Electron microscopy of a macromolecular structure can lead to three-dimensional reconstructions with resolutions that are typically in the 30–10 Å range and sometimes even beyond 10 Å. Fitting atomic models of the individual components of the macromolecular structure (e.g. those obtained by X-ray crystallography or nuclear magnetic resonance) into an electron-microscopy map allows the interpretation of the latter at near-atomic resolution, providing insight into the interactions between the components. Graphical software is presented that was designed for the interactive fitting and refinement of atomic models into electron-microscopy reconstructions. Several characteristics enable it to be applied over a wide range of cases and resolutions. Firstly, calculations are performed in reciprocal space, which results in fast algorithms. This allows the entire reconstruction (or at least a sizeable portion of it) to be used by taking into account the symmetry of the reconstruction both in the calculations and in the graphical display. Secondly, atomic models can be placed graphically in the map while the correlation between the model-based electron density and the electron-microscopy reconstruction is computed and displayed in real time. The positions and orientations of the models are refined by a least-squares minimization. Thirdly, normal-mode calculations can be used to simulate conformational changes between the atomic model of an individual component and its corresponding density within a macromolecular complex determined by electron microscopy. These features are illustrated using three practical cases with different symmetries and resolutions. The software, together with examples and user instructions, is available free of charge at http://mem.ibs.fr/UROX/

  16. Analysis of errors in spectral reconstruction with a Laplace transform pair model

    Archer, B.R.; Bushong, S.C.

    1985-01-01

    The sensitivity of a Laplace transform pair model for spectral reconstruction to random errors in attenuation measurements of diagnostic x-ray units has been investigated. No spectral deformation or significant alteration resulted from the simulated attenuation errors. It is concluded that the range of spectral uncertainties to be expected from the application of this model is acceptable for most scientific applications. (author)

  17. Reconstructing an interacting holographic polytropic gas model in a non-flat FRW universe

    Karami, K; Abdolmaleki, A

    2010-01-01

    We study the correspondence between the interacting holographic dark energy and the polytropic gas model of dark energy in a non-flat FRW universe. This correspondence allows one to reconstruct the potential and the dynamics for the scalar field of the polytropic model, which describe accelerated expansion of the universe.

  18. Reconstructing an interacting holographic polytropic gas model in a non-flat FRW universe

    Karami, K; Abdolmaleki, A, E-mail: KKarami@uok.ac.i [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of)

    2010-05-01

    We study the correspondence between the interacting holographic dark energy and the polytropic gas model of dark energy in a non-flat FRW universe. This correspondence allows one to reconstruct the potential and the dynamics for the scalar field of the polytropic model, which describe accelerated expansion of the universe.

  19. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  20. "Growing trees backwards": Description of a stand reconstruction model (P-53)

    Jonathan D. Bakker; Andrew J. Sanchez Meador; Peter Z. Fule; David W. Huffman; Margaret M. Moore

    2008-01-01

    We describe an individual-tree model that uses contemporary measurements to "grow trees backward" and reconstruct past tree diameters and stand structure in ponderosa pine dominated stands of the Southwest. Model inputs are contemporary structural measurements of all snags, logs, stumps, and living trees, and radial growth measurements, if available. Key...
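
    The back-calculation at the heart of such a model is simple: each annual ring adds to the radius, so going backwards in time the diameter shrinks by twice the ring width per year. A minimal sketch with hypothetical inputs (the actual model also handles snags, logs and stumps, and estimates growth where increment data are missing):

```python
def reconstruct_diameters(dbh_now_cm, ring_widths_cm):
    """Back-calculate past diameters from a contemporary diameter and a
    series of annual radial increments (most recent ring first): each
    ring adds to the radius, so the diameter shrinks by twice the ring
    width per year going backwards in time."""
    diameters = [dbh_now_cm]
    for rw in ring_widths_cm:
        diameters.append(max(diameters[-1] - 2.0 * rw, 0.0))
    return diameters  # element i = diameter i years before present

# Hypothetical tree: 40 cm diameter today, five most recent ring widths (cm)
print(reconstruct_diameters(40.0, [0.25, 0.30, 0.28, 0.22, 0.35]))
```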

  1. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  2. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  3. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.
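
    Generically, MBIR poses reconstruction as the minimization of a data-fidelity term plus a prior model. The sketch below shows that structure for an assumed linearized forward operator A and a simple quadratic smoothness prior, solved by plain gradient descent; the acoustic forward model actually used in this work is far more detailed.

```python
import numpy as np

def mbir_quadratic(A, y, lam=0.1, n_iter=300):
    """Minimal MBIR-style sketch: minimize ||y - A x||^2 + lam*||D x||^2
    by gradient descent. D is a first-difference operator standing in
    for the prior model; A is an assumed linearized forward model (the
    actual acoustic forward model is far more detailed)."""
    n = A.shape[1]
    D = np.eye(n) - np.eye(n, k=1)                       # simple smoothness prior
    L = 2.0 * (np.linalg.norm(A, 2)**2 + 4.0 * lam)      # gradient Lipschitz bound
    x = np.zeros(n)
    for _ in range(n_iter):
        grad = 2.0 * (A.T @ (A @ x - y) + lam * (D.T @ (D @ x)))
        x -= grad / L
    return x

# Toy usage with a random forward operator
rng = np.random.default_rng(0)
A = rng.normal(size=(80, 50))
x_true = np.sin(np.linspace(0, 3, 50))
y = A @ x_true + 0.01 * rng.normal(size=80)
print(np.linalg.norm(mbir_quadratic(A, y) - x_true))
```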

  4. 3D Surface Reconstruction for Lower Limb Prosthetic Model using Radon Transform

    Sobani, S. S. Mohd; Mahmood, N. H.; Zakaria, N. A.; Razak, M. A. Abdul

    2018-03-01

    This paper describes an approach to realizing three-dimensional surfaces of objects with cylinder-based shapes, together with the techniques adopted and the strategy developed for non-rigid three-dimensional surface reconstruction of an object from uncalibrated two-dimensional image sequences using a multiple-view digital camera and turntable setup. The surface of an object is reconstructed based on the concept of tomography, with the aid of several digital image processing algorithms applied to the two-dimensional images captured by a digital camera in thirty-six different projections, and the three-dimensional structure of the surface is analysed. Four different objects are used as experimental models in the reconstructions, and each object is placed on a manually rotated turntable. The results show that the proposed method successfully reconstructs the three-dimensional surfaces of the objects and is practicable. The shape and size of the reconstructed three-dimensional objects are recognizable and distinguishable. The reconstructions of the objects involved in the test are supported by the analysis, where the maximum percent error obtained is approximately 1.4% for the height and 4.0%, 4.79% and 4.7% for the diameters at three specific heights of the objects.
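
    The tomographic flavor of this approach can be sketched with scikit-image's Radon tools (assumed available): project a synthetic cross-section at 36 angles, matching the paper's thirty-six projections, and invert with filtered back-projection. The phantom is an illustrative stand-in for the extracted object silhouettes.

```python
import numpy as np
from skimage.transform import radon, iradon

# Minimal tomography sketch: project a synthetic cross-section at 36
# angles (cf. the 36 camera positions) and invert with filtered
# back-projection.
image = np.zeros((128, 128))
image[32:96, 48:80] = 1.0                       # simple rectangular section
theta = np.linspace(0.0, 180.0, 36, endpoint=False)
sinogram = radon(image, theta=theta)
reconstruction = iradon(sinogram, theta=theta)
err = np.linalg.norm(reconstruction - image) / np.linalg.norm(image)
print(f"relative reconstruction error: {err:.2f}")
```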

  5. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.

    2014-04-01

    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted e.g. from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, a CAD file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  6. Radiation fields, dosimetry, biokinetics and biophysical models for cancer induction by ionising radiation 1996-1999. Dose reconstruction. Final report

    Jacob, P.; Aragno, D.; Bailiff, I.K.

    2000-01-01

    The project Dose Reconstruction was conducted within the five work packages: - EPR with teeth, - Chromosome painting (FISH) in lymphocytes, - Luminescence methods, - Modelling, and - Evaluation. (orig.)

  7. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles perform reconstruction?

    Saito, K.; Sueyoshi, T.; Marchenko, S.; Romanovsky, V.; Otto-Bliesner, B.; Walsh, J.; Bigelow, N.; Hendricks, A.; Yoshikawa, K.

    2013-01-01

    Here, global-scale frozen ground distribution from the Last Glacial Maximum (LGM) has been reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 kya) an...

  8. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles reconstruct?

    K. Saito; T. Sueyoshi; S. Marchenko; V. Romanovsky; B. Otto-Bliesner; J. Walsh; N. Bigelow; A. Hendricks; K. Yoshikawa

    2013-01-01

    Global-scale frozen ground distribution during the Last Glacial Maximum (LGM) was reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project Phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present-day (pre-industrial; 0 k) and the LGM (...

  9. Three Dimensional Dynamic Model Based Wind Field Reconstruction from Lidar Data

    Raach, Steffen; Schlipf, David; Haizmann, Florian; Cheng, Po Wen

    2014-01-01

    Using the inflowing horizontal and vertical wind shears for an individual pitch controller is a promising method if blade bending measurements are not available. Due to the limited information provided by a lidar system, the reconstruction of shears in real time is a challenging task, especially for the horizontal shear in the presence of changing wind direction. The internal model principle has been shown to be a promising approach to estimate the shears and directions in 10-minute averages with real measurement data. The static model-based wind vector field reconstruction is extended in this work by introducing a dynamic reconstruction model based on Taylor's frozen turbulence hypothesis. The presented method provides time series over several seconds of the wind speed, shears and direction, which can be directly used in advanced optimal preview control. Therefore, this work is an important step towards the application of preview individual blade pitch control under realistic wind conditions. The method is tested using a turbulent wind field and a detailed lidar simulator. For the simulation, the turbulent wind field structure flows towards the lidar system and is continuously misaligned with respect to the horizontal axis of the wind turbine. Taylor's frozen turbulence hypothesis is taken into account to model the wind evolution. For the reconstruction, the structure is discretized into several stages, where each stage is reduced to an effective wind speed superposed with a linear horizontal and vertical wind shear. Previous lidar measurements are shifted in time, again using Taylor's hypothesis. The wind field reconstruction problem is then formulated as a nonlinear optimization problem, which minimizes the residual between the assumed wind model and the lidar measurements to obtain the misalignment angle and, for each stage, the effective wind speed and the wind shears. This method shows good results in reconstructing the wind characteristics of a three
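
    The linear core of the per-stage reconstruction can be sketched as a least-squares fit of an effective wind speed and two linear shears to line-of-sight measurements. The beam geometry and the assumption that the misalignment angle is known (in the actual method it is estimated, which makes the problem nonlinear) are simplifications.

```python
import numpy as np

def fit_stage(focus_points, v_los):
    """Fit effective wind speed u0 and linear shears (dh, dv) of one
    stage to line-of-sight lidar measurements, assuming the wind blows
    along x: v_los_i = n_x_i * (u0 + dh*y_i + dv*z_i). This is the
    linear core only; the full method also estimates the misalignment
    angle, making the optimization nonlinear."""
    f = np.asarray(focus_points, dtype=float)          # rows: (x, y, z)
    n_x = f[:, 0] / np.linalg.norm(f, axis=1)          # x-component of unit beam vector
    M = np.column_stack([n_x, n_x * f[:, 1], n_x * f[:, 2]])
    coef, *_ = np.linalg.lstsq(M, v_los, rcond=None)
    return coef                                        # (u0, dh, dv)

# Toy usage: 5 beams, known wind field, exact recovery expected
pts = np.array([[80, 0, 0], [80, 20, 0], [80, -20, 0], [80, 0, 15], [80, 0, -15]])
u0, dh, dv = 12.0, 0.05, 0.08
v = (pts[:, 0] / np.linalg.norm(pts, axis=1)) * (u0 + dh * pts[:, 1] + dv * pts[:, 2])
print(fit_stage(pts, v))   # ~ [12.0, 0.05, 0.08]
```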

  10. Assessing Women's Preferences and Preference Modeling for Breast Reconstruction Decision-Making.

    Sun, Clement S; Cantor, Scott B; Reece, Gregory P; Crosby, Melissa A; Fingeret, Michelle C; Markey, Mia K

    2014-03-01

    Women considering breast reconstruction must make challenging trade-offs amongst issues that often conflict. It may be useful to quantify possible outcomes using a single summary measure to aid a breast cancer patient in choosing a form of breast reconstruction. In this study, we used multiattribute utility theory to combine multiple objectives into a summary value using nine different preference models. We elicited the preferences of 36 women, aged 32 or older with no history of breast cancer, for the patient-reported outcome measures of breast satisfaction, psychosocial well-being, chest well-being, abdominal well-being, and sexual well-being as measured by the BREAST-Q, in addition to time lost to reconstruction and out-of-pocket cost. Participants ranked hypothetical breast reconstruction outcomes. We examined each multiattribute utility preference model and assessed how often each model agreed with participants' rankings. The median amount of time required to assess preferences was 34 minutes. Agreement of the nine preference models with the participants ranged from 75.9% to 78.9%. None of the preference models performed significantly worse than the best-performing risk-averse multiplicative model. We hypothesize an average theoretical agreement of 94.6% for this model if participant error is included. There was a statistically significant positive correlation between model agreement and a more unequal distribution of weight across the seven attributes. We recommend the risk-averse multiplicative model for modeling the preferences of patients considering different forms of breast reconstruction because it agreed most often with the participants in this study.
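
    For concreteness, the multiplicative multiattribute utility form (in the Keeney-Raiffa sense) that a risk-averse multiplicative model builds on can be sketched as below. The weights and single-attribute utilities are hypothetical, and the scaling constant K is obtained from the standard consistency condition; this is a generic sketch, not the study's exact parameterization.

```python
import numpy as np
from scipy.optimize import brentq

def multiplicative_utility(u, k):
    """Keeney-Raiffa multiplicative multiattribute utility:
    1 + K*U = prod_i (1 + K * k_i * u_i), with K solved from the
    consistency condition 1 + K = prod_i (1 + K * k_i).
    u: single-attribute utilities in [0, 1]; k: attribute weights."""
    u, k = np.asarray(u, float), np.asarray(k, float)
    s = k.sum()
    if abs(s - 1.0) < 1e-9:                  # degenerates to the additive form
        return float(np.dot(k, u))
    g = lambda K: np.prod(1.0 + K * k) - (1.0 + K)
    K = brentq(g, -1.0 + 1e-9, -1e-9) if s > 1 else brentq(g, 1e-9, 1e9)
    return float((np.prod(1.0 + K * k * u) - 1.0) / K)

# Hypothetical example with seven attributes (weights sum to < 1, so K > 0)
weights = [0.30, 0.20, 0.12, 0.10, 0.08, 0.06, 0.04]
utilities = [0.9, 0.7, 0.8, 0.6, 0.5, 0.9, 1.0]
print(multiplicative_utility(utilities, weights))
```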

  11. A method for climate and vegetation reconstruction through the inversion of a dynamic vegetation model

    Garreta, Vincent; Guiot, Joel; Hely, Christelle [CEREGE, UMR 6635, CNRS, Universite Aix-Marseille, Europole de l'Arbois, Aix-en-Provence (France); Miller, Paul A.; Sykes, Martin T. [Lund University, Department of Physical Geography and Ecosystems Analysis, Geobiosphere Science Centre, Lund (Sweden); Brewer, Simon [Universite de Liege, Institut d'Astrophysique et de Geophysique, Liege (Belgium); Litt, Thomas [University of Bonn, Paleontological Institute, Bonn (Germany)

    2010-08-15

    Climate reconstructions from data sensitive to past climates provide estimates of what these climates were like. Comparing these reconstructions with simulations from climate models makes it possible to validate the models used for future climate prediction. It has been shown that, for fossil pollen data, obtaining estimates by inverting a vegetation model allows the inclusion of past changes in carbon dioxide values. As a new generation of dynamic vegetation models is available, we have developed an inversion method for one of them, LPJ-GUESS. When this novel method is used with high-resolution sediment records, it allows us to bypass the classic assumptions of (1) climate and pollen independence between samples and (2) equilibrium between the vegetation, represented as pollen, and climate. Our dynamic inversion method is based on a statistical model describing the links among climate, simulated vegetation and pollen samples. The inversion is realised by means of a particle filter algorithm. We perform a validation on 30 modern European sites and then apply the method to the sediment core of Meerfelder Maar (Germany), which covers the Holocene at a temporal resolution of approximately one sample per 30 years. We demonstrate that the reconstructed temperatures are well constrained. The reconstructed precipitation is less well constrained, due to the dimension considered (one precipitation value per season) and the low sensitivity of LPJ-GUESS to precipitation changes. (orig.)
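
    A generic bootstrap particle filter of the kind used to realize such an inversion can be sketched as follows; the random-walk proposal and the Gaussian likelihood are illustrative stand-ins for the vegetation-model simulator and the pollen likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

def particle_filter(observations, n_particles=500, sigma_obs=0.5):
    """Generic bootstrap particle filter: random-walk proposal for a
    scalar climate state, Gaussian likelihood linking state to the
    observed proxy. Both choices are illustrative stand-ins for the
    dynamic-vegetation simulator and the pollen likelihood."""
    particles = rng.normal(0.0, 2.0, n_particles)       # initial ensemble
    estimates = []
    for y in observations:
        particles += rng.normal(0.0, 0.3, n_particles)  # predict (random walk)
        w = np.exp(-0.5 * ((y - particles) / sigma_obs) ** 2)
        w /= w.sum()                                    # weight by likelihood
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]                      # resample
        estimates.append(particles.mean())
    return np.array(estimates)

# Toy usage: noisy observations of a slow warming trend
truth = np.linspace(0.0, 2.0, 40)
obs = truth + rng.normal(0.0, 0.5, truth.size)
print(particle_filter(obs)[-5:])
```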

  12. BUMPER: the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction

    Holden, Phil; Birks, John; Brooks, Steve; Bush, Mark; Hwang, Grace; Matthews-Bird, Frazer; Valencia, Bryan; van Woesik, Robert

    2017-04-01

    We describe the Bayesian User-friendly Model for Palaeo-Environmental Reconstruction (BUMPER), a Bayesian transfer function for inferring past climate and other environmental variables from microfossil assemblages. The principal motivation for a Bayesian approach is that the palaeoenvironment is treated probabilistically, and can be updated as additional data become available. Bayesian approaches therefore provide a reconstruction-specific quantification of the uncertainty in the data and in the model parameters. BUMPER is fully self-calibrating, straightforward to apply, and computationally fast, requiring 2 seconds to build a 100-taxon model from a 100-site training-set on a standard personal computer. We apply the model's probabilistic framework to generate thousands of artificial training-sets under ideal assumptions. We then use these to demonstrate both the general applicability of the model and the sensitivity of reconstructions to the characteristics of the training-set, considering assemblage richness, taxon tolerances, and the number of training sites. We demonstrate general applicability to real data, considering three different organism types (chironomids, diatoms, pollen) and different reconstructed variables. In all of these applications an identically configured model is used, the only change being the input files that provide the training-set environment and taxon-count data.

  13. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D. [UCSF Benioff Children's Hospital, Department of Radiology and Biomedical Imaging, San Francisco, CA (United States)

    2014-07-15

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model

  14. Reconstruction and validation of RefRec: a global model for the yeast molecular interaction network.

    Tommi Aho

    2010-05-01

    Molecular interaction networks establish all cell biological processes. The networks are under intensive research that is facilitated by new high-throughput measurement techniques for the detection, quantification, and characterization of molecules and their physical interactions. For the common model organism yeast Saccharomyces cerevisiae, public databases store a significant part of the accumulated information and, on the way to better understanding of the cellular processes, there is a need to integrate this information into a consistent reconstruction of the molecular interaction network. This work presents and validates RefRec, the most comprehensive molecular interaction network reconstruction currently available for yeast. The reconstruction integrates protein synthesis pathways, a metabolic network, and a protein-protein interaction network from major biological databases. The core of the reconstruction is based on a reference object approach in which genes, transcripts, and proteins are identified using their primary sequences. This enables their unambiguous identification and non-redundant integration. The obtained total number of different molecular species and their connecting interactions is approximately 67,000. In order to demonstrate the capacity of RefRec for functional predictions, it was used for simulating the gene knockout damage propagation in the molecular interaction network in approximately 590,000 experimentally validated mutant strains. Based on the simulation results, a statistical classifier was subsequently able to correctly predict the viability of most of the strains. The results also showed that the usage of different types of molecular species in the reconstruction is important for accurate phenotype prediction. In general, the findings demonstrate the benefits of global reconstructions of molecular interaction networks. With all the molecular species and their physical interactions explicitly modeled, our

  15. Computed tomography depiction of small pediatric vessels with model-based iterative reconstruction

    Koc, Gonca; Courtier, Jesse L.; Phelps, Andrew; Marcovici, Peter A.; MacKenzie, John D.

    2014-01-01

    Computed tomography (CT) is extremely important in characterizing blood vessel anatomy and vascular lesions in children. Recent advances in CT reconstruction technology hold promise for improved image quality and also reductions in radiation dose. This report evaluates potential improvements in image quality for the depiction of small pediatric vessels with model-based iterative reconstruction (Veo™), a technique developed to improve image quality and reduce noise. To evaluate Veo™ as an improved method when compared to adaptive statistical iterative reconstruction (ASIR™) for the depiction of small vessels on pediatric CT. Seventeen patients (mean age: 3.4 years, range: 2 days to 10.0 years; 6 girls, 11 boys) underwent contrast-enhanced CT examinations of the chest and abdomen in this HIPAA-compliant and institutional review board-approved study. Raw data were reconstructed into separate image datasets using Veo™ and ASIR™ algorithms (GE Medical Systems, Milwaukee, WI). Four blinded radiologists subjectively evaluated image quality. The pulmonary, hepatic, splenic and renal arteries were evaluated for the length and number of branches depicted. Datasets were compared with parametric and non-parametric statistical tests. Readers stated a preference for Veo™ over ASIR™ images when subjectively evaluating image quality criteria for vessel definition, image noise and resolution of small anatomical structures. The mean image noise in the aorta and fat was significantly less for Veo™ vs. ASIR™ reconstructed images. Quantitative measurements of mean vessel lengths and number of branch vessels delineated were significantly different for Veo™ and ASIR™ images. Veo™ consistently showed more of the vessel anatomy: longer vessel length and more branching vessels. When compared to the more established adaptive statistical iterative reconstruction algorithm, model

  16. Hydroelastic slamming of flexible wedges: Modeling and experiments from water entry to exit

    Shams, Adel; Zhao, Sam; Porfiri, Maurizio

    2017-03-01

    Fluid-structure interactions during hull slamming are of great interest for the design of aircraft and marine vessels. The main objective of this paper is to establish a semi-analytical model to investigate the entire hydroelastic slamming of a wedge, from the entry to the exit phase. The structural dynamics is described through Euler-Bernoulli beam theory and the hydrodynamic loading is estimated using potential flow theory. A Galerkin method is used to obtain a reduced order modal model in closed-form, and a Newmark-type integration scheme is utilized to find an approximate solution. To benchmark the proposed semi-analytical solution, we experimentally investigate fluid-structure interactions through particle image velocimetry (PIV). PIV is used to estimate the velocity field, and the pressure is reconstructed by solving the incompressible Navier-Stokes equations from PIV data. Experimental results confirm that the flow physics and free-surface elevation during water exit are different from water entry. While water entry is characterized by positive values of the pressure field, with respect to the atmospheric pressure, the pressure field during water exit may be less than atmospheric. Experimental observations indicate that the location where the maximum pressure in the fluid is attained moves from the pile-up region to the keel, as the wedge reverses its motion from the entry to the exit stage. Comparing experimental results with semi-analytical findings, we observe that the model is successful in predicting the free-surface elevation and the overall distribution of the hydrodynamic loading on the wedge. These integrated experimental and theoretical analyses of water exit problems are expected to aid in the design of lightweight structures, which experience repeated slamming events during their operation.
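
    The time-stepping ingredient can be sketched for a single modal equation m*q'' + c*q' + k*q = f(t), the kind of reduced-order system a Galerkin projection yields, using the average-acceleration Newmark scheme; all parameter values below are illustrative.

```python
import numpy as np

def newmark_sdof(m, c, k, f, dt, q0=0.0, v0=0.0, beta=0.25, gamma=0.5):
    """Newmark-beta time integration of one modal equation
    m*q'' + c*q' + k*q = f(t) (average-acceleration scheme)."""
    n = len(f)
    q, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
    q[0], v[0] = q0, v0
    a[0] = (f[0] - c * v0 - k * q0) / m
    k_eff = k + gamma * c / (beta * dt) + m / (beta * dt**2)
    for i in range(n - 1):
        rhs = (f[i + 1]
               + m * (q[i] / (beta * dt**2) + v[i] / (beta * dt) + (0.5 / beta - 1) * a[i])
               + c * (gamma * q[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                      + dt * (0.5 * gamma / beta - 1) * a[i]))
        q[i + 1] = rhs / k_eff
        a[i + 1] = ((q[i + 1] - q[i]) / (beta * dt**2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return q

# Toy usage: impulsive load on a lightly damped mode
t = np.linspace(0, 1, 1001)
force = np.where(t < 0.01, 100.0, 0.0)
print(newmark_sdof(m=1.0, c=0.5, k=400.0, f=force, dt=t[1] - t[0])[:5])
```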

  17. A priori motion models for four-dimensional reconstruction in gated cardiac SPECT

    Lalush, D.S.; Tsui, B.M.W.; Cui, Lin

    1996-01-01

    We investigate the benefit of incorporating a priori assumptions about cardiac motion in a fully four-dimensional (4D) reconstruction algorithm for gated cardiac SPECT. Previous work has shown that non-motion-specific 4D Gibbs priors enforcing smoothing in time and space can control noise while preserving resolution. In this paper, we evaluate methods for incorporating known heart motion in the Gibbs prior model. The new model is derived by assigning motion vectors to each 4D voxel, defining the movement of that volume of activity into the neighboring time frames. Weights for the Gibbs cliques are computed based on these "most likely" motion vectors. To evaluate, we employ the mathematical cardiac-torso (MCAT) phantom with a new dynamic heart model that simulates the beating and twisting motion of the heart. Sixteen realistically-simulated gated datasets were generated, with noise simulated to emulate a real Tl-201 gated SPECT study. Reconstructions were performed using several different reconstruction algorithms, all modeling nonuniform attenuation and three-dimensional detector response. These include ML-EM with 4D filtering, 4D MAP-EM without prior motion assumption, and 4D MAP-EM with prior motion assumptions. The prior motion assumptions included both the correct motion model and incorrect models. Results show that reconstructions using the 4D prior model can smooth noise and preserve time-domain resolution more effectively than 4D linear filters. We conclude that modeling of motion in 4D reconstruction algorithms can be a powerful tool for smoothing noise and preserving temporal resolution in gated cardiac studies

  18. AUTOMATED RECONSTRUCTION OF WALLS FROM AIRBORNE LIDAR DATA FOR COMPLETE 3D BUILDING MODELLING

    Y. He

    2012-07-01

    Automated 3D building model generation continues to attract research interest in photogrammetry and computer vision. Airborne Light Detection and Ranging (LIDAR) data with increasing point density and accuracy has been recognized as a valuable source for automated 3D building reconstruction. While considerable achievements have been made in roof extraction, limited research has been carried out in modelling and reconstruction of walls, which constitute important components of a full building model. Low point density and irregular point distribution of LIDAR observations on vertical walls render this task complex. This paper develops a novel approach for wall reconstruction from airborne LIDAR data. The developed method commences with point cloud segmentation using a region growing approach. Seed points for planar segments are selected through principal component analysis, and points in the neighbourhood are collected and examined to form planar segments. Afterwards, segment-based classification is performed to identify roofs, walls and planar ground surfaces. For walls with sparse LIDAR observations, a search is conducted in the neighbourhood of each individual roof segment to collect wall points, and the walls are then reconstructed using geometrical and topological constraints. Finally, walls which were not illuminated by the LIDAR sensor are determined via both reconstructed roof data and neighbouring walls. This leads to the generation of topologically consistent and geometrically accurate and complete 3D building models. Experiments have been conducted at two test sites in the Netherlands and Australia to evaluate the performance of the proposed method. Results show that planar segments can be reliably extracted at the two reported test sites, which have different point densities, and that the building walls can be correctly reconstructed if they are illuminated by the LIDAR sensor.

  19. Modeling economic costs of disasters and recovery involving positive effects of reconstruction: analysis using a dynamic CGE model

    Xie, W.; Li, N.; Wu, J.-D.; Hao, X.-L.

    2013-11-01

    Disaster damages have negative effects on the economy, whereas reconstruction investments have positive effects. The aim of this study is to model the economic costs of disasters and recovery, taking into account the positive effects of reconstruction activities. A computable general equilibrium (CGE) model is a promising approach because it can incorporate these two kinds of shocks into a unified framework and avoid the double-counting problem. In order to factor both shocks into the CGE model, the direct loss is set as the amount of capital stock reduced on the supply side of the economy; a portion of investments restores the capital stock in each period; an investment-driven dynamic model is formulated from the available reconstruction data, and the rest of a given country's saving is set as an endogenous variable. The 2008 Wenchuan Earthquake is selected as a case study to illustrate the model, and three scenarios are constructed: S0 (no disaster occurs), S1 (disaster occurs with reconstruction investment) and S2 (disaster occurs without reconstruction investment). S0 is taken as business as usual, and the differences between S1 and S0 and between S2 and S0 can be interpreted as economic losses including and excluding reconstruction, respectively. The study showed that output from S1 is closer to real data than that from S2. S2 overestimates the economic loss, yielding roughly twice the loss estimated under S1. The gap in economic aggregate between S1 and S0 is reduced to 3% in 2011, a level that would take another four years to reach under S2.

  20. Confidence of model based shape reconstruction from sparse data

    Baka, N.; de Bruijne, Marleen; Reiber, J. H. C.

    2010-01-01

    Statistical shape models (SSM) are commonly applied for plausible interpolation of missing data in medical imaging. However, when fitting a shape model to sparse information, many solutions may fit the available data. In this paper we derive a constrained SSM to fit noisy sparse input landmarks...

  1. Novel Low Cost 3D Surface Model Reconstruction System for Plant Phenotyping

    Suxing Liu

    2017-09-01

    Accurate high-resolution three-dimensional (3D) models are essential for a non-invasive analysis of phenotypic characteristics of plants. Previous limitations in 3D computer vision algorithms have led to a reliance on volumetric methods or expensive hardware to record plant structure. We present an image-based 3D plant reconstruction system that requires only a single camera and a rotation stand. Our method is based on the structure-from-motion method with a SIFT image feature descriptor. In order to improve the quality of the 3D models, we segmented the plant objects based on the PlantCV platform. We also deduced the optimal number of images needed for reconstructing a high-quality model. Experiments showed that an accurate 3D model of the plant could be successfully reconstructed by our approach. This 3D surface model reconstruction system provides a simple and accurate computational platform for non-destructive plant phenotyping.
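
    A minimal sketch of the feature-matching step that structure from motion builds on, assuming opencv-python >= 4.4 (where SIFT ships in the main package); the two synthetic "views" below merely stand in for consecutive turntable photographs.

```python
import cv2
import numpy as np

# SIFT features matched between two consecutive turntable views with
# Lowe's ratio test; synthetic images stand in for real photographs.
rng = np.random.default_rng(0)
base = (rng.random((240, 320)) * 255).astype(np.uint8)
view0 = cv2.GaussianBlur(base, (5, 5), 1.0)
M = np.float32([[1, 0, 6], [0, 1, 0]])        # small shift ~ one turntable step
view1 = cv2.warpAffine(view0, M, (320, 240))

sift = cv2.SIFT_create()
kp0, des0 = sift.detectAndCompute(view0, None)
kp1, des1 = sift.detectAndCompute(view1, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des0, des1, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} putative correspondences for pose estimation")
```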

  2. Human reconstructed skin xenografts on mice to model skin physiology.

    Salgado, Giorgiana; Ng, Yi Zhen; Koh, Li Fang; Goh, Christabelle S M; Common, John E

    Xenograft models to study skin physiology have been popular for scientific use since the 1970s, with various developments and improvements to the techniques over the decades. Xenograft models are particularly useful and sought after due to the lack of clinically relevant animal models in predicting drug effectiveness in humans. Such predictions could in turn boost the process of drug discovery, since novel drug compounds have an estimated 8% chance of FDA approval despite years of rigorous preclinical testing and evaluation, albeit mostly in non-human models. In the case of skin research, the mouse persists as the most popular animal model of choice, despite its well-known anatomical differences with human skin. Differences in skin biology are especially evident when trying to dissect more complex skin conditions, such as psoriasis and eczema, where interactions between the immune system, epidermis and the environment likely occur. While the use of animal models is still considered the gold standard for systemic toxicity studies under controlled environments, there are now alternative models that have been approved for certain applications. To overcome the biological limitations of the mouse model, research efforts have also focused on "humanizing" the mice model to better recapitulate human skin physiology. In this review, we outline the different approaches undertaken thus far to study skin biology using human tissue xenografts in mice and the technical challenges involved. We also describe more recent developments to generate humanized multi-tissue compartment mice that carry both a functioning human immune system and skin xenografts. Such composite animal models provide promising opportunities to study drugs, disease and differentiation with greater clinical relevance. Copyright © 2017 International Society of Differentiation. Published by Elsevier B.V. All rights reserved.

  3. Verifying three-dimensional skull model reconstruction using cranial index of symmetry.

    Woon-Man Kung

    BACKGROUND: Difficulty exists in scalp adaptation for cranioplasty with a customized computer-assisted design/manufacturing (CAD/CAM) implant in situations of excessive wound tension and sub-cranioplasty dead space. To solve this clinical problem, the CAD/CAM technique should include algorithms to reconstruct a depressed contour to cover the skull defect. Satisfactory CAM-derived alloplastic implants are based on highly accurate three-dimensional (3-D) CAD modeling. Thus, it is quite important to establish a symmetrically regular CAD/CAM reconstruction prior to depressing the contour. The purpose of this study is to verify the aesthetic outcomes of CAD models with regular contours using the cranial index of symmetry (CIS). MATERIALS AND METHODS: From January 2011 to June 2012, decompressive craniectomy (DC) was performed for 15 consecutive patients in our institute. 3-D CAD models of the skull defects were reconstructed using commercial software. These models were checked for symmetry using CIS scores. RESULTS: CIS scores of the CAD reconstructions were 99.24±0.004% (range 98.47-99.84). CIS scores of these CAD models were statistically significantly greater than 95%, statistically identical to 99.5%, but lower than 99.6% (p<0.001, p = 0.064, p = 0.021, respectively; Wilcoxon matched-pairs signed-rank test). These data evidence the highly accurate symmetry of these CAD models with regular contours. CONCLUSIONS: CIS calculation is beneficial for assessing the aesthetic outcomes of CAD-reconstructed skulls in terms of cranial symmetry. This enables further accurate CAD models and CAM cranial implants with depressed contours, which are essential in patients with difficult scalp adaptation.

  4. Assessment of the impact of modeling axial compression on PET image reconstruction.

    Belzunce, Martin A; Reader, Andrew J

    2017-10-01

    To comprehensively evaluate both the acceleration and image-quality impacts of axial compression and its degree of modeling in fully 3D PET image reconstruction. Despite being used since the very dawn of 3D PET reconstruction, there are still no extensive studies on the impact of axial compression and its degree of modeling during reconstruction on the end-point reconstructed image quality. In this work, an evaluation of the impact of axial compression on the image quality is performed by extensively simulating data with span values from 1 to 121. In addition, two methods for modeling the axial compression in the reconstruction were evaluated. The first method models the axial compression in the system matrix, while the second method uses an unmatched projector/backprojector, where the axial compression is modeled only in the forward projector. The different system matrices were analyzed by computing their singular values and the point response functions for small subregions of the FOV. The two methods were evaluated with simulated and real data for the Biograph mMR scanner. For the simulated data, the axial compression with span values lower than 7 did not show a decrease in the contrast of the reconstructed images. For span 11, the standard sinogram size of the mMR scanner, losses of contrast in the range of 5-10 percentage points were observed when measured for a hot lesion. For higher span values, the spatial resolution was degraded considerably. However, impressively, for all span values of 21 and lower, modeling the axial compression in the system matrix compensated for the spatial resolution degradation and obtained similar contrast values as the span 1 reconstructions. Such approaches have the same processing times as span 1 reconstructions, but they permit significant reduction in storage requirements for the fully 3D sinograms. For higher span values, the system has a large condition number and it is therefore difficult to recover accurately the higher

  5. Model simulations and proxy-based reconstructions for the European region in the past millennium (Invited)

    Zorita, E.

    2009-12-01

    One of the objectives when comparing simulations of past climates to proxy-based climate reconstructions is to assess the skill of climate models in simulating climate change. This comparison may be accomplished at large spatial scales, for instance for the evolution of simulated and reconstructed Northern Hemisphere annual temperature, or at regional or point scales. In both approaches a 'fair' comparison has to take into account different aspects that affect the inevitable uncertainties and biases in the simulations and in the reconstructions. These efforts face a trade-off: climate models are believed to be more skillful at large hemispheric scales, but climate reconstructions at these scales are burdened by the spatial distribution of available proxies and by methodological issues surrounding the statistical method used to translate the proxy information into large-scale spatial averages. Furthermore, the internal climatic noise at large hemispheric scales is low, so that the sampling uncertainty tends to be low as well. On the other hand, the skill of climate models at regional scales is limited by their coarse spatial resolution, which hinders a faithful representation of aspects important for the regional climate. At small spatial scales, the reconstruction of past climate probably faces fewer methodological problems if information from different proxies is available. The internal climatic variability at regional scales is, however, high. In this contribution, some examples of the different issues faced when comparing simulations and reconstructions at small spatial scales in the past millennium are discussed. These examples comprise reconstructions from dendrochronological data and from historical documentary data in Europe and climate simulations with global and regional models. These examples indicate that centennial climate variations can offer a reasonable target to assess the skill of global climate models and of proxy-based reconstructions, even at small spatial scales

  6. A Comparison of Manual Neuronal Reconstruction from Biocytin Histology or 2-Photon Imaging: Morphometry and Computer Modeling

    Arne Vladimir Blackman

    2014-07-01

    Accurate 3D reconstruction of neurons is vital for applications linking anatomy and physiology. Reconstructions are typically created using Neurolucida after biocytin histology (BH). An alternative, inexpensive and fast, method is to use freeware such as Neuromantic to reconstruct from fluorescence imaging (FI) stacks acquired using 2-photon laser-scanning microscopy during physiological recording. We compare these two methods with respect to morphometry, cell classification, and multicompartmental modeling in the NEURON simulation environment. Quantitative morphological analysis of the same cells reconstructed using both methods reveals that, whilst biocytin reconstructions facilitate tracing of more distal collaterals, both methods are comparable in representing the overall morphology: automated clustering of reconstructions from both methods successfully separates neocortical basket cells from pyramidal cells, but not BH from FI reconstructions. BH reconstructions suffer more from tissue shrinkage and compression artifacts than FI reconstructions do. FI reconstructions, on the other hand, consistently have larger process diameters. Consequently, significant differences in NEURON modeling of excitatory post-synaptic potential (EPSP) forward propagation are seen between the two methods, with FI reconstructions exhibiting smaller depolarizations. Simulated action potential backpropagation (bAP), however, is indistinguishable between reconstructions obtained with the two methods. In our hands, BH reconstructions are necessary for NEURON modeling and detailed morphological tracing, and thus remain state of the art, although they are more labor intensive, more expensive, and suffer from a higher failure rate. However, for a subset of anatomical applications such as cell type identification, FI reconstructions are superior, because of indistinguishable classification performance with greater ease of use, an essentially 100% success rate, and lower cost.

  7. A Hierarchical Building Segmentation in Digital Surface Models for 3D Reconstruction

    Yiming Yan

    2017-01-01

    In this study, a hierarchical method for segmenting buildings in a digital surface model (DSM), used in a novel framework for 3D reconstruction, is proposed. Most 3D reconstructions of buildings are model-based. However, these methods rely heavily on the completeness of offline-constructed building models, which is not easily guaranteed since modern cities contain buildings of many types. Therefore, a model-free framework using a high-precision DSM and texture images of buildings is introduced. There are two key problems with this framework. The first is how to accurately extract the buildings from the DSM. Most segmentation methods are limited either by terrain factors or by the difficult choice of parameter settings. A level-set method is employed to roughly find the building regions in the DSM, and a recently proposed 'occlusions of random textures' model is then used to refine the local segmentation of the buildings. The second problem is how to generate the building facades. Drawing on the corresponding texture images, we propose a roof-contour-guided interpolation of building facades. The 3D reconstruction results achieved with airborne-like images and with satellite images are compared. Experiments show that the segmentation method performs well, that 3D reconstruction is easily carried out within our framework, and that better visualization results are obtained with airborne-like images, which can further be replaced by UAV images.

  8. A Convex Reconstruction Model for X-ray Tomographic Imaging with Uncertain Flat-fields

    Aggrawal, Hari Om; Andersen, Martin Skovgaard; Rose, Sean

    2018-01-01

    has a negligible effect on the reconstruction quality. However, in time- or dose-limited applications such as dynamic CT, this uncertainty may cause severe and systematic artifacts known as ring artifacts. By carefully modeling the measurement process and by taking uncertainties into account, we...

  9. Reconstructing Holocene climate using a climate model: Model strategy and preliminary results

    Haberkorn, K.; Blender, R.; Lunkeit, F.; Fraedrich, K.

    2009-04-01

    An Earth system model of intermediate complexity (Planet Simulator; PlaSim) is used to reconstruct Holocene climate based on proxy data. The Planet Simulator is a user-friendly general circulation model (GCM) suitable for palaeoclimate research. Its easy handling and modular structure allow for fast and problem-dependent simulations. The spectral model is based on the moist primitive equations conserving momentum, mass, energy and moisture. Besides the atmospheric part, a mixed-layer ocean with sea ice and a land surface with biosphere are included. The present-day climate of PlaSim, based on an AMIP II control run (T21/10L resolution), shows reasonable agreement with ERA-40 reanalysis data. Combining PlaSim with a socio-technological model (GLUES; DFG priority project INTERDYNAMIK) provides improved knowledge of the shift from hunting-gathering to agropastoral subsistence societies. This is achieved by a data assimilation approach, incorporating proxy time series into PlaSim to initialize palaeoclimate simulations of the Holocene. For this, the following strategy is applied: the sensitivities of the terrestrial PlaSim climate are determined with respect to sea surface temperature (SST) anomalies. Here, the focus is the impact of regionally varying SST, both in the tropics and in the Northern Hemisphere mid-latitudes. The inverse of these sensitivities is used to determine the SST conditions necessary for the nudging of land and coastal proxy climates. Preliminary results indicate the potential, the uncertainty and the limitations of the method.

  10. Reconstruction of the solid transport of the river Tiber by a stochastic model

    Grimaldi, S.; Magnaldi, S.; Margaritora, G.

    1999-01-01

    The chronological series of cumulative suspended solid transport observed at the Ripetta station on the river Tiber (Rome, Italy) is reconstructed on the basis of its correlation with the chronological series of liquid discharge, using a TFN (Transfer Function Noise) stochastic model with SARIMA noise. The results are compared with similar reconstructions based on linear correlation that can be found in the literature. Finally, the decrease in flood intensity and frequency observed after 1950 at the Ripetta station is shown to be a non-negligible factor in the decrease of solid transport in the river Tiber.

  11. POD Model Reconstruction for Gray-Box Fault Detection

    Park, Han; Zak, Michail

    2007-01-01

    Proper orthogonal decomposition (POD) is the mathematical basis of a method of constructing low-order mathematical models for the "gray-box" fault-detection algorithm that is a component of a diagnostic system known as beacon-based exception analysis for multi-missions (BEAM). POD has been successfully applied in reducing computational complexity by generating simple models that can be used for control and simulation for complex systems such as fluid flows. In the present application to BEAM, POD brings the same benefits to automated diagnosis. BEAM is a method of real-time or offline, automated diagnosis of a complex dynamic system. The gray-box approach makes it possible to utilize incomplete or approximate knowledge of the dynamics of the system that one seeks to diagnose. In the gray-box approach, a deterministic model of the system is used to filter a time series of system sensor data to remove the deterministic components of the time series from further examination. What is left after the filtering operation is a time series of residual quantities that represent the unknown (or at least unmodeled) aspects of the behavior of the system. Stochastic modeling techniques are then applied to the residual time series. The procedure for detecting abnormal behavior of the system then becomes one of looking for statistical differences between the residual time series and the predictions of the stochastic model.
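
    The POD step itself reduces to a singular value decomposition of a snapshot matrix; a minimal sketch on toy data (not BEAM's actual pipeline):

```python
import numpy as np

def pod_modes(snapshots, r):
    """Proper orthogonal decomposition via SVD of the snapshot matrix
    (columns = mean-subtracted state snapshots). Returns the r leading
    spatial modes and the fraction of variance each captures."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = s**2 / np.sum(s**2)
    return U[:, :r], energy[:r]

# Toy usage: snapshots dominated by two coherent structures plus noise
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 200)
t = np.linspace(0, 10, 60)
X = (np.outer(np.sin(x), np.sin(t))
     + 0.3 * np.outer(np.sin(2 * x), np.cos(3 * t))
     + 0.01 * rng.normal(size=(200, 60)))
modes, energy = pod_modes(X, 3)
print(energy)   # the first two modes capture nearly all the variance
```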

  12. An improved model for the reconstruction of past radon exposure.

    Cauwels, P; Poffijn, A

    2000-05-01

    If the behavior of long-lived radon progeny were well understood, measurements of this progeny could be used in epidemiological studies to estimate past radon exposure. Field measurements were done in a radon-prone area in the Ardennes (Belgium). The surface activity of several glass sheets was measured using detectors that were fixed on indoor glass surfaces. Simultaneously, the indoor radon concentration was measured using diffusion chambers. Using Monte Carlo techniques, it could be shown that there is a discrepancy between this data set and the room-model calculations that are normally used to correlate surface activity and past radon exposure. To solve this, a modification of the model is proposed.

  13. Gaussian mixture models and semantic gating improve reconstructions from human brain activity

    Sanne eSchoenmakers

    2015-01-01

    Better acquisition protocols and analysis techniques are making it possible to use fMRI to obtain highly detailed visualizations of brain processes. In particular, we focus on the reconstruction of natural images from BOLD responses in visual cortex. We expand our linear Gaussian framework for percept decoding with Gaussian mixture models to better represent the prior distribution of natural images. Reconstruction of such images then boils down to probabilistic inference in a hybrid Bayesian network. In our set-up, different mixture components correspond to different character categories. Our framework can automatically infer higher-order semantic categories from lower-level brain areas. Furthermore, the framework can gate semantic information from higher-order brain areas to enforce the correct category during reconstruction. When categorical information is not available, we show that automatically learned clusters in the data give a similar improvement in reconstruction. The hybrid Bayesian network leads to highly accurate reconstructions in both supervised and unsupervised settings.
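
    The prior-modeling ingredient can be sketched with scikit-learn (assumed available): fit a Gaussian mixture over image feature vectors and read off the component responsibilities, which is what lets a semantic category gate the reconstruction. The data below are synthetic stand-ins for the character-image features.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# A Gaussian mixture prior over (downsampled) image vectors; each
# component plays the role of one character category.
rng = np.random.default_rng(0)
cat_a = rng.normal(loc=-1.0, scale=0.5, size=(300, 16))   # stand-in "category A"
cat_b = rng.normal(loc=+1.0, scale=0.5, size=(300, 16))   # stand-in "category B"
gmm = GaussianMixture(n_components=2, covariance_type='full', random_state=0)
gmm.fit(np.vstack([cat_a, cat_b]))

# Responsibilities determine which component's mean/covariance dominates
# the prior during reconstruction of a new observation.
x_new = rng.normal(loc=+1.0, scale=0.5, size=(1, 16))
print(gmm.predict_proba(x_new))   # probability mass concentrates on one component
```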

  14. Model-based respiratory motion compensation for emission tomography image reconstruction

    Reyes, M; Malandain, G; Koulibaly, P M; Gonzalez-Ballester, M A; Darcourt, J

    2007-01-01

    In emission tomography imaging, respiratory motion causes artifacts in reconstructed images of the lungs and heart, which lead to misinterpretation, imprecise diagnosis, impaired fusion with other modalities, etc. Solutions like respiratory gating, correlated dynamic PET techniques, list-mode-data-based techniques and others have been tested; these lead to improvements in the estimated spatial activity distribution of lung lesions, but have the disadvantage of requiring additional instrumentation or discarding part of the projection data used for reconstruction. The objective of this study is to incorporate respiratory motion compensation directly into the image reconstruction process, without any additional acquisition protocol considerations. To this end, we propose an extension to the maximum likelihood expectation maximization (MLEM) algorithm that includes a respiratory motion model, which takes into account the displacements and volume deformations produced by respiratory motion during the data acquisition process. We present results from synthetic simulations incorporating real respiratory motion as well as from phantom and patient data
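
    For reference, the plain MLEM update that such a method extends can be written in a few lines; the system matrix below is a generic stand-in, whereas the contribution described above is to fold the respiratory-motion model into this operator.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain MLEM update x <- x / (A^T 1) * A^T (y / (A x)). The
    motion-compensated variant additionally incorporates a respiratory
    motion model into A; here a static system matrix is assumed."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # sensitivity image
    for _ in range(n_iter):
        proj = A @ x
        ratio = np.divide(y, proj, out=np.zeros_like(y), where=proj > 0)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy usage with a random nonnegative system matrix and Poisson data
rng = np.random.default_rng(0)
A = rng.random((120, 40))
x_true = rng.random(40)
y = rng.poisson(A @ x_true * 50) / 50.0
print(np.linalg.norm(mlem(A, y) - x_true) / np.linalg.norm(x_true))
```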

  15. Reconstruction of the Scalar Field Potential in Inflationary Models with a Gauss-Bonnet term

    Koh, Seoktae; Lee, Bum-Hoon; Tumurtushaa, Gansukh

    2017-06-01

    We consider inflationary models with a Gauss-Bonnet term and reconstruct the scalar-field potentials and the Gauss-Bonnet coupling functions. Both expressions are derived from the observationally favored configurations of the scalar spectral index n_s and the tensor-to-scalar ratio r. Our result implies that, for the reconstructed potentials and coupling functions, a blue tilt of the inflationary tensor fluctuations can be realized. To achieve a blue tilt for the inflationary tensor fluctuations, the scalar field must climb up its potential before rolling down. We further investigate the propagation properties of the perturbation modes in Friedmann-Robertson-Walker spacetime. For the reconstructed configurations that give rise to the blue tilt of the inflationary tensor fluctuations, we show that ghosts and instabilities are absent, with superluminal propagation speeds for the scalar perturbation modes, whereas the propagation speeds of the tensor perturbations are subluminal.
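
    For reference, the action for single-field inflation with a Gauss-Bonnet coupling is commonly written in the following form (conventions, e.g. the normalization of the coupling term, vary between papers):

```latex
S=\int d^{4}x\,\sqrt{-g}\left[\frac{R}{2\kappa^{2}}
  -\frac{1}{2}g^{\mu\nu}\partial_{\mu}\phi\,\partial_{\nu}\phi
  -V(\phi)-\frac{1}{2}\xi(\phi)\,R_{\mathrm{GB}}^{2}\right],
\qquad
R_{\mathrm{GB}}^{2}=R^{2}-4R_{\mu\nu}R^{\mu\nu}
  +R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma},
```

    where V(φ) is the potential and ξ(φ) the Gauss-Bonnet coupling function, the two quantities being reconstructed here.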

  16. Analysis and reconstruction of stochastic coupled map lattice models

    Coca, Daniel; Billings, Stephen A.

    2003-01-01

    The Letter introduces a general stochastic coupled map lattice model together with an algorithm to estimate the nodal equations involved, based only on a small set of observable variables and in the presence of stochastic perturbations. More general forms of the Frobenius-Perron and transfer operators, which describe the evolution of densities under the action of the CML transformation, are derived.
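
    For intuition, a minimal simulation of a stochastic CML with a logistic local map and diffusive nearest-neighbour coupling; the map choice, coupling form and noise level are illustrative assumptions, not those of the Letter.

```python
import numpy as np

def simulate_cml(n_sites=64, n_steps=500, eps=0.4, sigma=0.01, seed=0):
    """Stochastic coupled map lattice: each site applies the logistic map
    f(x) = 4x(1-x), couples diffusively to its neighbours (periodic
    boundaries), and receives additive Gaussian perturbations."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(0, 1, n_sites)
    f = lambda v: 4.0 * v * (1.0 - v)
    traj = np.empty((n_steps, n_sites))
    for t in range(n_steps):
        fx = f(x)
        x = ((1 - eps) * fx
             + 0.5 * eps * (np.roll(fx, 1) + np.roll(fx, -1))
             + sigma * rng.standard_normal(n_sites))
        x = np.clip(x, 0.0, 1.0)      # keep the state in the map's domain
        traj[t] = x
    return traj
```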

  17. Fast parallel algorithm for three-dimensional distance-driven model in iterative computed tomography reconstruction

    Chen Jian-Lin; Li Lei; Wang Lin-Yuan; Cai Ai-Long; Xi Xiao-Qi; Zhang Han-Ming; Li Jian-Xin; Yan Bin

    2015-01-01

    The projection matrix model is used to describe the physical relationship between the reconstructed object and the projections. Such a model has a strong influence on projection and backprojection, two vital operations in iterative computed tomographic reconstruction. The distance-driven model (DDM) is a state-of-the-art technique for simulating forward and back projections. This model has a low computational complexity and a relatively high spatial resolution; however, few methods exist for running it as a parallel operation with a matched projector/backprojector pair. This study introduces a fast and parallelizable algorithm to improve the traditional DDM for computing parallel projection and backprojection operations. Our proposed model has been implemented on a GPU (graphics processing unit) platform and has achieved satisfactory computational efficiency with no approximation. The runtimes for the projection and backprojection operations with our model are approximately 4.5 s and 10.5 s per loop, respectively, for an image size of 256×256×256 and 360 projections of size 512×512. We compare several general algorithms that have been proposed for maximizing GPU efficiency by using unmatched projection/backprojection models in a parallel computation. The imaging resolution is not sacrificed and remains accurate during computed tomographic reconstruction. (paper)
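
    The core of the distance-driven idea is the overlap between pixel boundaries and detector-cell boundaries after both are mapped onto a common axis. A one-dimensional sketch of that overlap kernel, under the simplifying assumption that the boundary arrays are already projected and sorted:

```python
import numpy as np

def dd_project_1d(pix_bounds, pix_vals, det_bounds):
    """Distance-driven 1D projection: smear pixel values onto detector
    cells in proportion to the overlap of their boundary intervals on a
    shared axis, then normalise by detector-cell width."""
    det = np.zeros(len(det_bounds) - 1)
    i = j = 0
    while i < len(pix_bounds) - 1 and j < len(det_bounds) - 1:
        lo = max(pix_bounds[i], det_bounds[j])
        hi = min(pix_bounds[i + 1], det_bounds[j + 1])
        if hi > lo:                                  # overlapping span
            det[j] += pix_vals[i] * (hi - lo)
        if pix_bounds[i + 1] < det_bounds[j + 1]:    # advance whichever
            i += 1                                   # boundary closes first
        else:
            j += 1
    return det / np.diff(det_bounds)
```

    Running the same kernel with the roles of pixels and detector cells exchanged gives the matched backprojector, which is the property the parallel scheme above must preserve.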

  18. Role of rheology in reconstructing slab morphology in global mantle models

    Bello, Léa; Coltice, Nicolas; Tackley, Paul; Müller, Dietmar

    2015-04-01

    Reconstructing the 3D structure of the Earth's mantle has been a challenge for geodynamicists for about 40 years. Although numerical models and computational capabilities have progressed tremendously, the parameterizations used for modeling convection forced by plate motions are far from Earth-like. Among the set of parameters, rheology is fundamental because it defines, in a non-linear way, the dynamics of slabs and plumes and the organization of the lithosphere. Previous studies have employed diverse viscosity laws, most of them temperature- and depth-dependent with relatively small viscosity contrasts. In this study, we evaluate the role of the temperature dependence of viscosity (variations up to 6 orders of magnitude) in reconstructing slab evolution in 3D spherical models of convection driven by plate history models. We also investigate the importance of pseudo-plasticity in such models. We show that strong temperature dependence of viscosity combined with pseudo-plasticity produces laterally and vertically continuous slabs, and flat subduction where trench retreat is fast (North, Central and South America). Moreover, pseudo-plasticity allows a consistent coupling between imposed plate motions and global convection, which is not possible with temperature-dependent viscosity alone. However, even our most sophisticated model is unable to unambiguously reproduce stagnant slabs, probably because of the simplicity of the material properties used here. The differences between models employing different viscosity laws are very large, larger than the differences between two models with the same rheology but different plate reconstructions or initial conditions.
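
    A hedged sketch of the kind of rheology described here: a Frank-Kamenetskii-type temperature-dependent viscosity spanning six orders of magnitude, blended with a pseudo-plastic yielding branch. The functional form and constants are generic conventions from the convection literature, not the specific law used in this study.

```python
import numpy as np

def effective_viscosity(T, strain_rate, eta0=1e21, A=np.log(1e6), sigma_y=1e8):
    """Effective viscosity combining temperature dependence with
    pseudo-plastic yielding. For non-dimensional T in [0, 1] and
    A = ln(1e6), eta_T spans six orders of magnitude; the yield branch
    caps the stress near sigma_y. Parameter values are illustrative."""
    eta_T = eta0 * np.exp(A * (0.5 - T))          # temperature dependence
    eta_y = sigma_y / (2.0 * np.maximum(strain_rate, 1e-30))  # yield branch
    return 1.0 / (1.0 / eta_T + 1.0 / eta_y)      # harmonic-mean blending
```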

  19. 3D Volumetric Modeling and Microvascular Reconstruction of Irradiated Lumbosacral Defects After Oncologic Resection

    Emilio Garcia-Tutor

    2016-12-01

    Background: Locoregional flaps are sufficient in most sacral reconstructions. However, large sacral defects due to malignancy necessitate a different reconstructive approach, with local flaps compromised by radiation and regional flaps inadequate for broad surface areas or substantial volume obliteration. In this report, we present our experience using free muscle transfer for volumetric reconstruction in such cases, and demonstrate 3D haptic models of the sacral defect to aid preoperative planning. Methods: Five consecutive patients with irradiated sacral defects secondary to oncologic resections were included, with surface areas ranging from 143 to 600 cm². Latissimus dorsi-based free flap sacral reconstruction was performed in each case between 2005 and 2011. Where the superior gluteal artery was compromised, the subcostal artery was used as a recipient vessel. Microvascular technique, complications and outcomes are reported. The use of volumetric analysis and 3D printing is also demonstrated, with imaging data converted to 3D images suitable for 3D printing with OsiriX software (Pixmeo, Geneva, Switzerland). An office-based, desktop 3D printer was used to print 3D models of the sacral defects, used to demonstrate surface area and contour and to produce a volumetric print of the dead space needing flap obliteration. Results: The clinical series of latissimus dorsi free flap reconstructions is presented, with successful transfer in all cases, and adequate soft-tissue cover and volume obliteration achieved. The original use of the subcostal artery as a recipient vessel was successful. All wounds healed uneventfully. 3D printing is also demonstrated as a useful tool for 3D evaluation of volume and dead space. Conclusion: Free flaps offer unique benefits in sacral reconstruction where local tissue is compromised by irradiation and tumor recurrence, and where dead space requires accurate volumetric reconstruction. We describe for the first time the use of

  20. A Canine Arthroscopic Anterior Cruciate Ligament Reconstruction Model for Study of Synthetic Augmentation of Tendon Allografts.

    Cook, James L; Smith, Pat; Stannard, James P; Pfeiffer, Ferris; Kuroki, Keiichi; Bozynski, Chantelle C; Cook, Cristi

    2017-09-01

    Novel graft types, fixation methods, and means for augmenting anterior cruciate ligament (ACL) reconstructions require preclinical validation prior to safe and effective clinical application. The objective of this study was to describe and validate a translational canine model for all-inside arthroscopic complete ACL reconstruction using a quadriceps tendon allograft with internal brace (QTIB). With institutional approval, adult research hounds underwent complete transection of the native ACL followed by all-inside ACL reconstruction using the novel QTIB construct with suspensory fixation (n = 10). Contralateral knees were used as nonoperated controls (n = 10). Dogs were assessed over a 6-month period using functional, diagnostic imaging, gross, biomechanical, and histologic outcome measures required for preclinical animal models. Study results suggest that the novel QTIB construct used for complete ACL reconstruction can provide sustained knee stability and function without the development of premature osteoarthritis in a rigorous and valid preclinical model. The unique configuration of the QTIB construct, the combination of a tendon allograft with a synthetic suture-tape internal brace, allowed for an effective biologic-synthetic load-sharing ACL construct. It prevented early failure, allowed direct four-zone graft-to-bone healing and functional graft remodeling, and avoided problems noted with the use of all-synthetic grafts.

  1. Bayesian error analysis model for reconstructing transcriptional regulatory networks

    Sun, Ning; Carroll, Raymond J.; Zhao, Hongyu

    2006-01-01

    Transcription regulation is a fundamental biological process, and extensive efforts have been made to dissect its mechanisms through direct biological experiments and regulation modeling based on physical-chemical principles and mathematical formulations. Despite these efforts, transcription regulation is not yet well understood because of its complexity and the limitations of biological experiments. Recent advances in high-throughput technologies have provided substantial amounts and diverse types...

  2. 3D model tools for architecture and archaeology reconstruction

    Vlad, Ioan; Herban, Ioan Sorin; Stoian, Mircea; Vilceanu, Clara-Beatrice

    2016-06-01

    The main objective of architectural and patrimonial surveying is to provide precise documentation of the status quo of the surveyed objects (monuments, buildings, archaeological objects and sites) for preservation and protection, for scientific studies and restoration purposes, and for presentation to the general public. Cultural heritage documentation involves an interdisciplinary approach whose purpose is an overall understanding of the object itself and an integration of the information that characterizes it. The accuracy and precision of the model are directly influenced by the quality of the measurements taken in the field and by the quality of the software. The software is under continuous development, which brings many improvements. On the other hand, compared to aerial photogrammetry, close-range photogrammetry, and particularly architectural photogrammetry, is not limited to vertical photographs taken with special cameras. The methodology of terrestrial photogrammetry has changed significantly, and various photographic acquisitions are widely in use. In this context, the present paper brings forward a comparative study of TLS (terrestrial laser scanning) and digital photogrammetry for 3D modeling. The authors take into account the accuracy of the 3D models obtained, the overall costs involved for each technology and method, and the fourth dimension, time. The paper proves its applicability, as photogrammetric technologies are nowadays used at a large scale for obtaining 3D models of cultural heritage objects, useful in their assessment and monitoring and thus contributing to historic conservation. Its importance also lies in highlighting the advantages and disadvantages of each method used, a very important issue for both the industrial and scientific segments when facing decisions such as in which technology to invest more research and funds.

  3. Phylogenetic tree reconstruction accuracy and model fit when proportions of variable sites change across the tree.

    Shavit Grievink, Liat; Penny, David; Hendy, Michael D; Holland, Barbara R

    2010-05-01

    Commonly used phylogenetic models assume a homogeneous process through time in all parts of the tree. However, it is known that these models can be too simplistic, as they do not account for nonhomogeneous lineage-specific properties. In particular, it is now widely recognized that as constraints on sequences evolve, the proportion and positions of variable sites can vary between lineages, causing heterotachy. The extent to which this model misspecification affects tree reconstruction is still unknown. Here, we evaluate the effect of changes in the proportions and positions of variable sites on model fit and tree estimation. We consider 5 current models of nucleotide sequence evolution in a Bayesian Markov chain Monte Carlo framework as well as maximum parsimony (MP). We show that, for a tree with 4 lineages where 2 nonsister taxa undergo a change in the proportion of variable sites, tree reconstruction under the best-fitting model, which is chosen using a relative test, often results in the wrong tree. In this case, we found that an absolute test of model fit is a better predictor of tree estimation accuracy. We also found further evidence that MP is not immune to heterotachy. In addition, we show that increased sampling of taxa that have undergone a change in proportion and positions of variable sites is critical for accurate tree reconstruction.

  4. Hybrid light transport model based bioluminescence tomography reconstruction for early gastric cancer detection

    Chen, Xueli; Liang, Jimin; Hu, Hao; Qu, Xiaochao; Yang, Defu; Chen, Duofang; Zhu, Shouping; Tian, Jie

    2012-03-01

    Gastric cancer is the second leading cause of cancer-related death in the world, and it remains difficult to cure because it is usually at a late stage by the time it is found. Early gastric cancer detection is therefore an effective approach to decreasing gastric cancer mortality. Bioluminescence tomography (BLT) has been applied to detect early liver cancer and prostate cancer metastasis. However, gastric cancer commonly originates from the gastric mucosa and grows outwards, so the bioluminescent light passes through a non-scattering region formed by the gastric pouch as it propagates through tissue. Thus, current BLT reconstruction algorithms based on approximations to the radiative transfer equation are not optimal for this problem. To address this gastric-cancer-specific problem, this paper presents a novel reconstruction algorithm that uses a hybrid light transport model to describe bioluminescent light propagation in tissue. Radiosity theory, which describes light propagation in the non-scattering region, is integrated with the diffusion equation to form the hybrid light transport model. After finite element discretization, the hybrid light transport model is converted into a minimization problem that incorporates an l1-norm regularization term to reveal the sparsity of the bioluminescent source distribution. The performance of the reconstruction algorithm is first demonstrated with a digital-mouse simulation, with a reconstruction error of less than 1 mm. An experiment on a nude mouse bearing an in situ gastric tumor is then conducted. The preliminary result demonstrates the ability of the novel BLT reconstruction algorithm for early gastric cancer detection.
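
    Minimization problems with an l1 regularizer of the kind described here are commonly solved with iterative shrinkage-thresholding; the sketch below is a generic ISTA solver under that assumption, not the authors' specific algorithm (A stands for the discretised system matrix, b for the boundary measurements).

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, n_iter=200):
    """Iterative shrinkage-thresholding for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - b)         # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x
```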

  5. Efficient methodologies for system matrix modelling in iterative image reconstruction for rotating high-resolution PET

    Ortuno, J E; Kontaxakis, G; Rubio, J L; Santos, A [Departamento de Ingenieria Electronica (DIE), Universidad Politecnica de Madrid, Ciudad Universitaria s/n, 28040 Madrid (Spain); Guerra, P [Networking Research Center on Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid (Spain)], E-mail: juanen@die.upm.es

    2010-04-07

    A fully 3D iterative image reconstruction algorithm has been developed for high-resolution PET cameras composed of pixelated scintillator crystal arrays and rotating planar detectors, based on the ordered subsets approach. The associated system matrix is precalculated with Monte Carlo methods that incorporate physical effects not included in analytical models, such as positron range effects and interaction of the incident gammas with the scintillator material. Custom Monte Carlo methodologies have been developed and optimized for modelling of system matrices for fast iterative image reconstruction adapted to specific scanner geometries, without redundant calculations. According to the methodology proposed here, only one-eighth of the voxels within two central transaxial slices need to be modelled in detail. The rest of the system matrix elements can be obtained with the aid of axial symmetries and redundancies, as well as in-plane symmetries within transaxial slices. Sparse matrix techniques for the non-zero system matrix elements are employed, allowing for fast execution of the image reconstruction process. This 3D image reconstruction scheme has been compared in terms of image quality to a 2D fast implementation of the OSEM algorithm combined with Fourier rebinning approaches. This work confirms the superiority of fully 3D OSEM in terms of spatial resolution, contrast recovery and noise reduction as compared to conventional 2D approaches based on rebinning schemes. At the same time it demonstrates that fully 3D methodologies can be efficiently applied to the image reconstruction problem for high-resolution rotational PET cameras by applying accurate pre-calculated system models and taking advantage of the system's symmetries.
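
    For reference, the ordered-subsets EM update that such reconstructions build on, in dense-matrix toy form; a real implementation would use the precalculated sparse, symmetry-folded system matrix described above.

```python
import numpy as np

def osem(y, A, n_subsets=8, n_iter=4, eps=1e-12):
    """Ordered-subsets EM: the MLEM update is applied subset-by-subset over
    interleaved groups of projection rows, giving roughly n_subsets-fold
    acceleration per pass through the data."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            sens = As.T @ np.ones(len(rows))     # subset sensitivity image
            x = x * (As.T @ (ys / (As @ x + eps))) / (sens + eps)
    return x
```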

  6. Scapular flap for maxillectomy defect reconstruction and preliminary results using three-dimensional modeling.

    Modest, Mara C; Moore, Eric J; Abel, Kathryn M Van; Janus, Jeffrey R; Sims, John R; Price, Daniel L; Olsen, Kerry D

    2017-01-01

    Discuss current techniques utilizing the scapular tip and subscapular system for free tissue reconstruction of maxillary defects and highlight the impact of medical modeling on these techniques with a case series. Case review series at an academic hospital of patients undergoing maxillectomy + thoracodorsal scapula composite free flap (TSCF) reconstruction. Three-dimensional (3D) models were used in the last five cases. 3D modeling, surgical, functional, and aesthetic outcomes were reviewed. Nine patients underwent TSCF reconstruction for maxillectomy defects (median age = 43 years; range, 19-66 years). Five patients (55%) had a total maxillectomy (TM) ± orbital exenteration, whereas four patients (44%) underwent subtotal palatal maxillectomy. For TM, the contralateral scapula tip was positioned with its natural concavity recreating facial contour. The laterally based vascular pedicle was ideally positioned for facial vessel anastomosis. For subtotal-palatal defect, an ipsilateral flap was harvested, but inset with the convex surface facing superiorly. Once 3D models were available from our anatomic modeling lab, they were used for intraoperative planning of the last five patients. Use of the model intraoperatively improved efficiency and allowed for better contouring/plating of the TSCF. At last follow-up, all patients had good functional outcomes. Aesthetic outcomes were more successful in patients where 3D-modeling was used (100% vs. 50%). There were no flap failures. Median follow-up >1 month was 5.2 months (range, 1-32.7 months). Reconstruction of maxillectomy defects is complex. Successful aesthetic and functional outcomes are critical to patient satisfaction. The TSCF is a versatile flap. Based on defect type, choosing laterality is crucial for proper vessel orientation and outcomes. The use of internally produced 3D models has helped refine intraoperative contouring and flap inset, leading to more successful outcomes. 4. Laryngoscope, 127:E8-E14

  7. Multiscale vision model for event detection and reconstruction in two-photon imaging data

    Brazhe, Alexey; Mathiesen, Claus; Lind, Barbara Lykke

    2014-01-01

    on a modified multiscale vision model, an object detection framework based on the thresholding of wavelet coefficients and hierarchical trees of significant coefficients followed by nonlinear iterative partial object reconstruction, for the analysis of two-photon calcium imaging data. The framework is discussed... of the multiscale vision model is similar in the denoising, but provides a better segmentation of the image into meaningful objects, whereas other methods need to be combined with dedicated thresholding and segmentation utilities.

  8. Foot modeling and smart plantar pressure reconstruction from three sensors.

    Ghaida, Hussein Abou; Mottet, Serge; Goujon, Jean-Marc

    2014-01-01

    In order to monitor pressure under the feet, this study presents a biomechanical model of the human foot. The main elements of the foot that induce the plantar pressure distribution are described. Then the link between the forces applied at the ankle and the distribution of the plantar pressure is established. Assumptions are made by defining the concepts of a 3D internal foot shape, which can be extracted from plantar pressure measurements, and a uniform elastic medium, which describes the soft-tissue behaviour. In the second part, we show that just 3 discrete pressure sensors per foot are enough to generate real-time plantar pressure cartographies in the standing position or during walking. Finally, the generated cartographies are compared with pressure cartographies from the F-SCAN system. The results show a 0.01 daN (2% of full scale) average error in the standing position.

  9. Magnetic resonance imaging of reconstructed ferritin as an iron-induced pathological model system

    Balejcikova, Lucia [Institute of Experimental Physics SAS, Watsonova 47, 040 01 Kosice (Slovakia); Institute of Measurement Science SAS, Dubravska cesta 9, 841 04 Bratislava 4 (Slovakia); Strbak, Oliver [Institute of Measurement Science SAS, Dubravska cesta 9, 841 04 Bratislava 4 (Slovakia); Biomedical Center Martin, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Mala Hora 4, 036 01 Martin (Slovakia); Baciak, Ladislav [Faculty of Chemical and Food Technology STU, Radlinskeho 9, 812 37 Bratislava (Slovakia); Kovac, Jozef [Institute of Experimental Physics SAS, Watsonova 47, 040 01 Kosice (Slovakia); Masarova, Marta; Krafcik, Andrej; Frollo, Ivan [Institute of Measurement Science SAS, Dubravska cesta 9, 841 04 Bratislava 4 (Slovakia); Dobrota, Dusan [Biomedical Center Martin, Jessenius Faculty of Medicine in Martin, Comenius University in Bratislava, Mala Hora 4, 036 01 Martin (Slovakia); Kopcansky, Peter [Institute of Experimental Physics SAS, Watsonova 47, 040 01 Kosice (Slovakia)

    2017-04-01

    Iron, an essential element of the human body, is a significant risk factor, particularly when its concentration increases above a specific limit. Iron is therefore stored in a non-toxic form in the globular protein ferritin, consisting of an apoferritin shell and an iron core. Numerous studies have confirmed the disruption of homeostasis and the accumulation of iron in patients with various diseases (e.g. cancer, cardiovascular or neurological conditions), which is closely related to ferritin metabolism. Such iron imbalance enables the use of magnetic resonance imaging (MRI) as a sensitive technique for the detection of iron-based aggregates through changes in the relaxation times, followed by changes in the inherent image contrast. For our in vitro study, modified ferritins with different iron loadings were prepared by chemical reconstruction of the iron core in an apoferritin shell, as pathological model systems. The magnetic properties of the samples were studied using SQUID magnetometry, while the size distribution was determined via dynamic light scattering. We have shown that MRI could represent the most advantageous method for distinguishing native ferritin from reconstructed ferritin, which, after future standardisation, could be suitable for the diagnostics of diseases associated with iron accumulation. - Highlights: • MRI is a sensitive technique for detecting iron-based aggregates. • Reconstructed ferritin is a suitable model system for iron-related disorders. • MRI allows native ferritin to be distinguished from reconstructed ferritin. • MRI could be useful for the diagnostics of diseases associated with iron accumulation.

  10. A GLOBAL SOLUTION TO TOPOLOGICAL RECONSTRUCTION OF BUILDING ROOF MODELS FROM AIRBORNE LIDAR POINT CLOUDS

    J. Yan

    2016-06-01

    This paper presents a global solution to building roof topological reconstruction from LiDAR point clouds. Starting with segmented roof planes from building LiDAR points, a BSP (binary space partitioning) algorithm is used to partition the bounding box of the building into volumetric cells, whose geometric features and topology are determined simultaneously. To resolve the inside/outside labelling problem for the cells, a global energy function considering surface visibility and spatial regularization between adjacent cells is constructed and minimized via graph cuts. As a result, the cells are labelled as either inside or outside, and the planar surfaces between inside and outside cells form the reconstructed building model. Two LiDAR data sets, from Yangjiang (China) and Wuhan University (China), are used in the study. Experimental results show that the completeness of the reconstructed roof planes is 87.5%. Compared with existing data-driven approaches, the proposed approach is global: roof faces and edges as well as their topology are determined at one time via minimization of an energy function. Moreover, this approach is robust to the partial absence of roof planes and tends to reconstruct roof models with visibility-consistent surfaces.
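
    Binary inside/outside labelling by graph cuts can be illustrated with an s-t minimum cut over a small cell graph. The sketch below uses networkx with hypothetical unary (visibility) and pairwise (regularization) costs; the paper's actual energy terms differ.

```python
import networkx as nx

def label_cells(unary, adjacency):
    """Label BSP cells by s-t minimum cut. unary[i] = (cost_outside,
    cost_inside); adjacency[(i, j)] = penalty paid when neighbouring cells
    i, j receive different labels. Minimises the sum of unary costs plus
    pairwise disagreement penalties (a submodular Potts energy)."""
    G = nx.DiGraph()
    for i, (c_out, c_in) in enumerate(unary):
        G.add_edge('s', i, capacity=c_in)    # cut iff i ends up 'inside'
        G.add_edge(i, 't', capacity=c_out)   # cut iff i ends up 'outside'
    for (i, j), w in adjacency.items():
        G.add_edge(i, j, capacity=w)
        G.add_edge(j, i, capacity=w)
    cut_value, (src_side, _) = nx.minimum_cut(G, 's', 't')
    labels = {i: 'outside' if i in src_side else 'inside'
              for i in range(len(unary))}
    return labels, cut_value

# Toy usage: two cells that prefer different labels, weakly coupled.
labels, energy = label_cells([(0.2, 1.0), (1.0, 0.2)], {(0, 1): 0.1})
```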

  11. Limiting CT radiation dose in children with craniosynostosis: phantom study using model-based iterative reconstruction

    Kaasalainen, Touko; Lampinen, Anniina [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); University of Helsinki, Department of Physics, Helsinki (Finland); Palmu, Kirsi [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); School of Science, Aalto University, Department of Biomedical Engineering and Computational Science, Helsinki (Finland); Reijonen, Vappu; Kortesniemi, Mika [University of Helsinki and Helsinki University Hospital, HUS Medical Imaging Center, Radiology, POB 340, Helsinki (Finland); Leikola, Junnu [University of Helsinki and Helsinki University Hospital, Department of Plastic Surgery, Helsinki (Finland); Kivisaari, Riku [University of Helsinki and Helsinki University Hospital, Department of Neurosurgery, Helsinki (Finland)

    2015-09-15

    Medical professionals need to exercise particular caution when developing CT scanning protocols for children who require multiple CT studies, such as those with craniosynostosis. To evaluate the utility of ultra-low-dose CT protocols with model-based iterative reconstruction techniques for craniosynostosis imaging. We scanned two pediatric anthropomorphic phantoms with a 64-slice CT scanner using different low-dose protocols for craniosynostosis. We measured organ doses in the head region with metal-oxide-semiconductor field-effect transistor (MOSFET) dosimeters. Numerical simulations served to estimate organ and effective doses. We objectively and subjectively evaluated the quality of images produced by adaptive statistical iterative reconstruction (ASiR) 30%, ASiR 50% and Veo (all by GE Healthcare, Waukesha, WI). Image noise and contrast were determined for different tissues. Mean organ dose with the newborn phantom was decreased up to 83% compared to the routine protocol when using ultra-low-dose scanning settings. Similarly, for the 5-year phantom the greatest radiation dose reduction was 88%. The numerical simulations supported the findings with MOSFET measurements. The image quality remained adequate with Veo reconstruction, even at the lowest dose level. Craniosynostosis CT with model-based iterative reconstruction could be performed with a 20-μSv effective dose, corresponding to the radiation exposure of plain skull radiography, without compromising required image quality. (orig.)

  12. Enhanced capital-asset pricing model for the reconstruction of bipartite financial networks

    Squartini, Tiziano; Almog, Assaf; Caldarelli, Guido; van Lelyveld, Iman; Garlaschelli, Diego; Cimini, Giulio

    2017-09-01

    Reconstructing patterns of interconnections from partial information is one of the most important issues in the statistical physics of complex networks. A paramount example is provided by financial networks. In fact, the spreading and amplification of financial distress in capital markets are strongly affected by the interconnections among financial institutions. Yet, while the aggregate balance sheets of institutions are publicly disclosed, information on single positions is mostly confidential and, as such, unavailable. Standard approaches to reconstruct the network of financial interconnection produce unrealistically dense topologies, leading to a biased estimation of systemic risk. Moreover, reconstruction techniques are generally designed for monopartite networks of bilateral exposures between financial institutions, thus failing in reproducing bipartite networks of security holdings (e.g., investment portfolios). Here we propose a reconstruction method based on constrained entropy maximization, tailored for bipartite financial networks. Such a procedure enhances the traditional capital-asset pricing model (CAPM) and allows us to reproduce the correct topology of the network. We test this enhanced CAPM (ECAPM) method on a dataset, collected by the European Central Bank, of detailed security holdings of European institutional sectors over a period of six years (2009-2015). Our approach outperforms the traditional CAPM and the recently proposed maximum-entropy CAPM both in reproducing the network topology and in estimating systemic risk due to fire sales spillovers. In general, ECAPM can be applied to the whole class of weighted bipartite networks described by the fitness model.
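
    A sketch of the fitness-model calibration underlying this family of reconstruction methods: with node fitnesses taken proportional to the publicly disclosed aggregate exposures, a single parameter z is tuned so the expected number of links matches an observed (or estimated) total. This is a generic fitness-model illustration, not the exact ECAPM estimator.

```python
import numpy as np

def calibrate_z(s_rows, s_cols, n_links, lo=1e-12, hi=1e12, tol=1e-9):
    """Bipartite fitness model: link probability between row node i (e.g.
    an institution) and column node a (e.g. a security) is
    p_ia = z*s_i*s_a / (1 + z*s_i*s_a). Bisect (in log space) for the z
    that makes the expected link count equal n_links."""
    S = np.outer(s_rows, s_cols)
    expected = lambda z: np.sum(z * S / (1.0 + z * S))
    while hi / lo > 1.0 + tol:
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if expected(mid) < n_links else (lo, mid)
    return np.sqrt(lo * hi)
```

    Sampled adjacency matrices drawn with these probabilities then reproduce realistically sparse topologies, which is the property the dense standard reconstructions lack.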

  13. Conceptualising forensic science and forensic reconstruction. Part I: A conceptual model.

    Morgan, R M

    2017-11-01

    There has been a call for forensic science to actively return to the approach of scientific endeavour. The importance of incorporating an awareness of the requirements of the law in its broadest sense, and embedding research into both practice and policy within forensic science, is arguably critical to achieving such an endeavour. This paper presents a conceptual model (FoRTE) that outlines the holistic nature of trace evidence in the 'endeavour' of forensic reconstruction. This model offers insights into the different components intrinsic to transparent, reproducible and robust reconstructions in forensic science. The importance of situating evidence within the whole forensic science process (from crime scene to court), of developing evidence bases to underpin each stage, of frameworks that offer insights into the interaction of different lines of evidence, and the role of expertise in decision making are presented and their interactions identified. It is argued that such a conceptual model has value in identifying the future steps for harnessing the value of trace evidence in forensic reconstruction. It also highlights that there is a need to develop a nuanced approach to reconstructions that incorporates both empirical evidence bases and expertise. A conceptual understanding has the potential to ensure that the endeavour of forensic reconstruction has its roots in 'problem-solving' science, and can offer transparency and clarity in the conclusions and inferences drawn from trace evidence, thereby enabling the value of trace evidence to be realised in investigations and the courts.

  14. Reconstruction of the external dose of evacuees from the contaminated areas based on simulation modelling

    Meckbach, R.; Chumak, V.V.

    1996-01-01

    Model calculations are being performed for the reconstruction of individual external gamma doses of population evacuated during the Chernobyl accident from the city of Pripyat and other settlements of the 30-km zone. The models are based on sets of dose rate measurements performed during the accident, on individual behavior histories of more than 30000 evacuees obtained by questionnaire survey and on location factors determined for characteristic housing buildings. Location factors were calculated by Monte Carlo simulations of photon transport for a typical housing block and village houses. Stochastic models for individual external dose reconstruction are described. Using Monte Carlo methods, frequency distributions representing the uncertainty of doses are calculated from an assessment of the uncertainty of the data. The determination of dose rate distributions in Pripyat is discussed. Exemplary results for individual external doses are presented

  15. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple basis-material fractions without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). Existing BHA-correction approaches either require calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and from the water/bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for beam polychromaticity show great potential for producing accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach that allows the reconstruction of high-quality fraction images from ordinary polychromatic measurements. The approach is based on a Gaussian noise model with unknown variance assigned directly to the projections, without taking the negative log. Using Bayesian inference, the decomposition fractions and the observation variance are estimated jointly by maximum a posteriori (MAP) estimation. With an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem, turning the joint MAP estimation into a minimization problem with a non-quadratic cost function. To solve it, a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also
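
    A toy version of the final optimization stage, assuming user-supplied callables for the nonlinear polychromatic forward model and its adjoint; scipy's nonlinear CG stands in for the thesis's monotone CG with suboptimal descent steps, and a Huber penalty on the fractions stands in for the non-quadratic cost (a penalty on neighbour differences would be more typical in practice).

```python
import numpy as np
from scipy.optimize import minimize

def map_reconstruct(y, forward, jac_vec, x0, beta=1e-2, delta=1.0):
    """MAP estimate with a non-quadratic (Huber-like) prior, minimised by
    nonlinear conjugate gradients. forward(x): projection model;
    jac_vec(x, r): returns J(x)^T r. Both are hypothetical interfaces."""
    def cost(x):
        r = forward(x) - y
        huber = np.where(np.abs(x) <= delta,
                         0.5 * x**2, delta * (np.abs(x) - 0.5 * delta))
        return 0.5 * r @ r + beta * huber.sum()

    def grad(x):
        r = forward(x) - y
        return jac_vec(x, r) + beta * np.clip(x, -delta, delta)  # Huber grad

    return minimize(cost, x0, jac=grad, method='CG').x
```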

  16. Bayesian hierarchical models for regional climate reconstructions of the last glacial maximum

    Weitzel, Nils; Hense, Andreas; Ohlwein, Christian

    2017-04-01

    Spatio-temporal reconstructions of past climate are important for the understanding of the long term behavior of the climate system and the sensitivity to forcing changes. Unfortunately, they are subject to large uncertainties, have to deal with a complex proxy-climate structure, and a physically reasonable interpolation between the sparse proxy observations is difficult. Bayesian Hierarchical Models (BHMs) are a class of statistical models that are well suited for spatio-temporal reconstructions of past climate because they permit the inclusion of multiple sources of information (e.g. records from different proxy types, uncertain age information, output from climate simulations) and quantify uncertainties in a statistically rigorous way. BHMs in paleoclimatology typically consist of three stages which are modeled individually and are combined using Bayesian inference techniques. The data stage models the proxy-climate relation (often named transfer function), the process stage models the spatio-temporal distribution of the climate variables of interest, and the prior stage consists of prior distributions of the model parameters. For our BHMs, we translate well-known proxy-climate transfer functions for pollen to a Bayesian framework. In addition, we can include Gaussian distributed local climate information from preprocessed proxy records. The process stage combines physically reasonable spatial structures from prior distributions with proxy records, which leads to a multivariate posterior probability distribution for the reconstructed climate variables. The prior distributions that constrain the possible spatial structure of the climate variables are calculated from climate simulation output. We present results from pseudoproxy tests as well as new regional reconstructions of temperatures for the last glacial maximum (LGM, ~21,000 years BP). These reconstructions combine proxy data syntheses with information from climate simulations for the LGM that were

  17. The SENSE-Isomorphism Theoretical Image Voxel Estimation (SENSE-ITIVE) Model for Reconstruction and Observing Statistical Properties of Reconstruction Operators

    Bruce, Iain P.; Karaman, M. Muge; Rowe, Daniel B.

    2012-01-01

    The acquisition of sub-sampled data from an array of receiver coils has become a common means of reducing data acquisition time in MRI. Of the various techniques used in parallel MRI, SENSitivity Encoding (SENSE) is one of the most common, making use of a complex-valued weighted least squares estimation to unfold the aliased images. It was recently shown in Bruce et al. [Magn. Reson. Imag. 29(2011):1267–1287] that when the SENSE model is represented in terms of a real-valued isomorphism, it assumes a skew-symmetric covariance between receiver coils, as well as an identity covariance structure between voxels. In this manuscript, we show that not only is the skew-symmetric coil covariance unlike that of real data, but the estimated covariance structure between voxels over a time series of experimental data is not an identity matrix. As such, a new model, entitled SENSE-ITIVE, is described with both revised coil and voxel covariance structures. Both the SENSE and SENSE-ITIVE models are represented in terms of real-valued isomorphisms, allowing for a statistical analysis of reconstructed voxel means, variances, and correlations resulting from the use of different coil and voxel covariance structures used in the reconstruction processes to be conducted. It is shown through both theoretical and experimental illustrations that the misspecification of the coil and voxel covariance structures in the SENSE model results in a lower standard deviation in each voxel of the reconstructed images, and thus an artificial increase in SNR, compared to the standard deviation and SNR of the SENSE-ITIVE model where both the coil and voxel covariances are appropriately accounted for. It is also shown that there are differences in the correlations induced by the reconstruction operations of both models, and consequently there are differences in the correlations estimated throughout the course of reconstructed time series. These differences in correlations could result in meaningful
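
    For context, the standard SENSE unfolding that both models build on is a coil-covariance-weighted least-squares solve per set of aliased voxels; a minimal sketch with assumed array shapes:

```python
import numpy as np

def sense_unfold(y, S, psi):
    """SENSE unfolding for one set of aliased voxels.
    y   (n_coils,)            : aliased complex coil values
    S   (n_coils, n_aliased)  : coil sensitivities at overlapping voxels
    psi (n_coils, n_coils)    : coil noise covariance
    Solves rho = (S^H psi^-1 S)^-1 S^H psi^-1 y."""
    psi_inv = np.linalg.inv(psi)
    lhs = S.conj().T @ psi_inv @ S
    return np.linalg.solve(lhs, S.conj().T @ psi_inv @ y)
```

    The SENSE vs. SENSE-ITIVE distinction discussed above amounts to which coil covariance psi (and which voxel covariance) is assumed in this estimation and in the subsequent statistical analysis of the reconstructed time series.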

  18. More performance results and implementation of an object oriented track reconstruction model in different OO frameworks

    Gaines, Irwin; Qian Sijin

    2001-01-01

    This is an update of the report on an Object Oriented (OO) track reconstruction model, which was presented at the previous AIHENP'99 in Crete, Greece. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. It has been coded in the C++ programming language and successfully implemented into several different OO computing environments of the CMS and ATLAS experiments at the future Large Hadron Collider at CERN. We report: (1) more performance results; (2) the implementation of the OO model into the new OO software framework 'Athena' of the ATLAS experiment, and some upgrades of the OO model itself.
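
    The Kalman-filtering core of such a track reconstruction model is a predict/update recursion per detector layer; a generic numpy sketch (the OO model wraps equivalent logic in C++ classes, and all matrix choices here are illustrative):

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, m):
    """One predict/update step of a Kalman-filter track fit.
    x, P : track state (e.g. position, slopes, curvature) and covariance
    F, Q : propagation model to the next layer and process noise
           (e.g. multiple scattering)
    H, R : measurement model and measurement noise; m : hit measurement."""
    x_pred = F @ x                          # propagate to the next layer
    P_pred = F @ P @ F.T + Q
    r = m - H @ x_pred                      # measurement residual
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ r
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```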

  19. Anisotropic modeling and joint-MAP stitching for improved ultrasound model-based iterative reconstruction of large and thick specimens

    Almansouri, Hani [Purdue University; Venkatakrishnan, Singanallur V. [ORNL; Clayton, Dwight A. [ORNL; Polsky, Yarom [ORNL; Bouman, Charles [Purdue University; Santos-Villalobos, Hector J. [ORNL

    2018-04-01

    One-sided non-destructive evaluation (NDE) is widely used to inspect materials, such as concrete structures in nuclear power plants (NPP). A widely used method for one-sided NDE is the synthetic aperture focusing technique (SAFT). The SAFT algorithm produces reasonable results when inspecting simple structures. However, for complex structures, such as heavily reinforced thick concrete structures, SAFT produces artifacts, and hence there is a need for a more sophisticated inversion technique. Model-based iterative reconstruction (MBIR) algorithms, which are typically equivalent to regularized inversion techniques, offer a powerful framework for incorporating complex models of the physics, detector miscalibrations, and the materials being imaged to obtain high-quality reconstructions. Previously, we proposed an ultrasonic MBIR method that significantly improves reconstruction quality compared to SAFT. However, that method made some simplifying assumptions in the propagation model and did not discuss ways to handle data obtained by raster scanning a system over a surface to inspect large regions. In this paper, we propose a novel MBIR algorithm that incorporates an anisotropic forward model and allows for the joint processing of data obtained from a system that raster scans a large surface. We demonstrate that the new MBIR method can produce dramatic improvements in reconstruction quality compared to SAFT and suppresses artifacts compared to the previously presented MBIR approach.

  20. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data

    Jaewook Jung

    2017-03-01

    With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks for 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International

  1. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data.

    Jung, Jaewook; Jwa, Yoonseok; Sohn, Gunho

    2017-03-19

    With rapid urbanization, highly accurate and semantically rich virtualization of building assets in 3D becomes more critical for supporting various applications, including urban planning, emergency response and location-based services. Many research efforts have been conducted to automatically reconstruct building models at city scale from remotely sensed data. However, developing a fully automated photogrammetric computer vision system enabling the massive generation of highly accurate building models still remains a challenging task. One of the most challenging tasks for 3D building model reconstruction is to regularize the noise introduced in the boundary of a building object retrieved from raw data without knowledge of its true shape. This paper proposes a data-driven modeling approach to reconstruct 3D rooftop models at city scale from airborne laser scanning (ALS) data. The focus of the proposed method is to implicitly derive the shape regularity of 3D building rooftops from given noisy information of the building boundary in a progressive manner. This study covers a full chain of 3D building modeling, from low-level processing to realistic 3D building rooftop modeling. In the element clustering step, building-labeled point clouds are clustered into homogeneous groups by applying height similarity and plane similarity. Based on the segmented clusters, linear modeling cues including outer boundaries, intersection lines, and step lines are extracted. Topology elements among the modeling cues are recovered by the Binary Space Partitioning (BSP) technique. The regularity of the building rooftop model is achieved by an implicit regularization process in the framework of Minimum Description Length (MDL) combined with Hypothesize and Test (HAT). The parameters governing the MDL optimization are automatically estimated based on Min-Max optimization and an entropy-based weighting method. The performance of the proposed method is tested over the International Society for

  2. Data-Driven Neural Network Model for Robust Reconstruction of Automobile Casting

    Lin, Jinhua; Wang, Yanjie; Li, Xin; Wang, Lu

    2017-09-01

    In computer vision systems, robustly reconstructing the complex 3D geometries of automobile castings is a challenging task: 3D scanning data are usually corrupted by noise and the scanning resolution is low, which normally leads to incomplete matching and drift. To solve these problems, a data-driven local geometric learning model is proposed to achieve robust reconstruction of automobile castings. To mitigate sensor noise and remain compatible with incomplete scanning data, a 3D convolutional neural network is established to match the local geometric features of the casting. The proposed network combines the geometric feature representation with a correlation metric function to robustly match local correspondences. We use the truncated distance field (TDF) around each key point to represent the 3D surface of the casting geometry, so that the model can be directly embedded into 3D space to learn the geometric feature representation. Finally, training labels are automatically generated for deep learning from an existing RGB-D reconstruction algorithm that provides the corresponding global key-point matching descriptors. The experimental results show that the matching accuracy of our network is 92.2% for automobile castings, and the closed-loop rate is about 74.0% when the matching tolerance threshold τ is 0.2. The matching descriptors performed well, retaining 81.6% matching accuracy at 95% closed loop. For sparse casting geometries where initial matching fails, the 3D object can still be reconstructed robustly from the trained key descriptors. Our method performs 3D reconstruction robustly for complex automobile castings.
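
    A sketch of the TDF input encoding described above, computed brute-force around one key point. Grid size, truncation and the 1-minus-distance convention are common choices assumed here, not details from the paper; a k-d tree would replace the brute-force distance in practice.

```python
import numpy as np

def tdf_patch(points, center, voxel_size=0.01, grid=30, trunc=5.0):
    """Truncated distance field around a key point: a grid**3 voxel patch
    where each voxel stores the distance to the nearest surface point,
    truncated at `trunc` voxel lengths and flipped so that 1 = on the
    surface and 0 = far away. points: (N, 3); center: (3,)."""
    half = grid * voxel_size / 2.0
    axes = np.linspace(-half, half, grid)
    gx, gy, gz = np.meshgrid(axes, axes, axes, indexing='ij')
    vox = np.stack([gx, gy, gz], axis=-1).reshape(-1, 3) + center
    # brute-force nearest-neighbour distance from each voxel to the cloud
    d = np.min(np.linalg.norm(vox[:, None, :] - points[None, :, :], axis=-1),
               axis=1)
    d = np.minimum(d / (trunc * voxel_size), 1.0)   # truncate and normalise
    return (1.0 - d).reshape(grid, grid, grid)
```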

  3. The impact of the Ice Model on tau neutrino reconstruction in IceCube

    Usner, Marcel; Kowalski, Marek [DESY Zeuthen (Germany); Collaboration: IceCube-Collaboration

    2015-07-01

    The IceCube Neutrino Observatory at the South Pole is a Cherenkov detector with an instrumented volume of about one cubic kilometer of Antarctic ice. Tau neutrinos can be measured via the double-bang signature, which links two subsequent cascades from the neutrino interaction and the tau decay. Reconstruction of double-bang events is currently limited to PeV energies and above, where the decay length of the tau is greater than 50 m. At lower energies it is important to consider small effects that affect the propagation of Cherenkov photons in the ice. The most recent model of the glacial ice below the South Pole contains a tilt of the ice layers and an anisotropy of the scattering coefficient along the direction of the glacier flow. These effects cannot be incorporated trivially into the existing reconstruction methods and can have a significant impact on single- and double-cascade reconstruction. Updates on finding a solution to this problem are presented, and the effect on the reconstruction of tau neutrino events is discussed.

  4. Nature of science in instruction materials of science through the model of educational reconstruction

    Azizah, Nur; Mudzakir, Ahmad

    2016-02-01

    The study was carried out to reconstruct science teaching materials that convey a view of the nature of science (VNOS). The reconstruction process uses the Model of Educational Reconstruction (MER), a framework for research and development in science education as well as a guide for planning science teaching in schools, restricted here to two stages: content structure analysis and empirical studies of learners. The purpose of this study is to obtain the preconceptions of learners and the perspectives of scientists on the topic of material properties and utilization. A descriptive method was used, with instruments comprising interview guidelines for 15 class-VIII students, a text analysis sheet, a concept analysis sheet, and validation sheets for NOS-based indicators and learning objectives covering cognitive and affective aspects. The results include the learners' preconceptions, which show that almost 100% of students know the types of materials and some of their properties; the scientists' perspectives on the topic of material properties and their use; and the validation of the NOS-based indicators and learning objectives, aligned with PISA 2015 competencies on cognitive and affective aspects, with CVI values of 0.99 and 1.0 after validation by five experts. This suggests that the indicators and the resulting learning objectives are feasible and that reconstruction of teaching materials on the topic of material properties and utilization can proceed.

  5. Model study of the compact gravity reconstruction; Juryoku inversion `CGR` no model kento

    Ishii, Y; Muraoka, A [Sogo Geophysical Exploration Co. Ltd., Tokyo (Japan)

    1996-05-01

    Gravity inversion using a compact gravity reconstruction (CGR) method was examined in a gravity tomography analysis. In a model analysis, an analytical region of 100 m × 50 m was divided into cells of 10 m × 10 m, on the assumption that two anomalous density bodies with a density difference of 1.0 g/cm³ existed, one with a shallow and the other with a deep density distribution. The analysis revealed that a linear analysis by a generalized inverse matrix produced considerable blurring and smearing, with a tendency to attribute the gravity anomaly to a shallow anomalous density distribution; that CGR was highly effective in sharpening the contrast of the anomalous region; that, where both shallow and deep density anomalies existed, the CGR analysis was inferior in restoring the deep structure, with enlarged errors; that, if the gravity traverse was long compared with the depth of the density anomalies, the analytical precision for the deep part improved; and that convergence was better when the density-difference constraint was imposed on the large side rather than on the small side. 3 refs., 10 figs.
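
    Compact inversions of this kind are often implemented as iteratively reweighted minimum-norm solutions in the spirit of Last and Kubik (1983); the record does not give the implementation, so the following is only a hedged sketch with a dense cell-kernel matrix A.

```python
import numpy as np

def compact_gravity_inversion(A, g_obs, n_iter=10, eps=1e-6, beta=1e-3):
    """Compactness-promoting gravity inversion of g_obs = A @ rho.
    Each pass solves a damped weighted minimum-norm problem
    rho = W A^T (A W A^T + beta*I)^-1 g_obs with diagonal weights W that
    shrink cells whose density stays small, concentrating the anomaly."""
    n_data, n_cells = A.shape
    W = np.ones(n_cells)                       # start from uniform weights
    rho = np.zeros(n_cells)
    for _ in range(n_iter):
        AW = A * W                             # A @ diag(W)
        v = np.linalg.solve(AW @ A.T + beta * np.eye(n_data), g_obs)
        rho = W * (A.T @ v)
        W = rho**2 + eps                       # favour already-large cells
    return rho
```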

  6. Control-oriented modeling of the plasma particle density in tokamaks and application to real-time density profile reconstruction

    Blanken, T.C.; Felici, F.; Rapson, C.J.; de Baar, M.R.; Heemels, W.P.M.H.

    2018-01-01

    A model-based approach to real-time reconstruction of the particle density profile in tokamak plasmas is presented, based on a dynamic state estimator. Traditionally, the density profile is reconstructed in real-time by solving an ill-conditioned inversion problem using a measurement at a single

  7. Reconstructing ATLAS SU3 in the CMSSM and relaxed phenomenological supersymmetry models

    Fowlie, Andrew

    2011-01-01

    Assuming that the LHC makes a positive end-point measurement indicative of low-energy supersymmetry, we examine the prospects of reconstructing the parameter values of a typical low-mass point in the framework of the Constrained MSSM and in several other supersymmetry models that have more free parameters and fewer assumptions than the CMSSM. As a case study, we consider the ATLAS SU3 benchmark point with a Bayesian approach and with a Gaussian approximation to the likelihood for the measured masses and mass differences. First we investigate the impact of the hypothetical ATLAS measurement alone and show that it significantly narrows the confidence intervals of relevant, otherwise fairly unrestricted, model parameters. Next we add information about the relic density of neutralino dark matter to the likelihood and show that this further narrows the confidence intervals. We confirm that the CMSSM has the best prospects for parameter reconstruction; its results had little dependence on our choice of prior, in co...

  8. Statistical shape model-based reconstruction of a scaled, patient-specific surface model of the pelvis from a single standard AP x-ray radiograph

    Zheng Guoyan [Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, CH-3014 Bern (Switzerland)

    2010-04-15

    Purpose: The aim of this article is to investigate the feasibility of using a statistical shape model (SSM)-based reconstruction technique to derive a scaled, patient-specific surface model of the pelvis from a single standard anteroposterior (AP) x-ray radiograph and the feasibility of estimating the scale of the reconstructed surface model by performing a surface-based 3D/3D matching. Methods: Data sets of 14 pelvises (one plastic bone, 12 cadavers, and one patient) were used to validate the single-image based reconstruction technique. This reconstruction technique is based on a hybrid 2D/3D deformable registration process combining a landmark-to-ray registration with a SSM-based 2D/3D reconstruction. The landmark-to-ray registration was used to find an initial scale and an initial rigid transformation between the x-ray image and the SSM. The estimated scale and rigid transformation were used to initialize the SSM-based 2D/3D reconstruction. The optimal reconstruction was then achieved in three stages by iteratively matching the projections of the apparent contours extracted from a 3D model derived from the SSM to the image contours extracted from the x-ray radiograph: Iterative affine registration, statistical instantiation, and iterative regularized shape deformation. The image contours are first detected by using a semiautomatic segmentation tool based on the Livewire algorithm and then approximated by a set of sparse dominant points that are adaptively sampled from the detected contours. The unknown scales of the reconstructed models were estimated by performing a surface-based 3D/3D matching between the reconstructed models and the associated ground truth models that were derived from a CT-based reconstruction method. Such a matching also allowed for computing the errors between the reconstructed models and the associated ground truth models. Results: The technique could reconstruct the surface models of all 14 pelvises directly from the landmark
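
    The statistical instantiation stage described above can be sketched as a regularized linear solve for the SSM mode weights given a sparse set of matched points; the variable shapes and the regularization weight are hypothetical.

```python
import numpy as np

def ssm_instantiate(mean_shape, modes, evals, targets, target_idx, lam=0.1):
    """Fit mode weights b so that mean_shape + modes @ b best matches sparse
    target points (e.g. back-projected contour points), with a Mahalanobis
    regulariser keeping b statistically plausible.
    mean_shape: (3N,); modes: (3N, m); evals: (m,) eigenvalues;
    targets: (k, 3) points matched to model vertices target_idx (k,)."""
    rows = np.concatenate([[3 * i, 3 * i + 1, 3 * i + 2] for i in target_idx])
    Phi = modes[rows]                               # modes at matched vertices
    d = targets.ravel() - mean_shape[rows]          # residual to explain
    lhs = Phi.T @ Phi + lam * np.diag(1.0 / evals)  # regularised normal eqs
    b = np.linalg.solve(lhs, Phi.T @ d)
    return mean_shape + modes @ b                   # instantiated surface
```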

  9. Model-Based Photoacoustic Image Reconstruction using Compressed Sensing and Smoothed L0 Norm

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Nasiriavanaki, Mohammadreza; Orooji, Mahdi

    2018-01-01

    Photoacoustic imaging (PAI) is a novel medical imaging modality that combines the spatial resolution of ultrasound imaging with the high contrast of pure optical imaging. Analytical algorithms are usually employed to reconstruct the photoacoustic (PA) images owing to their simple implementation; however, they provide images of low accuracy. Model-based (MB) algorithms are used to improve the image quality and accuracy, while a large number of transducers and data acquisition a...
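
    The smoothed-L0 idea named in the title replaces the l0 count with a Gaussian surrogate that is gradually annealed; a compact generic SL0 sketch in the style of Mohimani et al., not the authors' tuned implementation:

```python
import numpy as np

def sl0(A, b, sigma_min=1e-3, sigma_decay=0.7, mu=2.0, inner=3):
    """Smoothed-L0 sparse recovery: approximate ||x||_0 by
    n - sum_i exp(-x_i^2 / (2 sigma^2)), descend on the surrogate while
    projecting back onto {x : A x = b}, and anneal sigma downwards."""
    A_pinv = np.linalg.pinv(A)
    x = A_pinv @ b                          # minimum-l2 feasible start
    sigma = 2.0 * np.max(np.abs(x))
    while sigma > sigma_min:
        for _ in range(inner):
            delta = x * np.exp(-x**2 / (2.0 * sigma**2))
            x = x - mu * delta              # steepest descent on surrogate
            x = x - A_pinv @ (A @ x - b)    # project onto the data constraint
        sigma *= sigma_decay
    return x
```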

  10. An Operational Implementation of a CBRN Sensor-Driven Modeling Paradigm for Stochastic Event Reconstruction

    2010-05-01

    ...operational capability for source reconstruction, coupled to the integrated multi-scale urban modelling system implemented in the computing infrastructure... or urbanLS, respectively. However, for the Bayesian inversion of concentration data to be practical, fast and efficient techniques are required for... sensitivity and/or uncertainty analysis methods that have been used to quantify and reduce them. In this paper, all the various error contributions to

  11. Casting the Coronal Magnetic Field Reconstruction Tools in 3D Using the MHD Bifrost Model

    Fleishman, Gregory D.; Loukitcheva, Maria [Physics Department, Center for Solar-Terrestrial Research, New Jersey Institute of Technology Newark, NJ, 07102-1982 (United States); Anfinogentov, Sergey; Mysh’yakov, Ivan [Institute of Solar-Terrestrial Physics (ISZF), Lermontov st., 126a, Irkutsk, 664033 (Russian Federation); Stupishin, Alexey [Saint Petersburg State University, 7/9 Universitetskaya nab., St. Petersburg, 199034 (Russian Federation)

    2017-04-10

    Quantifying the coronal magnetic field remains a central problem in solar physics. Nowadays, the coronal magnetic field is often modeled using nonlinear force-free field (NLFFF) reconstructions, whose accuracy has not yet been comprehensively assessed. Here we perform a detailed casting of the NLFFF reconstruction tools, such as π-disambiguation, photospheric field preprocessing, and volume reconstruction methods, using a 3D snapshot of the publicly available full-fledged radiative MHD model. Specifically, from the MHD model, we know the magnetic field vector in the entire 3D domain, which enables us to perform a “voxel-by-voxel” comparison of the restored and the true magnetic fields in the 3D model volume. Our tests show that the available π-disambiguation methods often fail in the quiet-Sun areas dominated by small-scale magnetic elements, while they work well in the active region (AR) photosphere and (even better) chromosphere. The preprocessing of the photospheric magnetic field, although it does produce a more force-free boundary condition, also results in some effective “elevation” of the magnetic field components. This “elevation” height is different for the longitudinal and transverse components, which results in a systematic error in absolute heights in the reconstructed magnetic data cube. The extrapolations performed starting from the actual AR photospheric magnetogram are free from this systematic error, while other metrics are comparable with those for extrapolations from the preprocessed magnetograms. This finding favors the use of extrapolations from the original photospheric magnetogram without preprocessing. Our tests further suggest that extrapolations from a force-free chromospheric boundary produce measurably better results than those from a photospheric boundary.
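
    A voxel-by-voxel comparison of a restored field cube against the known model field reduces to simple aggregate metrics. A minimal sketch of one standard choice, the vector correlation (the paper's full metric set may differ):

      import numpy as np

      def vector_correlation(B_true, B_rec):
          """Global vector correlation of two fields of shape (nx, ny, nz, 3);
          returns 1.0 for a perfect reconstruction."""
          num = np.sum(B_true * B_rec)
          den = np.sqrt(np.sum(B_true ** 2) * np.sum(B_rec ** 2))
          return num / den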

  12. Weighted regularized statistical shape space projection for breast 3D model reconstruction.

    Ruiz, Guillermo; Ramon, Eduard; García, Jaime; Sukno, Federico M; Ballester, Miguel A González

    2018-05-02

    The use of 3D imaging has increased as a practical and useful tool for plastic and aesthetic surgery planning. Specifically, the possibility of representing the patient's breast anatomy as a 3D shape and simulating aesthetic or plastic procedures is a great tool for communication between surgeon and patient during surgery planning. For the purpose of obtaining the specific 3D model of the breast of a patient, model-based reconstruction methods can be used. In particular, 3D morphable models (3DMM) are a robust and widely used method to perform 3D reconstruction. However, if additional prior information (i.e., known landmarks) is combined with the 3DMM statistical model, shape constraints can be imposed to improve the 3DMM fitting accuracy. In this paper, we present a framework to fit a 3DMM of the breast to two possible inputs: 2D photos and 3D point clouds (scans). Our method consists of a Weighted Regularized (WR) projection into the shape space. The contribution of each point in the 3DMM shape is weighted, allowing more relevance to be assigned to those points that we want to impose as constraints. Our method is applied at multiple stages of the 3D reconstruction process. Firstly, it can be used to obtain a 3DMM initialization from a sparse set of 3D points. Additionally, we embed our method in the 3DMM fitting process, in which more reliable or already known 3D points or regions of points can be weighted in order to preserve their shape information. The proposed method has been tested in two different input settings, scans and 2D pictures, assessing both reconstruction frameworks with very positive results.
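
    The WR projection amounts to a weighted ridge solve in the PCA shape space: heavily weighted coordinates behave like constraints, and the regularizer keeps the coefficients plausible. A minimal sketch (matrix names are illustrative, not the paper's notation):

      import numpy as np

      def wr_projection(Phi, mean, x, w, lam=1.0):
          """Weighted regularized projection of observed coordinates x into a
          shape space with basis Phi (3N x K) and mean shape mean (3N,).
          w holds per-coordinate weights; lam is the regularization weight."""
          W = np.diag(w)
          A = Phi.T @ W @ Phi + lam * np.eye(Phi.shape[1])
          b = np.linalg.solve(A, Phi.T @ W @ (x - mean))
          return mean + Phi @ b                 # reconstructed shape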

  13. A two-dimensional model with three regions for the reflooding study

    Motta, A.M.T.; Kinrys, S.; Roberty, N.C.; Carmo, E.G.D. do; Oliveira, L.F.S. de.

    1983-02-01

    A two-dimensional semi-analytical model, with three heat transfer regions, is described for the calculation of the flood ratio, the length of the quenching front and the temperature distribution in the cladding. (E.G.) [pt

  15. Image quality of iterative reconstruction in cranial CT imaging: comparison of model-based iterative reconstruction (MBIR) and adaptive statistical iterative reconstruction (ASiR)

    Notohamiprodjo, S.; Deak, Z.; Meurer, F.; Maertz, F.; Mueck, F.G.; Geyer, L.L.; Wirth, S. [Ludwig-Maximilians University Hospital of Munich, Institute for Clinical Radiology, Munich (Germany)

    2015-01-15

    The purpose of this study was to compare cranial CT (CCT) image quality (IQ) of the MBIR algorithm with standard iterative reconstruction (ASiR). In this institutional review board (IRB)-approved study, raw data sets of 100 unenhanced CCT examinations (120 kV, 50-260 mAs, 20 mm collimation, 0.984 pitch) were reconstructed with both ASiR and MBIR. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated from attenuation values measured in the caudate nucleus, frontal white matter, anterior ventricle horn, fourth ventricle, and pons. Two radiologists, who were blinded to the reconstruction algorithms, evaluated anonymized 2.5 mm multiplanar reformations with respect to the depiction of different parenchymal structures and the impact of artefacts on IQ with a five-point scale (0: unacceptable, 1: less than average, 2: average, 3: above average, 4: excellent). MBIR decreased artefacts more effectively than ASiR (p < 0.01). The median depiction score for MBIR was 3, whereas the median value for ASiR was 2 (p < 0.01). SNR and CNR were significantly higher with MBIR than with ASiR (p < 0.01). MBIR showed significant improvement of IQ parameters compared to ASiR. As CCT is a frequently required examination, the use of MBIR may allow for a substantial reduction of the radiation exposure caused by medical diagnostics. (orig.)

  17. Reconstructing exposures from biomarkers using exposure-pharmacokinetic modeling--A case study with carbaryl.

    Brown, Kathleen; Phillips, Martin; Grulke, Christopher; Yoon, Miyoung; Young, Bruce; McDougall, Robin; Leonard, Jeremy; Lu, Jingtao; Lefew, William; Tan, Yu-Mei

    2015-12-01

    Sources of uncertainty involved in exposure reconstruction for short half-life chemicals were characterized using computational models that link external exposures to biomarkers. Using carbaryl as an example, an exposure model, the Cumulative and Aggregate Risk Evaluation System (CARES), was used to generate time-concentration profiles for 500 virtual individuals exposed to carbaryl. These exposure profiles were used as inputs into a physiologically based pharmacokinetic (PBPK) model to predict urinary biomarker concentrations. These matching dietary intake levels and biomarker concentrations were used to (1) compare three reverse dosimetry approaches based on their ability to predict the central tendency of the intake dose distribution; and (2) identify parameters necessary for a more accurate exposure reconstruction. This study illustrates the trade-offs between non-iterative reverse dosimetry methods, which are fast but less precise, and iterative methods, which are slow but more precise. This study also indicates the necessity of including urine flow rate and elapsed time between last dose and urine sampling as part of the biomarker sample collection for better interpretation of urinary biomarker data for chemicals with short biological half-lives. Resolution of these critical data gaps can allow exposure reconstruction methods to better predict population-level intake doses from large biomonitoring studies.

  18. Quality Analysis on 3D Building Models Reconstructed from UAV Imagery

    Jarzabek-Rychard, M.; Karpina, M.

    2016-06-01

    Recent developments in UAV technology and structure-from-motion techniques have made UAVs standard platforms for 3D data collection. Because of their flexibility and their ability to reach inaccessible urban areas, drones appear to be an optimal solution for urban applications. Building reconstruction from data collected with UAVs has important potential to reduce the labour cost of rapidly updating already reconstructed 3D cities. However, especially for updating existing scenes derived from different sensors (e.g. airborne laser scanning), a proper quality assessment is necessary. The objective of this paper is thus to evaluate the potential of UAV imagery as an information source for automatic 3D building modeling at LOD2. The investigation is conducted in three steps: (1) comparing the generated SfM point cloud to ALS data; (2) computing internal consistency measures of the reconstruction process; (3) analysing the deviation of check points identified on building roofs and measured with a tacheometer. In order to gain deep insight into the modeling performance, various quality indicators are computed and analysed. The assessment performed against the ground truth shows that the building models acquired with UAV photogrammetry have an accuracy of better than 18 cm for the planimetric position and about 15 cm for the height component.

  20. [Research progress of three-dimensional digital model for repair and reconstruction of knee joint].

    Tong, Lu; Li, Yanlin; Hu, Meng

    2013-01-01

    To review recent advances in the application and research of three-dimensional digital knee models. Recent original articles about three-dimensional digital knee models were extensively reviewed and analyzed. The digital three-dimensional knee model can simulate the complex anatomical structure of the knee very well. On this basis, new software and techniques have been developed, and good clinical results have been achieved. With the development of computer techniques and software, the knee repair and reconstruction procedure has been improved; the operation will become simpler and its accuracy will be further improved.

  1. Mid- and long-term runoff predictions by an improved phase-space reconstruction model

    Hong, Mei; Wang, Dong; Wang, Yuankun; Zeng, Xiankui; Ge, Shanshan; Yan, Hengqian; Singh, Vijay P.

    2016-01-01

    In recent years, the phase-space reconstruction method has usually been used for mid- and long-term runoff predictions. However, the traditional phase-space reconstruction method still needs to be improved. Using the genetic algorithm to improve the phase-space reconstruction method, a new nonlinear model of monthly runoff is constructed. The new model does not rely heavily on embedding dimensions. Recognizing that the rainfall–runoff process is complex, affected by a number of factors, more variables (e.g. temperature and rainfall) are incorporated in the model. In order to detect the possible presence of chaos in the runoff dynamics, the chaotic characteristics of the model are also analyzed, which shows that the model can represent the nonlinear and chaotic characteristics of the runoff. The model is tested for its forecasting performance in four types of experiments using data from six hydrological stations on the Yellow River and the Yangtze River. Results show that the medium- and long-term runoff is satisfactorily forecasted at the hydrological stations. Not only is the forecasting trend accurate, but the mean absolute percentage error is also no more than 15%. Moreover, the forecast results for wet years and dry years are both good, which means that the improved model can overcome the traditional “wet years and dry years predictability barrier,” to some extent. The model forecasts for different regions are all good, showing the universality of the approach. Compared with selected conceptual and empirical methods, the model exhibits greater reliability and stability in long-term runoff prediction. Our study provides a new way of thinking for research on the association between monthly runoff and other hydrological factors, and also provides a new method for the prediction of monthly runoff. - Highlights: • The improved phase-space reconstruction model of monthly runoff is established. • Two variables (temperature and rainfall) are incorporated

  2. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    in the different areas of the brain when noise is present. Results Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i'th forward field associated with the true source and the j'th forward field in the experimental forward...... representation of the signal. Conclusions This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  3. Image reconstruction method for electrical capacitance tomography based on the combined series and parallel normalization model

    Dong, Xiangyuan; Guo, Shuqing

    2008-01-01

    In this paper, a novel image reconstruction method for electrical capacitance tomography (ECT) based on the combined series and parallel model is presented. A regularization technique is used to obtain a stabilized solution of the inverse problem, and the adaptive coefficient of the combined model is deduced by numerical optimization. Simulation results indicate that it can produce higher-quality images than algorithms based on the parallel or series models alone for the cases tested in this paper. It provides a new algorithm for ECT applications.
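
    A minimal sketch of the two ingredients, the combined series/parallel normalization and a Tikhonov-regularized one-step reconstruction; here the combination coefficient alpha is a fixed input rather than the adaptively optimized value of the paper:

      import numpy as np

      def combined_normalization(C, C_low, C_high, alpha=0.5):
          """Normalized capacitance as a convex mix of the parallel model
          (linear in C) and the series model (linear in 1/C)."""
          lam_parallel = (C - C_low) / (C_high - C_low)
          lam_series = (1.0 / C - 1.0 / C_low) / (1.0 / C_high - 1.0 / C_low)
          return alpha * lam_parallel + (1.0 - alpha) * lam_series

      def tikhonov_reconstruct(S, lam_meas, mu=1e-2):
          """One-step regularized image g from sensitivity matrix S and
          normalized capacitances lam_meas."""
          K = S.shape[1]
          return np.linalg.solve(S.T @ S + mu * np.eye(K), S.T @ lam_meas)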

  5. Hierarchical model generation for architecture reconstruction using laser-scanned point clouds

    Ning, Xiaojuan; Wang, Yinghui; Zhang, Xiaopeng

    2014-06-01

    Architecture reconstruction using a terrestrial laser scanner is a prevalent and challenging research topic. We introduce an automatic, hierarchical architecture generation framework that produces the full geometry of an architectural structure based on a novel combination of facade structure detection, detailed window propagation, and hierarchical model consolidation. Our method highlights the automatic generation of geometric models that fit the design information of the architecture from sparse, incomplete, and noisy point clouds. First, the planar regions detected in the raw point clouds are interpreted as three-dimensional clusters. Then, the boundary of each region, extracted by projecting the points into its corresponding two-dimensional plane, is classified to obtain detailed shape structure elements (e.g., windows and doors). Finally, a polyhedron model is generated by calculating the proposed local structure model, consolidated structure model, and detailed window model. Experiments on modeling scanned real-life buildings demonstrate the advantages of our method: the reconstructed models not only correspond accurately to the architectural design information, but also satisfy the requirements for visualization and analysis.
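
    Planar-region detection, the first stage above, is commonly done with RANSAC; a minimal single-plane sketch (thresholds illustrative; the paper's detector may differ in detail):

      import numpy as np

      def ransac_plane(points, n_iters=500, tol=0.02, seed=0):
          """Return an inlier mask for the dominant plane in an (N, 3) cloud."""
          rng = np.random.default_rng(seed)
          best_inliers = np.zeros(len(points), dtype=bool)
          for _ in range(n_iters):
              p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
              normal = np.cross(p1 - p0, p2 - p0)
              norm = np.linalg.norm(normal)
              if norm < 1e-12:
                  continue                      # degenerate (collinear) sample
              normal /= norm
              inliers = np.abs((points - p0) @ normal) < tol
              if inliers.sum() > best_inliers.sum():
                  best_inliers = inliers
          return best_inliers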

  6. Image quality in children with low-radiation chest CT using adaptive statistical iterative reconstruction and model-based iterative reconstruction.

    Jihang Sun

    OBJECTIVE: To evaluate noise reduction and image quality improvement in low-radiation-dose chest CT images in children using adaptive statistical iterative reconstruction (ASIR) and a full model-based iterative reconstruction (MBIR) algorithm. METHODS: Forty-five children (age ranging from 28 days to 6 years, median of 1.8 years) who received low-dose chest CT scans were included. An age-dependent noise index (NI) was used for acquisition. Images were retrospectively reconstructed using three methods: MBIR, a blend of 60% ASIR and 40% conventional filtered back-projection (FBP), and FBP. The subjective quality of the images was independently evaluated by two radiologists. Objective noise in the left ventricle (LV), muscle, fat, descending aorta and lung field at the layer with the largest cross-sectional area of the LV was measured, with the region of interest about one fourth to half of the area of the descending aorta. The optimized signal-to-noise ratio (SNR) was calculated. RESULTS: In terms of subjective quality, MBIR images were significantly better than ASIR and FBP in image noise and visibility of tiny structures, but blurred edges were observed. In terms of objective noise, MBIR and ASIR reconstruction decreased the image noise by 55.2% and 31.8%, respectively, for the LV compared with FBP. Similarly, MBIR and ASIR reconstruction increased the SNR by 124.0% and 46.2%, respectively, compared with FBP. CONCLUSION: Compared with FBP and ASIR, overall image quality and noise reduction were significantly improved by MBIR. MBIR can reconstruct acceptable chest CT images in children at a lower radiation dose.

  7. 4D-PET reconstruction using a spline-residue model with spatial and temporal roughness penalties

    Ralli, George P.; Chappell, Michael A.; McGowan, Daniel R.; Sharma, Ricky A.; Higgins, Geoff S.; Fenwick, John D.

    2018-05-01

    4D reconstruction of dynamic positron emission tomography (dPET) data can improve the signal-to-noise ratio in reconstructed image sequences by fitting smooth temporal functions to the voxel time-activity-curves (TACs) during the reconstruction, though the optimal choice of function remains an open question. We propose a spline-residue model, which describes TACs as weighted sums of convolutions of the arterial input function with cubic B-spline basis functions. Convolution with the input function constrains the spline-residue model at early time-points, potentially enhancing noise suppression in early time-frames, while still allowing a wide range of TAC descriptions over the entire imaged time-course, thus limiting bias. Spline-residue based 4D-reconstruction is compared to that of a conventional (non-4D) maximum a posteriori (MAP) algorithm, and to 4D-reconstructions based on adaptive-knot cubic B-splines, the spectral model and an irreversible two-tissue compartment (‘2C3K’) model. 4D reconstructions were carried out using a nested-MAP algorithm including spatial and temporal roughness penalties. The algorithms were tested using Monte-Carlo simulated scanner data, generated for a digital thoracic phantom with uptake kinetics based on a dynamic [18F]-Fluoromisonidazole scan of a non-small cell lung cancer patient. For every algorithm, parametric maps were calculated by fitting each voxel TAC within a sub-region of the reconstructed images with the 2C3K model. Compared to conventional MAP reconstruction, spline-residue-based 4D reconstruction achieved >50% improvements for five of the eight combinations of the four kinetics parameters for which parametric maps were created with the bias and noise measures used to analyse them, and produced better results for 5/8 combinations than any of the other reconstruction algorithms studied, while spectral model-based 4D reconstruction produced the best results for 2/8. 2C3K model-based 4D reconstruction generated
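
    The spline-residue TAC model itself is compact: each basis residue is the convolution of the arterial input function with a cubic B-spline. A minimal sketch with SciPy (the uniform time grid and the variable names are assumptions):

      import numpy as np
      from scipy.interpolate import BSpline

      def spline_residue_tac(t, aif, knots, weights):
          """TAC(t) = sum_i w_i * (AIF * B_i)(t) for cubic B-spline bases B_i."""
          dt = t[1] - t[0]
          k = 3                                  # cubic splines
          n_basis = len(knots) - k - 1
          tac = np.zeros_like(t)
          for i in range(n_basis):
              coeff = np.zeros(n_basis)
              coeff[i] = 1.0
              basis = BSpline(knots, coeff, k, extrapolate=False)(t)
              basis = np.nan_to_num(basis)       # zero outside the spline support
              tac += weights[i] * np.convolve(aif, basis)[: len(t)] * dt
          return tac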

  8. Bayesian image reconstruction in SPECT using higher order mechanical models as priors

    Lee, S.J.; Gindi, G.; Rangarajan, A.

    1995-01-01

    While the ML-EM (maximum-likelihood-expectation maximization) algorithm for reconstruction for emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model--the weak plate--which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques
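
    The piecewise-linear preference can be written as a Gibbs energy on second differences, which cost nothing on constant regions and on linear ramps alike. A minimal sketch of that energy, without the edge-preserving line processes of the full weak plate:

      import numpy as np

      def plate_energy(f, beta=1.0):
          """Quadratic plate energy of a 2D image f (second-difference penalty)."""
          fxx = f[:, 2:] - 2 * f[:, 1:-1] + f[:, :-2]
          fyy = f[2:, :] - 2 * f[1:-1, :] + f[:-2, :]
          fxy = (f[1:, 1:] - f[1:, :-1]) - (f[:-1, 1:] - f[:-1, :-1])
          return beta * (np.sum(fxx**2) + np.sum(fyy**2) + 2 * np.sum(fxy**2))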

  9. Implementation of an object oriented track reconstruction model into multiple LHC experiments

    Gaines, Irwin; Gonzalez, Saul; Qian, Sijin

    2001-10-01

    An Object Oriented (OO) model (Gaines et al., 1996; 1997; Gaines and Qian, 1998; 1999) for track reconstruction by the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented into the OO computing environments of both the CMS (1994) and ATLAS (1994) experiments at the future Large Hadron Collider (LHC) at CERN. We shall report: how the OO model was adapted, with largely the same code, to different scenarios and serves different reconstruction aims in different experiments (i.e. the level-2 trigger software for ATLAS and the offline software for CMS); how the OO model has been incorporated into different OO environments with a similar integration structure (demonstrating the ease of re-use of OO programs); what the OO model's performance is, including execution time, memory usage, track-finding efficiency, ghost rate, etc.; and the additional physics performance based on use of the OO tracking model. We shall also mention the experience and lessons learned from the implementation of the OO model into the general OO software frameworks of the experiments. In summary, our practice shows that OO technology really makes software development and integration straightforward and convenient; this may be particularly beneficial for non-computer-professional physicists.
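
    Independent of the OO packaging, the numerical core is the Kalman filter recursion over detector layers. A minimal sketch for a straight track in one projection, with position-only hits (all matrices and values are illustrative):

      import numpy as np

      F = lambda dz: np.array([[1.0, dz], [0.0, 1.0]])  # propagate (pos, slope) over dz
      H = np.array([[1.0, 0.0]])                        # layers measure position only
      Q = np.diag([1e-6, 1e-6])                         # process noise (scattering)
      R = np.array([[0.01 ** 2]])                       # hit resolution, squared

      def kalman_step(x, P, z, dz):
          """One predict-plus-update step of the track fit at the next layer."""
          x, P = F(dz) @ x, F(dz) @ P @ F(dz).T + Q     # predict to the layer
          S = H @ P @ H.T + R                           # innovation covariance
          K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
          x = x + (K @ (z - H @ x)).ravel()             # update with the hit z
          P = (np.eye(2) - K @ H) @ P
          return x, P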

  10. Improved convergence of gradient-based reconstruction using multi-scale models

    Cunningham, G.S.; Hanson, K.M.; Koyfman, I.

    1996-01-01

    Geometric models have received increasing attention in medical imaging for tasks such as segmentation, reconstruction, restoration, and registration. In order to determine the best configuration of the geometric model in the context of any of these tasks, one needs to perform a difficult global optimization of an energy function that may have many local minima. Explicit models of geometry, also called deformable models, snakes, or active contours, have been used extensively to solve image segmentation problems in a non-Bayesian framework. Researchers have seen empirically that multi-scale analysis is useful for convergence to a configuration that is near the global minimum. In this type of analysis, the image data are convolved with blur functions of increasing resolution, and an optimal configuration of the snake is found for each blurred image. The configuration obtained using the highest resolution blur is used as the solution to the global optimization problem. In this article, the authors use explicit models of geometry for a variety of Bayesian estimation problems, including image segmentation, reconstruction and restoration. The authors introduce a multi-scale approach that blurs the geometric model, rather than the image data, and show that this approach turns a global, highly nonquadratic optimization into a sequence of local, approximately quadratic problems that converge to the global minimum. The result is a deterministic, robust, and efficient optimization strategy applicable to a wide variety of Bayesian estimation problems in which geometric models of images are an important component

  11. Reconstruction of daily erythemal UV radiation values for the last century - The benefit of modelled ozone

    Junk, J.; Feister, U.; Rozanov, E.; Krzyścin, J. W.

    2013-05-01

    Solar erythemal UV radiation (UVER) is highly relevant for numerous biological processes that affect plants, animals, and human health. Nevertheless, long-term UVER records are scarce. As significant declines in the column ozone concentration were observed in the past and a recovery of the stratospheric ozone layer is anticipated by the middle of the 21st century, there is strong interest in the temporal variation of UVER time series. Therefore, we combined ground-based measurements of different meteorological variables with modeled ozone data sets to reconstruct time series of daily totals of UVER at the Meteorological Observatory Potsdam, Germany. Artificial neural networks were trained with measured UVER, sunshine duration, the day of year, measured and modeled total column ozone, as well as the minimum solar zenith angle. This allows for the reconstruction of daily totals of UVER for the period from 1901 to 1999. Additionally, analyses of the long-term variations from 1901 until 1999 of the reconstructed, new UVER data set are presented. The time series of monthly and annual totals of UVER provide a long-term meteorological basis for epidemiological investigations in human health and occupational medicine for the region of Potsdam and Berlin. A strong benefit of our ANN approach is the fact that it can be easily adapted to different geographical locations, as successfully tested in the framework of COST Action 726.
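
    A minimal sketch of such an ANN regression with scikit-learn, using placeholder arrays in place of the Potsdam records (the original network architecture is not specified here):

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      # Columns: sunshine duration, day of year, total column ozone, minimum SZA.
      X_train = np.random.rand(1000, 4)          # placeholder predictors
      y_train = np.random.rand(1000)             # placeholder daily UVER totals

      model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      model.fit(X_train, y_train)

      # Reconstruction: the same predictors for pre-instrument years, with
      # modeled ozone standing in for the missing measured column.
      X_past = np.random.rand(365, 4)
      uver_reconstructed = model.predict(X_past)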

  12. From GCode to STL: Reconstruct Models from 3D Printing as a Service

    Baumann, Felix W.; Schuermann, Martin; Odefey, Ulrich; Pfeil, Markus

    2017-12-01

    The authors present a method to reverse engineer 3D-printer-specific machine instructions (GCode) to a point cloud representation and then an STL (stereolithography) file. GCode is a machine code that is used for 3D printing among other applications, such as CNC routers. Such code files contain instructions for the 3D printer to move and control its actuator, which in the case of Fused Deposition Modeling (FDM) is the printhead that extrudes semi-molten plastic. The reverse engineering method presented here is based on a digital simulation of the extrusion process of FDM-type 3D printing. The reconstructed models and point clouds do not account for hollow structures, such as holes or cavities. The implementation is performed in Python and relies on open source software and libraries, such as Matplotlib and OpenCV. The reconstruction is performed on the model's extrusion boundary and considers mechanical imprecision. The complete reconstruction mechanism is available as a RESTful (Representational State Transfer) Web service.
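
    A minimal sketch of the first step, recovering deposited-track points from GCode moves; handling only absolute coordinates and G0/G1 moves is a simplifying assumption:

      import re

      def gcode_to_points(lines):
          """Collect (x, y, z) points along moves that extrude material
          (E increases), skipping pure travel moves."""
          pos = {"X": 0.0, "Y": 0.0, "Z": 0.0, "E": 0.0}
          points = []
          for line in lines:
              parts = line.split()
              if not parts or parts[0] not in ("G0", "G1"):
                  continue
              last_e = pos["E"]
              for axis, value in re.findall(r"([XYZE])([-+]?[0-9]*\.?[0-9]+)", line):
                  pos[axis] = float(value)
              if pos["E"] > last_e:              # extruding move
                  points.append((pos["X"], pos["Y"], pos["Z"]))
          return points

      print(gcode_to_points(["G1 X10 Y0 Z0.2 E1.0", "G0 X20 Y5", "G1 X20 Y10 E2.0"]))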

  13. Semi-analytical study of the tokamak pedestal density profile in a single-null diverted plasma with puffing-recycling gas sources

    Shi, Bingren

    2010-10-01

    The tokamak pedestal density structure is generally studied using a diffusion-dominant model. Recent investigations (Stacey and Groebner 2009 Phys. Plasmas 16 102504) from first principle based physics have shown a plausible existence of large inward convection in the pedestal region. The diffusion-convection equation with rapidly varying convection and diffusion coefficients in the near edge region and model puffing-recycling neutral particles is studied in this paper. A peculiar property of its solution for the existence of the large convection case is that the pedestal width of the density profile, qualitatively different from the diffusion-dominant case, depends mainly on the width of the inward convection and only weakly on the neutral penetration length and its injection position.
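
    A minimal finite-difference sketch of the underlying particle balance, d/dx(-D n' + V n) = S(x), with an inward (negative) pinch and an edge-localized source; all profiles and coefficients below are illustrative placeholders, not the paper's values:

      import numpy as np

      L, N = 0.05, 500                              # 5 cm edge region, grid points
      x = np.linspace(0.0, L, N)
      dx = x[1] - x[0]
      D = 0.5 * np.ones(N)                          # diffusivity (m^2/s)
      V = -20.0 * np.exp(-(L - x) / 0.005)          # inward pinch near the separatrix
      S = 1e21 * np.exp(-(L - x) / 0.01)            # puffing-recycling ionization source

      n = np.zeros(N)
      n[0] = 1e19                                   # core-side density (m^-3)
      Gamma = np.cumsum(S) * dx                     # steady state: Gamma(x) = int_0^x S
      for i in range(N - 1):                        # flux: Gamma = -D n' + V n
          n[i + 1] = n[i] + dx * (V[i] * n[i] - Gamma[i]) / D[i]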

  15. Reconstruction of binary geological images using analytical edge and object models

    Abdollahifard, Mohammad J.; Ahmadi, Sadegh

    2016-04-01

    Reconstruction of fields from partial measurements is of vital importance in different applications in the geosciences. Solving such an ill-posed problem requires a well-chosen model. In recent years, training images (TIs) have been widely employed as strong prior models for solving these problems. However, in the absence of enough evidence it is difficult to find an adequate TI that is capable of describing the field behavior properly. In this paper a very simple and general model is introduced which is applicable to a fairly wide range of binary images without any modification. The model is motivated by the fact that nearly all binary images are composed of simple linear edges at the micro-scale. The analytic essence of this model allows us to formulate the template matching problem as a convex optimization problem having efficient and fast solutions. The model has the potential to incorporate the qualitative and quantitative information provided by geologists. The image reconstruction problem is also formulated as an optimization problem and solved using an iterative greedy approach. The proposed method is capable of recovering the unknown image values with accuracies of about 90% given samples representing as few as 2% of the original image.

  16. 3D RECONSTRUCTION OF A MULTISCALE MICROSTRUCTURE BY ANISOTROPIC TESSELLATION MODELS

    Hellen Altendorf

    2014-05-01

    In the area of tessellation models, there is intense activity aimed at fully understanding the classical models of Voronoi, Laguerre and Johnson-Mehl. Still, these models are all simulations of isotropic growth and are therefore limited to very simple and partly convex cell shapes. The microstructure of martensitic steel considered here has much more complex and highly non-convex cell shapes, requiring new tessellation models. This paper presents a new approach for anisotropic tessellation models that reduce to the well-studied cases of Laguerre and Johnson-Mehl for spherical germs. Much better reconstructions can be achieved with these models, and thus more realistic microstructure simulations can be produced for materials widely used in industry, such as martensitic and bainitic steels.

  17. Reconstruction of genome-scale human metabolic models using omics data

    Ryu, Jae Yong; Kim, Hyun Uk; Lee, Sang Yup

    2015-01-01

    used to describe metabolic phenotypes of healthy and diseased human tissues and cells, and to predict therapeutic targets. Here we review recent trends in genome-scale human metabolic modeling, including various generic and tissue/cell type-specific human metabolic models developed to date, and methods......, databases and platforms used to construct them. For generic human metabolic models, we pay attention to Recon 2 and HMR 2.0 with emphasis on data sources used to construct them. Draft and high-quality tissue/cell type-specific human metabolic models have been generated using these generic human metabolic...... refined through gap filling, reaction directionality assignment and the subcellular localization of metabolic reactions. We review relevant tools for this model refinement procedure as well. Finally, we suggest the direction of further studies on reconstructing an improved human metabolic model....

  18. Micromechanical Modeling of Solid Oxide Fuel Cell Anode Supports based on Three-dimensional Reconstructions

    Kwok, Kawai; Jørgensen, Peter Stanley; Frandsen, Henrik Lund

    2014-01-01

    Ni-3YSZ in the operating temperature through numerical micromechanical modeling. Three-dimensional microstructures of Ni-3YSZ anode supports are reconstructed from a two-dimensional image stack obtained via focused ion beam tomography. Time-dependent stress distributions in the microscopic scale...... are computed by the finite element method. The macroscopic creep response of the porous anode support is determined based on homogenization theory. It is shown that micromechanical modeling provides an effective tool to study the effect of microstructures on the macroscopic properties....

  19. Improved quantitative 90Y bremsstrahlung SPECT/CT reconstruction with Monte Carlo scatter modeling.

    Dewaraja, Yuni K; Chun, Se Young; Srinivasa, Ravi N; Kaza, Ravi K; Cuneo, Kyle C; Majdalany, Bill S; Novelli, Paula M; Ljungberg, Michael; Fessler, Jeffrey A

    2017-12-01

    In 90Y microsphere radioembolization (RE), accurate post-therapy imaging-based dosimetry is important for establishing absorbed dose versus outcome relationships for developing future treatment planning strategies. Additionally, accurately assessing microsphere distributions is important because of concerns about unexpected activity deposition outside the liver. Quantitative 90Y imaging by either SPECT or PET is challenging. In 90Y SPECT, model-based methods are necessary for scatter correction because energy window-based methods are not feasible with the continuous bremsstrahlung energy spectrum. The objective of this work was to implement and evaluate a scatter estimation method for accurate 90Y bremsstrahlung SPECT/CT imaging. Since a fully Monte Carlo (MC) approach to 90Y SPECT reconstruction is computationally very demanding, in the present study the scatter estimate generated by a MC simulator was combined with an analytical projector in the 3D OS-EM reconstruction model. A single window (105-195 keV) was used for both the acquisition and the projector modeling. A liver/lung torso phantom with intrahepatic lesions and low-uptake extrahepatic objects was imaged to evaluate SPECT/CT reconstruction without and with scatter correction. Clinical application was demonstrated by applying the reconstruction approach to five patients treated with RE to determine lesion and normal liver activity concentrations using a (liver) relative calibration. There was convergence of the scatter estimate after just two updates, greatly reducing computational requirements. In the phantom study, compared with reconstruction without scatter correction, with MC scatter modeling there was substantial improvement in activity recovery in intrahepatic lesions (from > 55% to > 86%), normal liver (from 113% to 104%), and lungs (from 227% to 104%) with only a small degradation in noise (13% vs. 17%). Similarly, with scatter modeling contrast improved substantially both visually and in
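
    The reconstruction idea, an analytical projector plus a fixed Monte Carlo scatter estimate inside the OS-EM forward model, can be sketched as follows; matrices and names are illustrative rather than the authors' implementation:

      import numpy as np

      def osem_with_scatter(A_subsets, y_subsets, scatter_subsets, n_iters=5):
          """OS-EM where the expected data are A x + s, with s a fixed
          (e.g., Monte Carlo) scatter estimate per projection subset."""
          x = np.ones(A_subsets[0].shape[1])
          for _ in range(n_iters):
              for A, y, s in zip(A_subsets, y_subsets, scatter_subsets):
                  expected = A @ x + s                  # analytic projector + scatter
                  ratio = y / np.maximum(expected, 1e-12)
                  x *= (A.T @ ratio) / np.maximum(A.T @ np.ones_like(y), 1e-12)
          return x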

  20. Comparison of the image qualities of filtered back-projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction for CT venography at 80 kVp

    Kim, Jin Hyeok; Choo, Ki Seok; Moon, Tae Yong; Lee, Jun Woo; Jeon, Ung Bae; Kim, Tae Un; Hwang, Jae Yeon; Yun, Myeong-Ja; Jeong, Dong Wook; Lim, Soo Jin

    2016-01-01

    To evaluate the subjective and objective quality of computed tomography (CT) venography images at 80 kVp using model-based iterative reconstruction (MBIR) and to compare these with those of filtered back projection (FBP) and adaptive statistical iterative reconstruction (ASIR) using the same CT data sets. Forty-four patients (mean age: 56.1 ± 18.1 years) who underwent 80 kVp CT venography (CTV) for the evaluation of deep vein thrombosis (DVT) during a 4-month period were enrolled in this retrospective study. The same raw data were reconstructed using FBP, ASIR, and MBIR. Objective and subjective image analyses were performed at the inferior vena cava (IVC), femoral vein, and popliteal vein. The mean CNR of MBIR was significantly greater than those of FBP and ASIR, and images reconstructed using MBIR had significantly lower objective image noise (p < 0.001). Subjective image quality and confidence in detecting DVT with MBIR were significantly greater than with FBP and ASIR (p < 0.005), and MBIR had the lowest score for subjective image noise (p < 0.001). CTV at 80 kVp with MBIR was superior to FBP and ASIR regarding both subjective and objective image quality. (orig.)

  1. Effect of Cardiac Phases and Conductivity Inhomogeneities of the Thorax Models on ECG Lead Selection and Reconstruction

    Takano, Noriyuki

    2001-01-01

    ECG lead selection and reconstruction were investigated in the present study using ECG source-to-measurement transfer matrices computed in inhomogeneous and homogeneous conductor thorax-heart models...

  2. Connecting the dots: Semi-analytical and random walk numerical solutions of the diffusion–reaction equation with stochastic initial conditions

    Paster, Amir, E-mail: paster@tau.ac.il [Environmental Fluid Mechanics Laboratories, Dept. of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame, Notre Dame, IN (United States); School of Mechanical Engineering, Tel Aviv University, Tel Aviv, 69978 (Israel); Bolster, Diogo [Environmental Fluid Mechanics Laboratories, Dept. of Civil and Environmental Engineering and Earth Sciences, University of Notre Dame, Notre Dame, IN (United States); Benson, David A. [Hydrologic Science and Engineering, Colorado School of Mines, Golden, CO, 80401 (United States)

    2014-04-15

    We study a system with the bimolecular irreversible kinetic reaction A+B→∅, where the underlying transport of reactants is governed by diffusion, and the local reaction term is given by the law of mass action. We consider the case where the initial concentrations are given in terms of an average and a white noise perturbation. Our goal is to solve the diffusion–reaction equation which governs the system, and we tackle it with both analytical and numerical approaches. To obtain an analytical solution, we develop the equations of moments and solve them approximately. To obtain a numerical solution, we develop a grid-less Monte Carlo particle tracking approach, where diffusion is modeled by a random walk of the particles, and reaction is modeled by annihilation of particles. The probability of annihilation is derived analytically from the particles' co-location probability. We rigorously derive the relationship between the initial number of particles in the system and the amplitude of white noise represented by that number. This enables us to compare the particle simulations and the approximate analytical solution and offer an explanation of the late-time discrepancies.
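
    A minimal 1D version of such a grid-less scheme, with the per-pair annihilation probability taken proportional to the two walkers' co-location density over a time step (all parameters illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      D, k, dt, L = 1e-3, 1.0, 0.01, 1.0      # diffusivity, rate constant, step, domain
      xa = rng.uniform(0, L, 500)             # A particle positions (periodic domain)
      xb = rng.uniform(0, L, 500)             # B particle positions
      mass = L / 500                          # mass carried by each particle

      for step in range(1000):
          xa = (xa + np.sqrt(2 * D * dt) * rng.standard_normal(xa.size)) % L
          xb = (xb + np.sqrt(2 * D * dt) * rng.standard_normal(xb.size)) % L
          kill_a, kill_b = set(), set()
          for i, xi in enumerate(xa):               # O(Na*Nb); fine for a sketch
              r = np.abs(xb - xi)
              r = np.minimum(r, L - r)              # periodic distance
              v = np.exp(-r**2 / (8 * D * dt)) / np.sqrt(8 * np.pi * D * dt)
              p = k * mass * dt * v                 # co-location-based probability
              hits = [j for j in np.nonzero(rng.uniform(size=xb.size) < p)[0]
                      if j not in kill_b]
              if hits and i not in kill_a:
                  kill_a.add(i); kill_b.add(hits[0])
          xa = np.delete(xa, list(kill_a))
          xb = np.delete(xb, list(kill_b))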

  3. Utilization of reconstructed cultured human skin models as an alternative skin for permeation studies of chemical compounds

    Kano, Satoshi; 藤堂, 浩明; 杉江, 謙一; 藤本, 英哲; 中田, 圭一; 徳留, 嘉寛; 橋本, フミ惠; 杉林, 堅次

    2010-01-01

    Two reconstructed human skin models, Episkin(SM) and EpiDerm(TM), have been approved as alternative membranes for skin corrosion/irritation experiments due to their close correlation with animal skin. These reconstructed human skin models were evaluated as alternative membranes for skin permeation experiments. Seven drugs with different lipophilicities and almost the same molecular weight were used as test penetrants. Relationships were investigated between permeability coefficients (P values) of ...

  4. Sub-component modeling for face image reconstruction in video communications

    Shiell, Derek J.; Xiao, Jing; Katsaggelos, Aggelos K.

    2008-08-01

    Emerging communications trends point to streaming video as a new form of content delivery. These systems are implemented over wired systems, such as cable or ethernet, and wireless networks, cell phones, and portable game systems. These communications systems require sophisticated methods of compression and error-resilience encoding to enable communications across band-limited and noisy delivery channels. Additionally, the transmitted video data must be of high enough quality to ensure a satisfactory end-user experience. Traditionally, video compression makes use of temporal and spatial coherence to reduce the information required to represent an image. In many communications systems, the communications channel is characterized by a probabilistic model which describes the capacity or fidelity of the channel. The implication is that information is lost or distorted in the channel, and requires concealment on the receiving end. We demonstrate a generative model based transmission scheme to compress human face images in video, which has the advantages of a potentially higher compression ratio, while maintaining robustness to errors and data corruption. This is accomplished by training an offline face model and using the model to reconstruct face images on the receiving end. We propose a sub-component AAM modeling the appearance of sub-facial components individually, and show face reconstruction results under different types of video degradation using a weighted and non-weighted version of the sub-component AAM.

  5. Topology reconstruction for B-Rep modeling from 3D mesh in reverse engineering applications

    Bénière, Roseline; Subsol, Gérard; Gesquière, Gilles; Le Breton, François; Puech, William

    2012-03-01

    Nowadays, most manufactured objects are designed using CAD (Computer-Aided Design) software. Nevertheless, for visualization, data exchange or manufacturing applications, the geometric model has to be discretized into a 3D mesh composed of a finite number of vertices and edges. In some cases, the initial model may be lost or unavailable. In other cases, the 3D discrete representation may be modified, for example after a numerical simulation, and no longer corresponds to the initial model. A reverse engineering method is then required to reconstruct a 3D continuous representation from the discrete one. In previous work, we presented a new approach for 3D geometric primitive extraction. In this paper, to complete our automatic and comprehensive reverse engineering process, we propose a method to construct the topology of the retrieved object. To reconstruct a B-Rep model, a new formalism is introduced to define the adjacency relations. Then a new process is used to construct the boundaries of the object. The whole process is tested on 3D industrial meshes and brings a solution for recovering B-Rep models.
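
    A natural starting point for the adjacency relations is an edge-to-face map over the mesh; a minimal sketch (function name and data layout are illustrative):

      from collections import defaultdict

      def edge_face_adjacency(faces):
          """Map each undirected edge to the faces sharing it. In a watertight
          manifold mesh every edge gets exactly two faces; one face marks a
          boundary edge, more than two marks a non-manifold spot that the
          topology reconstruction has to resolve."""
          adjacency = defaultdict(list)
          for f_idx, (a, b, c) in enumerate(faces):
              for edge in ((a, b), (b, c), (c, a)):
                  adjacency[tuple(sorted(edge))].append(f_idx)
          return adjacency

      print(edge_face_adjacency([(0, 1, 2), (0, 2, 3)]))   # edge (0, 2) is shared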

  6. Ultrasound thermography: A new temperature reconstruction model and in vivo results

    Bayat, Mahdi; Ballard, John R.; Ebbini, Emad S.

    2017-03-01

    The recursive echo strain filter (RESF) model is presented as a new echo shift-based ultrasound temperature estimation model. The model is shown to have an infinite impulse response (IIR) filter realization of a differentiator-integrator operator. This model is then used for tracking sub-therapeutic temperature changes due to high intensity focused ultrasound (HIFU) shots in the hind limb of Copenhagen rats in vivo. In addition to the reconstruction filter, a motion compensation method is presented which takes advantage of the deformation field outside the region of interest to correct for motion errors during temperature tracking. The combination of the RESF model and the motion compensation algorithm is shown to greatly enhance the accuracy of in vivo temperature estimation using ultrasound echo shifts.

  7. Iterative model reconstruction reduces calcified plaque volume in coronary CT angiography

    Károlyi, Mihály, E-mail: mihaly.karolyi@cirg.hu [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Szilveszter, Bálint, E-mail: szilveszter.balint@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Kolossváry, Márton, E-mail: martonandko@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Takx, Richard A.P, E-mail: richard.takx@gmail.com [Department of Radiology, University Medical Center Utrecht, 100 Heidelberglaan, 3584, CX Utrecht (Netherlands); Celeng, Csilla, E-mail: celengcsilla@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Bartykowszki, Andrea, E-mail: bartyandi@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Jermendy, Ádám L., E-mail: adam.jermendy@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Panajotu, Alexisz, E-mail: panajotualexisz@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); Karády, Júlia, E-mail: karadyjulia@gmail.com [MTA-SE Cardiovascular Imaging Research Group, Heart and Vascular Center, Semmelweis University, 68. Varosmajor st, 1122, Budapest (Hungary); and others

    2017-02-15

    Objective: To assess the impact of iterative model reconstruction (IMR) on calcified plaque quantification as compared to filtered back projection reconstruction (FBP) and hybrid iterative reconstruction (HIR) in coronary computed tomography angiography (CTA). Methods: Raw image data of 52 patients who underwent 256-slice CTA were reconstructed with IMR, HIR and FBP. We evaluated qualitative and quantitative image quality parameters and quantified calcified and partially calcified plaque volumes using automated software. Results: Overall qualitative image quality significantly improved with HIR as compared to FBP, and further improved with IMR (all p < 0.01). Contrast-to-noise ratios were improved with IMR compared to HIR and FBP (51.0 [43.5–59.9], 20.3 [16.2–25.9] and 14.0 [11.2–17.7], respectively, all p < 0.01). Overall plaque volumes were lowest with IMR and highest with FBP (121.7 [79.3–168.4], 138.7 [90.6–191.7], 147.0 [100.7–183.6]). Similarly, calcified volumes (>130 HU) were decreased with IMR as compared to HIR and FBP (105.9 [62.1–144.6], 110.2 [63.8–166.6], 115.9 [81.7–164.2], respectively, all p < 0.05). High-attenuation non-calcified volumes (90–129 HU) yielded similar values with FBP and HIR (p = 0.81); however, they were lower with IMR (p < 0.05 for both). Intermediate- (30–89 HU) and low-attenuation (<30 HU) non-calcified volumes showed no significant difference (p = 0.22 and p = 0.67, respectively). Conclusions: IMR improves the image quality of coronary CTA and decreases calcified plaque volumes.

  8. Bayesian Models for Streamflow and River Network Reconstruction using Tree Rings

    Ravindranath, A.; Devineni, N.

    2016-12-01

    Water systems face non-stationary, dynamically shifting risks due to shifting societal conditions and systematic long-term variations in climate manifesting as quasi-periodic behavior on multi-decadal time scales. Water systems are thus vulnerable to long periods of wet or dry hydroclimatic conditions. Streamflow is a major component of water systems and a primary means by which water is transported to serve ecosystems' and human needs. Thus, our concern is in understanding streamflow variability. Climate variability and impacts on water resources are crucial factors affecting streamflow, and multi-scale variability increases risk to water sustainability and systems. Dam operations are necessary for collecting water brought by streamflow while maintaining downstream ecological health. Rules governing dam operations are based on streamflow records that are woefully short compared to periods of systematic variation present in the climatic factors driving streamflow variability and non-stationarity. We use hierarchical Bayesian regression methods in order to reconstruct paleo-streamflow records for dams within a basin using paleoclimate proxies (e.g. tree rings) to guide the reconstructions. The riverine flow network for the entire basin is subsequently modeled hierarchically using feeder stream and tributary flows. This is a starting point in analyzing streamflow variability and risks to water systems, and developing a scientifically-informed dynamic risk management framework for formulating dam operations and water policies to best hedge such risks. We will apply this work to the Missouri and Delaware River Basins (DRB). Preliminary results of streamflow reconstructions for eight dams in the upper DRB using standard Gaussian regression with regional tree ring chronologies give streamflow records that now span two to two and a half centuries, and modestly smoothed versions of these reconstructed flows indicate physically-justifiable trends in the time series.
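
    As a simplified stand-in for the hierarchical Bayesian regression (ordinary least squares, no pooling across dams, hypothetical data), the reconstruction step looks like this:

      import numpy as np

      rng = np.random.default_rng(1)
      # Overlap period: 80 years with both gauged flow and 3 ring-width chronologies.
      chron_overlap = rng.standard_normal((80, 3))
      log_flow = (5.0 + chron_overlap @ np.array([0.4, 0.2, 0.1])
                  + 0.1 * rng.standard_normal(80))

      X = np.column_stack([np.ones(80), chron_overlap])
      beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)   # fit on the overlap

      # Pre-instrumental period: chronologies only; predict and back-transform.
      chron_past = rng.standard_normal((200, 3))
      X_past = np.column_stack([np.ones(200), chron_past])
      flow_reconstructed = np.exp(X_past @ beta)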

  9. Advanced modeling in positron emission tomography using Monte Carlo simulations for improving reconstruction and quantification

    Stute, Simon

    2010-01-01

    Positron Emission Tomography (PET) is a medical imaging technique that plays a major role in oncology, especially using 18F-fluorodeoxyglucose. However, PET images suffer from modest spatial resolution and high noise. As a result, there is still no consensus on how tumor metabolically active volume and tumor uptake should be characterized. In the meantime, research groups keep producing new methods for such characterizations that need to be assessed. A Monte Carlo simulation-based method has been developed to produce simulated PET images of patients suffering from cancer, indistinguishable from clinical images, and for which all parameters are known. The method uses high-resolution PET images from patient acquisitions, from which the physiological heterogeneous activity distribution can be modeled. It was shown that the performance of quantification methods on such highly realistic simulated images is significantly lower and more variable than on simple phantom studies. Fourteen different quantification methods were also compared in realistic conditions using a group of such simulated patients. In addition, the proposed method was extended to simulate serial PET scans in the context of patient monitoring, including modeling of the tumor changes as well as the variability over time of non-tumoral physiological activity distribution. Monte Carlo simulations were also used to study the detection probability inside the crystals of the tomograph. A model of the crystal response was derived and included in the system matrix involved in tomographic reconstruction. The resulting reconstruction method was compared with other sophisticated methods for modeling the detector response in the image space, proposed in the literature. We demonstrated the superiority of the proposed method over equivalent approaches on simulated data, and illustrated its robustness on clinical data. For the same noise level, it is possible to reconstruct PET images offering improved spatial resolution.
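
    To make the reconstruction step concrete, a generic MLEM iteration is shown below, with the detector-response model assumed to be folded into the system matrix A (dense here for clarity). This is an illustrative sketch, not the implementation from the thesis:

    ```python
    import numpy as np

    # MLEM with a PSF-aware system matrix: conceptually A = R @ G, where G is
    # the geometric projector and R blurs each line of response by the modeled
    # crystal response. Here A is simply taken as given.
    def mlem(A, y, n_iter=50):
        x = np.ones(A.shape[1])                  # uniform initial image
        sens = A.sum(axis=0)                     # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x                         # forward projection
            ratio = y / np.maximum(proj, 1e-12)  # measured / estimated counts
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
        return x
    ```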

  10. Performance measurement of PSF modeling reconstruction (True X) on Siemens Biograph TruePoint TrueV PET/CT.

    Lee, Young Sub; Kim, Jin Su; Kim, Kyeong Min; Kang, Joo Hyun; Lim, Sang Moo; Kim, Hee-Joung

    2014-05-01

    The Siemens Biograph TruePoint TrueV (B-TPTV) positron emission tomography (PET) scanner performs 3D PET reconstruction using a system matrix with point spread function (PSF) modeling (called the True X reconstruction). PET resolution was dramatically improved with the True X method. In this study, we assessed the spatial resolution and image quality on a B-TPTV PET scanner. In addition, we assessed the feasibility of animal imaging with a B-TPTV PET and compared it with a microPET R4 scanner. Spatial resolution was measured at the center and at 8 cm offset from the center in the transverse plane with warm background activity. True X, ordered subset expectation maximization (OSEM) without PSF modeling, and filtered back-projection (FBP) reconstruction methods were used. Percent contrast (% contrast) and percent background variability (% BV) were assessed according to NEMA NU2-2007. The recovery coefficient (RC), non-uniformity, spill-over ratio (SOR), and PET imaging of the Micro Deluxe Phantom were assessed to compare the image quality of B-TPTV PET with that of the microPET R4. Spatial resolution was improved when True X reconstruction was used. RC with True X reconstruction was higher than that with the FBP method and with OSEM without PSF modeling on the microPET R4. The non-uniformity with True X reconstruction was higher than that with FBP and OSEM without PSF modeling on the microPET R4. SOR with True X reconstruction was better than that with FBP or OSEM without PSF modeling on the microPET R4. This study assessed the performance of the True X reconstruction. Spatial resolution with True X reconstruction was improved by 45 %, and its % contrast was significantly improved compared to that with the conventional OSEM without PSF modeling reconstruction algorithm. The noise level, however, was higher than with the other reconstruction algorithms. Therefore, True X reconstruction should be used with caution when quantifying PET data.

  11. Semi-analytical computation of the acoustic field of a segment of a cylindrically concave transducer in lossless and attenuating media.

    Karbeyaz, Başak Ulker; Miller, Eric L; Cleveland, Robin O

    2007-02-01

    Conventional ultrasound transducers used for medical diagnosis generally consist of linearly aligned rectangular apertures with elements that are focused in one plane. While traditional beamforming is easily accomplished with such transducers, the development of quantitative, physics-based imaging methods, such as tomography, requires an accurate, and computationally efficient, model of the field radiated by the transducer. The field can be expressed in terms of the Helmholtz-Kirchhoff integral; however, its direct numerical evaluation is a computationally intensive task. Here, a fast semi-analytical method based on Stepanishen's spatial impulse response formulation [J. Acoust. Soc. Am. 49, 1627-1638 (1971)] is developed to compute the acoustic field of a rectangular element of cylindrically concave transducers in a homogeneous medium. The pressure field, for both lossless and attenuating media, is expressed as a superposition of Bessel functions, which can be evaluated rapidly. In particular, the coefficients of the Bessel series are frequency independent and need only be evaluated once for a given transducer. A speed-up of two orders of magnitude is obtained compared to an optimized direct numerical integration. The numerical results are compared with Field II and the Fresnel approximation.
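
    Once the frequency-independent coefficients have been computed for a given transducer, evaluating the field reduces to summing a Bessel series. A generic sketch using scipy (placeholder coefficients and arguments, not the paper's actual expansion):

    ```python
    import numpy as np
    from scipy.special import jv

    # Evaluate p(r) = sum_n c_n * J_n(k*r) for precomputed coefficients c.
    # c is computed once per transducer; only k changes with frequency.
    def bessel_series_pressure(c, k, r):
        orders = np.arange(len(c))
        return sum(cn * jv(n, k * r) for n, cn in zip(orders, c))

    r = np.linspace(1e-3, 0.05, 200)          # field points (m), illustrative
    p = bessel_series_pressure(np.array([1.0, 0.4, 0.1]), k=2e3, r=r)
    ```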

  12. An analytical reconstruction model of the spread-out Bragg peak using laser-accelerated proton beams.

    Tao, Li; Zhu, Kun; Zhu, Jungao; Xu, Xiaohan; Lin, Chen; Ma, Wenjun; Lu, Haiyang; Zhao, Yanying; Lu, Yuanrong; Chen, Jia-Er; Yan, Xueqing

    2017-07-07

    With the development of laser technology, laser-driven proton acceleration provides a new method for proton tumor therapy. However, it has not been applied in practice because of the wide and decreasing energy spectrum of laser-accelerated proton beams. In this paper, we propose an analytical model to reconstruct the spread-out Bragg peak (SOBP) using laser-accelerated proton beams. Firstly, we present a modified weighting formula for protons of different energies. Secondly, a theoretical model for the reconstruction of SOBPs with laser-accelerated proton beams has been built. It can quickly calculate the number of laser shots needed for each energy interval of the laser-accelerated protons. Finally, we show the 2D reconstruction results of SOBPs for laser-accelerated proton beams and the ideal situation. The final results show that our analytical model can give an SOBP reconstruction scheme that can be used for actual tumor therapy.
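
    For intuition, SOBP weighting can be posed as a nonnegative least-squares problem over pristine peaks. The sketch below uses toy Gaussian peaks rather than the authors' modified weighting formula for laser-accelerated spectra; all parameters are illustrative:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Find nonnegative weights of pristine Bragg peaks that produce a flat
    # dose over the target depth interval.
    def sobp_weights(depth, ranges, width=0.3):
        # Toy pristine peaks: narrow Gaussians centred at each beam range.
        D = np.stack([np.exp(-0.5 * ((depth - R) / width) ** 2) for R in ranges],
                     axis=1)
        target = np.where((depth >= ranges.min()) & (depth <= ranges.max()), 1.0, 0.0)
        w, _ = nnls(D, target)
        return w, D @ w                       # weights and reconstructed SOBP

    depth = np.linspace(0, 15, 600)           # cm
    ranges = np.linspace(8, 12, 9)            # energy layers mapped to ranges
    w, sobp = sobp_weights(depth, ranges)
    ```

    In the laser-driven setting the weights would additionally be converted into a number of shots per energy interval, which is the quantity the model above computes.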

  13. [Reconstruction and measurement of a digital dental model using grating projection and reverse engineering].

    Zhenzhen, Wang; Yi, Lu; Jun, Song; Jun, Chen; Qin, Zhou

    2015-02-01

    This work lays the foundation for establishing a digital model database with normal occlusion. A digital dental cast is acquired through grating projection, and model features are measured through reverse engineering. A computer-controlled grating projection system projected fringe patterns onto the surface of a normal dental model. Three-dimensional contour data were obtained through multi-angle shooting. A three-dimensional model was constructed, and the model features were analyzed using reverse engineering. The digital model was compared with the plaster model to determine the accuracy of the measurement system. The structure of the three-dimensional reconstructed model was clear. The digital models from the two measurements exhibited no significant difference (P > 0.05). When digital and plaster models were measured, we found that the crown length and arch width were not statistically different (P > 0.05), whereas the differences in crown width and arch length were statistically significant (P < 0.05). The digital model obtained by using the grating projection technique and reverse engineering can be used for dental model measurement in clinical and scientific research and can provide a scientific method for establishing a digital model database with normal occlusion.

  14. Tools for macromolecular model building and refinement into electron cryo-microscopy reconstructions

    Brown, Alan; Long, Fei; Nicholls, Robert A.; Toots, Jaan; Emsley, Paul; Murshudov, Garib, E-mail: garib@mrc-lmb.cam.ac.uk [MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge CB2 0QH (United Kingdom)

    2015-01-01

    A description is given of new tools to facilitate model building and refinement into electron cryo-microscopy reconstructions. The recent rapid development of single-particle electron cryo-microscopy (cryo-EM) now allows structures to be solved by this method at resolutions close to 3 Å. Here, a number of tools to facilitate the interpretation of EM reconstructions with stereochemically reasonable all-atom models are described. The BALBES database has been repurposed as a tool for identifying protein folds from density maps. Modifications to Coot, including new Jiggle Fit and morphing tools and improved handling of nucleic acids, enhance its functionality for interpreting EM maps. REFMAC has been modified for optimal fitting of atomic models into EM maps. As external structural information can enhance the reliability of the derived atomic models, stabilize refinement and reduce overfitting, ProSMART has been extended to generate interatomic distance restraints from nucleic acid reference structures, and a new tool, LIBG, has been developed to generate nucleic acid base-pair and parallel-plane restraints. Furthermore, restraint generation has been integrated with visualization and editing in Coot, and these restraints have been applied to both real-space refinement in Coot and reciprocal-space refinement in REFMAC.

  15. Constraints on reconstructed dark energy model from SN Ia and BAO/CMB observations

    Mamon, Abdulla Al [Manipal University, Manipal Centre for Natural Sciences, Manipal (India); Visva-Bharati, Department of Physics, Santiniketan (India); Bamba, Kazuharu [Fukushima University, Division of Human Support System, Faculty of Symbiotic Systems Science, Fukushima (Japan); Das, Sudipta [Visva-Bharati, Department of Physics, Santiniketan (India)

    2017-01-15

    The motivation of the present work is to reconstruct a dark energy model through the dimensionless dark energy function X(z), which is the dark energy density in units of its present value. In this paper, we have shown that a scalar field φ having a phenomenologically chosen X(z) can give rise to a transition from a decelerated to an accelerated phase of expansion for the universe. We have examined the possibility of constraining various cosmological parameters (such as the deceleration parameter and the effective equation of state parameter) by comparing our theoretical model with the latest Type Ia Supernova (SN Ia), Baryon Acoustic Oscillations (BAO) and Cosmic Microwave Background (CMB) radiation observations. Using the joint analysis of the SN Ia+BAO/CMB dataset, we have also reconstructed the scalar potential from the parametrized X(z); the relevant potential is found to be a polynomial in φ. From our analysis, it has been found that the present model favors the standard ΛCDM model within the 1σ confidence level. (orig.)

  17. Submillisievert coronary calcium quantification using model-based iterative reconstruction: A within-patient analysis

    Harder, Annemarie M. den, E-mail: a.m.denharder@umcutrecht.nl [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Wolterink, Jelmer M. [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Willemink, Martin J.; Schilham, Arnold M.R.; Jong, Pim A. de [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands); Budde, Ricardo P.J. [Department of Radiology, Erasmus Medical Center, Rotterdam (Netherlands); Nathoe, Hendrik M. [Department of Cardiology, University Medical Center Utrecht, Utrecht (Netherlands); Išgum, Ivana [Image Sciences Institute, University Medical Center Utrecht, Utrecht (Netherlands); Leiner, Tim [Department of Radiology, University Medical Center Utrecht, Utrecht (Netherlands)

    2016-11-15

    Highlights: • Iterative reconstruction (IR) allows for low dose coronary calcium scoring (CCS). • Radiation dose can be safely reduced to 0.4 mSv with hybrid and model-based IR. • FBP is not feasible at these dose levels due to excessive noise. - Abstract: Purpose: To determine the effect of model-based iterative reconstruction (IR) on coronary calcium quantification using different submillisievert CT acquisition protocols. Methods: Twenty-eight patients received a clinically indicated non contrast-enhanced cardiac CT. After the routine dose acquisition, low-dose acquisitions were performed with 60%, 40% and 20% of the routine dose mAs. Images were reconstructed with filtered back projection (FBP), hybrid IR (HIR) and model-based IR (MIR) and Agatston scores, calcium volumes and calcium mass scores were determined. Results: Effective dose was 0.9, 0.5, 0.4 and 0.2 mSv, respectively. At 0.5 and 0.4 mSv, differences in Agatston scores with both HIR and MIR compared to FBP at routine dose were small (−0.1 to −2.9%), while at 0.2 mSv, differences in Agatston scores of −12.6 to −14.6% occurred. Reclassification of risk category at reduced dose levels was more frequent with MIR (21–25%) than with HIR (18%). Conclusions: Radiation dose for coronary calcium scoring can be safely reduced to 0.4 mSv using both HIR and MIR, while FBP is not feasible at these dose levels due to excessive noise. Further dose reduction can lead to an underestimation in Agatston score and subsequent reclassification to lower risk categories. Mass scores were unaffected by dose reductions.
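
    Agatston scoring itself is a standard thresholded, attenuation-weighted area sum; a minimal sketch follows (the generic algorithm, classically defined on ~3 mm slices, not the study's vendor software):

    ```python
    import numpy as np
    from scipy import ndimage

    # Per-slice lesions >= 130 HU (area >= 1 mm^2) are weighted by their
    # maximum attenuation: 1 (130-199), 2 (200-299), 3 (300-399), 4 (>=400).
    def agatston_score(slices, pixel_area_mm2, min_area_mm2=1.0):
        def weight(max_hu):
            return 1 if max_hu < 200 else 2 if max_hu < 300 else 3 if max_hu < 400 else 4
        score = 0.0
        for sl in slices:                          # sl: 2D array of HU values
            labels, n = ndimage.label(sl >= 130)   # connected lesions per slice
            for i in range(1, n + 1):
                m = labels == i
                area = m.sum() * pixel_area_mm2
                if area >= min_area_mm2:
                    score += weight(sl[m].max()) * area
        return score
    ```

    The dose-dependent underestimation reported above acts through this pipeline: lower-noise reconstructions shrink the set of voxels crossing the 130 HU threshold.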

  18. Reconstructing historical radionuclide concentrations along the east coast of Ireland using a compartmental model

    Smith, C.N.; Clarke, S.; McDonald, P.; Goshawk, J.A.; Jones, S.R.

    2000-01-01

    A mathematical model is presented that simulates the annually averaged transport of radionuclides, originating from the BNFL reprocessing plant at Sellafield, throughout the Irish Sea. The model, CUMBRIA77, represents the processes of radionuclide transport and dispersion in the marine environment and allows predictions of radionuclide concentration in various environmental media, including biota, to be made throughout the whole of the Irish Sea. In this paper we describe the use of the model to reconstruct the historical activity concentrations of 137Cs and 239+240Pu in a variety of environmental media in the western Irish Sea and along the Irish east coast back to 1950. This reconstruction exercise is of interest because only limited measurements of 137Cs and 239+240Pu activity are available prior to the 1980s. The predictions were compared to the available measured data to validate their accuracy. The results of the reconstruction indicate that activity concentrations of 137Cs in the western Irish Sea follow a similar, though slightly delayed and smoothed, profile to the discharges from the Sellafield site, with concentrations at the time of peak discharge (the mid-1970s) being around an order of magnitude higher than those measured in the 1980s and 1990s. By contrast, the concentrations of 239+240Pu at the time of peak discharges were similar to those presently measured. These differences reflect the distinct marine chemistries of the two nuclides, in particular the higher propensity of plutonium to bind to sediments, leading to extended transport times. Despite these differences in behaviour, the doses to Irish seafood consumers from 137Cs remain significantly higher than those from 239+240Pu.
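
    The underlying mathematics of such compartmental models is a linear system of transfer equations. The toy sketch below (placeholder rate constants, not the CUMBRIA77 parameters) integrates two sea compartments driven by a peaked discharge history:

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Activities A_i evolve by inter-box transfers K, radioactive decay lam,
    # and a time-dependent discharge q(t) into box 0.
    def rhs(t, A, K, lam, q):
        return K @ A - lam * A + q(t)

    K = np.array([[-0.5, 0.1],                 # yr^-1 transfer-rate matrix
                  [ 0.5, -0.3]])
    lam = np.log(2) / 30.1                     # 137Cs decay constant, yr^-1
    q = lambda t: np.array([np.exp(-((t - 25) / 5) ** 2), 0.0])  # peaked discharge
    sol = solve_ivp(rhs, (0.0, 50.0), np.zeros(2), args=(K, lam, q),
                    dense_output=True)
    ```

    Historical reconstruction then amounts to driving such a system with the documented discharge record and reading off the compartment activities through time.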

  19. A GUI visualization system for airborne lidar image data to reconstruct 3D city model

    Kawata, Yoshiyuki; Koizumi, Kohei

    2015-10-01

    A visualization toolbox system with graphical user interfaces (GUIs) was developed for the analysis of LiDAR point cloud data, as a compound object-oriented widget application in IDL (Interactive Data Language). The main features of our system include file input and output abilities, data conversion from ASCII-formatted LiDAR point cloud data to LiDAR image data whose pixel values correspond to the altitude measured by LiDAR, visualization of 2D/3D images at various processing steps, and automatic reconstruction of 3D city models. The performance and advantages of our graphical user interface (GUI) visualization system for LiDAR data are demonstrated.

  20. Reconstruction and modeling protein translocation and compartmentalization in Escherichia coli at the genome-scale

    Liu, Joanne K.; O’Brien, Edward J.; Lerman, Joshua A.

    2014-01-01

    Background: Membranes play a crucial role in cellular functions. Membranes provide a physical barrier, control the trafficking of substances entering and leaving the cell, and are a major determinant of cellular ultra-structure. In addition, components embedded within the membrane participate...... the computation of cellular phenotypes through an integrated computation of proteome composition, abundance, and activity in four cellular compartments (cytoplasm, periplasm, inner and outer membrane). Reconstruction and validation of the model has demonstrated that the iJL1678-ME is capable of capturing...

  1. A new method for three-dimensional laparoscopic ultrasound model reconstruction

    Fristrup, C W; Pless, T; Durup, J

    2004-01-01

    BACKGROUND: Laparoscopic ultrasound is an important modality in the staging of gastrointestinal tumors. Correct staging depends on good spatial understanding of the regional tumor infiltration. Three-dimensional (3D) models may facilitate the evaluation of tumor infiltration. The aim of the study...... accuracy of the new method was tested ex vivo, and the clinical feasibility was tested on a small series of patients. RESULTS: Both electromagnetic tracked reconstructions and the new 3D method gave good volumetric information with no significant difference. Clinical use of the new 3D method showed...

  2. The technique for 3D printing patient-specific models for auricular reconstruction.

    Flores, Roberto L; Liss, Hannah; Raffaelli, Samuel; Humayun, Aiza; Khouri, Kimberly S; Coelho, Paulo G; Witek, Lukasz

    2017-06-01

    Currently, surgeons approach autogenous microtia repair by creating a two-dimensional (2D) tracing of the unaffected ear to approximate a three-dimensional (3D) construct, a difficult process. To address these shortcomings, this study introduces the fabrication of a patient-specific, sterilizable, 3D printed auricular model for autogenous auricular reconstruction. A high-resolution 3D digital photograph was captured of the patient's unaffected ear and surrounding anatomic structures. The photographs were exported and uploaded into Amira for transformation into a digital (.stl) model, which was imported into Blender, an open-source software platform for digital modification of data. The unaffected auricle was digitally isolated and inverted to render a model for the contralateral side. The depths of the scapha, triangular fossa, and cymba were deepened to accentuate their contours. Extra relief was added to the helical root to further distinguish this structure. The ear was then digitally deconstructed and separated into its individual auricular components for reconstruction. The completed ear and its individual components were 3D printed using polylactic acid (PLA) filament and sterilized following manufacturer specifications. The sterilized models were brought to the operating room to be utilized by the surgeon. The models allowed for more accurate anatomic measurements compared to 2D tracings, which reduced the degree of estimation required by surgeons. Approximately 20 g of PLA filament were utilized for the construction of these models, yielding a total material cost of approximately $1. Using the methodology detailed in this report, as well as departmentally available resources (3D digital photography and 3D printing), a sterilizable, patient-specific, and inexpensive 3D auricular model was fabricated for intraoperative use. This technique of printing customized-to-patient models for surgeons to use as 'guides' shows great promise.

  3. Estimating absorption coefficients of colored dissolved organic matter (CDOM) using a semi-analytical algorithm for Southern Beaufort Sea (Canadian Arctic) waters: application to deriving concentrations of dissolved organic carbon from space

    Matsuoka, A.; Hooker, S. B.; Bricaud, A.; Gentili, B.; Babin, M.

    2012-10-01

    A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating the light absorption coefficients of the colored fraction of DOM (CDOM) was developed for Southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to the MODIS ocean color sensor. The algorithm separates colored detrital matter (CDM) into CDOM and non-algal particles (NAP) by determining NAP absorption from an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets that were not used for developing the algorithm showed that CDOM absorption can be estimated to within an uncertainty of 35% and 50% for oceanic and turbid waters, respectively. In situ measurements showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption (r² = 0.97). By combining the CDOM absorption algorithm with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the Southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.
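
    The two-step retrieval has a compact generic form; the sketch below uses the standard exponential CDOM spectral model and a linear DOC(a_CDOM) relation, with placeholder coefficients rather than the published Beaufort Sea fit:

    ```python
    import numpy as np

    # 1) CDOM absorption spectrum: a_CDOM(lambda) = a_CDOM(443) * exp(-S (lambda - 443)).
    def a_cdom(wavelength_nm, a443, S=0.018, ref=443.0):
        return a443 * np.exp(-S * (wavelength_nm - ref))

    # 2) Hypothetical linear DOC(a_CDOM) relation (units illustrative only).
    def doc_from_cdom(a443, slope=150.0, intercept=20.0):
        return slope * a443 + intercept
    ```

    In practice a443 would come from the semi-analytical inversion of reflectance, after removing the NAP contribution via the backscattering relationship described above.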

  4. Living on the edge: a toy model for holographic reconstruction of algebras with centers

    Donnelly, William; Marolf, Donald; Michel, Ben; Wien, Jason [Department of Physics, University of California,Santa Barbara, CA 93106 (United States)

    2017-04-18

    We generalize the Pastawski-Yoshida-Harlow-Preskill (HaPPY) holographic quantum error-correcting code to provide a toy model for bulk gauge fields or linearized gravitons. The key new elements are the introduction of degrees of freedom on the links (edges) of the associated tensor network and their connection to further copies of the HaPPY code by an appropriate isometry. The result is a model in which boundary regions allow the reconstruction of bulk algebras with central elements living on the interior edges of the (greedy) entanglement wedge, and where these central elements can also be reconstructed from complementary boundary regions. In addition, the entropy of boundary regions receives both Ryu-Takayanagi-like contributions and further corrections that model the δArea/(4G_N) term of Faulkner, Lewkowycz, and Maldacena. Comparison with Yang-Mills theory then suggests that this δArea/(4G_N) term can be reinterpreted as a part of the bulk entropy of gravitons under an appropriate extension of the physical bulk Hilbert space.

  6. Growing skin: A computational model for skin expansion in reconstructive surgery

    Buganza Tepole, Adrián; Joseph Ploch, Christopher; Wong, Jonathan; Gosain, Arun K.; Kuhl, Ellen

    2011-10-01

    The goal of this manuscript is to establish a novel computational model for stretch-induced skin growth during tissue expansion. Tissue expansion is a common surgical procedure to grow extra skin for reconstructing birth defects, burn injuries, or cancerous breasts. To model skin growth within the framework of nonlinear continuum mechanics, we adopt the multiplicative decomposition of the deformation gradient into an elastic and a growth part. Within this concept, we characterize growth as an irreversible, stretch-driven, transversely isotropic process parameterized in terms of a single scalar-valued growth multiplier, the in-plane area growth. To discretize its evolution in time, we apply an unconditionally stable, implicit Euler backward scheme. To discretize it in space, we utilize the finite element method. For maximum algorithmic efficiency and optimal convergence, we suggest an inner Newton iteration to locally update the growth multiplier at each integration point. This iteration is embedded within an outer Newton iteration to globally update the deformation at each finite element node. To demonstrate the characteristic features of skin growth, we simulate the process of gradual tissue expander inflation. To visualize growth-induced residual stresses, we simulate a subsequent tissue expander deflation. In particular, we compare the spatio-temporal evolution of area growth, elastic strains, and residual stresses for four commonly available tissue expander geometries. We believe that predictive computational modeling can open new avenues in reconstructive surgery to rationalize and standardize clinical process parameters such as expander geometry, expander size, expander placement, and inflation timing.

  7. Trajectory Reconstruction and Uncertainty Analysis Using Mars Science Laboratory Pre-Flight Scale Model Aeroballistic Testing

    Lugo, Rafael A.; Tolson, Robert H.; Schoenenberger, Mark

    2013-01-01

    As part of the Mars Science Laboratory (MSL) trajectory reconstruction effort at NASA Langley Research Center, free-flight aeroballistic experiments of instrumented MSL scale models were conducted at Aberdeen Proving Ground in Maryland. The models carried an inertial measurement unit (IMU) and a flush air data system (FADS) similar to the MSL Entry Atmospheric Data System (MEADS) that provided data types similar to those from the MSL entry. Multiple sources of redundant data were available, including tracking radar and on-board magnetometers. These experimental data enabled the testing and validation of the various tools and methodologies that will be used for MSL trajectory reconstruction. The aerodynamic parameters Mach number, angle of attack, and sideslip angle were estimated using minimum variance with a priori information to combine the pressure data and pre-flight computational fluid dynamics (CFD) data. Both linear and non-linear pressure model terms were also estimated for each pressure transducer as a measure of the errors introduced by CFD and transducer calibration. Parameter uncertainties were estimated using a "consider parameters" approach.
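
    The minimum-variance-with-a-priori estimator has a standard closed form; the sketch below is the generic update (H, R, P0 are illustrative placeholders, not the MSL pressure models):

    ```python
    import numpy as np

    # Combine measurements y = H x + v, v ~ N(0, R), with prior x ~ N(x0, P0).
    def min_variance_update(x0, P0, H, R, y):
        P0i, Ri = np.linalg.inv(P0), np.linalg.inv(R)
        P = np.linalg.inv(P0i + H.T @ Ri @ H)          # posterior covariance
        x = x0 + P @ H.T @ Ri @ (y - H @ x0)           # posterior estimate
        return x, P
    ```

    Here x would collect Mach number, angle of attack, and sideslip, with H linearized from the CFD pressure model about the a priori state.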

  8. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML)*

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-01-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with script and sensor nodes in VRML. The 3D reconstruction and concision of internal body organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic concision, the VRML browser can offer better windows for man-computer interaction in a real-time environment than ever before. 3D reconstruction and dynamic concision with VRML can be used to meet the requirements for medical observation of 3D reconstructions and have a promising prospect in the field of medical imaging. PMID:15973760

  9. RECONSTRUCTION OF 3D VECTOR MODELS OF BUILDINGS BY COMBINATION OF ALS, TLS AND VLS DATA

    H. Boulaassal

    2012-09-01

    Airborne Laser Scanning (ALS), Terrestrial Laser Scanning (TLS) and Vehicle-based Laser Scanning (VLS) are widely used as data acquisition methods for 3D building modelling. ALS data is often used to generate, among others, roof models. TLS data has proven its effectiveness in the geometric reconstruction of building façades. Although the operating algorithms used in the processing chains of these two kinds of data are quite similar, their combination deserves further investigation. This study explores the possibility of combining ALS and TLS data for simultaneously producing 3D building models from a bird's-eye point of view and a pedestrian point of view. The geometric accuracy of roof and façade models differs due to the acquisition techniques. In order to take these differences into account, the surfaces composing roofs and façades are extracted with the same segmentation algorithm. Nevertheless, the segmentation algorithm must be adapted to the properties of the different point clouds. It is based on the RANSAC algorithm, but has been applied in a sequential way in order to extract all potential planar clusters from airborne and terrestrial datasets. Surfaces are fitted to planar clusters, allowing edge detection and reconstruction of vector polygons. Models resulting from TLS data are obviously more accurate than those generated from ALS data. Therefore, the geometry of the roofs is corrected and adapted according to the geometry of the corresponding façades. Finally, the effects of the differences between raw ALS and TLS data on the results of the modeling process are analyzed. It is shown that such a combination can be used to produce reliable 3D building models.
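
    Sequential RANSAC plane extraction of the kind described here can be sketched in a few lines (illustrative only; thresholds and the authors' per-dataset adaptations are placeholders):

    ```python
    import numpy as np

    # Plane through three points: normal n and offset d with n.x + d = 0.
    def fit_plane(p3):
        n = np.cross(p3[1] - p3[0], p3[2] - p3[0])
        n /= np.linalg.norm(n)
        return n, -n @ p3[0]

    # Repeatedly fit the dominant plane, remove its inliers, continue.
    def sequential_ransac(pts, dist=0.05, iters=500, min_inliers=200):
        planes, remaining = [], pts.copy()
        while len(remaining) >= min_inliers:
            best = None
            for _ in range(iters):
                sample = remaining[np.random.choice(len(remaining), 3, replace=False)]
                n, d = fit_plane(sample)
                mask = np.abs(remaining @ n + d) < dist
                if best is None or mask.sum() > best[2].sum():
                    best = (n, d, mask)
            if best[2].sum() < min_inliers:
                break
            planes.append((best[0], best[1]))
            remaining = remaining[~best[2]]      # peel off the detected plane
        return planes
    ```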

  10. Indoor Modelling from Slam-Based Laser Scanner: Door Detection to Envelope Reconstruction

    Díaz-Vilariño, L.; Verbree, E.; Zlatanova, S.; Diakité, A.

    2017-09-01

    Updated and detailed indoor models are increasingly demanded for various applications such as emergency management or navigational assistance. The consolidation of new portable and mobile acquisition systems has led to a higher availability of 3D point cloud data from indoors. In this work, we explore the combined use of point clouds and trajectories from a SLAM-based laser scanner to automate the reconstruction of building indoors. The methodology starts with door detection, since doors represent transitions from one indoor space to another, which provides an initial approach to the global configuration of the point cloud into building rooms. For this purpose, the trajectory is used to create a vertical point cloud profile in which doors are detected as local minima of vertical distances. As point cloud and trajectory are related by time stamp, this feature is used to subdivide the point cloud into subspaces according to the location of the doors. The correspondence between subspaces and building rooms is not unambiguous: one subspace always corresponds to one room, but one room is not necessarily depicted by just one subspace, for example in the case of a room containing several doors and in which the acquisition is performed in a discontinuous way. The labelling problem is formulated as a combinatorial problem solved by minimum energy optimization. Once the point cloud is subdivided into building rooms, the envelope (formed by walls, ceilings and floors) is reconstructed for each space. The connectivity between spaces is included by adding the previously detected doors to the reconstructed model. The methodology is tested in a real case study.
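
    The door-detection step has a compact expression: scan the vertical clearance profile along the trajectory for door-sized local minima. A toy sketch (hypothetical names and thresholds):

    ```python
    import numpy as np
    from scipy.signal import argrelmin

    # clearance: vertical distance profile (e.g., trajectory to points directly
    # overhead) sampled along the scanner trajectory. Doors appear as dips.
    def detect_doors(clearance, max_door_height=2.2, order=15):
        idx = argrelmin(clearance, order=order)[0]        # local minima
        return idx[clearance[idx] < max_door_height]      # keep door-sized dips
    ```

    The returned trajectory indices, combined with the time stamps, are what would drive the subdivision of the point cloud into subspaces.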

  12. A single-photon ecat reconstruction procedure based on a PSF model

    Ying-Lie, O.

    1984-01-01

    Emission Computed Axial Tomography (ECAT) has been applied in nuclear medicine for the past few years. Owing to attenuation and scatter along the ray path, adequate correction methods are required. In this thesis, a correction method for attenuation, detector response and Compton scatter is proposed. The method developed is based on a PSF model. The parameters of the models were derived by fitting experimental and simulation data. Because of its flexibility, a Monte Carlo simulation method was employed. Using the PSF models, it was found that the ECAT problem can be described by a suitably modified projection equation. Application of the reconstruction procedure to simulation data yielded satisfactory results. The algorithm tends to amplify noise and distortion in the data, however. Therefore, the applicability of the method to patient studies remains to be seen. (Auth.)

  13. Reconstruction of fire regimes through integrated paleoecological proxy data and ecological modeling.

    Iglesias, Virginia; Yospin, Gabriel I; Whitlock, Cathy

    2014-01-01

    Fire is a key ecological process affecting vegetation dynamics and land cover. The characteristic frequency, size, and intensity of fire are driven by interactions between top-down climate-driven and bottom-up fuel-related processes. Disentangling climatic from non-climatic drivers of past fire regimes is a grand challenge in Earth systems science, and a topic where both paleoecology and ecological modeling have made substantial contributions. In this manuscript, we (1) review the use of sedimentary charcoal as a fire proxy and the methods used in charcoal-based fire history reconstructions; (2) identify existing techniques for paleoecological modeling; and (3) evaluate opportunities for coupling of paleoecological and ecological modeling approaches to better understand the causes and consequences of past, present, and future fire activity.

  14. RECONSTRUCTION OF PENSION FUND PERFORMANCE MODEL AS AN EFFORT TO WORTHY PENSION FUND GOVERNANCE

    Apriyanto Gaguk

    2017-08-01

    This study aims to reconstruct the performance assessment model for pension funds by modifying the Baldrige Assessment method, adjusted to the conditions at Dana Pensiun A (Pension Fund A), in order to realize good pension fund governance. The study uses a case-study design, with research conducted at Dana Pensiun A. The informants included the employer, the supervisory board, pension fund management, active and passive pension fund participants, as well as the financial services authority as regulator. The result of this research is the construction of a comprehensive and profound pension performance assessment model that attends to growth and fair distribution. The model includes the parameters of leadership; strategic planning; stakeholder focus; measurement, analysis, and knowledge management; workforce focus; standard operating procedure focus; results; and just and fair distribution of wealth and power.

  15. Atmospheric transport and dispersion modeling for the Hanford Environmental Dose Reconstruction Project

    Ramsdell, J.V.

    1991-07-01

    Radiation doses that may have resulted from operations at the Hanford Site are being estimated in the Hanford Environmental Dose Reconstruction (HEDR) Project. One of the project subtasks, atmospheric transport, is responsible for estimating the transport, diffusion and deposition of radionuclides released to the atmosphere. This report discusses modeling transport and diffusion in the atmospheric pathway. It is divided into three major sections. The first section of the report presents the atmospheric modeling approach selected following discussion with the Technical Steering Panel that directs the HEDR Project. In addition, the section discusses the selection of the MESOI/MESORAD suite of atmospheric dispersion models that form the basis for initial calculations and future model development. The second section of the report describes alternative modeling approaches that were considered. Emphasis is placed on the family of plume and puff models that are based on Gaussian solution to the diffusion equations. The final portion of the section describes the performance of various models. The third section of the report discusses factors that bear on the selection of an atmospheric transport modeling approach for HEDR. These factors, which include the physical setting of the Hanford Site and the available meteorological data, serve as constraints on model selection. Five appendices are included in the report. 39 refs., 4 figs., 2 tabs
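
    As background, the plume family discussed in the second section rests on the textbook Gaussian solution. A sketch of the steady-state plume with ground reflection (the generic formula, not the MESOI/MESORAD implementation):

    ```python
    import numpy as np

    # Ground-level-reflected Gaussian plume: steady wind u along x, release
    # rate Q at effective height H, dispersion parameters sy, sz at the
    # downwind distance of interest.
    def gaussian_plume(Q, u, y, z, sy, sz, H):
        lateral = np.exp(-0.5 * (y / sy) ** 2)
        vertical = (np.exp(-0.5 * ((z - H) / sz) ** 2) +
                    np.exp(-0.5 * ((z + H) / sz) ** 2))
        return Q / (2 * np.pi * u * sy * sz) * lateral * vertical
    ```

    Puff models apply the same Gaussian kernel to discrete releases advected along time-varying trajectories, which is why they were favored for the long reconstruction period.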

  16. Reconstruction of 3D tree stem models from low-cost terrestrial laser scanner data

    Kelbe, Dave; Romanczyk, Paul; van Aardt, Jan; Cawse-Nicholson, Kerry

    2013-05-01

    With the development of increasingly advanced airborne sensing systems, there is a growing need to support sensor system design, modeling, and product-algorithm development with explicit 3D structural ground truth commensurate to the scale of acquisition. Terrestrial laser scanning is one such technique which could provide this structural information. Commercial instrumentation to suit this purpose has existed for some time now, but cost can be a prohibitive barrier for some applications. As such we recently developed a unique laser scanning system from readily-available components, supporting low cost, highly portable, and rapid measurement of below-canopy 3D forest structure. Tools were developed to automatically reconstruct tree stem models as an initial step towards virtual forest scene generation. The objective of this paper is to assess the potential of this hardware/algorithm suite to reconstruct 3D stem information for a single scan of a New England hardwood forest site. Detailed tree stem structure (e.g., taper, sweep, and lean) is recovered for trees of varying diameter, species, and range from the sensor. Absolute stem diameter retrieval accuracy is 12.5%, with a 4.5% overestimation bias likely due to the LiDAR beam divergence.
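
    Stem diameter retrieval of this kind typically reduces, per height slice, to fitting a circle to the points; an algebraic least-squares (Kasa) fit is a common primitive (illustrative, the system's actual stem-modeling algorithm may differ):

    ```python
    import numpy as np

    # Kasa fit: solve 2*cx*x + 2*cy*y + c = x^2 + y^2 in least squares,
    # with r^2 = c + cx^2 + cy^2.
    def fit_circle(x, y):
        A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
        b = x ** 2 + y ** 2
        (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
        r = np.sqrt(c + cx ** 2 + cy ** 2)
        return cx, cy, 2 * r                      # centre and diameter
    ```

    Repeating the fit at successive heights yields the taper, sweep, and lean attributes mentioned above.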

  17. Modeling astronomical adaptive optics performance with temporally filtered Wiener reconstruction of slope data

    Correia, Carlos M.; Bond, Charlotte Z.; Sauvage, Jean-François; Fusco, Thierry; Conan, Rodolphe; Wizinowich, Peter L.

    2017-10-01

    We build on a long-standing tradition in astronomical adaptive optics (AO) of specifying performance metrics and error budgets using linear systems modeling in the spatial-frequency domain. Our goal is to provide a comprehensive tool for the calculation of error budgets in terms of residual temporally filtered phase power spectral densities and variances. In addition, the fast simulation of AO-corrected point spread functions (PSFs) provided by this method can be used as inputs for simulations of science observations with next-generation instruments and telescopes, in particular to predict post-coronagraphic contrast improvements for planet finder systems. We extend the previous results and propose the synthesis of a distributed Kalman filter to mitigate both aniso-servo-lag and aliasing errors whilst minimizing the overall residual variance. We discuss applications to (i) analytic AO-corrected PSF modeling in the spatial-frequency domain, (ii) post-coronagraphic contrast enhancement, (iii) filter optimization for real-time wavefront reconstruction, and (iv) PSF reconstruction from system telemetry. Under perfect knowledge of wind velocities, we show that ~60 nm rms error reduction can be achieved with the distributed Kalman filter embodying anti-aliasing reconstructors on 10 m class high-order AO systems, leading to contrast improvement factors of up to three orders of magnitude at few λ/D separations (~1–5 λ/D) for a 0 magnitude star and reaching close to one order of magnitude for a 12 magnitude star.

  18. 3D imaging acquisition, modeling, and prototyping for facial defects reconstruction

    Sansoni, Giovanna; Trebeschi, Marco; Cavagnini, Gianluca; Gastaldi, Giorgio

    2009-01-01

    A novel approach that combines optical three-dimensional imaging, reverse engineering (RE) and rapid prototyping (RP) for mold production in the reconstruction of facial prostheses is presented. A commercial laser-stripe digitizer is used to perform the multiview acquisition of the patient's face; the point clouds are aligned and merged in order to obtain a polygonal model, which is then edited to sculpt the virtual prosthesis. Two physical models of both the deformed face and the 'repaired' face are obtained: they differ only in the defect zone. Depending on the material used for the actual prosthesis, the two prototypes can be used either to directly cast the final prosthesis or to fabricate the positive wax pattern. Two case studies are presented, referring to prosthetic reconstructions of an eye and of a nose. The results demonstrate the advantages over conventional techniques as well as the improvements with respect to known automated manufacturing techniques in mold construction. The proposed method results in decreased patient discomfort, reduced dependence on the anaplastologist's skill, and increased repeatability and efficiency of the whole process.

  19. Modeling of Eddy current distribution and equilibrium reconstruction in the SST-1 Tokamak

    Banerjee, Santanu; Sharma, Deepti; Radhakrishnana, Srinivasan; Daniel, Raju; Shankara Joisa, Y.; Atrey, Parveen Kumar; Pathak, Surya Kumar; Singh, Amit Kumar

    2015-01-01

    Toroidal continuity of the vacuum vessel and the cryostat leads to the generation of large eddy currents in these passive structures during the Ohmic phase of the steady-state superconducting tokamak SST-1. This reduces the magnitude of the loop voltage seen by the plasma and also delays its buildup. During the ramping down of the Ohmic transformer current (OT), the resultant eddy currents flowing in the passive conductors play a crucial role in governing the plasma equilibrium. The magnitude of these eddy currents and their distribution have to be accurately determined so that they can be fed to the equilibrium reconstruction code as an input. For the accurate inclusion of the effect of eddy currents in the reconstruction, the toroidally continuous conducting structures like the vacuum vessel and the cryostat, with large poloidal cross-sections, and any other poloidal field (PF) coil sitting idle on the machine are broken up into a large number of co-axial toroidal current-carrying filaments. The inductance matrix for this large set of toroidal current-carrying conductors is calculated using the standard Green's function, and the induced currents are evaluated for the OT waveform of each plasma discharge. Consistency of this filament model is cross-checked with the 11 in-vessel and 12 out-vessel toroidal flux loop signals in SST-1. Resistances of the filaments are adjusted to reproduce the experimental measurements of these flux loops in pure OT shots and in shots with OT and vertical field (BV). Such shots are taken routinely in SST-1 without the fill gas to cross-check the consistency of the filament model. A Grad-Shafranov (GS) equation solver, named IPREQ, has been developed at IPR to reconstruct the plasma equilibrium by searching for the best-fit current density profile. The Ohmic transformer current (OT), vertical field coil current (BV), currents in the passive filaments, along with the plasma pressure (p) and current (I_p) profiles, are used as inputs to the IPREQ code.
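
    The filament circuit law underlying such models is M di/dt + R i = -m_ot dI_OT/dt. A toy two-filament integration (placeholder inductances and resistances, not the SST-1 values):

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    # Passive toroidal filaments with inductance matrix M and resistances R,
    # driven by the changing Ohmic-transformer current through mutuals m_ot.
    def eddy_rhs(t, i, Minv, R, m_ot, dI_OT):
        return Minv @ (-R * i - m_ot * dI_OT(t))

    M = np.array([[2.0, 0.4], [0.4, 1.5]]) * 1e-6      # H, placeholder values
    R = np.array([1e-4, 2e-4])                          # Ohm
    m_ot = np.array([3e-7, 2e-7])                       # H
    dI_OT = lambda t: -1e5 if t < 0.1 else 0.0          # A/s ramp-down
    sol = solve_ivp(eddy_rhs, (0, 0.3), np.zeros(2),
                    args=(np.linalg.inv(M), R, m_ot, dI_OT), max_step=1e-3)
    ```

    The resulting filament currents are exactly the quantities that would be passed to the equilibrium solver alongside the coil and plasma currents.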

  20. A Graph-Based Approach for 3D Building Model Reconstruction from Airborne LiDAR Point Clouds

    Bin Wu

    2017-01-01

    3D building model reconstruction is of great importance for environmental and urban applications. Airborne light detection and ranging (LiDAR) is a very useful data source for acquiring detailed geometric and topological information of building objects. In this study, we employed a graph-based method based on hierarchical structure analysis of building contours derived from LiDAR data to reconstruct urban building models. The proposed approach first uses a graph theory-based localized contour tree method to represent the topological structure of buildings, then separates the buildings into different parts by analyzing their topological relationships, and finally reconstructs the building model by integrating all the individual models established through the bipartite graph matching process. Our approach provides a more complete topological and geometrical description of building contours than existing approaches. We evaluated the proposed method by applying it to the Lujiazui region in Shanghai, China, a complex and large urban scene with various types of buildings. The results revealed that complex buildings could be reconstructed successfully with a mean modeling error of 0.32 m. Our proposed method offers a promising solution for 3D building model reconstruction from airborne LiDAR point clouds.

  1. Analyzing historical land use changes using a Historical Land Use Reconstruction Model: a case study in Zhenlai County, northeastern China

    Yang, Yuanyuan; Zhang, Shuwen; Liu, Yansui; Xing, Xiaoshi; de Sherbinin, Alex

    2017-01-01

    Historical land use information is essential to understanding the impact of anthropogenic modification of land use/cover on the temporal dynamics of environmental and ecological issues. However, because they lack spatial explicitness, complete thematic detail and the conversion types for historical land use changes, the majority of historical land use reconstructions do not sufficiently meet the requirements of an adequate model. Considering these shortcomings, we explored the possibility of constructing a spatially explicit modeling framework (HLURM: Historical Land Use Reconstruction Model). A three-map comparison method was then adopted to validate the projected reconstruction map. The reconstruction suggested that the HLURM model performed well in the spatial reconstruction of various land-use categories, and had a higher figure of merit (48.19%) than models used in other case studies. The largest land use/cover type in the study area was determined to be grassland, followed by arable land and wetland. Using the three-map comparison, we noticed that the major discrepancies in land use changes among the three maps were a result of inconsistencies in the classification of land-use categories during the study period, rather than of the simulation model. PMID:28134342

  2. Model-based iterative reconstruction technique for radiation dose reduction in chest CT: comparison with the adaptive statistical iterative reconstruction technique

    Katsura, Masaki; Matsuda, Izuru; Akahane, Masaaki; Sato, Jiro; Akai, Hiroyuki; Yasaka, Koichiro; Kunimatsu, Akira; Ohtomo, Kuni

    2012-01-01

    To prospectively evaluate dose reduction and image quality characteristics of chest CT reconstructed with model-based iterative reconstruction (MBIR) compared with adaptive statistical iterative reconstruction (ASIR). One hundred patients underwent reference-dose and low-dose unenhanced chest CT with 64-row multidetector CT. Images were reconstructed with 50 % ASIR-filtered back projection blending (ASIR50) for reference-dose CT, and with ASIR50 and MBIR for low-dose CT. Two radiologists assessed the images in a blinded manner for subjective image noise, artefacts and diagnostic acceptability. Objective image noise was measured in the lung parenchyma. Data were analysed using the sign test and pair-wise Student's t-test. Compared with reference-dose CT, there was a 79.0 % decrease in dose-length product with low-dose CT. Low-dose MBIR images had significantly lower objective image noise (16.93 ± 3.00) than low-dose ASIR (49.24 ± 9.11, P < 0.01) and reference-dose ASIR images (24.93 ± 4.65, P < 0.01). Low-dose MBIR images were all diagnostically acceptable. Unique features of low-dose MBIR images included motion artefacts and pixellated blotchy appearances, which did not adversely affect diagnostic acceptability. Diagnostically acceptable chest CT images acquired with nearly 80 % less radiation can be obtained using MBIR. MBIR shows greater potential than ASIR for providing diagnostically acceptable low-dose CT images without severely compromising image quality. (orig.)

  4. A semi-analytical expression for calculating finite element conductivity matrices in heat conduction problems

    Hector Godoy

    2012-12-01

    The heat transfer equation for conduction is nothing more than a mathematical expression of the energy conservation law for a given solid. Solving the equation that models this problem is generally very difficult or impossible analytically, so it is necessary to make a discrete approximation of the continuous problem. In this paper, we present a methodology applied to quadrilateral finite elements in problems of heat transfer by conduction, where the components of the thermal conductivity matrix are obtained by a semi-analytical expression and simple algebraic manipulations. This technique has been used successfully in the integration of stiffness matrices of two-dimensional and three-dimensional finite elements, reporting substantial improvements in CPU time compared with Gaussian integration.

  5. Low contrast detectability and spatial resolution with model-based iterative reconstructions of MDCT images: a phantom and cadaveric study

    Millon, Domitille; Coche, Emmanuel E. [Universite Catholique de Louvain, Department of Radiology and Medical Imaging, Cliniques Universitaires Saint Luc, Brussels (Belgium); Vlassenbroek, Alain [Philips Healthcare, Brussels (Belgium); Maanen, Aline G. van; Cambier, Samantha E. [Universite Catholique de Louvain, Statistics Unit, King Albert II Cancer Institute, Brussels (Belgium)

    2017-03-15

    To compare image quality [low contrast (LC) detectability, noise, contrast-to-noise ratio (CNR) and spatial resolution (SR)] of MDCT images reconstructed with an iterative reconstruction (IR) algorithm and a filtered back projection (FBP) algorithm. The experimental study was performed on a 256-slice MDCT. LC detectability, noise, CNR and SR were measured on a Catphan phantom scanned with decreasing doses (48.8 down to 0.7 mGy) and parameters typical of a chest CT examination. Images were reconstructed with FBP and a model-based IR algorithm. Additionally, human chest cadavers were scanned and reconstructed using the same technical parameters. Images were analyzed to illustrate the phantom results. LC detectability and noise were statistically significantly different between the techniques, favouring the model-based IR algorithm (p < 0.0001). At low doses, the noise in FBP images only enabled SR measurements of high contrast objects. The superior CNR of the model-based IR algorithm enabled measurements at lower doses, which showed that SR was dose and contrast dependent. Cadaver images reconstructed with model-based IR illustrated that visibility and delineation of anatomical structure edges could be deteriorated at low doses. Model-based IR improved LC detectability and enabled dose reduction. At low dose, SR became dose and contrast dependent. (orig.)

  6. Use of a Three-Dimensional Model to Optimize a MEDPOR Implant for Delayed Reconstruction of a Suprastructure Maxillectomy Defect

    Echo, Anthony; Wolfswinkel, Erik M.; Weathers, William; McKnight, Aisha; Izaddoost, Shayan

    2013-01-01

    The use of a three-dimensional (3-D) model has been well described for craniomaxillofacial reconstruction, especially in the preoperative planning of free fibula flaps. This article reports the application of an innovative 3-D model approach for the calculation of the exact contours, angles, length, and general morphology of a prefabricated MEDPOR 2/3 orbital implant for reconstruction of a suprastructure maxillectomy defect. The 3-D model allowed intraoperative modification of the MEDPOR implant, which decreased the risk of iatrogenic harm and contamination while also improving aesthetic results and function. With the aid of preoperative 3-D models, porous polyethylene facial implants can be contoured efficiently intraoperatively to precisely reconstruct complex craniomaxillofacial defects. PMID:24436774

  7. Upon the reconstruction of accidents triggered by tire explosion. Analytical model and case study

    Gaiginschi, L.; Agape, I.; Talif, S.

    2017-10-01

    Accident reconstruction is important in the general context of increasing road traffic safety. In the casuistry of traffic accidents, those caused by tire explosions are critical in the severity of their consequences, because they usually happen at high speeds. Consequently, knowledge of the running speed of the vehicle involved at the time of the tire explosion is essential to elucidate the circumstances of the accident. The paper presents an analytical model for the kinematics of a vehicle which, after the explosion of one of its tires, begins to skid, overturns and rolls. The model consists of two concurrent approaches built as applications of the momentum conservation and energy conservation principles, and allows the initial speed of the vehicle involved to be determined by running the sequence of the road event backwards. The authors also validate the two distinct analytical approaches against each other by calibrating the calculation algorithms on a case study.
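    The paper's full skid-overturn-roll model is not given in the abstract; the fragment below only illustrates the backward energy-balance idea on which such reconstructions rest: each phase dissipates a specific energy of mu*g*d, so the pre-phase speed follows from the post-phase speed. The friction coefficients and distances are hypothetical.

        import math

        def initial_speed(phases, g=9.81):
            # phases: (mu, d) pairs ordered from the last phase back to the
            # first; the vehicle is assumed at rest at the end of the event.
            v_squared = 0.0
            for mu, d in phases:
                v_squared += 2.0 * mu * g * d   # v_before^2 = v_after^2 + 2*mu*g*d
            return math.sqrt(v_squared)

        # hypothetical case: 12 m of rolling (mu ~ 0.45) after 25 m of skidding (mu ~ 0.7)
        v0 = initial_speed([(0.45, 12.0), (0.7, 25.0)])
        print(f"estimated speed at tire failure: {v0 * 3.6:.1f} km/h")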

  8. Reconstruction of Baxter Q-operator from Sklyanin SOV for cyclic representations of integrable quantum models

    Niccoli, G.

    2009-12-01

    In an earlier paper (G. Niccoli and J. Teschner, 2009), the spectrum (eigenvalues and eigenstates) of a lattice regularization of the Sine-Gordon model has been completely characterized in terms of polynomial solutions with certain properties of the Baxter equation. This characterization for cyclic representations has been derived by the use of the Separation of Variables (SOV) method of Sklyanin and by the direct construction of the Baxter Q-operator family. Here, we reconstruct the Baxter Q-operator and the same characterization of the spectrum by only using the SOV method. This analysis allows us to deduce the main features required for the extension of this kind of spectrum characterization to cyclic representations of other integrable quantum models. (orig.)

  9. Reconstruction of Baxter Q-operator from Sklyanin SOV for cyclic representations of integrable quantum models

    Niccoli, G.

    2009-12-15

    In an earlier paper (G. Niccoli and J. Teschner, 2009), the spectrum (eigenvalues and eigenstates) of a lattice regularization of the Sine-Gordon model has been completely characterized in terms of polynomial solutions with certain properties of the Baxter equation. This characterization for cyclic representations has been derived by the use of the Separation of Variables (SOV) method of Sklyanin and by the direct construction of the Baxter Q-operator family. Here, we reconstruct the Baxter Q-operator and the same characterization of the spectrum by only using the SOV method. This analysis allows us to deduce the main features required for the extension of this kind of spectrum characterization to cyclic representations of other integrable quantum models. (orig.)

  10. Reconstruction, modeling, animation and digital fabrication of 'architectures on paper'. Two ideal houses by Carlo Mollino

    Roberta Spallone

    2015-07-01

    This paper develops some considerations on the issues raised by the reconstruction of 'architectures on paper' by contemporary masters. Archival drawings are generally patchy and fragmented, and refer to different ideative moments and paths of inspiration that lend themselves to numerous different interpretative readings. Moreover, a careful analysis of the author's poetics and of the significance of his work is necessary. Digital methods and techniques of representation, ranging from 3D modelling to video production and digital fabrication, should be carefully selected and adapted to the characteristics identified through the interpretation of the project and to what it is intended to communicate. In the case studies of Mollino's 'ideal houses', the capabilities of BIM modelling were tested for these aims.

  11. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model.

    Liu, Dan; Liu, Xuejun; Wu, Yiguang

    2018-04-24

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.
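    The paper's exact architecture and CRF inference are not specified in the abstract; the sketch below (PyTorch, assumed available) shows the general pattern only: a small CNN regresses a depth map, and a pairwise smoothness penalty stands in for the continuous pairwise CRF term during training. All layer sizes and the weighting factor are made-up placeholders.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class DepthCNN(nn.Module):
            # toy encoder that regresses a coarse depth map from RGB
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1))
            def forward(self, x):
                depth = self.features(x)
                return F.interpolate(depth, size=x.shape[2:], mode="bilinear",
                                     align_corners=False)

        def pairwise_smoothness(depth):
            # crude stand-in for the continuous pairwise CRF: penalize depth
            # jumps between neighbouring pixels
            dx = (depth[..., :, 1:] - depth[..., :, :-1]).pow(2).mean()
            dy = (depth[..., 1:, :] - depth[..., :-1, :]).pow(2).mean()
            return dx + dy

        model = DepthCNN()
        img, target = torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64)
        pred = model(img)
        loss = F.mse_loss(pred, target) + 0.1 * pairwise_smoothness(pred)
        loss.backward()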

  12. Depth Reconstruction from Single Images Using a Convolutional Neural Network and a Condition Random Field Model

    Dan Liu

    2018-04-01

    This paper presents an effective approach for depth reconstruction from a single image through the incorporation of semantic information and local details from the image. A unified framework for depth acquisition is constructed by joining a deep Convolutional Neural Network (CNN) and a continuous pairwise Conditional Random Field (CRF) model. Semantic information and relative depth trends of local regions inside the image are integrated into the framework. A deep CNN is first used to automatically learn a hierarchical feature representation of the image. To capture more local details, the relative depth trends of local regions are incorporated into the network. Combined with semantic information of the image, a continuous pairwise CRF is then established and is used as the loss function of the unified model. Experiments on real scenes demonstrate that the proposed approach is effective and obtains satisfactory results.

  13. Stochastic methods of data modeling: application to the reconstruction of non-regular data

    Buslig, Leticia

    2014-01-01

    This research thesis addresses two issues or applications related to IRSN studies. The first one deals with the mapping of measurement data (the IRSN must regularly control the radioactivity level in France and, for this purpose, uses a network of sensors distributed over the French territory). The objective is then to predict, by means of a reconstruction model which uses observations, maps which will be used to inform the population. The second application deals with taking uncertainties into account in complex computation codes (the IRSN must perform safety studies to assess the risks of loss of integrity of a nuclear reactor in case of hypothetical accidents, and for this purpose uses codes which simulate physical phenomena occurring within an installation). Some input parameters are not precisely known, and the author therefore tries to assess the impact of some uncertainties on simulated values. She notably aims at seeing whether variations of input parameters may push the system towards a behaviour which is very different from that obtained with parameters having a reference value, or even towards a state in which safety conditions are not met. The precise objective of this second part is then to build a reconstruction model which is not costly (in terms of computation time) and to perform simulations in relevant areas (strong gradient areas, threshold overrun areas, and so on). Two issues are then important: the choice of the approximation model and the construction of the experiment plan. The model is based on a kriging-type stochastic approach, and an important part of the work addresses the development of new numerical techniques of experiment planning. The first part proposes a generic criterion of adaptive planning, and reports its analysis and implementation. In the second part, an alternative to error variance addition is developed. Methodological developments are tested on analytic functions, and then applied to the cases of measurement mapping and

  14. Reconstructing extreme AMOC events through nudging of the ocean surface: a perfect model approach

    Ortega, Pablo; Guilyardi, Eric; Swingedouw, Didier; Mignot, Juliette; Nguyen, Sébastien

    2017-11-01

    While the Atlantic Meridional Overturning Circulation (AMOC) is thought to be a crucial component of the North Atlantic climate, past changes in its strength are challenging to quantify, and only limited information is available. In this study, we use a perfect model approach with the IPSL-CM5A-LR model to assess the performance of several surface nudging techniques in reconstructing the variability of the AMOC. Special attention is given to the reproducibility of an extreme positive AMOC peak from a preindustrial control simulation. Nudging includes standard relaxation techniques towards the sea surface temperature and salinity anomalies of this target control simulation, and/or the prescription of the wind-stress fields. Surface nudging approaches using standard fixed restoring terms succeed in reproducing most of the target AMOC variability, including the timing of the extreme event, but systematically underestimate its amplitude. A detailed analysis of the AMOC variability mechanisms reveals that the underestimation of the extreme AMOC maximum comes from a deficit in the formation of the dense water masses in the main convection region, located south of Iceland in the model. This issue is largely corrected after introducing a novel surface nudging approach, which uses a varying restoring coefficient that is proportional to the simulated mixed layer depth, which, in essence, keeps the restoring time scale constant. This new technique substantially improves water mass transformation in the regions of convection, and in particular, the formation of the densest waters, which are key for the representation of the AMOC extreme. It is therefore a promising strategy that may help to better constrain the AMOC variability and other ocean features in the models. As this restoring technique only uses surface data, for which better and longer observations are available, it opens up opportunities for improved reconstructions of the AMOC over the last few decades.
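    A toy sketch of the varying-coefficient nudging described above: the restoring heat flux scales with the simulated mixed layer depth, so when that flux is spread over the same layer the restoring time scale stays constant. The coefficient, seawater properties and time step are illustrative assumptions, not IPSL-CM5A-LR settings.

        def nudge_sst(t_model, t_target, mld, dt=86400.0, gamma=0.8):
            # gamma: restoring flux per kelvin and per metre of mixed layer
            # depth [W m^-2 K^-1 m^-1] (hypothetical value)
            rho_cp = 1025.0 * 3990.0                    # J m^-3 K^-1 for seawater
            flux = gamma * mld * (t_target - t_model)   # restoring flux [W m^-2]
            # spreading the flux over the mixed layer cancels mld: the
            # relaxation time scale rho_cp/gamma is depth-independent
            return t_model + dt * flux / (rho_cp * mld)

        print(nudge_sst(t_model=10.0, t_target=10.5, mld=200.0))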

  15. Reconstruction and analysis of a genome-scale metabolic model for Scheffersomyces stipitis

    Balagurunathan Balaji

    2012-02-01

    Background: Fermentation of xylose, the major component in hemicellulose, is essential for economic conversion of lignocellulosic biomass to fuels and chemicals. The yeast Scheffersomyces stipitis (formerly known as Pichia stipitis) has the highest known native capacity for xylose fermentation and possesses several genes for lignocellulose bioconversion in its genome. Understanding the metabolism of this yeast at a global scale, by reconstructing the genome-scale metabolic model, is essential for manipulating its metabolic capabilities and for successful transfer of its capabilities to other industrial microbes. Results: We present a genome-scale metabolic model for Scheffersomyces stipitis, a native xylose-utilizing yeast. The model was reconstructed based on genome sequence annotation, detailed experimental investigation and known yeast physiology. The macromolecular composition of Scheffersomyces stipitis biomass was estimated experimentally and its ability to grow on different carbon, nitrogen, sulphur and phosphorus sources was determined by phenotype microarrays. The compartmentalized model, developed based on an iterative procedure, accounted for 814 genes, 1371 reactions, and 971 metabolites. In silico computed growth rates were compared with high-throughput phenotyping data and the model could predict the qualitative outcomes in 74% of substrates investigated. Model simulations were used to identify the biosynthetic requirements for anaerobic growth of Scheffersomyces stipitis on glucose and the results were validated with published literature. The bottlenecks in the Scheffersomyces stipitis metabolic network for xylose uptake and nucleotide cofactor recycling were identified by in silico flux variability analysis. The scope of the model in enhancing the mechanistic understanding of microbial metabolism is demonstrated by identifying a mechanism for mitochondrial respiration and oxidative phosphorylation. Conclusion: The genome

  16. Image-based reconstruction of three-dimensional myocardial infarct geometry for patient-specific modeling of cardiac electrophysiology

    Ukwatta, Eranga, E-mail: eukwatt1@jhu.edu; Arevalo, Hermenegild; Pashakhanloo, Farhad; Prakosa, Adityo; Vadakkumpadan, Fijoy [Institute for Computational Medicine, Johns Hopkins University, Baltimore, Maryland 21205 and Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Rajchl, Martin [Department of Computing, Imperial College London, London SW7 2AZ (United Kingdom); White, James [Stephenson Cardiovascular MR Centre, University of Calgary, Calgary, Alberta T2N 2T9 (Canada); Herzka, Daniel A.; McVeigh, Elliot [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Lardo, Albert C. [Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 and Division of Cardiology, Johns Hopkins Institute of Medicine, Baltimore, Maryland 21224 (United States); Trayanova, Natalia A. [Institute for Computational Medicine, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Department of Biomedical Engineering, Johns Hopkins Institute of Medicine, Baltimore, Maryland 21205 (United States)

    2015-08-15

    Purpose: Accurate three-dimensional (3D) reconstruction of myocardial infarct geometry is crucial to patient-specific modeling of the heart aimed at providing therapeutic guidance in ischemic cardiomyopathy. However, myocardial infarct imaging is clinically performed using two-dimensional (2D) late-gadolinium enhanced cardiac magnetic resonance (LGE-CMR) techniques, and a method to build accurate 3D infarct reconstructions from the 2D LGE-CMR images has been lacking. The purpose of this study was to address this need. Methods: The authors developed a novel methodology to reconstruct 3D infarct geometry from segmented low-resolution (Lo-res) clinical LGE-CMR images. Their methodology employed the so-called logarithm of odds (LogOdds) function to implicitly represent the shape of the infarct in segmented image slices as LogOdds maps. These 2D maps were then interpolated into a 3D image, and the result transformed via the inverse of LogOdds to a binary image representing the 3D infarct geometry. To assess the efficacy of this method, the authors utilized 39 high-resolution (Hi-res) LGE-CMR images, including 36 in vivo acquisitions of human subjects with prior myocardial infarction and 3 ex vivo scans of canine hearts following coronary ligation to induce infarction. The infarct was manually segmented by trained experts in each slice of the Hi-res images, and the segmented data were downsampled to typical clinical resolution. The proposed method was then used to reconstruct 3D infarct geometry from the downsampled images, and the resulting reconstructions were compared with the manually segmented data. The method was extensively evaluated using metrics based on geometry as well as results of electrophysiological simulations of cardiac sinus rhythm and ventricular tachycardia in individual hearts. Several alternative reconstruction techniques were also implemented and compared with the proposed method. Results: The accuracy of the LogOdds method in reconstructing 3D
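    A minimal sketch of the LogOdds interpolation step described in the Methods: binary slice segmentations are mapped to log-odds space, interpolated along the slice axis, and mapped back through the inverse (sigmoid) transform. Real implementations would first smooth the masks into probability maps; the toy masks, clipping value and slice spacing below are assumptions.

        import numpy as np
        from scipy.interpolate import interp1d

        def logodds_interpolate(masks, z_in, z_out, eps=1e-3):
            # masks: (n_slices, H, W) binary arrays; z_in/z_out: slice positions
            p = np.clip(masks.astype(float), eps, 1.0 - eps)
            logodds = np.log(p / (1.0 - p))          # 2D LogOdds maps
            f = interp1d(z_in, logodds, axis=0)      # linear along slice axis
            prob = 1.0 / (1.0 + np.exp(-f(z_out)))   # inverse LogOdds
            return prob > 0.5                        # binary 3D geometry

        # two hypothetical 4x4 slices at z = 0 and 10 mm, resampled at 1 mm
        masks = np.zeros((2, 4, 4))
        masks[0, 1:3, 1:3] = 1
        masks[1, 0:2, 0:2] = 1
        volume = logodds_interpolate(masks, [0.0, 10.0], np.arange(0.0, 10.1, 1.0))
        print(volume.shape)  # (11, 4, 4)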

  17. Reconstruction of the erythemal UV radiation data in Novi Sad (Serbia) using the NEOPLANTA parametric model

    Malinovic-Milicevic, S.; Mihailovic, D. T.; Radovanovic, M. M.

    2015-07-01

    This paper focuses on the development and application of a technique for filling the daily erythemal UV dose data gaps and the reconstruction of the past daily erythemal UV doses in Novi Sad, Serbia. The technique involves developing an empirical equation for the estimation of daily erythemal UV doses by means of relative daily sunshine duration under all-sky conditions. A good agreement was found between modelled and measured values of erythemal UV doses. This technique was used for filling the short gaps in the erythemal UV dose measurement series (2003-2009) as well as for the reconstruction of the past time-series values (1981-2002). A statistically significant positive erythemal UV dose trend of 6.9 J m-2 per year was found during the period 1981-2009. In relation to the reference period 1981-1989, an increase in the erythemal UV dose of 6.92 % is visible in the period 1990-1999 and an increase of 9.67 % in the period 2000-2009. The strongest increase in erythemal UV doses was found for the winter and spring seasons.

  18. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of 35 stents in total with dedicated software. Image noise measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stents (all p < 0.001) was significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly larger with MBIR than with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.
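    The objective measurements in these CT records reduce to simple region-of-interest statistics; the sketch below computes image noise as the standard deviation of attenuation values inside a circular ROI, plus a contrast-to-noise ratio. The synthetic image and ROI geometry are placeholders.

        import numpy as np

        def roi_stats(hu, mask):
            # mean attenuation and noise (SD of HU values) inside an ROI
            vals = hu[mask]
            return vals.mean(), vals.std(ddof=1)

        def cnr(mean_obj, mean_bg, noise_bg):
            # contrast-to-noise ratio between object and background ROIs
            return abs(mean_obj - mean_bg) / noise_bg

        # hypothetical 128x128 HU image with a circular aortic ROI
        rng = np.random.default_rng(0)
        img = rng.normal(40.0, 25.0, (128, 128))
        yy, xx = np.mgrid[:128, :128]
        roi = (yy - 64) ** 2 + (xx - 64) ** 2 < 15 ** 2
        mean_aa, noise_aa = roi_stats(img, roi)
        print(f"noise (SD) = {noise_aa:.1f} HU")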

  19. Image quality of ct angiography using model-based iterative reconstruction in infants with congenital heart disease: Comparison with filtered back projection and hybrid iterative reconstruction

    Jia, Qianjun, E-mail: jiaqianjun@126.com [Southern Medical University, Guangzhou, Guangdong (China); Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Zhuang, Jian, E-mail: zhuangjian5413@tom.com [Department of Cardiac Surgery, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Jiang, Jun, E-mail: 81711587@qq.com [Department of Radiology, Shenzhen Second People’s Hospital, Shenzhen, Guangdong (China); Li, Jiahua, E-mail: 970872804@qq.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Huang, Meiping, E-mail: huangmeiping_vip@163.com [Department of Catheterization Lab, Guangdong Cardiovascular Institute, Guangdong Provincial Key Laboratory of South China Structural Heart Disease, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China); Liang, Changhong, E-mail: cjr.lchh@vip.163.com [Department of Radiology, Guangdong General Hospital, Guangdong Academy of Medical Sciences, Guangzhou, Guangdong (China); Southern Medical University, Guangzhou, Guangdong (China)

    2017-01-15

    Purpose: To compare the image quality, rate of coronary artery visualization and diagnostic accuracy of 256-slice multi-detector computed tomography angiography (CTA) with prospective electrocardiographic (ECG) triggering at a tube voltage of 80 kVp between 3 reconstruction algorithms (filtered back projection (FBP), hybrid iterative reconstruction (iDose⁴) and iterative model reconstruction (IMR)) in infants with congenital heart disease (CHD). Methods: Fifty-one infants with CHD who underwent cardiac CTA in our institution between December 2014 and March 2015 were included. The effective radiation doses were calculated. Imaging data were reconstructed using the FBP, iDose⁴ and IMR algorithms. Parameters of objective image quality (noise, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR)); subjective image quality (overall image quality, image noise and margin sharpness); coronary artery visibility; and diagnostic accuracy for the three algorithms were measured and compared. Results: The mean effective radiation dose was 0.61 ± 0.32 mSv. Compared to FBP and iDose⁴, IMR yielded significantly lower noise (P < 0.01), higher SNR and CNR values (P < 0.01), and a greater subjective image quality score (P < 0.01). The total number of coronary segments visualized was significantly higher for both iDose⁴ and IMR than for FBP (P = 0.002 and P = 0.025, respectively), but there was no significant difference in this parameter between iDose⁴ and IMR (P = 0.397). There was no significant difference in the diagnostic accuracy between the FBP, iDose⁴ and IMR algorithms (χ² = 0.343, P = 0.842). Conclusions: For infants with CHD undergoing cardiac CTA, the IMR reconstruction algorithm provided significantly increased objective and subjective image quality compared with the FBP and iDose⁴ algorithms. However, IMR did not improve the diagnostic accuracy or coronary artery visualization compared with iDose⁴.

  20. Estimating absorption coefficients of colored dissolved organic matter (CDOM) using a semi-analytical algorithm for southern Beaufort Sea waters: application to deriving concentrations of dissolved organic carbon from space

    Matsuoka, A.; Hooker, S. B.; Bricaud, A.; Gentili, B.; Babin, M.

    2013-02-01

    A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating light absorption coefficients of the colored fraction of DOM (CDOM) was developed for southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to MODIS ocean color sensor. This algorithm allows the separation of colored detrital matter (CDM) into CDOM and non-algal particles (NAP) through the determination of NAP absorption using an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets, which were not used for developing the algorithm, showed that CDOM absorption can be estimated accurately to within an uncertainty of 35% and 50% for oceanic and coastal waters, respectively. A previous paper (Matsuoka et al., 2012) showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption in our study area (r2 = 0.97). By combining the CDOM absorption algorithm together with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.
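    The published algorithm is semi-analytical and separates CDM into CDOM and NAP; the fragment below only sketches the two-step chain it enables (reflectance ratio -> CDOM absorption -> DOC), with made-up band-ratio and regression coefficients standing in for the fitted Beaufort Sea values.

        import numpy as np

        # hypothetical coefficients; the actual forms are fitted in the papers above
        A, B = 0.06, -1.2               # a_CDOM(443) = A * (Rrs(488)/Rrs(555))**B
        SLOPE, INTERCEPT = 310.0, 20.0  # DOC [umol L^-1] vs a_CDOM(443) [m^-1]

        def doc_from_rrs(rrs488, rrs555):
            # remote-sensing reflectances -> CDOM absorption -> DOC
            a_cdom = A * (rrs488 / rrs555) ** B
            return SLOPE * a_cdom + INTERCEPT

        print(doc_from_rrs(np.array([0.004]), np.array([0.006])))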

  1. Estimating absorption coefficients of colored dissolved organic matter (CDOM) using a semi-analytical algorithm for southern Beaufort Sea waters: application to deriving concentrations of dissolved organic carbon from space

    A. Matsuoka

    2013-02-01

    A series of papers have suggested that freshwater discharge, including a large amount of dissolved organic matter (DOM), has increased since the middle of the 20th century. In this study, a semi-analytical algorithm for estimating light absorption coefficients of the colored fraction of DOM (CDOM) was developed for southern Beaufort Sea waters using remote sensing reflectance at six wavelengths in the visible spectral domain corresponding to MODIS ocean color sensor. This algorithm allows the separation of colored detrital matter (CDM) into CDOM and non-algal particles (NAP) through the determination of NAP absorption using an empirical relationship between NAP absorption and particle backscattering coefficients. Evaluation using independent datasets, which were not used for developing the algorithm, showed that CDOM absorption can be estimated accurately to within an uncertainty of 35% and 50% for oceanic and coastal waters, respectively. A previous paper (Matsuoka et al., 2012) showed that dissolved organic carbon (DOC) concentrations were tightly correlated with CDOM absorption in our study area (r2 = 0.97). By combining the CDOM absorption algorithm together with the DOC versus CDOM relationship, it is now possible to estimate DOC concentrations in the near-surface layer of the southern Beaufort Sea using satellite ocean color data. DOC concentrations in the surface waters were estimated using MODIS ocean color data, and the estimates showed reasonable values compared to in situ measurements. We propose a routine and near real-time method for deriving DOC concentrations from space, which may open the way to an estimate of DOC budgets for Arctic coastal waters.

  2. The 3D geological model of the 1963 Vajont rockslide, reconstructed with implicit surface methods

    Bistacchi, Andrea; Massironi, Matteo; Francese, Roberto; Giorgi, Massimo; Taller, Claudio

    2015-04-01

    The Vajont rockslide has been the object of several studies because of its catastrophic consequences and of its particular evolution. Several qualitative or quantitative models have been presented in the last 50 years, but a complete explanation of all the relevant geological and mechanical processes remains elusive. In order to better understand the mechanics and dynamics of the 1963 event, we have reconstructed the first 3D geological model of the rockslide, which allowed us to accurately investigate the rockslide structure and kinematics. The input data for the model consisted of: pre- and post-rockslide geological maps, pre- and post-rockslide orthophotos, pre- and post-rockslide digital elevation models, structural data, boreholes, and geophysical data (2D and 3D seismics and resistivity). All these data have been integrated in a 3D geological model implemented in Gocad®, using the implicit surface modelling method. Results of the 3D geological model include the depth and geometry of the sliding surface, the volume of the two lobes of the rockslide accumulation, the kinematics of the rockslide in terms of the vector field of finite displacement, and high quality meshes useful for mechanical and hydrogeological simulations. The latter can include information about the stratigraphy and internal structure of the rock masses and allow tracing the displacement of different material points in the rockslide from the pre-1963-failure to the post-rockslide state. As a general geological conclusion, we may say that the 3D model allowed us to recognize very effectively a sliding surface whose non-planar geometry is affected by the interference pattern of two regional-scale fold systems. The rockslide is partitioned into two distinct and internally continuous rock masses with distinct kinematics, which were characterised by very limited internal deformation during the slide. The continuity of these two large blocks points to a very localized deformation, occurring along

  3. Selective Tree-ring Models: A Novel Method for Reconstructing Streamflow Using Tree Rings

    Foard, M. B.; Nelson, A. S.; Harley, G. L.

    2017-12-01

    Surface water is among the most instrumental and vulnerable resources in the Northwest United States (NW). Recent observations show that overall water quantity is declining in streams across the region, while extreme flooding events occur more frequently. Historical streamflow models inform probabilities of extreme flow events (flood or drought) by describing the frequency and duration of past events. There are numerous examples of tree rings being utilized to reconstruct streamflow in the NW. These models confirm that tree rings are highly accurate at predicting streamflow; however, there are many nuances that limit their applicability through time and space. For example, most models predict streamflow from hydrologically altered rivers (e.g. dammed, channelized), which may hinder our ability to predict natural prehistoric flow. They also have a tendency to over/under-predict extreme flow events. Moreover, they often neglect to capture the changing relationships between tree growth and streamflow over time and space. To address these limitations, we utilized national tree-ring and streamflow archives to investigate the relationships between the growth of multiple coniferous species and free-flowing streams across the NW using novel species- and site-specific streamflow models - a term we coined "selective tree-ring models." Correlation function analysis and regression modeling were used to evaluate the strengths and directions of the flow-growth relationships. Species with significant relationships in the same direction were identified as strong candidates for selective models. Temporal and spatial patterns of these relationships were examined using running correlations and inverse distance weighting interpolation, respectively. Our early results indicate that (1) species adapted to extreme climates (e.g. hot-dry, cold-wet) exhibit the most consistent relationships across space, (2) these relationships weaken in locations with mild climatic variability, and (3) some

  4. Empirical Reconstruction and Numerical Modeling of the First Geoeffective Coronal Mass Ejection of Solar Cycle 24

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-03-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80° from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10° (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  5. EMPIRICAL RECONSTRUCTION AND NUMERICAL MODELING OF THE FIRST GEOEFFECTIVE CORONAL MASS EJECTION OF SOLAR CYCLE 24

    Wood, B. E.; Wu, C.-C.; Howard, R. A.; Socker, D. G.; Rouillard, A. P.

    2011-01-01

    We analyze the kinematics and morphology of a coronal mass ejection (CME) from 2010 April 3, which was responsible for the first significant geomagnetic storm of solar cycle 24. The analysis utilizes coronagraphic and heliospheric images from the two STEREO spacecraft, and coronagraphic images from SOHO/LASCO. Using an empirical three-dimensional (3D) reconstruction technique, we demonstrate that the CME can be reproduced reasonably well at all times with a 3D flux rope shape, but the case for a flux rope being the correct interpretation is not as strong as some events studied with STEREO in the past, given that we are unable to infer a unique orientation for the flux rope. A model with an orientation angle of -80 deg. from the ecliptic plane (i.e., nearly N-S) works best close to the Sun, but a model at 10 deg. (i.e., nearly E-W) works better far from the Sun. Both interpretations require the cross section of the flux rope to be significantly elliptical rather than circular. In addition to our empirical modeling, we also present a fully 3D numerical MHD model of the CME. This physical model appears to effectively reproduce aspects of the shape and kinematics of the CME's leading edge. It is particularly encouraging that the model reproduces the amount of interplanetary deceleration observed for the CME during its journey from the Sun to 1 AU.

  6. Investigating the consistency between proxy-based reconstructions and climate models using data assimilation: a mid-Holocene case study

    A. Mairesse; H. Goosse; P. Mathiot; H. Wanner; S. Dubinkina (Svetlana)

    2013-01-01

    The mid-Holocene (6 kyr BP; thousand years before present) is a key period to study the consistency between model results and proxy-based reconstruction data as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of

  7. Atmospheric dispersion and inverse modelling for the reconstruction of accidental sources of pollutants

    Winiarek, Victor

    2014-01-01

    Uncontrolled releases of pollutant in the atmosphere may be the consequence of various situations: accidents, for instance leaks or explosions in an industrial plant, or terrorist attacks such as biological bombs, especially in urban areas. In the event of such situations, authorities' objectives are various: predict the contaminated zones to apply first countermeasures such as evacuation of the concerned population; determine the source location; assess the long-term polluted areas, for instance by deposition of persistent pollutants in the soil. To achieve these objectives, numerical models can be used to model the atmospheric dispersion of pollutants. We will first present the different processes that govern the transport of pollutants in the atmosphere, then the different numerical models that are commonly used in this context. The choice between these models mainly depends on the scale and the details one seeks to take into account. We will then present several inverse modeling methods to estimate the emission, as well as statistical methods to estimate prior errors, to which the inversion is very sensitive. Several case studies are presented, using synthetic data as well as real data such as the estimation of source terms from the Fukushima accident in March 2011. From our results, we estimate the Cesium-137 emission to be between 12 and 19 PBq with a standard deviation between 15 and 65% and the Iodine-131 emission to be between 190 and 380 PBq with a standard deviation between 5 and 10%. Concerning the localization of an unknown source of pollutant, two strategies can be considered. On one hand, parametric methods use a limited number of parameters to characterize the source term to be reconstructed. To do so, strong assumptions are made on the nature of the source. The inverse problem is hence to estimate these parameters. On the other hand, nonparametric methods attempt to reconstruct a full emission field. Several parametric and nonparametric methods are
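    A toy version of the parametric source-term inversion the thesis discusses: with a linear source-receptor matrix H from a dispersion model, the emission history x is recovered from observations y by Tikhonov-regularized least squares. The matrix, noise level and regularization weight are illustrative assumptions.

        import numpy as np

        def invert_source(H, y, alpha):
            # minimize ||H x - y||^2 + alpha ||x||^2 (normal equations)
            n = H.shape[1]
            return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ y)

        # hypothetical toy problem: 50 observations of a 10-step emission history
        rng = np.random.default_rng(0)
        H = rng.random((50, 10))
        x_true = np.array([0, 0, 3, 8, 5, 2, 1, 0, 0, 0], dtype=float)
        y = H @ x_true + rng.normal(0.0, 0.05, 50)
        print(np.round(invert_source(H, y, alpha=0.1), 1))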

  8. Three-dimensional reconstruction from low-count SPECT data using deformable models

    Cunningham, G.S.; Hanson, K.M.; Battle, X.L.

    1998-03-01

    The authors demonstrate the reconstruction of a 3D, time-varying bolus of radiotracer from first-pass data obtained at the dynamic SPECT imager, FASTSPECT, built by the University of Arizona. The object imaged is a CardioWest Total Artificial Heart. The bolus is entirely contained in one ventricle and its associated inlet and outlet tracts. The model for the radiotracer distribution is a time-varying closed surface parameterized by 162 vertices that are connected to make 960 triangles, with uniform intensity of radiotracer inside. The total curvature of the surface is minimized through the use of a weighted prior in the Bayesian framework. MAP estimates for the vertices, interior intensity and background scatter are produced for diastolic and systolic frames, the only two frames analyzed.

  9. Reconstructing 3D Tree Models Using Motion Capture and Particle Flow

    Jie Long

    2013-01-01

    Recovering tree shape from motion capture data is a first step toward efficient and accurate animation of trees in wind using motion capture data. Existing algorithms for generating models of tree branching structures for image synthesis in computer graphics are not adapted to the unique data set provided by motion capture. We present a method for tree shape reconstruction using particle flow on input data obtained from a passive optical motion capture system. Initial branch tip positions are estimated from averaged and smoothed motion capture data. Branch tips, as particles, are also generated within a bounding space defined by a stack of bounding boxes or a convex hull. The particle flow, starting at branch tips within the bounding volume under forces, creates tree branches. The forces are composed of gravity, internal force, and external force. The resulting shapes are realistic and similar to the original tree crown shape. Several tunable parameters provide control over branch shape and arrangement.

  10. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu [The University of Tokyo, Department of Radiology, Graduate School of Medicine, Tokyo (Japan); Ino, Kenji [The University of Tokyo Hospital, Imaging Center, Tokyo (Japan); Torigoe, Rumiko [Toshiba Medical Systems, Tokyo (Japan)

    2017-10-15

    A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction. (orig.)

  11. Pediatric 320-row cardiac computed tomography using electrocardiogram-gated model-based full iterative reconstruction

    Shirota, Go; Maeda, Eriko; Namiki, Yoko; Bari, Razibul; Abe, Osamu; Ino, Kenji; Torigoe, Rumiko

    2017-01-01

    A full iterative reconstruction algorithm is available, but its diagnostic quality in pediatric cardiac CT is unknown. To compare the imaging quality of two algorithms, full and hybrid iterative reconstruction, in pediatric cardiac CT. We included 49 children with congenital cardiac anomalies who underwent cardiac CT. We compared the quality of images reconstructed using the two algorithms (full and hybrid iterative reconstruction) based on a 3-point scale for the delineation of the following anatomical structures: atrial septum, ventricular septum, right atrium, right ventricle, left atrium, left ventricle, main pulmonary artery, ascending aorta, aortic arch including the patent ductus arteriosus, descending aorta, right coronary artery and left main trunk. We evaluated beam-hardening artifacts from contrast-enhancement material using a 3-point scale, and we evaluated the overall image quality using a 5-point scale. We also compared image noise, signal-to-noise ratio and contrast-to-noise ratio between the algorithms. The overall image quality was significantly higher with full iterative reconstruction than with hybrid iterative reconstruction (3.67±0.79 vs. 3.31±0.89, P=0.0072). The evaluation scores for most of the gross structures were higher with full iterative reconstruction than with hybrid iterative reconstruction. There was no significant difference between full and hybrid iterative reconstruction for the presence of beam-hardening artifacts. Image noise was significantly lower in full iterative reconstruction, while signal-to-noise ratio and contrast-to-noise ratio were significantly higher in full iterative reconstruction. The diagnostic quality was superior in cardiac CT images reconstructed with electrocardiogram-gated full iterative reconstruction.

  12. Dynamic concision for three-dimensional reconstruction of human organ built with virtual reality modelling language (VRML)

    Yu, Zheng-yang; Zheng, Shu-sen; Chen, Lei-ting; He, Xiao-qian; Wang, Jian-jun

    2005-01-01

    This research studies the process of 3D reconstruction and dynamic concision based on 2D medical digital images using the virtual reality modelling language (VRML) and JavaScript, with a focus on how to realize the dynamic concision of a 3D medical model with script and sensor nodes in VRML. 3D reconstructions and concisions of internal organs can be built with such high quality that they are better than those obtained from traditional methods. With the function of dynamic c...

  13. LGM permafrost distribution: how well can the latest PMIP multi-model ensembles reconstruct?

    Saito, K.; Sueyoshi, T.; Marchenko, S.; Romanovsky, V.; Otto-Bliesner, B.; Walsh, J.; Bigelow, N.; Hendricks, A.; Yoshikawa, K.

    2013-03-01

    Global-scale frozen ground distribution during the Last Glacial Maximum (LGM) was reconstructed using multi-model ensembles of global climate models, and then compared with evidence-based knowledge and earlier numerical results. Modeled soil temperatures, taken from Paleoclimate Modelling Intercomparison Project Phase III (PMIP3) simulations, were used to diagnose the subsurface thermal regime and determine underlying frozen ground types for the present day (pre-industrial; 0 k) and the LGM (21 k). This direct method was then compared to the earlier indirect method, which categorizes the underlying frozen ground type from surface air temperature, applied to both the PMIP2 (phase II) and PMIP3 products. Both direct and indirect diagnoses for 0 k showed strong agreement with the present-day observation-based map, although the soil temperature ensemble showed a higher diversity among the models, partly due to the varying complexity of the implemented subsurface processes. The area of continuous permafrost estimated by the multi-model analysis was 25.6 million km2 for the LGM, in contrast to 12.7 million km2 for the pre-industrial control, whereas seasonally frozen ground increased from 22.5 million km2 to 32.6 million km2. These changes in area resulted mainly from a cooler climate at the LGM, but also from other factors, such as the presence of huge land ice sheets and the consequent expansion of total land area due to sea-level change. LGM permafrost boundaries modeled by the PMIP3 ensemble, improved over those of the PMIP2 due to higher spatial resolutions and improved climatology, also compared better to previous knowledge derived from the geomorphological and geocryological evidence. Combinatorial applications of coupled climate models and detailed stand-alone physical-ecological models for the cold-region terrestrial, paleo-, and modern climates will advance our understanding of the functionality and variability of the frozen ground subsystem in the global eco-climate system.

  14. Mirror-Imaged Rapid Prototype Skull Model and Pre-Molded Synthetic Scaffold to Achieve Optimal Orbital Cavity Reconstruction.

    Park, Sung Woo; Choi, Jong Woo; Koh, Kyung S; Oh, Tae Suk

    2015-08-01

    Reconstruction of traumatic orbital wall defects has evolved to restore the original complex anatomy with the rapidly growing use of computer-aided design and prototyping. This study evaluated a mirror-imaged rapid prototype skull model and a pre-molded synthetic scaffold for traumatic orbital wall reconstruction. A single-center retrospective review was performed of patients who underwent orbital wall reconstruction after trauma from 2012 to 2014. Patients were included by admission through the emergency department after facial trauma or by a tertiary referral for post-traumatic orbital deformity. Three-dimensional (3D) computed tomogram-based mirror-imaged reconstruction images of the orbit and an individually manufactured rapid prototype skull model by a 3D printing technique were obtained for each case. Synthetic scaffolds were anatomically pre-molded using the skull model as a guide and inserted at the individual orbital defect. Postoperative complications were assessed and 3D volumetric measurements of the orbital cavity were performed. A paired-samples t test was used for statistical analysis. One hundred four patients with immediate orbital defect reconstructions and 23 post-traumatic orbital deformity reconstructions were included in this study. All reconstructions were successful without immediate postoperative complications, although there were 10 cases with mild enophthalmos and 2 cases with persistent diplopia. Reoperations were performed for 2 cases of persistent diplopia and secondary touch-up procedures were performed to contour soft tissue in 4 cases. Postoperative volumetric measurement of the orbital cavity showed nonsignificant volume differences between the damaged orbit and the reconstructed orbit (21.35 ± 1.93 vs 20.93 ± 2.07 cm³; P = .98). This protocol was extended to severe cases in which more than 40% of the orbital frame was lost and combined with extensive soft tissue defects. Traumatic orbital reconstruction can be optimized and

  15. A bi-articular model for scapular-humeral rhythm reconstruction through data from wearable sensors.

    Lorussi, Federico; Carbonaro, Nicola; De Rossi, Danilo; Tognetti, Alessandro

    2016-04-23

    Patient-specific performance assessment of arm movements in daily life activities is fundamental for neurological rehabilitation therapy. In most applications, the shoulder movement is simplified through a ball-and-socket joint, neglecting the movement of the scapular-thoracic complex. This may lead to significant errors. We propose an innovative bi-articular model of the human shoulder for estimating the position of the hand in relation to the sternum. The model takes into account both the scapular-thoracic and gleno-humeral movements and their ratio governed by the scapular-humeral rhythm, fusing the information of inertial and textile-based strain sensors. To feed the reconstruction algorithm based on the bi-articular model, an ad-hoc sensing shirt was developed. The shirt was equipped with two inertial measurement units (IMUs) and an integrated textile strain sensor. We built the bi-articular model starting from the data obtained in two planar movements (arm abduction and flexion in the sagittal plane) and analysing the error between the reference data - measured through an optical reference system - and the ball-and-socket approximation of the shoulder. The 3D model was developed by extending the behaviour of the kinematic chain revealed in the planar trajectories through a parameter identification that takes into account the body structure of the subject. The bi-articular model was evaluated in five subjects in comparison with the optical reference system. The errors were computed in terms of distance between the reference position of the trochlea (end-effector) and the corresponding model estimation. The introduced method remarkably improved the estimation of the position of the trochlea (and consequently the estimation of the hand position during reaching activities), reducing position errors from 11.5 cm to 1.8 cm. Thanks to the developed bi-articular model, we demonstrated a reliable estimation of the upper arm kinematics with a minimal sensing system suitable for
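    The record's model itself is identified from sensor data; the planar sketch below only illustrates the underlying bi-articular idea: total arm elevation is split between the scapulo-thoracic and gleno-humeral joints according to the scapular-humeral rhythm (classically about 2:1), instead of lumping the shoulder into one ball-and-socket joint. Segment lengths and the ratio are hypothetical.

        import numpy as np

        def hand_position(elevation_deg, l_shoulder=0.17, l_arm=0.55, ratio=2.0):
            # split total elevation: gleno-humeral/scapulo-thoracic = ratio
            total = np.radians(elevation_deg)
            st = total / (1.0 + ratio)   # scapulo-thoracic contribution
            gh = total - st              # gleno-humeral contribution
            # shoulder centre moves with the scapula, arm points at st + gh
            shoulder = l_shoulder * np.array([np.cos(st), np.sin(st)])
            return shoulder + l_arm * np.array([np.cos(st + gh), np.sin(st + gh)])

        print(hand_position(90.0))  # hand position in the trunk frame [m]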

  16. Applications of Bayesian temperature profile reconstruction to automated comparison with heat transport models and uncertainty quantification of current diffusion

    Irishkin, M.; Imbeaux, F.; Aniel, T.; Artaud, J.F.

    2015-01-01

    Highlights: • We developed a method for automated comparison of experimental data with models. • A unique platform implements Bayesian analysis and integrated modelling tools. • The method is tokamak-generic and is applied to Tore Supra and JET pulses. • Validation of a heat transport model is carried out. • We quantified the uncertainties due to Te profiles in current diffusion simulations. - Abstract: In the context of present and future long pulse tokamak experiments yielding a growing size of measured data per pulse, automating data consistency analysis and comparisons of measurements with models is a critical matter. To address these issues, the present work describes an expert system that carries out in an integrated and fully automated way (i) a reconstruction of plasma profiles from the measurements, using Bayesian analysis (ii) a prediction of the reconstructed quantities, according to some models and (iii) a comparison of the first two steps. The first application shown is devoted to the development of an automated comparison method between the experimental plasma profiles reconstructed using Bayesian methods and time dependent solutions of the transport equations. The method was applied to model validation of a simple heat transport model with three radial shape options. It has been tested on a database of 21 Tore Supra and 14 JET shots. The second application aims at quantifying uncertainties due to the electron temperature profile in current diffusion simulations. A systematic reconstruction of the Ne, Te, Ti profiles was first carried out for all time slices of the pulse. The Bayesian 95% highest probability intervals on the Te profile reconstruction were then used for (i) data consistency check of the flux consumption and (ii) defining a confidence interval for the current profile simulation. The method has been applied to one Tore Supra pulse and one JET pulse.

  17. An Optimal DEM Reconstruction Method for Linear Array Synthetic Aperture Radar Based on Variational Model

    Shi Jun

    2015-02-01

    Downward-looking Linear Array Synthetic Aperture Radar (LASAR) has many potential applications in topographic mapping, disaster monitoring and reconnaissance, especially in mountainous areas. However, limited by the size of platforms, its resolution in the linear array direction is always far lower than those in the range and azimuth directions. This disadvantage leads to the blurring of three-dimensional (3D) images in the linear array direction, and restricts the application of LASAR. To date, research on 3D SAR image enhancement has focused on the sparse recovery technique. In this case, the one-to-one mapping of the Digital Elevation Model (DEM) breaks down. To overcome this, an optimal DEM reconstruction method for LASAR based on a variational model is discussed in an effort to optimize the DEM and the associated scattering coefficient map, and to minimize the Mean Square Error (MSE). Using simulation experiments, it is found that the variational model is more suitable for DEM enhancement applications to all kinds of terrain than the Orthogonal Matching Pursuit (OMP) and Least Absolute Shrinkage and Selection Operator (LASSO) methods.
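    The paper's variational model is not spelled out in the abstract; the snippet below is a generic stand-in showing how a smoothness-regularized variational energy E(h) = 0.5*||h - dem0||^2 + 0.5*lam*||grad h||^2 can be minimized by gradient descent to refine a noisy DEM. Grid size, weights and step size are placeholders.

        import numpy as np

        def refine_dem(dem0, n_iter=200, lam=0.2, step=0.2):
            h = dem0.copy()
            for _ in range(n_iter):
                # 5-point Laplacian with replicated edges
                p = np.pad(h, 1, mode="edge")
                lap = (p[:-2, 1:-1] + p[2:, 1:-1] +
                       p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * h)
                h -= step * ((h - dem0) - lam * lap)   # gradient of E(h)
            return h

        # hypothetical noisy 64x64 elevation patch (a tilted plane plus noise)
        rng = np.random.default_rng(1)
        dem_noisy = np.linspace(0.0, 30.0, 64)[None, :] * np.ones((64, 1))
        dem_noisy += rng.normal(0.0, 2.0, (64, 64))
        print(refine_dem(dem_noisy).std())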

  18. MRI Reconstructions of Human Phrenic Nerve Anatomy and Computational Modeling of Cryoballoon Ablative Therapy.

    Goff, Ryan P; Spencer, Julianne H; Iaizzo, Paul A

    2016-04-01

    The primary goal of this computational modeling study was to better quantify the relative distance of the phrenic nerves to areas where cryoballoon ablations may be applied within the left atria. Phrenic nerve injury can be a significant complication of applied ablative therapies for the treatment of drug-refractory atrial fibrillation. To date, published reports suggest that such injuries may occur more frequently in cryoballoon ablations than in radiofrequency therapies. Ten human heart-lung blocs were prepared in an end-diastolic state, scanned with MRI, and analyzed using Mimics software as a means to make anatomical measurements. Next, generated computer models of Arctic Front cryoballoons (23, 28 mm) were mated with the reconstructed pulmonary vein ostia to determine relative distances between the phrenic nerves and projected balloon placements, simulating pulmonary vein isolation. The effects of deep-seating the balloons were also investigated. Interestingly, the relative anatomical differences in placement of the 23 and 28 mm cryoballoons were quite small; e.g., the difference in mid-spline distance to the phrenic nerves between the two cryoballoon sizes was only 1.7 ± 1.2 mm. Furthermore, the right phrenic nerves were commonly closer to the pulmonary veins than the left and, surprisingly, the tips of the balloons were further from the nerves; balloon size choice did not significantly alter the calculated distance to the nerves. Such computational modeling is considered a useful tool for both clinicians and device designers to better understand these associated anatomies which, in turn, may lead to optimization of therapeutic treatments.

  19. Reconstructing interacting entropy-corrected holographic scalar field models of dark energy in the non-flat universe

    Karami, K; Khaledian, M S [Department of Physics, University of Kurdistan, Pasdaran Street, Sanandaj (Iran, Islamic Republic of); Jamil, Mubasher, E-mail: KKarami@uok.ac.ir, E-mail: MS.Khaledian@uok.ac.ir, E-mail: mjamil@camp.nust.edu.pk [Center for Advanced Mathematics and Physics (CAMP), National University of Sciences and Technology (NUST), Islamabad (Pakistan)

    2011-02-15

    Here we consider the entropy-corrected version of the holographic dark energy (DE) model in the non-flat universe. We obtain the equation of state parameter in the presence of interaction between DE and dark matter. Moreover, we reconstruct the potential and the dynamics of the quintessence, tachyon, K-essence and dilaton scalar field models according to the evolutionary behavior of the interacting entropy-corrected holographic DE model.
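
    For reference, the entropy-corrected holographic DE energy density used in this line of work typically takes the form below, where L is the infrared cutoff, M_p the reduced Planck mass, c a numerical constant, and α, β dimensionless constants multiplying the logarithmic quantum corrections to the entropy-area law (stated here from the general literature on entropy-corrected models, not taken from this specific paper):

```latex
\rho_{\Lambda} = 3c^{2}M_{p}^{2}L^{-2}
              + \alpha\, L^{-4}\ln\!\left(M_{p}^{2}L^{2}\right)
              + \beta\, L^{-4}
```

    Setting α = β = 0 recovers the ordinary holographic DE density, so the correction terms matter mainly in the early universe where L is small.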

  20. Recurrent neural network based hybrid model for reconstructing gene regulatory network.

    Raza, Khalid; Alam, Mansaf

    2016-10-01

    One of the exciting problems in systems biology research is to decipher how the genome controls the development of a complex biological system. Gene regulatory networks (GRNs) help in the identification of regulatory interactions between genes and offer fruitful information about the functional role of individual genes in a cellular system. Discovering GRNs leads to a wide range of applications, including the identification of disease-related pathways (providing novel tentative drug targets), prediction of disease response, and diagnosis of various diseases including cancer. Reconstruction of GRNs from available biological data is still an open problem. This paper proposes a recurrent neural network (RNN) based model of GRNs, hybridized with a generalized extended Kalman filter for weight updates in the backpropagation-through-time training algorithm. The RNN is a complex neural network that offers a good trade-off between biological closeness and mathematical flexibility for modelling GRNs, and is also able to capture complex, non-linear and dynamic relationships among variables. Gene expression data are inherently noisy, and the Kalman filter performs well on estimation problems even with noisy data. Hence, we applied a non-linear version of the Kalman filter, known as the generalized extended Kalman filter, for weight updates during RNN training. The developed model has been tested on four benchmark networks: the DNA SOS repair network, the IRMA network, and two synthetic networks from the DREAM Challenge. We performed a comparison of our results with other state-of-the-art techniques, which shows the superiority of our proposed model. Further, 5% Gaussian noise was added to the dataset, and the results of the proposed model show a negligible effect of the noise, demonstrating the noise-tolerance capability of the model.
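
    The sketch below shows the standard RNN formulation of a GRN that this class of methods builds on: each gene's expression evolves under a sigmoidal combination of the other genes' influences, with the weight matrix W encoding the regulatory network to be inferred. The toy network and all parameter values are invented, and the paper's actual contribution, estimating W with a generalized extended Kalman filter inside backpropagation through time, is omitted here.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def rnn_grn_step(x, W, b, lam, tau, dt=1.0):
    """One Euler step of the classic RNN model of gene regulation:

        tau_i * dx_i/dt = sigmoid(sum_j W_ij x_j + b_i) - lam_i * x_i

    x   : expression levels of all genes at time t
    W   : W[i, j] is the regulatory influence of gene j on gene i
          (positive = activation, negative = repression)
    lam : degradation rates; tau : time constants
    """
    return x + (dt / tau) * (sigmoid(W @ x + b) - lam * x)

# Toy 3-gene network: gene 0 activates gene 1, gene 1 represses gene 2
W = np.array([[0.0,  0.0, 0.0],
              [2.0,  0.0, 0.0],
              [0.0, -2.0, 0.0]])
b = np.zeros(3); lam = np.ones(3); tau = np.ones(3)
x = np.array([0.9, 0.1, 0.8])
for _ in range(50):
    x = rnn_grn_step(x, W, b, lam, tau, dt=0.1)
```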

  1. Implementation, availability and regulatory status of an OECD accepted Reconstructed Human Epidermis model in Brazil

    Rodrigo De Vecchi

    2018-02-01

    Introduction: In 2014, Brazil joined the growing list of countries to ban cosmetic products from being tested on animal models. The new legislation comes into force in 2019. As a result, interest in validated alternative testing methods for safety assessment has been increasing in academia, industry and associations. However, the lack of specific legislation on the use of biological material of human origin for toxicological tests makes access to alternative in vitro models difficult. Furthermore, importation to Brazil is not possible in a timely manner. Method: In this article, we report the implementation of a Reconstructed Human Epidermis (SkinEthic™ RHE), an alternative model internationally accepted by the OECD, through a technology transfer from EPISKIN® Lyon to Brazil. Regulatory evolution has been motivating the implementation and wide use of alternative methods to animal testing in several industry segments, including cosmetics and pharmaceuticals. Results: The protocol has been shown to be robust and highly reproducible. Quality control parameters (histological analysis, barrier function test and tissue viability) were assessed on 24 batches assembled in Brazil. Use of the SkinEthic™ RHE model allows the full replacement of animal test methods for skin hazard identification. It has regulatory acceptance for several toxicological endpoints, such as the Draize test for skin irritation and corrosion. It allows the reduction and refinement of pre-clinical protocols through tiered strategies. Implementation of the SkinEthic™ RHE protocol is just a first and important step towards a new approach to toxicological safety testing in Brazil. Conclusion: The implementation was successfully completed and is reported here. However, in order to comply fully with the new legislation by 2019, the availability of validated models is essential. Quality control tests done on RHE batches produced in Brazil demonstrate that the model met OECD acceptance

  2. Comparison of adaptive statistical iterative reconstruction (ASiR™) and model-based iterative reconstruction (Veo™) for paediatric abdominal CT examinations: an observer performance study of diagnostic image quality

    Hultenmo, Maria; Caisander, Haakan; Mack, Karsten; Thilander-Klang, Anne

    2016-01-01

    The diagnostic image quality of 75 paediatric abdominal computed tomography (CT) examinations reconstructed with two different iterative reconstruction (IR) algorithms, adaptive statistical IR (ASiR™) and model-based IR (Veo™), was compared. Axial and coronal images were reconstructed with 70% ASiR with the Soft™ convolution kernel and with the Veo algorithm. The thickness of the reconstructed images was 2.5 or 5 mm depending on the scanning protocol used. Four radiologists graded the delineation of six abdominal structures and the diagnostic usefulness of the image quality. The Veo reconstruction significantly improved the visibility of most of the structures compared with ASiR in all subgroups of images. For coronal images, the Veo reconstruction resulted in significantly improved ratings of the diagnostic use of the image quality compared with the ASiR reconstruction. This was not seen for the axial images. The greatest improvement using Veo reconstruction was observed for the 2.5 mm coronal slices. (authors)

  3. Using a Cellular Automata-Markov Model to Reconstruct Spatial Land-Use Patterns in Zhenlai County, Northeast China

    Yuanyuan Yang

    2015-05-01

    Decadal- to centennial-scale land use and land cover change has been consistently singled out as a key element and an important driver of global environmental change, playing an essential role in balancing energy use. Understanding long-term human-environment interactions requires historical reconstruction of past land use and land cover changes. Most of the existing historical reconstructions have insufficient spatial and thematic detail and do not consider various land change types. In this context, this paper explored the possibility of using a cellular automata-Markov model at 90 m × 90 m spatial resolution to reconstruct the historical land use of the 1930s in Zhenlai County, China. The three-map comparison methodology was then employed to assess the predictive accuracy of the transition modeling. The model can produce backward projections by analyzing land use changes in recent decades, assuming that the present land use pattern is dynamically dependent on the historical one. The reconstruction results indicated that in the 1930s most of the study area was occupied by grasslands, followed by wetlands and arable land, while the other land categories occupied relatively small areas. Analysis of the three-map comparison illustrated that the major differences among the three maps have less to do with the simulation model and more to do with inconsistencies among the land categories during the study period. The different information provided by topographic maps and remote sensing images must be recognized.
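
    To illustrate the Markov half of such a model, the sketch below estimates past land-use composition by running a transition matrix backward in time, under the stated assumption that the recently observed transition regime also held in earlier periods. The categories and all numbers are invented; the cellular-automata half, which allocates these aggregate quantities to individual cells using neighbourhood and suitability rules, is omitted.

```python
import numpy as np

# Transition matrix estimated from two recent land-use maps:
# P[i, j] = fraction of category-i cells that became category j per period.
# Categories (illustrative): 0 grassland, 1 wetland, 2 arable, 3 other
P = np.array([[0.80, 0.05, 0.13, 0.02],
              [0.10, 0.75, 0.12, 0.03],
              [0.04, 0.02, 0.90, 0.04],
              [0.05, 0.05, 0.10, 0.80]])

p_now = np.array([0.30, 0.15, 0.45, 0.10])   # present composition

# Forward Markov step is p_next = p @ P, so the backward projection
# solves p_past @ P = p_now for the earlier composition.
p_past = np.linalg.solve(P.T, p_now)
for _ in range(2):                            # step further back in time
    p_past = np.linalg.solve(P.T, p_past)
# NB: inverted projections can leave the probability simplex, so real
# reconstructions constrain or calibrate this step against map evidence.
```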

  4. Palaeotemperature reconstructions of the European permafrost zone during Oxygen Isotope Stage 3 compared with climate model results.

    van Huissteden, J.; Vandenberghe, J.; Pollard, D.

    2003-01-01

    A palaeotemperature reconstruction based on periglacial phenomena in Europe north of approximately 51°N is compared with high-resolution regional climate model simulations of the marine oxygen isotope Stage 3 (Stage 3) palaeoclimate. The experiments represent Stage 3 warm (interstadial), Stage 3

  5. Few-view single photon emission computed tomography (SPECT) reconstruction based on a blurred piecewise constant object model

    Wolf, Paul A.; Jørgensen, Jakob Sauer; Schmidt, Taly G.

    2013-01-01

    the assumed blurring model. Generally, increased values of the blurring parameter and TV weighting parameters reduced noise and streaking artifacts, while decreasing spatial resolution. As the number of views decreased from 60 to 9 the accuracy of images reconstructed using the proposed algorithm varied...

  6. Reconstructing Late Quaternary fluvial process controls in the upper Aller valley (North Germany) by means of numerical modeling

    Veldkamp, A.; Berg, M.W. van den; Dijke, J.J. van; Berg van Saparoea, R.M. van den

    2002-01-01

    The morpho-genetic evolution of the upper Aller valley (Weser basin, North Germany) was reconstructed using geological and geomorphologic data integrated within a numerical process model framework (FLUVER-2). The current relief was shaped by Pre-Elsterian fluvial processes, Elsterian and Saalian ice

  8. Evaluation of an intact, an ACL-deficient, and a reconstructed human knee joint finite element model.

    Vairis, Achilles; Stefanoudakis, George; Petousis, Markos; Vidakis, Nectarios; Tsainis, Andreas-Marios; Kandyla, Betina

    2016-02-01

    The human knee joint has a three-dimensional geometry with multiple body articulations that produce complex mechanical responses under the loads that occur in everyday life and sports activities. Understanding the complex mechanical interactions of these load-bearing structures is of use when the treatment of relevant diseases is evaluated and assisting devices are designed. The anterior cruciate ligament (ACL) in the knee is one of the four main ligaments that connect the femur to the tibia, and is often torn during sudden twisting motions, resulting in knee instability. The objective of this work is to study the mechanical behavior of the human knee joint and evaluate the differences in its response for three different states, i.e., intact, ACL-deficient, and surgically treated (reconstructed) knee. Finite element models corresponding to these states were developed. For the reconstructed model, a novel repair device developed and patented by the author in previous work was used. Static load cases were applied, as already presented in a previous work, in order to compare the calculated results produced by the two models, the ACL-deficient and the surgically reconstructed knee joint, under exactly the same loading conditions. Displacements were calculated in different directions for the load cases studied and were found to be very close to those from previous modeling work and in good agreement with experimental data presented in the literature. As the results of this study show, the developed finite element models of both the intact and the ACL-deficient human knee joint are reliable tools for studying the kinematics of the human knee. In addition, the reconstructed human knee joint model had kinematic behavior similar to the intact knee joint, showing that such reconstruction devices can restore human knee stability to an adequate extent.

  9. Statistical model based iterative reconstruction (MBIR) in clinical CT systems: Experimental assessment of noise performance

    Li, Ke; Tang, Jie [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 (United States); Chen, Guang-Hong, E-mail: gchen7@wisc.edu [Department of Medical Physics, University of Wisconsin-Madison, 1111 Highland Avenue, Madison, Wisconsin 53705 and Department of Radiology, University of Wisconsin-Madison, 600 Highland Avenue, Madison, Wisconsin 53792 (United States)

    2014-04-15

    Purpose: To reduce radiation dose in CT imaging, the statistical model based iterative reconstruction (MBIR) method has been introduced for clinical use. Given the principle of MBIR and its nonlinear nature, the noise performance of MBIR is expected to be different from that of the well-understood filtered backprojection (FBP) reconstruction method. The purpose of this work is to experimentally assess the unique noise characteristics of MBIR using a state-of-the-art clinical CT system. Methods: Three physical phantoms, including a water cylinder and two pediatric head phantoms, were scanned in axial scanning mode using a 64-slice CT scanner (Discovery CT750 HD, GE Healthcare, Waukesha, WI) at seven different mAs levels (5, 12.5, 25, 50, 100, 200, 300). At each mAs level, each phantom was repeatedly scanned 50 times to generate an image ensemble for noise analysis. Both the FBP method with a standard kernel and the MBIR method (Veo®, GE Healthcare, Waukesha, WI) were used for CT image reconstruction. The three-dimensional (3D) noise power spectrum (NPS), two-dimensional (2D) NPS, and zero-dimensional NPS (noise variance) were assessed both globally and locally. Noise magnitude, noise spatial correlation, noise spatial uniformity and their dose dependence were examined for the two reconstruction methods. Results: (1) At each dose level and at each frequency, the magnitude of the NPS of MBIR was smaller than that of FBP. (2) While the shape of the NPS of FBP was dose-independent, the shape of the NPS of MBIR was strongly dose-dependent; lower dose led to a “redder” NPS with a lower mean frequency value. (3) The noise standard deviation (σ) of MBIR and dose were found to be related through a power law σ ∝ (dose)^(−β) with exponent β ≈ 0.25, which violates the classical σ ∝ (dose)^(−0.5) power law of FBP. (4) With MBIR, noise reduction was most prominent for thin image slices. (5) MBIR led to better noise spatial
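
    The practical weight of the β ≈ 0.25 exponent is easy to make concrete: for a dose reduction factor r, noise grows as r^β, so MBIR pays a much smaller noise penalty than FBP for the same dose cut. The short arithmetic sketch below just evaluates the two reported power laws.

```python
# Noise penalty for a dose reduction factor r, given sigma ∝ dose^(-beta):
#   sigma(dose/r) / sigma(dose) = r**beta
for r in (2, 4):
    fbp = r ** 0.5     # classical FBP scaling (beta = 0.5)
    mbir = r ** 0.25   # measured MBIR scaling (beta ≈ 0.25)
    print(f"dose/{r}: FBP noise x{fbp:.2f}, MBIR noise x{mbir:.2f}")
# dose/2: FBP noise x1.41, MBIR noise x1.19
# dose/4: FBP noise x2.00, MBIR noise x1.41
```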

  10. Reconstruction of individual doses in population evacuated from Pripyat based on imitation stochastic model

    Chumak, V.V.

    1998-01-01

    An imitation (simulation) stochastic approach to the reconstruction of individual doses for a large group of irradiated population (12,632 persons) was used for the first time, and an evaluation of the uncertainty of the obtained results was made. When the error of the dose reconstruction is small, the method is reliable, providing high accuracy of the results.
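
    The flavour of such imitation (Monte Carlo) dose reconstruction can be sketched as follows: each uncertain exposure factor for an evacuee is drawn from a distribution, and the resulting dose distribution carries the individual estimate and its uncertainty. Every factor, distribution and number below is purely illustrative and not taken from the Pripyat model.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 10_000

# Hypothetical exposure factors for one evacuee (all distributions invented):
dose_rate = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n_trials)  # mGy/h
shielding = rng.uniform(0.2, 0.6, size=n_trials)   # indoor attenuation factor
f_outdoor = rng.beta(2, 5, size=n_trials)          # fraction of time outdoors
hours = 60.0                                       # assumed time before evacuation

# Dose = rate * time, attenuated for the fraction of time spent indoors
dose = dose_rate * hours * (f_outdoor + (1 - f_outdoor) * shielding)

# The reconstructed individual dose is the whole distribution:
median = np.median(dose)
lo, hi = np.percentile(dose, [2.5, 97.5])          # 95% uncertainty bounds
```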

  11. Exploring Normalization and Network Reconstruction Methods using In Silico and In Vivo Models

    Abstract: Lessons learned from the recent DREAM competitions include: The search for the best network reconstruction method continues, and we need more complete datasets with ground truth from more complex organisms. It has become obvious that the network reconstruction methods t...